WURT + Rapid Response Surveys: Two Techniques for Rapid Feedback
Andy Rowe, ARCeconomics, November 2009

Agenda: 2 Techniques in 4 Phases – Setting & Approach, Information Gathering, Program Response, Reflection

Evaluators Arrive at the Assignment

Setting for WURT – “While U R There”
• WURT was developed for formative settings where the evaluator is part of a project team
– M&E component of the Madhya Pradesh Urban Services for the Poor programme: http://www.mpurban.gov.in/mpusp/MPUSPHOme.aspx
– WURTing subteams are the primary observers and users – baseline and updates
– Designed to be an important input for project elements where evaluation did not have resources or authority
– Intent was to promote evaluative thinking among teams of Indian & European staff

Setting for Rapid Response Surveys
• EPA Office of Conflict Prevention and Resolution (CPRC)
– One of three main evaluation assignments with CPRC
• Process and results of mediation at EPA
• Evaluation of training supported by CPRC (26 trainings to date in 8 core areas)
– Goal is to provide rapid feedback to CPRC and trainers to improve training
– The method is limited; the Success Case Method would be better

Summary – Settings
• Both formative
• WURT
– Evaluator supports use through informal technical assistance to teams
– Had strong political capital from project leadership to implement
• Rapid Response Surveys
– Standardized survey with tailored questions on learning outcomes specific to each training
– Evaluator developed the survey with input
– Evaluator administers and reports
– CPRC reviews the reports and communicates them to trainers

Evaluator Deploys Advanced Direct Observation Techniques

Applying WURT
• Project leadership strongly advocated WURT
• WURT built into the usual work of the team (see the record sketch below)
– Undertaken by the team
– Baseline from initial stocktaking (e.g. capacity of the Bhopal municipal engineering department)
– WURT records the key changes needed, the mechanisms for achieving them, what success will look like, and how and when to update
• Technical assistance from M&E for WURT to build evaluative thinking, support use for reflective practice, and promote quality
• WURT results to be part of overall reporting to donors, including the Governments of India and Madhya Pradesh
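A minimal Python sketch of what a single WURT record could look like. The deck does not prescribe any format, so every field name and example value here is an assumption, chosen only to mirror the elements listed above (baseline, key changes, mechanisms, success criteria, update schedule).

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class WURTRecord:
    """One 'While U R There' record; field names are illustrative, not from the deck."""
    observed_unit: str             # e.g. a department the subteam works with
    baseline_date: date            # when the initial stocktaking was done
    key_changes_needed: list[str]  # changes the WURTing subteam judges necessary
    mechanisms: list[str]          # how each change is expected to come about
    success_criteria: list[str]    # what success will look like
    next_update: date              # when the subteam should revisit the record
    updates: list[str] = field(default_factory=list)  # dated notes from later visits

# Hypothetical baseline record from an initial stocktaking
baseline = WURTRecord(
    observed_unit="Bhopal municipal engineering department",
    baseline_date=date(2009, 1, 15),  # assumed date
    key_changes_needed=["Capacity to plan pro-poor service upgrades"],
    mechanisms=["Training plus embedded technical assistance"],
    success_criteria=["Department drafts its own upgrade plans"],
    next_update=date(2009, 7, 15),
)
```

The update loop is the point of the structure: each revisit appends to `updates` and resets `next_update`, which is what turns a one-off stocktaking into the reflective practice the technique aims at.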

Applying Rapid Response Surveys
• Topics include what you learned, achievement of training goals, rating of inputs, process and results, major positive and negative factors, how to improve the training, and communicating training opportunities
• Web survey starts at 8 AM the day after training, with 4 reminders spaced 1 to 1.5 days apart; 4.8 minutes to complete; 74% response rate (a scheduling sketch follows below)
• Brief individual training reports provided the day after the final survey (5 working days after training)
• Annual report synthesizes the year’s training evaluations and is reviewed with CPRC
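To make the timeline concrete, here is a minimal Python sketch of the launch-and-reminder schedule and the completion rate the slide reports. All function names and the example figures are assumptions; only the 8 AM launch, the four reminders, the 1-to-1.5-day spacing, and the 74% rate come from the slide.

```python
from datetime import datetime, timedelta

def reminder_schedule(training_end: datetime, n_reminders: int = 4,
                      spacing_days: float = 1.0) -> list[datetime]:
    """Survey opens at 8 AM the day after training; reminders follow at a fixed
    spacing (the slide says 1 to 1.5 days; 1.0 is an assumed value)."""
    launch = (training_end + timedelta(days=1)).replace(
        hour=8, minute=0, second=0, microsecond=0)
    # Index 0 is the launch itself, followed by the n_reminders reminders.
    return [launch + timedelta(days=spacing_days * i) for i in range(n_reminders + 1)]

def response_rate(completed: int, invited: int) -> float:
    """Simple completion rate; the slide reports 74%."""
    return completed / invited

# Hypothetical training ending at 5 PM on 10 November 2009
for t in reminder_schedule(datetime(2009, 11, 10, 17, 0)):
    print(t.isoformat())
print(f"{response_rate(37, 50):.0%}")  # 37 of 50 invited -> 74%
```

At 1-day spacing the fourth reminder lands five days after the training, which roughly lines up with the slide’s note that individual reports go out the day after the final survey, five working days after the training.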

Summary – Information Gathering
• WURT applied by the primary users of the information
– Evaluation function provides technical assistance
– Shades of a collaborative and empowerment approach
• Rapid Response Surveys applied by the evaluation function
– Primary users have limited design input and receive reports
– A milquetoast method

Program Experiences: Evaluation Use

Basic Tenets of Use
• A key success indicator of evaluation is use or influence
• Evaluation conducted as a social process, coproducing knowledge with potential decision makers and stakeholders, is more likely to be used (see Johnson et al., Mitchell et al.)
• Evaluative thinking is a relative newcomer to evaluator agendas
– Linked to use
– Collaboration is assumed to promote it

Use of WURT
• None – कोई नहीं (Hindi), ar bith (Irish)
• Despite strong political capital and direction from project management
• Technical assistance was likely insufficient at the outset
• Culture was a contributing factor
• But other applications show that evaluative thinking is seriously difficult
• Conclusion – European and Indian team members viewed WURT as redundant: “I already know”

Use of Rapid Response Surveys
• Could not envision stronger use
• Based on the reports, CPRC continuously adapts training
– Training inputs, such as better adapting training materials to EPA settings
– Getting trainers to provide more opportunity to practice key concepts; reducing the number of messages and repeating key ones
– Promotion of CPRC training
• Evidence from high-response surveys is compelling to CPRC and trainers
• Conclusion – surveys with good response rates, tailored to the training, are viewed as salient, credible, and legitimate

Let’s Gather to Reflect

WURT and Rapid Response Surveys
• Alignment with tenets of use
– WURT aligns with the tenets of use: coproduction, social process
– Rapid Response Surveys provide only reasonably good information and do not align with the tenets of use
• Efficacy?
– WURT provides a feasible evaluative approach looking directly at results and attribution
– Rapid Response Surveys do not look at use of the training – process only

WURT and Rapid Response Surveys!
• The WURT application in India confirms both the need and the challenges of culture (within and between teams)
• WURT is the better technique (feasible, useful, credible), providing systematic observation of mechanisms and results
• Surveys have the appearance of objective knowledge; self-assessment is not regarded as good knowledge
• Someone else did most of the work
• Similar to decision makers’ preference for the results of simulation models, even when clearly wrong and of limited utility, over the assessments of cultural anthropologists, even when clearly right and useful

Rapid Response Techniques Face the Same Generic Challenges as Evaluation in General: YES to Surveys, NO to WURT

“There is a deeper and more powerful force behind the avoidance of evaluation in science, which I call value phobia. This is the desire to avoid explicitly stated evaluation because of the threat, sometimes economic, that evaluation poses to others and oneself and hence the backlash that it generates. Professional evaluators — whether from the personnel, program, or even the product area — never cease to be amazed by the extent and severity of this threat and the extraordinary lengths to which people, including professionals and teachers of professionals, will go to avoid it.” (Scriven, p. 80)

“However, in this discussion I am attacking the core of value-free doctrine by arguing that science is absolutely and essentially evaluative. It is an astonishing demonstration of the intellectual acceptance of positivism that the very term value judgment has come to mean a mere expression of taste or an unsupportable preference; but it has no such pejorative connotation in its literal meaning. Value judgments, like factual judgments and theoretical analyses, are of two kinds — the well-supported and the poorly supported. No scientist can avoid making them, although it is certainly possible to avoid making good ones.” (Scriven, pp. 80-81)

Photo Credits and References
• Photo Credits
– Evaluators Arrive: Maxim in Washington Post, around Easter 2005
– Program Experiences Use: Maxim in Washington Post
– Let’s Gather (Dancing Sifaka Lemur): www.jon-atkinson.com/Lemurs.html
– Evaluator Deploys (Staring Sifaka Lemur): www.mongabay.com/mad_sifaka_lemurs.htm
• References
– Johnson, Kelli, Lija O. Greenseid, Stacie A. Toal, Jean A. King, Frances Lawrenz, and Boris Volkov (2009). “Research on Evaluation Use: A Review of the Empirical Literature From 1986 to 2005.” American Journal of Evaluation 30(3), pp. 377-410.
– Mitchell, Ronald B., William C. Clark, David W. Cash, and Nancy W. Dickson (2006). Global Environmental Assessments: Information and Influence. Cambridge: MIT Press.
– Scriven, Michael (1983). “The Evaluation Taboo.” New Directions for Program Evaluation 19, pp. 75-82.