Designing and Building a Results-Based Monitoring and Evaluation System: A Tool for Public Sector Management
A Workshop for Government Officials and Their Development Partners
© 2000 The International Bank for Reconstruction and Development / The World Bank
1818 H Street N.W., Washington, DC 20433
Introduction to the Workshop
Designing and Building a Results-Based Monitoring and Evaluation System: A Tool for Public Sector Management
Table of Contents
1. Introduction to Workshop
2. Introduction to Monitoring and Evaluation
3. Step 1 – Conducting a “Readiness Assessment”
4. Step 2 – Agreeing on Outcomes to Monitor and Evaluate
5. Step 3 – Selecting Key Indicators to Monitor Outcomes
Designing and Building a Results-Based Monitoring and Evaluation System (cont.)
Table of Contents
6. Step 4 – Baseline Data on Indicators—Where Are We Today?
7. Step 5 – Planning for Improvement—Setting Results Targets
8. Step 6 – Monitoring for Results
9. Step 7 – The Role of Evaluations
10. Step 8 – Reporting Your Findings
11. Step 9 – Using Your Findings
12. Step 10 – Sustaining the Monitoring and Evaluation System within Your Organization
Goals for This Workshop
• To prepare you to plan, design, and implement a results-based monitoring and evaluation system within your organization
• To demonstrate how an M&E system is a valuable tool to support good public management
Workshop Overview
• This workshop focuses on ten steps that describe how results-based monitoring and evaluation systems are designed and built
• These steps begin with conducting a “Readiness Assessment” and continue through designing and managing your monitoring and evaluation system
• We will be discussing these steps, the tasks needed to complete them, and the tools available to help along the way
Ten Steps to Designing, Building and Sustaining a Results-Based Monitoring and Evaluation System
1. Conducting a Readiness Assessment
2. Agreeing on Outcomes to Monitor and Evaluate
3. Selecting Key Indicators to Monitor Outcomes
4. Baseline Data on Indicators—Where Are We Today?
5. Planning for Improvement—Selecting Results Targets
6. Monitoring for Results
7. The Role of Evaluations
8. Reporting Your Findings
9. Using Your Findings
10. Sustaining the M&E System Within Your Organization
Introduction to Results-Based Monitoring and Evaluation
The Power of Measuring Results
• If you do not measure results, you cannot tell success from failure
• If you cannot see success, you cannot reward it
• If you cannot reward success, you are probably rewarding failure
• If you cannot see success, you cannot learn from it
• If you cannot recognize failure, you cannot correct it
• If you can demonstrate results, you can win public support
Adapted from Osborne & Gaebler, 1992
Introduction to Results-Based Monitoring and Evaluation: What Are We Talking About?
• Results-based monitoring and evaluation measures how well governments are performing
• Results-based monitoring and evaluation is a management tool!
• Results-based monitoring and evaluation emphasizes assessing how outcomes are being achieved over time
Who Are Stakeholders That Care About Government Performance?
• Government officials/Parliament
• Program managers and staff
• Donors
• Civil society (citizens, NGOs, media, private sector, etc.)
Remember
• Monitoring and evaluation are two separate, but interrelated, strategies to collect data and report the findings on how well (or not) the public sector is performing
• During this workshop, we will be discussing:
  – Monitoring as a tool
  – Evaluation as a tool
  – How the two interrelate to support good public management
  – The ten steps to build a results-based monitoring and evaluation system to measure government performance
Reasons to Do Results-Based M&E
• Provides crucial information about public sector performance
• Provides a view over time on the status of a project, program, or policy
• Promotes credibility and public confidence by reporting on the results of programs
• Helps formulate and justify budget requests
• Identifies potentially promising programs or practices
Reasons to Do Results-Based M&E (cont.)
• Focuses attention on achieving outcomes important to the organization and its stakeholders
• Provides timely, frequent information to staff
• Helps establish key goals and objectives
• Permits managers to identify and take action to correct weaknesses
• Supports a development agenda that is shifting towards greater accountability for aid lending
Important…
• It takes leadership commitment to achieve a better-performing organization
• Plus redeployment of resources to build monitoring and evaluation systems
• Plus individuals committed to improving public sector performance
So… it comes down to a combination of institutional capacity and political will.
Definition
Results-Based Monitoring (what we will call “monitoring”) is a continuous process of collecting and analyzing information to compare how well a project, program, or policy is performing against expected results.
Major Activities Where Results Monitoring Is Needed
• Setting goals and objectives
• Allocating resources
• Managing projects, programs and policies
• Reporting to donors
• Reporting to Parliament and other stakeholders
A New Emphasis on Both Implementation and Results-Based Monitoring
• Traditional monitoring focuses on implementation monitoring
  – This involves tracking inputs ($$, resources, strategies), activities (what actually took place), and outputs (the products or services produced)
  – This approach focuses on monitoring how well a project, program or policy is being implemented
  – Often used to assess compliance with workplans and budgets
A New Emphasis on Both Implementation and Results-Based Monitoring
• Results-based monitoring involves the regular collection of information on how effectively government (or any organization) is performing
• Results-based monitoring demonstrates whether a project, program, or policy is achieving its stated goals
Results-Based Monitoring Requires Attention to Causal Logic—or Theory of Change
• What is the “logic” of the overall project, program or policy design?
• How does each component of the program help to establish an if–then relationship?
• Is there a theory behind the change expected or seen? In other words, does the change follow the logic proposed?
• Does this theory or logic hold during implementation?
Implementation and Results-Based Monitoring
• Goal (Impacts): Long-term, widespread improvement in society
• Outcomes: Intermediate effects of outputs on clients
• Outputs: Products and services produced
• Activities: Tasks personnel undertake to transform inputs to outputs
• Inputs: Financial, human, and material resources
Results-Based Monitoring: Oral Rehydration Therapy
• Goal (Impacts): Child mortality from diarrhea reduced
• Outcomes: Improved use of ORT in management of childhood diarrhea
• Outputs: Increased maternal knowledge of and access to ORT services
• Activities: Media campaigns to educate mothers, health personnel trained in ORT, etc.
• Inputs: Funds, ORT supplies, trainers, etc.
Results-Based Monitoring: Adult Literacy
• Goal (Impacts): Higher income levels; increased access to higher-skill jobs
• Outcomes: Increased literacy skills; more employment opportunities
• Outputs: Number of adults completing literacy courses
• Activities: Literacy training courses
• Inputs: Facilities, trainers, materials
Exercise: Identify the Theory of Change in This Example
• Goal: Fewer deaths of children under five from malaria
  – Information is made available for parents about the importance of using bed nets for all children, but particularly children under five years
  – Bed net distribution increased by 50%
  – Increased numbers of parents report that their children under five years sleep under a bed net
  – Fewer children contracting malaria
  – New funds available to communities for impregnated bed nets (IPN)
  – Knowledge among parents grows about the importance of putting kids to bed under nets
Exercise: Identify the Theory of Change in This Example
• Goal: Create economically viable women-owned microenterprises
  – Government makes available funds for micro-enterprise loans
  – Government approves 61 applications from program graduates
  – 90% of successful applicants begin operating new businesses after government approves application
  – 15 qualified course trainers available
  – 100 women attend training in micro-enterprise business management
  – 72 women complete training
  – Income of graduates increases 25% in first year after course completion
Some Examples of Results Monitoring
• Policy monitoring:
  – Infant health: Decreasing infant mortality rates
  – Girls’ education: Increasing girls’ education attainment
• Program monitoring:
  – Infant health: Clinic-based prenatal care is being used by pregnant women
  – Girls’ education: # of girls in secondary schools completing math and science courses
• Project monitoring:
  – Infant health: Information on good prenatal care provided in 6 targeted villages
  – Girls’ education: # of girls in four urban neighborhoods completing primary education
Definition
Results-Based Evaluation: an assessment of a planned, ongoing, or completed intervention to determine its relevance, efficiency, effectiveness, impact, and sustainability. The intent is to incorporate lessons learned into the decision-making process.
Evaluation Addresses
• “Why” questions: What caused the changes we are monitoring?
• “How” questions: What was the sequence or process that led to successful (or unsuccessful) outcomes?
• Compliance/accountability questions: Did the promised activities actually take place, and as they were planned?
• Process/implementation questions: Was the implementation process followed as anticipated, and with what consequences?
Designing Good Evaluations
• Getting the questions right is critical
• Answering the questions is critical
• Supporting public sector decision-making with credible and useful information is critical
Designing Good Evaluations
“Better to have an approximate answer to the right question, than an exact answer to the wrong question.”
Paraphrased from statistician John W. Tukey
Designing Good Evaluations
“Better to be approximately correct than precisely wrong.”
Paraphrased from Bertrand Russell
Some Examples of Evaluation
• Policy evaluations:
  – Privatizing water systems: Comparing model approaches to privatizing public water supplies
  – Resettlement: Comparing strategies used for resettlement of rural villages to new areas
• Program evaluations:
  – Privatizing water systems: Assessing fiscal management of government systems
  – Resettlement: Assessing the degree to which resettled village farmers maintain previous livelihood
• Project evaluations:
  – Privatizing water systems: Assessing the improvement in water fee collection rates in 2 provinces
  – Resettlement: Assessing the farming practices of resettled farmers in one province
Complementary Roles of Results-Based Monitoring and Evaluation
Monitoring:
• Clarifies program objectives
• Links activities and their resources to objectives
• Translates objectives into performance indicators and sets targets
• Routinely collects data on these indicators; compares actual results with targets
• Reports progress to managers and alerts them to problems
Evaluation:
• Analyzes why intended results were or were not achieved
• Assesses specific causal contributions of activities to results
• Examines the implementation process
• Explores unintended results
• Provides lessons, highlights significant accomplishments or program potential, and offers recommendations for improvement
Summary
• Results-based monitoring and evaluation are generally viewed as distinct but complementary functions
• Each provides a different type of performance information
• Both are needed to be able to better manage policy, program, and project implementation
Summary
• Implementing results-based monitoring and evaluation systems can strengthen public sector management
• Implementing results-based monitoring and evaluation systems requires commitment by leadership and staff alike
• We are discussing a political process with technical dimensions – not the reverse
Step 1: Conducting a “Readiness Assessment”
What Is a Readiness Assessment?
• A systematic approach to determine the capacity and willingness of a government or organization to construct a results-based M&E system
  – The approach focuses on: presence or absence of champions, incentives, roles and responsibilities, organizational capacity, and barriers to getting started
Why Do a Readiness Assessment?
1. To understand what incentives (or lack thereof) exist to effectively monitor and evaluate development goals
2. To understand the roles and responsibilities of those organizations and individuals involved in monitoring and evaluating government policies, programs, and projects, e.g.:
   – Ministry of Finance
   – Ministry of Planning
   – Parliament
   – Supreme Audit Office
3. To identify issues related to the capacity (or lack of it) to monitor and evaluate government programs
Incentives Help Drive the Need for a Results System
• First examine whether incentives exist in any of these four areas to begin designing and building an M&E system:
  – Political (citizen demand)
  – Institutional (legislative/legal framework)
  – Personal (desire to improve government = champions)
  – Economic (donor requirement)
Champions Can Help Drive a Results System
• Who are the champion(s), and what is motivating them?
  – Government (social reforms)
  – Parliament (effective expenditures)
  – Civil society (holding government accountable)
  – Donors (PRSP)
  – Others
• Note: who will not benefit?
Roles and Responsibilities
• Assess the roles and responsibilities and existing structures to monitor and evaluate development goals:
  – What is the role of central and line ministries?
  – What is the role of Parliament?
  – What is the role of the Supreme Audit Agency?
  – What is the role of civil society?
  – What is the role of statistical groups/agencies?
Roles and Responsibilities (cont.)
• Who in the country produces data?
  – National government:
    • Central ministries (MOF, MOP)
    • Line ministries
    • Specialized units/offices (National Audit Office)
    • Census Bureau
    • National Statistics Office
Roles and Responsibilities (cont.)
• Who in the country produces data?
  – Sub-national/regional government:
    • Provincial ministries
    • Other?
  – Local government
  – NGOs
  – Donors
  – Others
Roles and Responsibilities (cont.)
• Where in the government are data used?
  – Preparing the budget
  – Resource allocation
  – Program policy making
  – Parliament/legislation & accountability
  – Planning
  – Fiscal management
  – Evaluation and oversight
Capacity
• Assess current capacity to monitor and evaluate:
  – Technical skills
  – Managerial skills
  – Existing data systems and their quality
  – Technology available
  – Fiscal resources available
  – Institutional experience
Barriers
• Do any of these immediate barriers now exist to getting started in building an M&E system?
  – Lack of fiscal resources
  – Lack of political will
  – Lack of a champion
  – Lack of expertise & knowledge
  – Lack of strategy
  – Lack of prior experience
Key Elements of Success
• Assess the country’s capacity against the following:
  – Does a clear mandate exist for M&E? (PRSP? Law? Civil society? Other?)
  – Is there strong leadership at the most senior level of the government?
  – Are resource and policy decisions linked to the budget?
  – How reliable is the information that may be used for policy and management decision making?
  – How involved is civil society as a partner with, or voice to, government?
  – Are there pockets of innovation that can serve as beginning practices or pilot programs?
Step 2: Choosing Outcomes to Monitor & Evaluate
Why an Emphasis on Outcomes?
• Makes explicit the intended objectives of government action (“Know where you are going before you get moving”)
• Outcomes are what produce benefits
• They tell you when you have been successful or not
Why Is It Important to Choose a Set of Key Goals or Outcomes?
“If you don’t know where you’re going, any road will get you there.”
Paraphrased from Lewis Carroll’s Alice in Wonderland
Issues to Consider in Choosing Outcomes to Monitor and Evaluate
• Are there stated national/sectoral goals?
• Have political promises been made that specify improved performance of the government?
• Do citizen polling data indicate specific concerns?
• Is authorizing legislation present?
• Is aid lending linked with specific goals?
• Other? (e.g., Millennium Development Goals)
Note: When Choosing Outcomes, Remember – “Do Not Go It Alone!”
• Develop a participatory approach that includes the views and ideas of key stakeholder groups
Choosing Outcomes—Who Needs to Be at the Table?
• Who: Government, civil society, donors
• Why: To build consensus for the process
Why Building Consensus Is Important
“The new realities of governance, globalization, aid lending, and citizen expectations require an approach that is consultative, cooperative and committed to consensus building.”
Developing Outcome Statements
Reformulate the concerns identified by stakeholders into positive, desirable outcomes:
• From: Rural crops are spoiling before getting to the market → To: Improve farmers’ access to markets
• From: Children are dropping out of school → To: Create incentives for families to keep kids in school
• From: It is no longer safe to go out after dark → To: Improve crime prevention programs
Outcome Statements Need Disaggregation
Outcome: Increase the percentage of employed people
In order to know when we will be successful in achieving this outcome, we need to disaggregate the outcome to answer the following:
• For whom?
• Where?
• How much?
• By when?
Outcome Statements Are Derived from Identified Problems or Issues
Policy area: Education
• From: School buildings are made from poor materials → To: Improve school structures to meet standards of the market economy
• From: Children of rural families are unable to travel long distances to school → To: Schools are utilized equally by children living close to schools and those living far from them
• From: Schools are not teaching our youth the content they need for the market economy → To: Improved curricula meet market-based economy standards
• From: The poor and vulnerable are falling behind and not getting a decent education → To: Children most in need are receiving educational assistance
Outcome Statements Should Capture Only One Objective
Why? Consider this outcome statement:
– Students in rural areas improve learning and gain better quality of life.
What are the measurement issues?
Developing Outcomes for One Policy Area: Example: Education
In Summary: Why an Emphasis on Outcomes?
• Makes explicit the intended objectives of government action (“Know where you are going before you get moving”)
• Outcomes are the results governments hope to achieve
• Clear setting of outcomes is key to a results-based M&E system
• Note: Budget to outputs, manage to outcomes!
Outcomes Summary (cont.)
• Outcomes are usually not directly measured—only reported on
• Outcomes must be translated into a set of key indicators
Step 3: Selecting Key Indicators to Monitor Outcomes
Selecting Key Performance Indicators to Monitor Outcomes
• Outcome indicators are not the same as outcomes
• Each outcome needs to be translated into one or more indicators
  – An outcome indicator identifies a specific numerical measurement that tracks progress (or not) toward achieving an outcome
Urban Institute, 1999
An Outcome Indicator Answers the Question:
“How will we know success when we see it?”
Selecting Outcome Indicators: The “CREAM” of Good Performance
A good performance indicator must be:
• Clear (precise and unambiguous)
• Relevant (appropriate to the subject at hand)
• Economic (available at reasonable cost)
• Adequate (must provide a sufficient basis to assess performance)
• Monitorable (must be amenable to independent validation)
Salvatore Schiavo-Campo, 2000
When Selecting Your Project, Program, or Policy Indicators
• Select several for any one outcome
• Make sure the interests of multiple stakeholders are considered
• Know that over time, it is OK (and expected) to add new ones and drop old ones
• Have at least three points of measurement before you consider changing your indicator
How Many Indicators Are Enough?
The minimum number that answers the question: “Has the outcome been achieved?”
Performance Indicators
• A variable that tracks changes in the development intervention, or shows results relative to what was planned
• The cumulative evidence of a cluster of indicators is used to see if an initiative is making progress
Why Use Proxy Indicators?
• Only use indirect measures (proxies) when data for direct indicators are not available or not feasible to collect at regular intervals
• Example:
  – Number of new tin roofs or televisions as a proxy measure of increased household income
An Example
Outcome: Increased annual revenue of farmers
Indicators—outcome or not?
• % of new rural roads
• % change in annual amount of spoiled crops
• % annual rainfall
• % of total agricultural employment
• % rural-to-urban migration
• % adoption of new crop cultivation techniques
An Example
Outcome: Reduction in childhood morbidity from communicable diseases
Indicators—outcome or not?
• % change in missed school days due to illness
• % change in annual hospital admissions due to illness
• Number of children immunized
• More medical doctors hired
• Number of monthly reports of communicable diseases from district hospitals
• % of working days missed by parents
• % of childhood gastrointestinal diseases compared with all reported childhood diseases
Developing a Set of Outcomes
Outcomes:
1. Fewer children seeking treatment for malaria
2. Learning outcomes for primary school children improve
Indicators:
1a. % of children sleeping under bed nets
1b. Morbidity rates of malaria in kids under 15
2a. % of Grade 6 students scoring 70% or better on standardized math and science tests
Baselines and targets: columns to be completed in Steps 4 and 5
Checklist for Assessing Proposed Indicators
Outcome to be measured: _______________
Indicator selected: __________________
Is the indicator…
1. As direct as possible a reflection of the outcome itself?
2. Sufficiently precise to ensure objective measurement?
3. Calling for the most practical, cost-effective collection of data?
4. Sensitive to change in the outcome, but relatively unaffected by other changes?
5. Disaggregated as needed when reporting on the outcome?
United Way of America
Using Pre-Designed Indicators*
A number of development agencies have created indicators to track development goals, including:
• Millennium Development Goals (MDGs)
• UNGASS (United Nations General Assembly Special Session on HIV/AIDS)
• World Bank – Rural Development Handbook
• IMF – public finance/expenditure index
* A pre-defined list of indicators are those indicators established independent of the context of any individual country or organization
Using Pre-Designed Indicators: Pros and Cons
Pros:
• Can be aggregated across similar types of projects/programs/policies
• Reduce the costs of building multiple unique measurement systems
• Create greater harmonization of donor requirements
Cons:
• Often do not address country-specific goals
• Often viewed as imposed—coming from the top down
• Do not promote key stakeholder participation and ownership
• Multiple competing indicators
In Summary: Developing Indicators
• You will need to develop your own indicators to meet your own needs
• Developing good indicators often takes more than one try!
• Arriving at the final indicators you will use will take time!
• Pilot to establish the protocol for measuring each indicator (denominator/numerator)
Step 4: Baseline Data on Indicators – Where Are We Today?
“If you do not know where you are, you will have difficulty determining where you need to go.”
Harry Hatry, Urban Institute, 1999
Establishing Baseline Data on Indicators
A performance baseline is information (quantitative or qualitative) that provides data at the beginning of, or just prior to, the monitoring period. The baseline is used to:
• Learn about recent levels and patterns of performance on the indicator; and to
• Gauge subsequent policy, program, or project performance
The challenge now is to think about how to obtain baseline information for the results indicators selected for each outcome
Identify Data Sources for Your Indicators
• Sources are who or what provide data – not the method of collecting data
• What types of data sources can you think of for performance indicators in highway transportation safety?
Building Baseline Information
Data Sources May Be Primary or Secondary
• Primary data are collected directly by your organization, for example through surveys, direct observation, and interviews.
• Secondary data have been collected by someone else, initially for a purpose other than yours. Examples include survey data collected by another agency, a Demographic and Health Survey, or data from a financial market.
  – Secondary data can often save you money in acquiring the data you need, but be careful!
Sources of Data
• Written records (paper and electronic)
• Individuals involved with the program
• General public
• Trained observers
• Mechanical measurements and tests
Design Data Collection Methods
1. Decide how to obtain the data you need from each source
2. Prepare data collection instruments
3. Develop procedures for use of the data collection instruments
Data Collection Methods
Informal/less structured methods:
• Conversations with concerned individuals
• Community interviews
• Field visits
• Reviews of official records (MIS and administrative data)
• Key informant interviews
• Participant observation
• Focus group interviews
More structured/formal methods:
• Direct observation
• Questionnaires
• One-time surveys
• Panel surveys
• Census
• Field experiments
Practicality
• Are the data associated with the indicator practical? Ask whether:
  – Quality data are currently available
  – The data can be procured on a regular and timely basis
  – Primary data collection, when necessary, is feasible and cost-effective
Comparison of Major Data Collection Methods
• Review of program records: cost low; training required for data collectors: some; completion time: depends on amount of data needed; response rate: high, if records contain needed data
• Self-administered questionnaire: cost moderate; training: none to some; completion time: moderate; response rate: depends on how distributed
• Interview: cost moderate to high; training: moderate to high; completion time: short to moderate; response rate: generally moderate to good
• Rating by trained observer: cost depends on availability of low-cost observers; training: moderate to high; completion time: short to moderate; response rate: high
United Way of America
Developing Baseline Data
• Outcome 1: Fewer children seek treatment for malaria
  – Indicator 1a: % of children sleeping under bed nets (baseline: 40%)
  – Indicator 1b: Morbidity rates of malaria in children under 15 years
• Outcome 2: Learning outcomes for primary school children improve
  – Indicator 2a: % of Grade 6 students scoring 70% or better on standardized math and science tests
In Summary: Establishing Baseline Data on Indicators
A baseline is information (quantitative or qualitative) that provides data at the beginning of, or just prior to, the monitoring period. The baseline is used to:
• Learn about recent levels and patterns of performance on the indicator; and to
• Gauge subsequent policy, program, or project performance
Step 5: Planning for Improvement – Selecting Results Targets
Definition
Targets are the quantifiable levels of the indicators that a country or organization wants to achieve at a given point in time. For example: agricultural exports will increase by 20% in the next three years over the baseline.
Identifying the Expected or Desired Level of Project, Program, or Policy Results Requires Selecting Performance Targets
Baseline indicator level + Desired level of improvement = Target performance
(Assumes a finite and expected level of inputs, activities, and outputs; the target is the desired level of performance to be reached within a specific time)
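The target arithmetic above can be sketched in a few lines of code. This is an illustrative sketch only: the function name and the baseline figure are hypothetical, not taken from the workshop materials.

```python
# Illustrative sketch of the target-setting arithmetic:
# baseline indicator level + desired improvement = target performance.
# The function name and example figures are hypothetical.

def set_target(baseline: float, desired_improvement_pct: float) -> float:
    """Return the target level: the baseline plus a desired % improvement."""
    return baseline * (1 + desired_improvement_pct / 100)

# Example echoing the definition slide: agricultural exports to increase
# by 20% over the baseline within three years.
baseline_exports = 1000.0  # hypothetical baseline value
target_exports = set_target(baseline_exports, 20)
print(target_exports)  # 1200.0
```

The point of the sketch is simply that a target is never free-standing: it is always computed relative to a known baseline, which is why Step 4 (baselines) must precede Step 5 (targets).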
Examples of Targets Related to Development
1. Goal: Improved health
   Outcome target: Reduce by 20% the prevalence of HIV/AIDS in urban populations in eastern African countries by 2012
2. Goal: Social development
   Outcome target: Improve by 30% the primary education enrollment rates in the Kyrgyz Republic by 2008 against the baseline
3. Goal: Environmental sustainability
   Outcome target: Reduce the effects of carbon emissions on polar bear habitats by 40% by 2011
Factors to Consider When Selecting Indicator Targets
• Clear understanding of the baseline starting point (e.g., average of last 3 years, last year, average trend, etc.)
• Funding and level of personnel resources expected throughout the target period
• Amount of outside resources expected to supplement the program’s resources
• Political concerns
• Institutional capacity
Additional Considerations in Setting Indicator Targets
• Only one target is desirable for each indicator
• If the indicator is new (not previously used), be careful about setting firm targets (use a range)
• Most targets are set yearly, but some could be set quarterly; others are set for longer periods (not more than 5 years)
• It takes time to observe the effects of improvements; therefore, be realistic when setting targets
Adapted from the Urban Institute, 1999
Caution
• It takes time to observe the effects of improvements; therefore:
  – Be realistic when setting targets
  – Avoid promising too much and thus programming yourself to fail
Additional Considerations When Setting Indicator Targets 104
• A target does not have to be one single numerical value; it can be a range
• Consider previous performance
• Take your baseline seriously
• Targets should be feasible, given all the resource (input) considerations
Adapted from the Urban Institute, 1999
“Games Sometimes Played When Setting Targets” 105 • Set targets so modest (easy) that they will surely be met • Move the target (as needed) to fit performance • Pick targets that are not politically sensitive
Targets Support Public Accountability • “Whether they concern the time someone waits for treatment for cancer or the number of police officers on the beat, targets can help ensure that attention is focused and energy concentrated in the right directions. Targets challenge low expectations and give the public a clear benchmark against which they can measure progress. ” David Miliband Financial Times (October 9, 2003) 107
Developing Targets 108
Outcome 1: Fewer children seek treatment for malaria
  Indicator 1: % of children sleeping under bed nets
    Baseline: 40% of urban children ages 3-5 in 2007
    Target: 85% of urban children ages 3-5 by 2010
  Indicator 2: Morbidity rates of malaria in children under 15 years
    Baseline: 40% of rural children in 2007
    Target: 10% of rural children ages 3-5 in 2010
Outcome 2: Learning outcomes for primary school children improve
  Indicator: % of Grade 6 students scoring 70% or better on standardized math and science tests
    Baseline: 75% scored 70% or better in math in 2002; 61% scored 70% or better in science in 2002
    Target: 80% scoring 70% or better in math by 2006; 67% scoring 70% or better in science by 2006
Now We Have a Results Framework 109
Note: This completed matrix becomes your results framework! It defines your outcomes and gives you a plan for how you will know whether you have been successful (or not) in achieving these outcomes.
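For teams that keep the results framework electronically, here is a minimal sketch of the matrix as structured data. The field names are illustrative assumptions; the row content is drawn from the example above:

```python
from dataclasses import dataclass


@dataclass
class FrameworkRow:
    """One row of a results framework: outcome, indicator, baseline, target."""
    outcome: str
    indicator: str
    baseline: str
    target: str


# Example row, taken from the workshop's sample matrix.
results_framework = [
    FrameworkRow(
        outcome="Learning outcomes for primary school children improve",
        indicator="% of Grade 6 students scoring 70% or better on "
                  "standardized math tests",
        baseline="75% scored 70% or better in 2002",
        target="80% scoring 70% or better by 2006",
    ),
]

for row in results_framework:
    print(f"{row.indicator}: {row.baseline} -> {row.target}")
```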
In Summary… 110
Baseline Indicator Level + Desired Level of Improvement = Target Performance
(Assumes a finite and expected level of inputs, activities, and outputs; the target is the desired level of performance to be reached within a specific time)
Step 6 Monitoring For Results 111
Building a Monitoring System Selecting Key Indicators to Monitor Outcomes Conducting a Readiness Assessment 1 2 Agreeing on Outcomes to Monitor and Evaluate 112 Planning for Improvement — Selecting Results Targets 3 4 Baseline Data on Indicators— Where Are We Today? 5 The Role of Evaluations 6 Monitoring for Results 7 Using Your Findings 8 Reporting Your Findings 9 10 Sustaining the M&E System Within Your Organization
Monitoring for Results 113 • A results-based monitoring system tracks both implementation (inputs, activities, outputs) and results (outcomes and goals) • Implementation monitoring is supported through the use of management tools – budget, staffing plans, and activity planning
Monitoring for Results (cont. ) 114 • Implementation monitoring tracks the means and strategies used by the organization • Means and strategies are found in annual and multiyear workplans • Do not forget: Results framework is not the same as a work plan • Do not forget: Budget to outputs, manage to outcomes
Developing a Results Plan 115
• Once a set of outcomes is identified, it is time to develop a plan for how the organization will begin to achieve those outcomes
• In the traditional approach to developing a plan, the first thing a manager usually did was identify activities and assign responsibilities
• The shortcoming of this approach is that completing all the activities is not the same as achieving the outcome
Key Types of Monitoring 116
Results monitoring: Impact, Outcome
Implementation monitoring (means and strategies): Output, Activity, Input
Translating Outcomes to Action 117
• Note: Activities are crucial! They are the actions you take to manage and implement your programs, use your resources, and deliver the services of government
• But the sum of these activities may or may not mean you have achieved your outcomes
• The question is: How will you know when you have been successful?
Implementation Monitoring Links to Results Monitoring 118
[Diagram: an Outcome is supported by Targets 1, 2, and 3, which in turn rest on Means and Strategies (multi-year and annual work plans)]
Linking Implementation Monitoring to Results Monitoring 119
Goal: Maternal mortality reduced
Outcome: Increased use of district health centers by pregnant women
Target: 80% of pregnant women have babies in health care centers
Means and Strategies:
• Provide free pre-natal visits
• Provide vitamin and other nutritional supplements
• Reduce doctors' monetary incentives for c-sections
Achieving Results Through Partnership 120
[Diagram: a Goal rests on an Outcome and a Target; at the Means & Strategy level, Partners 1, 2, and 3 each contribute]
Building a Monitoring System: A Group Exercise 121
Take this chart and complete the information requirements for Year 1 and Year 2:
Impact: Increase educational opportunities for children
Outcome: Increase availability of pre-school education for poor children
Target: Increase by 25% the number of poor children ages 2-5 attending pre-school by 2005
Means and Strategies: Year 1 ___ Year 2 ___
Key Principles in Building a Monitoring System 1. There are results information needs at the project, program, and policy levels 2. Results information needs to move both horizontally and vertically in the organization 3. Demand for results information at each level needs to be identified 122
Key Principles in Building a Monitoring System (cont. ) 4. Responsibility at each level needs to be clear for: – What data are collected (source) – When data are collected (frequency) – How data are collected (methodology) – Who collects the data – Who analyzes the data – For whom the data are collected – Who reports the data 123
Every Monitoring System Needs: 124
• Ownership
• Management
• Maintenance
• Credibility
Managing for Results Calls for Analysis of Performance Data… 125
[Cartoon: a bird in a suit studies charts comparing "hour of rising" with "worm acquisition" — a play on "The early bird catches the worm." Published in the New Yorker, 5/16/1994]
Performance Monitoring System Framework 126
For each outcome/goal you need:
• Indicator
• Baseline
• Target
• Data Collection Strategy
• Data Analysis
• Reporting Plan
Monitoring System Strategy Should Include a Data Collection and Analysis Plan 127
The plan should cover:
• Units of analysis
• Sampling procedures
• Data collection instruments to be used
• Frequency of data collection
• Expected methods of data analysis
• Who collects the data
• For whom the data are being collected
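Such a plan can be kept as a simple structured record and checked for completeness. All field values below are hypothetical examples, not prescriptions from the workshop:

```python
# A minimal sketch of a data collection and analysis plan for one indicator.
plan = {
    "indicator": "% of pregnant women delivering in health centers",
    "unit_of_analysis": "district",
    "sampling": "stratified random sample of districts",
    "instrument": "facility records review",
    "frequency": "quarterly",
    "analysis_method": "trend comparison against baseline",
    "collected_by": "district health office",
    "collected_for": "Ministry of Health planning unit",
}

# Check that every element the plan should cover is actually present.
required = {"unit_of_analysis", "sampling", "instrument", "frequency",
            "analysis_method", "collected_by", "collected_for"}
missing = required - set(plan)
print("complete" if not missing else f"missing: {sorted(missing)}")
```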
Key Criteria for Collecting Quality Performance Data 128
• Reliability
• Validity
• Timeliness
The Data Quality Triangle Reliability The extent to which the data collection approach is stable and consistent across time and space 129
The Data Quality Triangle Validity Extent to which data clearly and directly measure the performance we intend to measure 130
The Data Quality Triangle: Timeliness 131
• Frequency (how often are data collected?)
• Currency (how recently have data been collected?)
• Relevance (data need to be available on a frequent enough basis to support management decisions)
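A currency check of this kind can be automated. The cutoff of 120 days and the dates are invented for illustration:

```python
from datetime import date


def is_current(last_collection: date, max_age_days: int, today: date) -> bool:
    """Timeliness check: are the data recent enough to support decisions?"""
    return (today - last_collection).days <= max_age_days


# Hypothetical: quarterly data, and decisions require figures
# collected within the last 120 days.
print(is_current(date(2003, 6, 30), 120, date(2003, 10, 1)))  # True
print(is_current(date(2003, 1, 1), 120, date(2003, 10, 1)))   # False
```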
Quality Assurance Challenges 132
• What will be collected, and by what methods, is tempered by what is practical and realistic in the country and program context
– How much existing data relevant to our project, program, or policy are already available?
– How much of the available data are good enough to meet your organization's needs?
Pretest Your Data Collection Instruments and Procedures 133
• You will never really know how good your data collection approach is until you test it
• Pretesting shows you how to improve your instruments or procedures before data collection is fully under way
• Skipping the pretest will probably result in mistakes, and those mistakes could cost your organization a great deal of wasted time and money, and perhaps its valued reputation with the public
In Summary… 134
For each outcome/goal you need:
• Indicator
• Baseline
• Target
• Data Collection Strategy
• Data Analysis
• Reporting Plan
Step 7 The Role of Evaluations 135
The Role of Evaluations Selecting Key Indicators to Monitor Outcomes Conducting a Readiness Assessment 1 2 Agreeing on Outcomes to Monitor and Evaluate 136 Planning for Improvement — Selecting Results Targets 3 4 Baseline Data on Indicators— Where Are We Today? 5 The Role of Evaluations 6 Monitoring For Results 7 Using Your Findings 8 Reporting Your Findings 9 10 Sustaining the M&E System Within Your Organization
Definition: Evaluation 137
An assessment of a planned, ongoing, or completed intervention to determine its relevance, efficiency, effectiveness, impact, and sustainability. The intent is to incorporate lessons learned into the decision-making process.
Uses of Evaluation 138
• To make resource decisions
• To decide among the best alternatives
• To re-think the causes of a problem
• To identify issues around an emerging problem, e.g., children dropping out of school
• To support public sector reform and innovation
• To help build consensus among stakeholders on how to respond to a problem
Evaluation Means Information on: 139
• Strategy — Whether we are doing the right things
– Rationale/justification
– Clear theory of change
• Operation — Whether we are doing things right
– Effectiveness in achieving expected outcomes
– Efficiency in optimizing resources
– Client satisfaction
• Learning — Whether there are better ways of doing it
– Alternatives
– Best practices
– Lessons learned
Characteristics of Quality Evaluations 140
• Impartiality
• Usefulness
• Technical adequacy
• Stakeholder involvement
• Feedback/dissemination
• Value for money
Eight Types of Questions Answered by Evaluation 141
• Descriptive: Describe the content of the information campaign in country X for HIV/AIDS prevention
• Normative/compliance: How many days during the year were national drinking water standards met? (looks at how a project, program, or policy met stated criteria)
• Correlational: What is the relation between the literacy rate and the number of trained teachers in a locality? (shows the link between two situations or conditions, but does not specify causality)
Eight Types of Questions Answered by Evaluation (cont.) 142
• Cause and effect: Has the introduction of a new hybrid seed caused increased crop yield? (establishes a causal relation between two situations or conditions)
• Program logic: Is the sequence/strategy of planned activities likely to increase the number of years girls stay in school? (used to assess whether the design has the correct causal sequence)
• Implementation/process: Was a project, program, or policy to improve the quality of water supplies in an urban area implemented as intended? (establishes whether proposed activities were conducted)
Eight Types of Questions Answered by Evaluation (cont.) 143
• Performance: Are the planned outcomes and impacts from a policy being achieved? (establishes links between inputs, activities, outputs, outcomes, and impacts)
• Appropriate use of policy tools: Has the government made use of the right policy tool in providing subsidies to indigenous villagers who need to be resettled due to the construction of a new dam? (establishes whether government selected the appropriate instrument to achieve its aims)
When Is It Time to Make Use of Evaluation? 144
When regular results measurement suggests that actual performance diverges sharply from planned performance
[Chart: planned vs. actual performance over time]
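One way to operationalize this trigger is to flag any period in which actual performance falls outside a tolerance band around the plan. This is a hypothetical sketch; the tolerance and the figures are invented for illustration:

```python
def divergence_flags(planned, actual, tolerance_pct=10.0):
    """Flag periods where actual performance diverges sharply from planned.

    A divergence beyond the (hypothetical) tolerance is one signal that it
    may be time to commission an evaluation.
    """
    flags = []
    for p, a in zip(planned, actual):
        gap_pct = abs(a - p) / p * 100  # gap as % of the planned value
        flags.append(gap_pct > tolerance_pct)
    return flags


planned = [100, 110, 120, 130]
actual = [98, 104, 99, 92]
print(divergence_flags(planned, actual))  # [False, False, True, True]
```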
When Is It Time to Make Use of Evaluation? (cont.) 145
When you want to determine the roles of both design and implementation in project, program, or policy outcomes
[2x2 matrix: strength of design (high/low) against strength of implementation (high/low), defining four cases]
When Is It Time to Make Use of Evaluation? (cont.) 146
When:
• Resource and budget allocations are being made across projects, programs, or policies
• A decision is being made whether or not to expand a pilot
• There is a long period with no evidence of improvement in the problem situation
• Similar projects, programs, or policies are reporting divergent outcomes
• There are conflicting political pressures on decision-making in ministries or parliament
• There is public outcry over a governance issue
• Issues need to be identified around an emerging problem, e.g., children dropping out of school
Six Types of Evaluation 147
1. Performance logic chain assessment
2. Pre-implementation assessment
3. Process implementation evaluation
4. Case study
5. Impact evaluation
6. Meta-evaluation
1) Performance Logic– Chain Assessment 148 • Asks questions about the basic causal logic of the project, program, or policy (cause and effect assumptions) • Asks about the rationale for the sequence of activities of the project, program, or policy • Asks about the plausibility of achieving intended effects based on research and prior experience
2) Pre-Implementation Assessment 149
A preliminary evaluation of a project, program, or policy's implementation strategy to assure that three standards are met:
• Objectives are well defined
• Implementation plans are plausible
• Intended uses of resources are well defined and appropriate to the achievement of objectives
3) Process Implementation Evaluation 150
• Provides detailed information on whether the program is operating as it ought (are we doing things right?)
• Provides detailed information on program functioning to those interested in replicating or scaling up a pilot
• Provides continuous feedback loops to assist managers
4) Case Study A case study is a method for learning about a complex situation and is based on a comprehensive understanding of that situation. 151
Six Basic Types of Case Study 152
• Illustrative
• Exploratory
• Critical instance
• Program implementation
• Program effects
• Cumulative
5) Impact Evaluation • Provides information on how and why intended (and un-intended) project, program, or policy outcomes and impacts were achieved (or not) 153
6) Meta-Evaluation 154 • Pulls together known studies on a topic to gain greater confidence in findings and generalizability • Addresses where there are credible supportable evaluation findings on a topic • Compares different studies with disparate findings about a topic against a common set of criteria
In Summary: Evaluation Means Information on 155
• Strategy — Whether we are doing the right things
– Rationale/justification
– Clear theory of change
• Operation — Whether we are doing things right
– Effectiveness in achieving expected outcomes
– Efficiency in optimizing resources
– Client satisfaction
• Learning — Whether there are better ways of doing it
– Alternatives
– Best practices
– Lessons learned
Reporting Your Findings Selecting Key Indicators to Monitor Outcomes Conducting a Readiness Assessment 1 2 Agreeing on Outcomes to Monitor and Evaluate 156 Planning for Improvement — Selecting Results Targets 3 4 Baseline Data on Indicators— Where Are We Today? 5 The Role of Evaluations 6 Monitoring for Results 7 Using Your Findings 8 Reporting Your Findings 9 10 Sustaining the M&E System Within Your Organization
“If You Do Not Measure Results, You Cannot Tell Success from Failure” 157
Analyzing and Reporting Data:
• Gives information on the status of projects, programs, and policies
• Provides clues to problems
• Creates opportunities to consider improvements in implementation strategies for projects, programs, or policies
• Provides important information over time on trends and directions
• Helps confirm or challenge the theory of change
Analyzing Your Results Data 158
• Examine changes over time
• Compare present to past data to look for trends and other changes
– The more data points you have, the more certain you are of your trends
[Charts: "Improving access to rural markets" — Access plotted over Time, contrasting a two-point series (trend uncertain) with a many-point series (trend clear)]
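The "more data points, more certainty" idea can be illustrated with a least-squares slope over the indicator's time series. The access figures are hypothetical:

```python
def trend_slope(values):
    """Least-squares slope over equally spaced time points.

    A positive slope suggests an improving trend; more data points give
    more confidence that an apparent trend is real.
    """
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den


# Hypothetical yearly figures for access to rural markets (%):
access = [52, 55, 59, 64, 70]
slope = trend_slope(access)
print("improving" if slope > 0 else "not improving")  # improving
```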
Reporting Your Results Data 159
• Report results data in comparison to earlier data and to your baseline (Remember: comparisons over time are critical!)
• You can report your data by:
– Raw numbers
– Percentages
– Statistical tests
– Expenditure/income
– Organizational units
– Geographical locations
– Demographics
– Client satisfaction scales (high, medium, low)
Present Your Data in Clear and Understandable Form 160
• Present only the most important data
• Avoid “data dumps”
• Use an appendix or a separate report to convey detailed data
• Use visual presentations (charts, graphs, maps) to highlight key points
When Reporting Your Findings, Use Explanatory Notes 161
Suggestions:
• Combine qualitative information with quantitative
• When comparisons show unexpected trends or values, provide explanations, if known
• Report internal explanatory notes, e.g., loss of program personnel or other resources
• Report external explanatory notes, e.g., an unexpected natural disaster or political changes
• Summarize important findings
The Urban Institute, 1999
What Happens If the Results News Is Bad? 162 • A good results measurement system is intended to surface problems (early warning system) • Reports on performance should include explanations about poor outcomes and identify steps taken or planned to correct problems • Protect the messenger Adapted from The Urban Institute, 1999
Outcomes Reporting Format: Actual Outcomes Versus Targets 163

Outcome Indicator | Baseline (%) | Current (%) | Target (%) | Difference
Rates of hepatitis (N=6,000) | 30 | 25 | 20 | -5
Percentage of children with improved overall health status (N=9,000) | 20 | 20 | 24 | -4
Percentage of children who show 4 out of 5 positive scores on physical exams (N=3,500) | 50 | 65 | 65 | 0
Percentage of children with improved nutritional status (N=14,000) | 80 | 85 | 83 | +2

Source: Made-up data, 2003
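A small helper makes the Difference column's convention explicit. The sign flip for lower-is-better indicators is an interpretation of the example data (a hepatitis rate above its target appears as a negative difference), not something stated in the workshop:

```python
def difference(current, target, higher_is_better=True):
    """Signed gap versus target: positive = better than target,
    negative = short of the target.

    For indicators where lower is better (e.g., disease rates), the sign
    of (current - target) is flipped so that negative still means
    'short of the target'.
    """
    gap = current - target
    return gap if higher_is_better else -gap


# Reproducing the Difference column from the made-up data above:
print(difference(25, 20, higher_is_better=False))  # hepatitis rate: -5
print(difference(20, 24))                          # overall health: -4
print(difference(65, 65))                          # physical exams:  0
print(difference(85, 83))                          # nutrition:       2
```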
In Summary: Analyzing and Reporting Data 164
• Gives information on the status of projects, programs, and policies
• Provides clues to problems
• Creates opportunities to consider improvements in implementation strategies for projects, programs, or policies
• Provides important information over time on trends and directions
Step 9 Using Your Findings 165
Using Your Findings Selecting Key Indicators to Monitor Outcomes Conducting a Readiness Assessment 1 2 Agreeing on Outcomes to Monitor and Evaluate 166 Planning for Improvement — Selecting Results Targets 3 4 Baseline Data on Indicators— Where Are We Today? 5 Using Your Findings The Role of Evaluations 6 Monitoring for Results 7 8 Reporting Your Findings 9 10 Sustaining the M&E System Within Your Organization
Performance Measurement Is Resisted Because: 167
1. Wrong measures: the wrong questions are asked, so the wrong information is collected and thus not used
2. Wrong purpose: performance measurement is set up for compliance instead of improvement
3. Wrong use: too many systems are set up around “blame and shame” when targets are not met, instead of learning and focusing on how to improve
Using Your Findings: 10 Uses of Results Findings 168
1. Responds to elected officials' and the public's demands for accountability
2. Helps formulate and justify budget requests
3. Helps in making operational resource allocation decisions
4. Triggers in-depth examinations of what performance problems exist and what corrections are needed
Using Your Findings: 10 Uses of Results Findings (cont.) 169
5. Helps motivate personnel to continue making program improvements
6. Monitors the performance of contractors and grantees
7. Provides data for special, in-depth program evaluations
8. Helps provide services more efficiently
9. Supports strategic and other long-term planning efforts (by providing baseline information and later tracking progress)
10. Communicates better with the public to build public trust
Nine Strategies for Sharing Information 170
• Empower the media
• Enact “Freedom of Information” legislation
• Institute e-government
• Publish annual budget reports
• Add information on internal and external internet sites
• Engage civil society and citizen groups
• Strengthen parliamentary oversight
• Strengthen the Office of the Auditor General
• Share and compare results findings with development partners
Credible Information Strengthens Public Accountability 171
“In the National Health Service it is not always clear that the board asks the right questions,” because “inadequate information reduces the clarity behind decision-making that is necessary to achieve effective accountability.”
Nicholas Timmins, Financial Times (October 14, 2003)
Step 10 Sustaining the M&E System Within Your Organization 172
Sustaining the M&E System Within Your Organization Selecting Key Indicators to Monitor Outcomes Conducting a Readiness Assessment 1 2 Agreeing on Outcomes to Monitor and Evaluate 173 Planning for Improvement — Selecting Results Targets 3 4 Baseline Data on Indicators— Where Are We Today? 5 The Role of Evaluations 6 Monitoring for Results 7 Using Your Findings 8 Reporting Your Findings 9 10 Sustaining the M&E System Within Your Organization
Six Critical Components of Sustaining Monitoring & Evaluation Systems 174
1. Demand
2. Clear roles and responsibilities
3. Trustworthy and credible information
4. Accountability
5. Capacity
6. Incentives
Critical Component One: Demand 175
• Structured requirements for reporting on results, e.g., European Union accession or national legislation
• The results from the M&E system are sought and available for the government, civil society, and donors
• Officials want evidence on their own performance
• Organizations seek better accountability
Critical Component Two: Clear Roles and Responsibilities 176
• Establish clear, formal organizational lines of authority for collecting, analyzing, and reporting performance information
• Build a system that links the central planning and finance ministries to line/sector ministries (internal coordination)
• Issue clear guidance on who is responsible for which components of the M&E system and its procedures
Critical Component Two: Clear Roles and Responsibilities (cont.) 177
• Build a system that goes beyond national government to other levels of government for data collection and analysis
• Build a system that has demand for results information at every level where information is collected and analyzed, i.e., no level in the system is only a “pass-through” for the information
Critical Component Three: Trustworthy and Credible Information 178 • The system has to be able to produce results information that brings both good and bad news • The producers of results information need protection from political reprisals • The information produced by the M&E system should be transparent and subject to independent verification • The data collection and analysis procedures should be subject to review by national audit office and/or Parliament
The Blame Game “Stop whimpering and spin the wheel of blame, Lipton!” Cartoon by Scott Arthur Masear, Harvard Business Review, November 2003. 179
Critical Component Four: Accountability 180
• Civil society organizations play a role by encouraging transparency of the information
• The media, private sector, and the Parliament all have roles in ensuring that the information is timely, accurate, and accessible
• Failure is not rewarded
• Problems are acknowledged and addressed
Critical Component Five: Capacity 181 • Sound technical skills in data collection and analysis • Managerial skills in strategic goal setting and organizational development • Existing data collection and retrieval systems • Ongoing availability of financial resources • Institutional experience
Critical Component Six: Incentives 182
Incentives need to be introduced to encourage use of performance information:
• Success is acknowledged and rewarded
• Problems are addressed
• Messengers are not punished
• Organizational learning is valued
• Budget savings are shared
• Others?
Last Reminders! 183
• The demand for capacity building never ends! The only way an organization can coast is downhill…
• Keep your champions on your side and help them!
• Establish the understanding with the Ministry of Finance and the Parliament that an M&E system needs sustained resources
• Look for every opportunity to link results information to budget and resource allocation decisions
• Begin with pilot efforts to demonstrate effective results-based monitoring: begin with an enclave strategy (e.g., islands of innovation) as opposed to a whole-of-government approach
• Monitor both implementation progress and results achievements
• Complement performance monitoring with evaluations to ensure better understanding of public sector results
Concluding Comments 184
• The demand for capacity building never ends! The only way an organization can coast is downhill…
• Keep your champions on your side and help them!
• Establish the understanding with the Ministry of Finance and the Parliament that an M&E system needs sustained resources
• Look for every opportunity to link results information to budget and resource allocation decisions
(continued on next slide)
Concluding Comments (cont.) 185
• Begin with pilot efforts to demonstrate effective results-based monitoring and evaluation
• Begin with an enclave strategy (e.g., islands of innovation) as opposed to a whole-of-government approach
• Monitor both implementation progress and results achievements
• Complement performance monitoring with evaluations to ensure better understanding of public sector results
A Final Note…. “We are what we repeatedly do. Excellence, then, is not an act, but a habit. ” -- Aristotle Questions? 186