
Methods of Impact Evaluation

The formal literature on impact evaluation methods and practices is large, with a few useful overviews, including the World Bank's Handbook on Impact Evaluation: Quantitative Methods and Practices.

Planning an impact evaluation typically involves: framing the boundaries of the impact evaluation; defining the key evaluation questions; defining impacts; defining success to make evaluative judgements; developing a programme theory (theory of change); deciding the evaluation methodology; and choosing strategies and designs for determining causal attribution.

Many development agencies use the definition of impacts provided by the Organisation for Economic Co-operation and Development Development Assistance Committee (OECD-DAC): "positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended" (OECD-DAC 2010). See also Impact Evaluation in UN Agency Evaluation Systems: Guidance on Selection, Planning and Management (New York: United Nations Evaluation Group, UNEG). At the very least, it should be clear what trade-offs would be appropriate in balancing multiple impacts or distributional effects. Acknowledging multiple causes and multiple consequences (where appropriate) is important in impact evaluation, and designs and methods need to be able to address these.

The evaluation purpose refers to the rationale for conducting an impact evaluation. In any impact evaluation, it is important to define first what is meant by success (quality, value). A theory of change should be used in some form in every impact evaluation. The design options (whether experimental, quasi-experimental or non-experimental) all need significant investment in preparation and early data collection, and cannot be done if an impact evaluation is limited to a short exercise conducted towards the end of intervention implementation. Start the data collection planning by reviewing to what extent existing data can be used. You need to identify a comparison group that does not receive the programme and is unlikely to accidentally avail of its benefits; one option is to compare programme participants with themselves before participating in the programme.

The Methods Lab produced several guidance documents, including Realist impact evaluation: an introduction. This guide explains when a realist impact evaluation may be most appropriate or feasible for evaluating a particular programme or policy, and outlines how to design and conduct an impact evaluation based on a realist approach.
Without systematic evaluation, libraries, for example, do not have adequate information to determine the impact of existing training on students' information literacy skills and learning needs; this lack of evaluation becomes problematic when libraries must qualify and quantify their impact on educational goals and outcomes.

Evaluation, by definition, answers evaluative questions, that is, questions about quality and value. Evaluative criteria specify the values that will be used in an evaluation and, as such, help to set boundaries. The particular analytic framework and the choice of specific data analysis methods will depend on the purpose of the impact evaluation and the type of KEQs that are intrinsically linked to this. Hence, there will be a strong emphasis in an impact evaluation on synthesizing similarities and differences within and across cases and contexts.

Qualitative methods help you understand shifts in perceptions, beliefs and behaviours, and such data are most often collected through interviews, observations and focus groups. Both qualitative and quantitative methods provide important information for evaluation, and both can improve community engagement. Methods can be further divided according to their primary function: either data collection or data analysis. One overview document, Impact Evaluation - Mixed Methods, covers the following: overview; methodological triangulation; mixed methods; quantitative impact evaluation; qualitative impact evaluation; and randomised control trials (RCTs), including the process of selecting a sample group, methods of randomised selection of participants, advantages, disadvantages and a conclusion.

There are three main techniques and models for establishing causal attribution in impact assessments. The first is estimating the counterfactual, that is, what would have happened in the absence of the intervention, compared with the observed situation. Goertz and Mahoney (2012: 42), in A Tale of Two Cultures: Qualitative and Quantitative Research in the Social Sciences, argue there are two equally legitimate ways of looking at causal attribution: analysing the causes of effects and analysing the effects of causes. Is it reasonable to expect different methods to be used to identify the causes of an effect as compared to the effects of a cause? This framing is consistent with a complexity perspective, in that a given event can have multiple causes and multiple consequences, and analysis could focus on either side of this picture.

In environmental impact assessment, the purpose of impact evaluation is to assign relative significance to predicted impacts associated with the project, and to determine the order in which impacts are to be avoided, mitigated or compensated. More broadly, a range of participatory approaches engage stakeholders (especially intended beneficiaries) in conducting the evaluation and/or making decisions about the evaluation.
In an impact evaluation, the focus will be on explaining how the programme contributed to observed outcomes, and this will necessarily involve iterative description, interpretation and explanation. Evaluative reasoning is the process of synthesizing the answers to lower- and mid-level questions into defensible judgements that directly answer the high-level questions. An impact evaluation goes beyond describing or measuring impacts that have occurred to seeking to understand the role of the intervention in producing them (causal attribution), and can encompass a broad range of methods for causal attribution. An impact evaluation can also be both a summative evaluation and a participatory evaluation. Institutional histories are a particular type of case study used to create a narrative of how institutional arrangements have evolved over time and have created and contributed to more effective ways to achieve project or program goals.

Impact evaluations should be focused around answering a small number of high-level key evaluation questions (KEQs) that will be answered through a combination of evidence, for example:

- Did the intervention produce the intended results in the short, medium and long term? If so, for whom, to what extent and in what circumstances?
- How did these occur?
- How valuable were the results to service providers, clients, the community and/or organizations involved?
- To what extent did the intervention represent the best possible use of available resources to achieve results of the greatest possible value to participants and the community?

Each of these KEQs should be further unpacked by asking more detailed questions about performance on specific dimensions of merit, and sometimes even lower-level questions. These questions should be clearly linked to the evaluative criteria.

Managing an impact evaluation involves: describing what needs to be evaluated and developing the evaluation brief; deciding who will conduct the evaluation and engaging the evaluator(s); deciding and managing the process for developing the evaluation methodology; managing development of the evaluation work plan; managing implementation of the work plan, including development of reports; and disseminating the report(s) and supporting use.

A theory of change establishes the relationships between the outputs, intermediate outcomes and longer-term impact in a transparent way at the outset of the programme (one minimal way of recording such a chain is sketched after the list below). The evaluation may confirm the theory of change or it may suggest refinements based on the analysis of evidence. It can identify:

- specific evaluation questions, especially in relation to those elements of the theory of change for which there is no substantive evidence yet
- relevant variables that should be included in data collection
- intermediate outcomes that can be used as markers of success in situations where the impacts of interest will not occur during the time frame of the evaluation
- aspects of implementation that should be examined
- potentially relevant contextual factors that should be addressed in data collection and in analysis, to look for patterns
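To make this concrete, here is a minimal sketch in Python of one way a theory of change can be recorded as an explicit chain of causal links; the programme, links and indicators are entirely hypothetical:

```python
# A minimal sketch (hypothetical programme content) of writing a theory of
# change down as an explicit chain of causal links, each paired with an
# indicator that can act as a marker of success during data collection.

theory_of_change = [
    # (cause, expected result, indicator used to track the link)
    ("training delivered", "farmers adopt new practices", "adoption rate"),
    ("farmers adopt new practices", "yields increase", "yield per hectare"),
    ("yields increase", "household income rises", "mean household income"),
]

for cause, result, indicator in theory_of_change:
    print(f"{cause} -> {result} (tracked via: {indicator})")
```

Writing the chain down this explicitly makes it straightforward to check, link by link, where evidence already exists and where data collection still needs to focus.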
One participatory approach enables farmers to analyse their own situation and develop a common perspective on natural resource management and agriculture at village level. Data collection should use the most efficient and effective methods of collecting the information that the project needs. There are five key principles relating to internal validity (study design) and external validity (generalizability) which rigorous impact evaluations should address: confounding factors, selection bias, spillover effects, contamination, and impact heterogeneity.

*A benchmark or index is a set of related indicators that provides for meaningful, accurate and systematic comparisons regarding performance; a standard or rubric is a set of related benchmarks/indices or indicators that provides socially meaningful information regarding performance.

Outcome evaluation covers: outcomes (definitions, types and levels, and criteria for selecting outcomes); incorporating outcome evaluation in programs; the theory of change and outcome/logic model (components and language, and their development and utilization for evaluation); and the evaluation plan. The evaluation report should be structured in a manner that reflects the purpose and KEQs of the evaluation. After reviewing currently available information, it is helpful to create an evaluation matrix showing which data collection and analysis methods will be used to answer each KEQ, and then to identify and prioritize data gaps that need to be addressed by collecting new data. An impact evaluation is particularly worthwhile if the intervention is to be scaled up or replicated in a different setting. This kind of framework can be used with any research design that aims to infer causality; it can use a range of qualitative and quantitative data, and provide support for triangulating the data arising from a mixed methods impact evaluation.

One report analyzes the methodologies used in 77 OED impact evaluation reports listed in the World Bank "ImageBank" database and one additional OED sector study that used a counterfactual; Annex 1 of that report lists the 78 evaluations with their sectoral classification and completion dates. Many impact evaluations use the standard OECD-DAC criteria (OECD-DAC, accessed 2015). These criteria reflect the core principles for evaluating development assistance (OECD-DAC 1991, Principles for Evaluation of Development Assistance) and have been adopted by most development agencies as standards of good practice in evaluation.

Participatory approaches can be used in any impact evaluation design. See also the InterAction Impact Evaluation Guidance Notes and Webinar Series (Washington DC: InterAction) and Rogers P (2012), retrieved from http://www.betterevaluation.org/themes/impact_evaluation.

What constitutes success and how the data will be analysed and synthesized to answer the specific key evaluation questions (KEQs) must be considered up front, as data collection should be geared towards the mix of evidence needed to make appropriate judgements about the programme or policy. A randomized controlled trial (RCT) is an experimental design in which the individuals being studied (e.g., training participants) are randomly assigned to either an intervention condition or a control condition; RCTs use a combination of the options of random sampling, a control group, and standardised indicators and measures.
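Purely to illustrate the mechanics, here is a minimal Python sketch of random assignment followed by a difference-in-means impact estimate; the participants, scores and the simulated +5-point programme effect are all invented:

```python
import random
import statistics

def randomly_assign(participant_ids, seed=42):
    """Randomly split participants into treatment and control groups."""
    ids = list(participant_ids)
    random.Random(seed).shuffle(ids)
    midpoint = len(ids) // 2
    return set(ids[:midpoint]), set(ids[midpoint:])

# Hypothetical baseline outcome scores for 200 participants.
rng = random.Random(7)
baseline = {f"p{i}": rng.gauss(50, 10) for i in range(200)}

treatment, control = randomly_assign(baseline)

# Simulate a true programme effect of +5 points for the treatment group.
observed = {pid: score + (5 if pid in treatment else 0)
            for pid, score in baseline.items()}

# Because assignment was random, the difference in group means is an
# unbiased estimate of the programme's average effect.
estimate = (statistics.mean(observed[p] for p in treatment)
            - statistics.mean(observed[p] for p in control))
print(f"Estimated average effect: {estimate:.2f}")  # close to 5
```

Random assignment is what makes the control group a credible stand-in for the counterfactual: on average, the two groups differ only in exposure to the intervention.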
Impact evaluation methods for youth employment interventions are covered in the ILO's Guide on Measuring Decent Jobs for Youth (Monitoring, evaluation and learning in labour market programmes), Note 5, Geneva, 2018, ISBN 978-92-2-630796-4 (web pdf).

For answering causal KEQs, there are essentially three broad approaches to causal attribution analysis: (1) counterfactual approaches; (2) consistency of evidence with causal relationship; and (3) ruling out alternatives, which involves investigating possible alternative explanations (see above). Using a combination of these strategies can usually help to increase the strength of the conclusions that are drawn. A simple before-and-after comparison, by contrast, assumes that the program was the only factor influencing changes in the outcome over time. The analytic framework should also set out how data analysis will address assumptions made in the programme theory of change about how the programme was thought to produce the intended results.

The underlying rationale for choosing a participatory approach to impact evaluation can be either pragmatic or ethical, or a combination of the two. The program stage and scope will determine the level of effort and the methods to be used. When an impact evaluation is conducted belatedly, the findings come too late to inform decisions.

The UNICEF Impact Evaluation Methodological Briefs and Videos include an overview brief on strategies for causal attribution; overview briefs (1, 6, 10) are available in English, French and Spanish and supported by whiteboard animation videos in three languages, and Brief 7 (RCTs) also includes a video.

Positive deviance is a strengths-based approach to learning and improvement that involves intended evaluation users in identifying outliers (those with exceptionally good outcomes) and understanding how they have achieved these; it supports ongoing learning and adaptation by identifying and investigating outlier examples of good practice and ways of increasing their frequency. There are also impact evaluation approaches without a control group that use narrative causal statements elicited directly from intended project beneficiaries.

In a true mixed methods evaluation, analysis includes using appropriate numerical and textual analysis methods and triangulating multiple data sources and perspectives in order to maximize the credibility of the evaluation findings. Triangulation also increases the credibility of evaluation findings when information from different data sources converges (i.e., the sources are consistent about the direction of the findings), and it can deepen the understanding of the programme/policy, its effects and context (Bamberger 2012). For example, focus group discussions may be conducted with clients alongside brief structured interviews.

Impact evaluation serves both objectives of evaluation: lesson-learning and accountability. A properly designed impact evaluation can answer the question of whether the program is working or not, and hence assist in decisions about scaling up. The evaluation methodology specifies designs for causal attribution, including whether and how comparison groups will be constructed, and methods for data collection and analysis. One useful overview reviews quantitative methods and models of impact evaluation with a direct link to the rules of program operations, as well as a detailed discussion of practical implementation aspects. Quantitative impact evaluation methods are commonly grouped into experimental methods (randomised controlled trials), quasi-experimental methods (such as regression discontinuity designs and instrumental variables), and other methods.
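One common quasi-experimental, counterfactual technique is difference-in-differences, which compares the change over time in the treatment group with the change in a comparison group. A minimal sketch, with invented survey means purely to show the arithmetic:

```python
# A minimal sketch of a difference-in-differences (DiD) estimate. The
# comparison group's change over time stands in for the counterfactual
# trend. All numbers below are purely illustrative.

def difference_in_differences(t_pre, t_post, c_pre, c_post):
    """Impact = change in the treatment group minus change in the
    comparison group."""
    return (t_post - t_pre) - (c_post - c_pre)

# Hypothetical mean household incomes before/after an intervention.
impact = difference_in_differences(t_pre=120.0, t_post=155.0,
                                   c_pre=118.0, c_post=130.0)
print(f"Estimated impact: {impact:.1f}")  # (155-120) - (130-118) = 23.0
```

The key assumption, which should be stated and tested in any real evaluation, is that without the intervention the treatment group would have followed the same trend as the comparison group.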
Definitions of impact vary: for example, some define impact narrowly, only including long-term changes in the lives of targeted beneficiaries. It should also be noted that some impacts may be emergent and thus cannot be predicted.

Section VI introduces two examples of impact evaluation and explains in greater detail how impact evaluation can be realistically implemented, and Section VII concludes. The findings of an impact evaluation can be used, for example, to improve implementation of a programme for the next intake of participants by identifying critical elements to monitor and tightly manage. The debate in the evaluation community on preferred methods is well documented (see, for example, Bamberger M, 2012).

Good data management includes developing effective processes for: consistently collecting and recording data; storing data securely; cleaning data; transferring data (e.g., between different types of software used for analysis); effectively presenting data; and making data accessible for verification and use by others.

For answering descriptive KEQs, a range of analysis options is available, which can largely be grouped into two key categories: options for quantitative data (numbers) and options for qualitative data (e.g., text).
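As a small illustration of the quantitative category, the sketch below disaggregates a hypothetical outcome indicator by subgroup, one of the most basic descriptive analysis options (all records are invented):

```python
import statistics
from collections import defaultdict

# A minimal sketch of one descriptive analysis option for quantitative
# data: disaggregating an outcome indicator by subgroup. The records are
# hypothetical survey rows of (subgroup, outcome score).

records = [
    ("female", 62), ("male", 55), ("female", 70),
    ("male", 58), ("female", 64), ("male", 61),
]

by_group = defaultdict(list)
for subgroup, score in records:
    by_group[subgroup].append(score)

for subgroup, scores in sorted(by_group.items()):
    print(f"{subgroup}: n={len(scores)}, "
          f"mean={statistics.mean(scores):.1f}, "
          f"sd={statistics.stdev(scores):.1f}")
```

Disaggregation of this kind is what allows the "for whom, and in what circumstances" parts of the KEQs to be answered rather than assumed.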
During implementation, the priority at this stage is to understand and improve the quality of implementation. Quality refers to how good something is; value refers to how good it is in terms of the specific situation, in particular taking into account the resources used to produce it and the needs it was supposed to address. Evaluative reasoning is a requirement of all evaluations, irrespective of the methods or evaluation approach used. Under each of the generic criteria, more specific criteria such as benchmarks and/or standards* appropriate to the type and context of the intervention should be defined and agreed with key stakeholders. The DAC Criteria for Evaluating Development Assistance are a common starting point, but on their own they are insufficiently defined to be applied systematically and in a transparent manner to make evaluative judgements about the intervention.

With financial support from the Rockefeller Foundation, InterAction developed a four-part series of guidance notes and webinars on impact evaluation. One note outlines the basic principles and ideas of impact evaluation, including when, why, how and by whom it should be done; another reviews the key concepts and tools available to do sound impact evaluation; another highlights three themes that are crucial for effective utilization of evaluation results. Notes in this series include Introduction to Impact Evaluation and Linking Monitoring and Evaluation to Impact Evaluation. See also Some Reflections on Current Debates in Impact Evaluation, International Initiative for Impact Evaluation Working Paper No. 1, which provides a summary of debates about measuring and attributing impacts. Addressing gender in impact evaluation is a resource for practitioners and evaluators who want to include a genuine focus on gender impact when commissioning or conducting evaluations.

Stakeholder participation needs deliberate design. Three questions need to be answered in each situation, the first being: what purpose will stakeholder participation serve in this impact evaluation? Is it to ensure a relevant evaluation focus? Is the purpose to ensure that the voices of those whose lives should have been improved by the programme or policy are central to the findings? Democratic evaluation covers various ways of doing evaluation that support democratic decision making, accountability and/or capacity. Part of planning is also to define ethical and quality evaluation standards.

Rapid evaluation models share one common feature: expedited implementation timeframes, which generally range from 10 days to 6 months. The Success Case Method (SCM) involves identifying the most and least successful cases in a program and examining them in detail; alternatively, a program that is viewed as a failure due to budget and schedule issues may have a far more positive impact than anticipated. Innovation history is a particular type of case study used to jointly develop an agreed narrative of how an innovation was developed, including key contributors and processes, to inform future innovation efforts.

In environmental assessment, the Battelle Environmental Evaluation System splits environmental impacts into main categories: ecology, pollution, aesthetics and human interest.
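The full Battelle system scores many weighted parameters using value functions; purely to illustrate the weighted-scoring arithmetic it relies on (all parameter names, weights and quality scores below are invented), a sketch might look like this:

```python
# A hedged sketch of the weighted-scoring logic behind systems like the
# Battelle Environmental Evaluation System: each parameter gets an
# importance weight and an environmental quality score in [0, 1], and the
# impact is the weighted difference between the "with project" and
# "without project" states. All values here are invented for illustration.

parameters = {
    # name: (importance_weight, quality_without, quality_with)
    "species diversity (ecology)":  (14, 0.8, 0.6),
    "water quality (pollution)":    (20, 0.7, 0.5),
    "scenic views (aesthetics)":    (10, 0.9, 0.7),
    "recreation (human interest)":  (12, 0.6, 0.8),
}

impact_units = sum(weight * (q_with - q_without)
                   for weight, q_without, q_with in parameters.values())
print(f"Net change in environmental impact units: {impact_units:+.1f}")
```

A negative total signals a net degradation across the weighted parameters, which is the kind of result used to rank impacts for avoidance, mitigation or compensation.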
Elsewhere, a programme's fundamental basis may revolve around adaptive learning, in which case the theory of change should focus on articulating how the various actors gather and use information together to make ongoing improvements and adaptations. This chapter presents guidance notes on approaches and methods for evaluators working in international development. The OECD-DAC describes the causal attribution task as "ascription of a causal link between observed (or expected to be observed) changes and a specific intervention"; some commentators regard this framing as unbalanced and one-sided. Not all of the evaluative criteria are used in every evaluation; it depends on the type of intervention and/or the type of evaluation (e.g., the criterion of impact is irrelevant to a process evaluation). Above all, impact evaluations must have credible answers to the key evaluation questions.

