How to design and use a process evaluation

3ie’s recently published working paper ‘Incorporating process evaluation into impact evaluation – What, why and how’, by Senior Research Fellows Vibecke Dixon and Michael Bamberger, lays out guidelines that give impact evaluators tools and ideas for adding relevant elements of process evaluation to experimental and quasi-experimental impact evaluation designs. This blog is the second of a two-part series on the design of process evaluations.

In part I, we discussed why a process evaluation is important and how it strengthens an impact evaluation. In part II, we describe how to design and use one.

It must be stressed that the design of a process evaluation requires flexibility: to adapt to time and budget constraints, to make creative use of whatever information is available, and to respond to changes in project implementation and in the environment in which the project operates. For example, a general election may bring in a new government with different priorities for the project, or changing migration patterns or political unrest may affect the project’s design or implementation, or the attitudes of the target population. The six-step approach we describe in this blog (see Figure 2) should therefore be treated as a design framework to be adapted to each project context.

Step 1: Define the impact evaluation scenario. There are three main impact evaluation scenarios: retrospective impact evaluations conducted towards the end of the project; pre-test–post-test comparison group designs, where baseline data are compared with end-of-project data; and formative/real-time evaluations that continue throughout all, or a significant part, of project implementation [p39, PE guidelines]. In the first two scenarios, a counterfactual design is used in which the project (treatment) group is compared with a matched comparison group. Where possible, a randomized controlled trial design is used; in the many cases where random assignment is not possible, the two groups are matched using statistical procedures such as propensity score matching.
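To make the matching idea concrete, below is a minimal Python sketch of propensity score matching with an ATT (average treatment effect on the treated) estimate. The DataFrame columns (`treated`, `outcome`, `x1`–`x3`) are hypothetical, and a real evaluation would also check covariate balance and common support before estimating effects.

```python
# Minimal propensity score matching sketch (illustrative only).
# Assumes a pandas DataFrame with a binary `treated` column, an
# `outcome` column, and baseline covariates x1-x3 (hypothetical names).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

COVARIATES = ["x1", "x2", "x3"]  # hypothetical baseline covariates

def match_and_estimate(df: pd.DataFrame) -> float:
    # 1. Estimate propensity scores: P(treated | covariates).
    model = LogisticRegression(max_iter=1000)
    model.fit(df[COVARIATES], df["treated"])
    df = df.assign(pscore=model.predict_proba(df[COVARIATES])[:, 1])

    treated = df[df["treated"] == 1]
    control = df[df["treated"] == 0]

    # 2. 1:1 nearest-neighbour matching on the score (with replacement).
    nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched_control = control.iloc[idx.ravel()]

    # 3. ATT estimate: mean outcome difference, treated vs. matched controls.
    return treated["outcome"].mean() - matched_control["outcome"].mean()
```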

Step 2: Define the dimensions of implementation to be evaluated. According to the 3ie process evaluation guidelines, most process evaluations focus on one or more of a set of implementation dimensions described in the guidelines.

Step 3: Selecting the process evaluation design [p36, section 3, PE guidelines]. There is no standard process evaluation design, and flexibility is required in adapting the wide range of design options to each program context; the guidelines discuss at least four design considerations. Commonly used data collection and analysis methods include:

  1. Case-based methods, including qualitative comparative analysis (QCA).
  2. Qualitative interviews, including key informant and in-depth interviews.
  3. Focus groups.
  4. Observation and participant observation.
  5. Social network analysis.
  6. Self-reporting (diaries, time-use, and calendars).
  7. Analysis of documents and artefacts.
  8. Participatory group consultations (including participatory rural appraisal).
  9. Bottleneck analysis (Bamberger and Segone 2011, pp. 45–50).

Often, a combination of several different methods is used.
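To illustrate the first method on the list: at its core, crisp-set QCA tabulates which combinations of binary conditions co-occur with the outcome across cases. The sketch below builds such a truth table; the conditions, cases, and scores are hypothetical, and a full QCA would also assess consistency and coverage and minimize the solution.

```python
# Minimal crisp-set QCA truth-table sketch (illustrative only).
# Conditions and cases are hypothetical; real QCA would also assess
# consistency/coverage and minimize the solution (e.g. Quine-McCluskey).
import pandas as pd

# Each row is a case; 1/0 = condition present/absent.
cases = pd.DataFrame({
    "trained_staff":   [1, 1, 0, 1, 0, 0],
    "community_buyin": [1, 0, 1, 1, 0, 1],
    "outcome":         [1, 0, 0, 1, 0, 0],
})

conditions = ["trained_staff", "community_buyin"]

# Truth table: for each configuration of conditions, the number of
# cases and the share where the outcome occurred ("consistency").
truth_table = (
    cases.groupby(conditions)["outcome"]
         .agg(n_cases="count", consistency="mean")
         .reset_index()
)
print(truth_table)
```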

Figure 1. The integrated process/impact evaluation design

Step 4: Designing a mixed-methods framework to strengthen the evaluation design [p65, Section 3.6.3, PE guidelines]. Mixed-method designs combine qualitative and quantitative tools to strengthen the representativeness and validity of both qualitative and quantitative components. Most qualitative methods have two limitations: they tend to collect information from a relatively small number of individuals or groups, and the samples are often not selected to be representative, since the goal is to collect rich, in-depth information from subjects accessible to the interviewers. Both factors make it difficult to generalize from the findings to the total project population.

Mixed methods strengthen the generalizability and validity of qualitative data in two ways. First, the selection of cases draws on the sampling frames used in the impact evaluation, so that the cases studied in the qualitative analysis are broadly representative. Second, mixed methods use two or more independent sources of data to compare estimates (triangulation), which increases validity. Mixed methods also strengthen quantitative designs by using in-depth methods (such as observation, unstructured interviews, or focus groups) to validate survey data.
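As a small illustration of triangulation, the sketch below compares the same indicator estimated from two independent sources and flags large discrepancies for qualitative follow-up. The column names, values, and the 10-point tolerance are all hypothetical.

```python
# Triangulation sketch: compare the same indicator from two independent
# sources (e.g. household survey vs. administrative records) per site.
# All column names, values, and the 10% tolerance are hypothetical.
import pandas as pd

estimates = pd.DataFrame({
    "site": ["A", "B", "C"],
    "survey_attendance": [0.82, 0.64, 0.91],
    "admin_attendance":  [0.79, 0.48, 0.90],
})

estimates["gap"] = (estimates["survey_attendance"]
                    - estimates["admin_attendance"]).abs()
# Flag sites where the two sources diverge by more than 10 points,
# as candidates for qualitative follow-up (observation, interviews).
estimates["follow_up"] = estimates["gap"] > 0.10
print(estimates)
```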

Step 5: Data analysis. The analysis addresses two dimensions: assessing how closely project implementation conformed to the project design protocol (sometimes called “implementation fidelity”); and assessing how adequately the design and implementation contributed to the achievement of broader development goals, such as the SDGs. The analysis can also be conducted at two levels: descriptive analysis; and conversion of findings into scales and other ordinal measures that can be incorporated into the impact evaluation design (p68, PE guidelines). While these scales are ordinal and do not permit statistics such as means or standard deviations, they provide a useful way to compare performance on different dimensions, or to compare the overall performance of different projects.
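For instance, descriptive ratings can be mapped onto an ordinal scale and then summarized with order-based statistics such as medians and frequency counts. The sketch below uses hypothetical dimensions, ratings, and a 1–4 scale.

```python
# Converting descriptive ratings to an ordinal scale (illustrative only).
# The dimensions, ratings, and 1-4 scale are all hypothetical.
import pandas as pd

scale = {"poor": 1, "adequate": 2, "good": 3, "excellent": 4}

ratings = pd.DataFrame({
    "dimension": ["fidelity", "fidelity", "reach", "reach", "reach"],
    "rating":    ["good", "adequate", "excellent", "good", "good"],
})
ratings["score"] = ratings["rating"].map(scale)

# Ordinal data: summarize with medians and frequencies, not means/SDs.
print(ratings.groupby("dimension")["score"].median())
print(ratings.groupby("dimension")["rating"].value_counts())
```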

Step 6: Integrating the process evaluation data into the impact evaluation. The process evaluation findings can be incorporated into the impact evaluation in at least three main ways.

As emphasized in part I of this series, the findings of the integrated analysis can be used in four main ways. They can help in understanding how the implementation process affects impacts. They can provide recommendations for improving the implementation of future projects. They can inform the design of future impact evaluations. Finally, they can identify case studies for exploring aspects of the analysis in more depth.
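One common way to operationalize the first of these uses is to include a process evaluation measure, such as a site-level implementation fidelity score, as a moderator in the impact regression. The sketch below uses statsmodels; the variable names are hypothetical, and this is only one of several possible integration strategies.

```python
# Sketch: moderating the estimated treatment effect by implementation
# fidelity. Assumes a DataFrame with columns `outcome`, `treated` (0/1),
# and `fidelity` (an ordinal PE score) -- all hypothetical names.
import pandas as pd
import statsmodels.formula.api as smf

def fit_moderated_model(df: pd.DataFrame):
    # The treated:fidelity interaction asks whether measured impacts
    # are larger where the project was implemented more faithfully.
    return smf.ols("outcome ~ treated * fidelity", data=df).fit()

# Usage: print(fit_moderated_model(df).summary())
```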