One of the reasons to do so is that logic models provide the framework for evaluation. There are several different types of evaluation, and even some disagreement about how many types exist. Here, I will summarize three major types of evaluation: Process Evaluation, Outcome Evaluation, and Impact Evaluation, and describe how each relates to your program’s logic model. This post is based largely on Ellen Bass’s presentation on logic models.
A process evaluation looks at whether the program was implemented according to plan. A lot can change between the design and the implementation of a project, and many small decisions made along the way can add up to a program that looks dramatically different in practice than it does on paper.
The process evaluation addresses the following questions:
- Did we do what we said we would do?
- Did we serve the people we were seeking to serve?
- Did we serve as many people as we intended to serve?
- Did we do a good job?
- How can we improve what we do?
If there is a significant difference between the program as implemented and the program that was intended, stop there. You can't move on to looking for outcomes until you have documented the program that is actually running.
Once you know what program is happening, you can consider an outcome evaluation.
An outcome evaluation gathers information to respond to the following question: How did our participants change or benefit from their involvement in our program?
Here you are testing the short-term outcomes section of your logic model.
- Did the participants achieve the changes in knowledge, ability, or attitude that we intended?
- Does the model as it is being implemented lead to the change that we want?
If the answer to these questions is no, you should stop and examine your assumptions. Why did you think that this intervention would lead to this change? Were you wrong? Why? Are there other approaches that would be more effective?
Finally, an impact evaluation looks at whether the program is effective in solving the problem that you set out to address. It answers the questions:
- Did our program work?
- Are our participants better off as a result of participating in our program?
- Are we solving the problem that we seek to address?
An impact analysis is designed to measure change caused by the program. So it will compare participants to non-participants, or to themselves before starting the program, in order to estimate what their condition would have been had they never engaged with the program. I discuss various methods for impact analysis, such as interrupted time series and randomized controlled trials, elsewhere.
Impact evaluations are generally the most expensive and the most challenging to undertake. If there is ample previous evidence that the short-term outcomes your program creates lead to the long-term outcomes you seek, you might not need one; I've written here about how. I've written more about using other people's outcome evaluation findings here.