One of the roles that an internal evaluation, or measurement, evaluation, and learning (MEL), team plays is to gather data on ongoing programs. The team provides this data to program managers so that they can ensure the programs are working as intended and make corrections as they go.
Internal program evaluation is usually formative rather than summative: formative evaluation gathers data on a program before it starts or while it is running in order to make adjustments, while summative evaluation usually happens after a program has ended and asks whether the program had its intended impact.
Formative evaluation is best thought of as a process rather than a product. In the early, planning phase of a program, formative evaluation can include elements like a needs assessment: a comprehensive, systematic look at the community’s needs to understand whether the program should be implemented, or how it should be implemented for the best impact. Formative evaluation will help you identify factors in a complex implementation that might be having unintended or unexpected effects. It’s also an opportunity to collect and respond to information quickly enough to improve outcomes for your current cohort of participants.
Once the program is running, formative evaluation asks two general questions. First: are we implementing this program as it was intended? Another way to say this is “do we have good fidelity to the model?” To answer this question, we look at our planning documents and any literature we consulted when designing the program to understand what to measure. Then we look at how the program is actually being implemented: whether we are serving as many people as we planned, whether we are reaching the population we planned to serve, and whether we are delivering exactly the services we planned to deliver.
If you are thinking about this in terms of your logic model, you might be looking at the program activities and outputs here. Your intention is not just to understand whether your program is meeting its goals in terms of activities and outputs, but why or why not. This is an opportunity to document the changes your organization might have made to the model to serve your particular community or population or to integrate with other programs and services.
It is also a time to look at and document how other factors in a complex environment might be affecting the implementation of this program. The methods here will include many qualitative approaches, such as observation and focus groups with participants, community members, and partner agencies.
The second question is whether we are getting the short-term outcomes that we expected. Are the students learning the things we wanted them to learn? Are participants making progress in the areas we expected? Here, we are testing whether the program is having its intended effect. You might use short participant surveys or qualitative methods like focus groups to collect this information and to understand why your program might not be reaching its goals.
The key to effective formative evaluation is to provide information that program managers can use to adjust their program while it is running. It’s responsive to the staff’s questions and adapts as the program changes. Ultimately, you may end up selecting some of the measures from your formative evaluation as ongoing “dashboard” measures for the staff to monitor on a regular basis.
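To make the idea of dashboard measures concrete, here is a minimal sketch of how planned-versus-actual fidelity measures could be turned into numbers a program manager monitors regularly. All names, data, and the 80% review threshold are illustrative assumptions, not part of any standard MEL toolkit.

```python
# Hypothetical sketch: computing simple "dashboard" measures by comparing
# planned program outputs against actual outputs and flagging large gaps.

def dashboard_measures(planned, actual):
    """Return, for each planned output, the target, the actual value,
    the percent of target achieved, and a flag when it falls below 80%."""
    measures = {}
    for key, target in planned.items():
        delivered = actual.get(key, 0)
        ratio = delivered / target if target else 0.0
        measures[key] = {
            "target": target,
            "actual": delivered,
            "pct_of_target": round(100 * ratio, 1),
            "flag": ratio < 0.8,  # illustrative review threshold
        }
    return measures

# Example data (invented for illustration)
planned = {"participants_enrolled": 120, "sessions_delivered": 40}
actual = {"participants_enrolled": 96, "sessions_delivered": 28}

for name, m in dashboard_measures(planned, actual).items():
    status = "REVIEW" if m["flag"] else "on track"
    print(f"{name}: {m['actual']}/{m['target']} ({m['pct_of_target']}%) {status}")
```

In practice the targets would come from the logic model's outputs and the actuals from program records; the point is only that a handful of such ratios, reviewed regularly, gives staff an early signal that implementation is drifting from the plan.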