Chapter 8, “Assessing Program Impact,” explores impact assessment as a means of determining whether a program produces its intended effects. A program effect refers to a change in the social conditions or the target population that has been brought about by the program, and impact assessments are designed to determine whether such effects have occurred. An impact assessment is appropriate at a range of points in the course of a social program. Rigorous forms of impact evaluation, however, are associated with a range of managerial and technical challenges: the targets of social programs are often individuals and families that are difficult to reach and from whom outcome and follow-up data are hard to collect.
To determine the impact of a program, evaluators must compare the conditions of targets that have undergone an intervention with those of equivalent targets that have not. In a randomized field experiment, researchers assess the causal effect of an intervention by selecting participants at random and assigning them to two groups, an intervention group and a control group (Byrd-Bredbenner et al., 2017). Outcomes are then observed for both groups, and any differences are attributed to the influence of the intervention. Non-randomized quasi-experiments also divide participants into intervention and control groups; however, because the design lacks randomization, they do not offer the same level of reliability.
Randomized field experiments estimate program effects by comparing outcomes for targets that received the intervention with outcomes for equivalent units that did not. Equivalence implies identical composition, identical predispositions, and identical experiences (Rossi, Lipsey, & Freeman, 2004). The most effective way of achieving equivalence between the intervention and control groups is randomization. To create a true random assignment, a chance-based procedure is implemented, ranging from a random number table to the roll of dice. Random number sequences are the most popular among researchers because of their convenience.
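The chance-based assignment described above can be sketched in a few lines of Python; the participant list and seed here are hypothetical, and a computer-generated random shuffle stands in for a random number table or a roll of dice.

```python
import random

def randomize(participants, seed=None):
    """Split a participant list into intervention and control groups
    using a chance-based procedure (a random shuffle), so that the
    two groups are equivalent in expectation."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical sample of 100 participants identified by number.
intervention, control = randomize(range(100), seed=42)
```

Because every participant has the same chance of landing in either group, any systematic difference between the groups can arise only by chance, which is what justifies attributing outcome differences to the intervention.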
Units of analysis are the entities on which outcomes are measured during the assessment; their choice usually depends on the kind of intervention and the targets to which it is delivered. The logic of randomized experiments relies on comparing the mean change on an outcome variable from before to after the intervention across the intervention and control groups. Understanding the difference between the control and intervention groups is essential for drawing conclusions about the statistical significance of the results. Randomized experiments work best when the expected outcomes are well defined in advance, which increases the chances of success. Political or ethical considerations can limit randomization; nevertheless, it remains the most widespread and effective approach to program impact assessment.
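The mean-change logic above can be illustrated with a small sketch; all scores below are hypothetical, and a Welch-style t statistic stands in for a full significance test.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical pre/post outcome scores for the two groups.
intervention_pre  = [10, 12, 11, 13, 12, 11]
intervention_post = [15, 17, 15, 18, 16, 15]
control_pre       = [11, 12, 10, 13, 12, 11]
control_post      = [12, 13, 11, 13, 13, 12]

def changes(pre, post):
    """Pre-to-post change for each unit of analysis."""
    return [after - before for before, after in zip(pre, post)]

gain_i = changes(intervention_pre, intervention_post)
gain_c = changes(control_pre, control_post)

# Program effect estimate: difference in mean pre-to-post change.
effect = mean(gain_i) - mean(gain_c)

# Welch-style t statistic for judging statistical significance.
se = sqrt(stdev(gain_i) ** 2 / len(gain_i) + stdev(gain_c) ** 2 / len(gain_c))
t = effect / se
```

A large t relative to its reference distribution suggests that the difference in mean change between the groups is unlikely to be due to chance alone.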
References
Byrd-Bredbenner, C., Wu, F., Spaccarotella, K., Quick, V., Martin-Biggers, J., & Zhang, Y. (2017). Systematic review of control groups in nutrition education intervention research. The International Journal of Behavioral Nutrition and Physical Activity, 14(1), 91. Web.
Rossi, P., Lipsey, M., & Freeman, H. (2004). Evaluation: A systematic approach (7th ed.). Thousand Oaks, CA: Sage Publications.