Introduction
The Millennium Development Goals embody an emphasis on results, hence the shift of focus from inputs to outcomes. The programs established by individuals and organizations may seem worthwhile and sound compelling, but it is impossible to know whether they are serving their purpose without collecting data and conducting evaluations. Evaluation is the “systematic investigation of the merit, worth or significance of the effort put in a particular program” (Rubin, 2020). Evaluation offers a detailed conclusion, provides strategic solutions, and gives recommendations for the future.
With the four questions asked in evaluation (what happened, why it happened, why the results matter, and what is next), evaluators are able to design a project optimally to achieve its goals. Evaluators understand the progress of the initiative and how to maximize its success. At the end, they assess the extent to which the project realized its goals and identify the circumstances that produced both high and low levels of success. Normally, teams within these programs collect data to gauge their progress, which is important, but they focus only on the big picture across the collaborative rather than on the underlying causes of problems. They also do not offer recommendations (RHIhub, 2021). In contrast, evaluation does not overlook even the smallest detail, as it seeks to create opportunities for new adaptations that maximize success. The process affects not only the present of a program but its future initiatives as well. Engaging more participants and incorporating their feedback during the evaluation has a direct positive impact on communication and logistical support. Evaluation can only be successful, however, if the right type of evaluation is chosen and the right stakeholders are involved.
Each type of evaluation serves a different purpose depending on the program activities specified in the logic model being evaluated. For instance, process evaluation determines whether the process was appropriately implemented and how closely the project followed the strategy stipulated in the logic model. It emphasizes the inputs, activities, and outputs, and how coherently they work together (Limbani et al., 2019). Outcome evaluation, on the other hand, assesses whether the program was effective in creating change (Evaluation Resource Hub, 2021). It focuses on what happened to the target group and what difference the program made for them. The constant across all types of evaluation is that they serve as tools to improve human service programs.
Both outcome evaluation and process evaluation differ from traditional research in their methods, goals, purpose, target groups, and desired outcomes. Research is strictly controlled, and its main aim is to collect information and generalize from events or studies that have already concluded. Evaluation, in contrast, seeks feedback from groups in their natural settings, under uncontrolled conditions (Mertens, 2022). It aspires to understand the specifics of the process, while traditional research explains the causes.
Key Tensions and Debates Associated with Evaluating Human Services
While evaluation is intended to have a positive impact on participants, it has stirred debate over the conflicts of interest surrounding it. Questions have been raised about the lack of critical discussion of the design, implementation, and analysis of some of the most widely advocated programs. As a result, the issue of conflict of interest is either overlooked or never raised in most of the literature. Evaluations of many programs, especially those in drug abuse prevention, involve real or apparent conflicts of interest. The purpose of these evaluations is mainly to prove that a program is worthy of funding or to market the program to a bigger audience. In most cases, the evaluators are connected to the program in one way or another; that is, they are associated with an institution that developed, and intends to market, the program. The outcome of the evaluation therefore directly affects the financial interests of the evaluators and their institutions, creating a high likelihood of bias in the methods and results.
The financial relationships between investigators and research sponsors in pharmaceutical industry-funded biomedical research also raise concern. These relationships are well documented: about 25% of evaluators have industry affiliations (Research and Development in the Pharmaceutical Industry, 2021). As a result, a systematic bias emerges favoring products associated with the sponsor, linking industry sponsorship to inappropriate study designs and unmethodical data analysis and reporting.
Additionally, there is debate about the undemocratic nature of impact evaluation and the hierarchy of methods involved. These studies are often conducted in a way that is not participatory. Robert Chambers calls this approach extractive: researchers collect, analyze, and publish the data without involving other stakeholders (White & Raitzer, 2017). Recently, however, impact evaluation has been incorporating more participants as it moves more strongly toward a mixed-methods approach. With this approach, local analysis informs the analysis of the causal chain, and findings and interpretations are shared with residents in an interactive manner. People are more likely to accept change if they are involved in the entire evaluation process (Frölich & Sperlich, 2019). In addition, for the best results, the evaluation ought to be issues-led rather than methods-led.
Conclusion
Evaluation of human services keeps programs in check by identifying underlying issues and offering recommendations for change. It is therefore important that these programs allocate a budget for regular, proper evaluation. No single type of evaluation fits all circumstances, hence the need to understand the objectives of the evaluation before undertaking one. Evaluation has, however, raised concerns about conflicts of interest among the parties involved, such as sponsors and evaluators, which can bias reporting, as well as about the lack of democracy in impact evaluation. The success of evaluation depends fully on the stakeholders involved and on choosing the correct type of evaluation.
References
White, H., & Raitzer, D. A. (2017). Impact Evaluation of Development Interventions: A Practical Guide. Asian Development Bank.
Evaluation Resource Hub. (2021). Outcome evaluation. NSW Department of Education.
Frölich, M., & Sperlich, S. (2019). Impact Evaluation: Treatment Effects and Causal Analysis. Cambridge University Press.
Limbani, F., Goudge, J., Joshi, R., Maar, M. A., Miranda, J. J., Oldenburg, B., Parker, G., Pesantes, M. A., Riddell, M. A., Salam, A., Trieu, K., Thrift, A. G., van Olmen, J., Vedanthan, R., Webster, R., Yeates, K., & Webster, J. (2019). Process evaluation in the field: Global learnings from seven implementation research hypertension projects in low- and middle-income countries. BMC Public Health, 19(1).
Mertens, D. M. (2022). Research and Evaluation in Education & Psychology: Integrating Diversity with Quantitative, Qualitative, and Mixed Methods (3rd Edition). Sage Publications, Inc.
Research and Development in the Pharmaceutical Industry. (2021, April 8). Congressional Budget Office.
RHIhub. (2021). Importance of evaluation. Rural Health Information Hub.
Rubin, A. (2020). Pragmatic program evaluation for social work. Cambridge University Press.