The evaluation of the tuberculosis surveillance program will be conducted to ascertain its effectiveness in reducing tuberculosis morbidity and mortality in the United States. The evaluation will follow a systematic procedure to produce accurate and consistent results.
Steps in program evaluation
Step 1: Engaging stakeholders– The evaluation will involve the Centers for Disease Control and Prevention (CDC), the Department of Health and Human Services, representatives of the funding and sponsoring organizations, and managers of the health care institutions. Other stakeholders include clients, research and academic institutions, and members of professional bodies.
Step 2: Describing the program– The program is mandated to prevent and control tuberculosis in the United States. It is designed to ensure that identification, diagnosis, treatment, and cure procedures are followed with the aim of reducing incidence and mortality. This will encompass trained staff, an information system, and broad participation of all stakeholders, with the aim of improving the overall health status of the general population (Joint Committee on Standards for Educational Evaluation, 1994).
Step 3: Focusing the evaluation design– The purpose of the evaluation is to ascertain the program's needs and drawbacks, develop mechanisms to improve the measurement of outcomes, strengthen current practice, and ultimately assess the overall outcomes of the program. The modalities and the extent to which participants will be affected will be assessed. This will help the evaluation team fill the knowledge gaps among the staff and other stakeholders.
The participation of users and the application of the information to guide future prevention, research, and control programs will be ensured (Patton, 1997).
- Questions: Are the prevention and control activities put in place by the program adequate?
- Is the infrastructure network well equipped and staffed to offer prevention and control activities?
- Methods: The evaluation will use observational techniques involving a review of case studies. Information from cross-sectional surveys, health records, and questionnaires administered in key informant interviews and to clients will be vital to achieving the evaluation objectives.
- Agreements: All components of the evaluation design will be clearly stated. The timelines for specific activities and the budget will be drawn up for scrutiny by the stakeholders. Responsibilities will be assigned according to the competency of team members, taking into consideration the resources and time available. Wide consultation of all stakeholders will ensure the evaluation design and questions are ethical, feasible, and precise. The CDC and research and academic institutions should be involved so that their opinions can be taken into consideration (Patton, 1997).
Step 4: Gathering critical information– Mechanisms will be put in place to ensure the credibility of the data. The program's indicators will be a reduced incidence of tuberculosis and increased awareness among the community and other stakeholders. The sources of evidence include health records, questionnaires, and observations of trends in the health sector. The quality and quantity of the data will be checked through well-structured and tested instruments. Care will be taken to ensure ethical and timely data collection (Newcomer, 1994).
Step 5: Justifying conclusions– The conclusions will be based on the evidence and the standards agreed upon by the stakeholders. Adherence to the standards laid down by the stakeholders and the analysis of the evaluation will reflect the evidence collected (Patton, 1997). The stakeholders' active participation in the discussion of the findings will be sought before the final interpretation of the data. Judgment will be made based on the objectives of the program and of the evaluation (Shulha & Cousins, 1997, p. 198). This will encompass the views of all stakeholders, particularly the opinions of clients and community members. Recommendations on ways to improve the delivery of control measures and infrastructure will be made to the relevant stakeholders (Rogers & Hough, 1995, p. 326).
Step 6: Utilizing the information and lessons learned– The information on the design, process, and findings will be shared to enhance the scaling up and improvement of the program and other related areas (Weiss, 1998, p. 27).
Reference list
Joint Committee on Standards for Educational Evaluation. (1994). Program evaluation standards: How to assess evaluations of educational programs. 2nd ed. Thousand Oaks, CA: Sage Publications.
Newcomer, K. (1994). Using statistics appropriately. In Wholey, J., Hatry, H., & Newcomer, K. (Eds.), Handbook of practical program evaluation. San Francisco, CA: Jossey-Bass.
Patton, M. (1997). Utilization-focused evaluation: The new century text. 3rd ed. Thousand Oaks, CA: Sage Publications.
Rogers, P. & Hough, G. (1995). Improving the effectiveness of evaluations: Making the link to organizational theory. Evaluation and Program Planning, 18(4), 321-332.
Shulha, L. & Cousins, J. (1997). Evaluation use: Theory, research, and practice since 1986. Evaluation Practice, 18(3), 195-208.
Weiss, C. (1998). Have we learned anything new about the use of evaluation? American Journal of Evaluation, 19(1), 21-33.