Project Evaluation and Program Success Measurement Case Study


Analysis of the means of evaluation

Why the evaluation is necessary and what kinds of decisions to make as a result of the evaluation

In general use, evaluation focuses on the measurement, appraisal, or judgment of a project’s outputs and results against its objectives (Milakovich and George 3). This is the only way the Project Manager can determine the project’s relevance, effectiveness, and benefits to the target group. Thus, evaluation concentrates on long-term outcomes and the effects of the project objectives.

Evaluation of the Team Read project is necessary to help the Project Manager understand, verify, or enhance the impact of the reading process on learners. It will show whether the project is changing the reading culture among its target group. The process also ensures that the Project Manager does not rely on instinct, guesswork, or a trial-and-error approach in delivering the reading project.

Evaluation is also necessary to ensure that the project improves its delivery mechanism and becomes more cost-effective and efficient. It will help identify the reading program’s strengths and weaknesses so the project can be improved. Without it, the reading project may end up inefficient, with several redundant and costly activities.

The Project Manager will also verify the project’s current activities and progress. Several factors may influence the reading process, and in the course of delivery the initial plans may change considerably; evaluation is therefore necessary to determine whether the project is running according to the initial plans.

In addition, the project evaluation will help determine whether the project has met its goals, generate data the Project Manager can use for verification, support the project before sponsors, promote a reading culture in the community, and build public relations. It will also help the Project Manager identify effective programs worth retaining in times of financial difficulty. Finally, evaluation results can support the replication of similar programs in other regions (Lane 2).

Decisions to make after the evaluation include reinforcing the components that make the Team Read project succeed, replicating the program in other interested schools, increasing retention of the project coaches, and reducing dropout rates among learners. The Project Manager shall also initiate recommendations for future project development, such as including parents and the community in the program to avoid interference.

Program evaluation enables the Project Manager to gauge the efficiency of the reading program in terms of resource usage, and its effectiveness by measuring performance indicators and objectives against its goals. Measures of effectiveness and efficiency aid decision-making, resolve accountability issues, and support program planning. They are also useful for enhancing operations, monitoring the program, and reallocating resources to other important areas. Thus, for Trish McKay to comprehend the value of the reading program and understand its elements, evaluation is necessary.

Key questions to answer during the evaluation

The evaluation process should address questions about the factors that have contributed to increased learning (changes in knowledge, perception, attitude, and skills among learners). It should also address questions about learning conditions, such as self-reliance and increased reading or literacy.

The evaluation must also examine the project’s main activities. This ensures that the evaluation does not end up justifying irrelevant activities. It should also compare target goals against achieved goals, which leads to the question of whether program implementation proceeded as initially planned.

The process must also clearly answer how many learners have successfully gone through the program, how many of them can reliably read on their own, and how many have shown remarkable improvement. The evaluation questions must also address coach and learner attrition, as well as the project’s long-term sustainability.

Indicators of success of the program, their definitions, and measurement

Program key indicators should demonstrate the efficiency and effectiveness of reading outcomes.

  • Expected increments in learning (changes in knowledge, perception, attitude, and skills among learners): the project evaluation must identify the increased levels of reading among learners in terms of specific numbers and periods. This enables the Project Manager to track the efficiency of the reading program.
  • Increased retention of coaches and learners: the evaluation must show the number of dropouts of both learners and coaches since the inception of the program and indicate positive gains.
  • Increased benefits to coaches: coaches undergo reading training to improve their skills. The process identifies the number of coaches that have benefited from training opportunities due to the reading program.
  • Enhanced efficiency and effectiveness in the use of resources: the program has a budget that guides the use of funds and the acquisition of resources.
  • Increased external interest and community involvement: some institutions have shown interest in the program through unsolicited phone calls and surveys, and there are also cases of increased community involvement.

Explanation of the proposed evaluation design, conducting the evaluation and data collection

This is an outcomes-based evaluation. The design will enable the evaluation team to collect specific, measurable, and observable characteristics among learners and coaches that reflect both the achievements and the drawbacks of the program. Outcomes reflect the benefits that stakeholders derive from the program; here, they can take the form of observable changes in learners (enhanced learning), skill acquisition among coaches, increased reading ability, and self-reliance. The design must clearly define target groups, with clear goals and objectives for the expected outcomes.

The evaluation design shall include identifying the main outcomes the evaluation team wants to examine, including the overall purpose of the reading program and its impact on all stakeholders. Thus, we must evaluate the general benefits the reading program has for learners. At the same time, the team must concentrate on the main activities the project team undertakes to achieve those outcomes. This eliminates the chance of evaluating irrelevant activities and collecting unnecessary information.

The design will also prioritize outcomes that the evaluation team wants to examine such as increased levels of reading, benefits to coaches, usages of resources, and factors that result in dropouts among participants. This depends on available time and other resources.

The design shall also define the program’s observable indicators. This is a critical stage in outcomes-based evaluation. The team must define indicators for intangible outcomes such as changes in attitude, knowledge, reading, and skills. There should also be specific indicators for the support readers get from coaches. The evaluation team must be careful not to conclude that an outcome has been achieved simply because its associated indicators are present.

The evaluation design should also report outcomes as achievements against targets. For instance, the evaluation team may track dropout among coaches with an indicator such as the following.

Dropout among coaches

This indicator shows how many coaches entered the program in a given period, how many reliably completed their sessions, and how many have continued with the program to date.

The outcomes-based design also provides the evaluation team with the means of efficiently and realistically collecting information about the reading program, analyzing data, and presenting the findings.

The evaluation team shall use purposeful sampling to obtain in-depth knowledge of the problems under examination. The team shall take careful account of the measurement instruments to ensure their technical soundness, reliability, and validity, and to avoid any possible bias. The instruments must also be appropriate to the study, i.e., easy to use.

The evaluation team shall collect data from focus groups. This ensures that the data collected present an in-depth picture of the issues under review. At the same time, the team shall ensure the confidentiality of the participants. The discovery approach tends to be iterative, as the study is interpretive, and this method ensures that the team collects only data relevant to the problems at hand. The evaluation team shall also use interviews to understand learners’ and coaches’ impressions of and experiences with the reading program.

Expected problems and means to overcome them

Some problems are likely to arise during the evaluation process. For instance, the team may find certain problems difficult to describe because of their links and causal relationships with other problems and causal agents. Such cases may also create difficulties and consume considerable time during analysis. There are cost-related issues as well.

Prior planning and a clear definition of what falls within the scope of the program evaluation may help avert some of these problems.

A critique of the recent evaluation

Description of the method of the most recent evaluation (What was done)

The methodology used revealed strong benefits to program coaches, while the results for the main target group (learners) were equivocal. Only fifth-graders showed improvement in test scores; among second- and third-graders, fewer learners met the reading standard relative to control groups; and the results for fourth-graders were not available. This led to a recommendation for improved training and reinforcement. This was the first evaluation.

The second evaluation used an adjusted approach to hone the accuracy of the results. Still, the results revealed high levels of benefit to coaches and weaker gains for the learners than for the control groups. This led the Program Manager to doubt the results.

The evaluator used statistical methods to resolve data problems and improve the analysis of the program’s impact on reading skills. In doing so, the evaluator focused on only 10 schools in the second year of the program instead of all 17 schools.

The evaluator used open-ended questionnaires for data collection. The questions combined qualitative and quantitative approaches, with surveys and interviews as the methods of data collection. This method enabled the evaluator to collect varied views from respondents.

The evaluator used the statistical significance of the effect size to authenticate the results. However, we must note that significance tests based on small sample sizes can mislead and produce wrong results. This is because statistical significance does not directly indicate the effect size; rather, it is a function of the sample size, the p (statistical significance) level, and the effect size together. The evaluator instead took the easiest route of looking at the direction of the results in terms of signs (<, >, and =) to establish the statistical meaning behind them.
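To see why significance alone is a poor proxy for effect size, consider a short sketch (the numbers here are hypothetical illustrations, not figures from the Team Read data): the same small standardized effect produces a non-significant t-statistic in small groups and a highly significant one in large groups.

```python
import math

def cohens_d(mean1, mean2, pooled_sd):
    """Standardized mean difference between two groups (effect size)."""
    return (mean1 - mean2) / pooled_sd

def two_sample_t(d, n):
    """t-statistic for two equal groups of size n, given effect size d."""
    return d * math.sqrt(n / 2)

# A small effect: treatment mean 52, control mean 50, pooled SD 10.
d = cohens_d(52.0, 50.0, 10.0)          # d = 0.2 regardless of sample size
print(round(two_sample_t(d, 25), 2))    # small groups: t is modest
print(round(two_sample_t(d, 2500), 2))  # large groups: t is far larger
```

The effect size stays fixed at 0.2 while the t-statistic, and hence the p-value, is driven by the sample size; this is the sense in which significance is a joint function of sample size, p level, and effect size.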

We must also note that the evaluator relied on the published work of Borman and D’Agostino from 1996, which established effect size as an effectiveness criterion using a statistical formula based on 30 years of evaluation studies. The challenge is that significance information alone can cause problems during the practical interpretation of results: significance tests do not indicate the size of the variation between different indicators (practical significance), and significance levels are difficult to compare across studies.

Strengths and weaknesses of the last evaluation (Reliable results)

The evaluator’s method of analysis had some weaknesses. The report did not indicate how the evaluator arrived at the chosen criterion, and the evaluator did not perform a site-specific analysis. These aspects make the results hard to authenticate and thus unreliable. We have also noticed that, at some points, the evaluator did not perform an analysis at all but relied instead on assumptions such as “Is it reasonable to assume that the pre- and post-tests measure the same skills?”

To answer this question, the evaluator adopted a correlation criterion: “the correlation between pre- and post-tests was near or above 0.8, as was the case for the 2nd grade where pre- and post-tests were essentially the same; the evaluator interpreted the pre-to post-test change score as a gain score. However, in cases where the correlation coefficient was less than 0.8, the evaluator performed no analysis” (Source: Team Read B).

Not reporting such results, and in some cases omitting specific results the evaluator considered statistically non-significant, typically introduces substantial bias into the interpretation of results. The evaluator also assumed that only statistically significant results were essential for analysis and that non-significant results did not need to be analyzed. This is the main weakness of the evaluation results.

There is also a problem with the data collection approach and sample representation. The evaluator did not take into account that qualitative and quantitative designs use different methods for selecting research participants. The evaluator could have used purposive sampling to collect in-depth information from coaches (qualitative study) and random sampling to gain quick information about learners’ experiences with the reading program (quantitative study). This is because coaches have specific views about the program and can form part of a focus group for valuable information.

The evaluation process does have strengths in the way the evaluator performed technical analyses of the data to derive conclusions, though interpretation remained a challenge.

We have also noticed that the evaluator applied a mixed research design, combining qualitative and quantitative methods to collect rich information. This is a suitable approach to data collection, as it enabled the evaluator to adjust research questions to fit the responses in the qualitative study.

Assume that the second-grade results are correct (recommendations for making the program even better)

The second-grade results show improvement for most readers. Thus, we can make recommendations to enhance the effectiveness and efficiency of the reading program for the other grades.

  • Use high school students to coach elementary learners; this is cost-effective and increases instructional time.
  • Apply a strong coaching session across the whole group.
  • Foster a good relationship between coaches and learners.
  • Place more emphasis on comprehension than on phonics.
  • Maintain good central management of the program.
  • Adopt a one-on-one approach to reading rather than focusing on the group as a whole.
  • Provide a variety of other instructional activities to support the reading culture.

How might the next evaluation be changed to make it far more useful for learning about and improving the Team Read program?

The next evaluation must avoid the problems highlighted above. First, the evaluator must ensure that the information collected is valid and reliable. This implies conducting a pilot study to determine the reliability, validity, and acceptability of the study instruments and measurements. The use of focus groups will also help the evaluator capture in-depth knowledge of coaches’ opinions about the program.

The evaluator must also take caution when deriving an effect size from a significance test. The established significance value shall guide the evaluator in making reliable assumptions about the study. At the same time, the evaluator must avoid assumptions and must not dismiss some tests as insignificant to the study, as they affect the overall results. The evaluator must also account for what the significance test fails to tell us about the differences among values: such tests require the differences to be standardized and compared with zero. This leads to better interpretation and more reliable results.

Works Cited

Lane, Fredrick. Current Issues in Public Administration. New York: Bedford/St. Martin’s, 1999. Print.

Milakovich, Michael, and George J. Gordon. Public Administration in America. Boston: Bedford/St. Martin’s, 2001. Print.
