Summary
Pay-for-performance programs differ along many dimensions, including the unit of accountability, the theory of motivation, the criteria for awarding bonuses, performance measures and thresholds, the size of awards, allocation methods, and payout frequency. Research on the effects of performance-based incentives on staff collaboration is inconclusive, and the broader accountability literature suggests both potentially positive and negative effects on classroom practice. The New York City Schoolwide Performance Bonus Program (SPBP) is a school-based pay-for-performance plan (Marsh et al., 2011). To participate, each school needed approval from 55% of its United Federation of Teachers (UFT)-represented staff. A school could earn up to $3,000 for each of its full-time UFT-represented staff members if it met its annual performance target, as set by the New York City Department of Education (NYCDOE) accountability plan. The program required participating schools to establish a four-person compensation committee (CC) to decide how bonuses would be distributed among eligible staff. SPBP thus represented a partnership between the UFT and the NYCDOE and a blend of their differing perspectives and theories of action regarding incentive pay.
The program's key objective was to improve student achievement through performance-based incentives that encourage staff to work toward a common goal. In its first year, 205 schools participated; 198 participated in the second year and 196 in the third and final year (Marsh et al., 2011). Of the participating schools, 62% earned a bonus in the first year and 84% in the second, amounting to $20 million and $30 million in payouts, respectively. After the number of schools earning bonuses rose considerably over the first two years, the proficiency cut scores on the performance measures were recalibrated, which lowered school grades and sharply reduced the number of schools receiving schoolwide bonuses. Only 13% of participating schools earned bonuses in the third year, totaling $4.2 million. The evaluation used a combination of qualitative and quantitative methods to examine research questions covering how SPBP was implemented, its outcomes, and its effects.
Research Method Used in Association with the Research Questions and Hypothesis
The evaluation combined qualitative and quantitative methods and exploited SPBP's experimental design to assess the program's effects. Using both kinds of methods strengthened the evaluation, addressing the limitations of each by handling the data in reliable and reproducible ways: combining figures, evaluating data, and measuring the magnitude of change. This allowed greater precision than simply describing increases and decreases. Guided by three research questions, the evaluators collected data on the program's execution through surveys, interviews, case studies, and site visits. Since research questions frame the concerns of a study, the three questions here were comprehensive, addressing all of the program's significant aims: its implementation, its outcomes, and its influence on performance. The hypothesis was that the opportunity to earn bonuses tied to school performance would encourage collaboration, and that receiving bonuses would heighten motivation, both of which would produce improved results (Marsh et al., 2011).
Implicit in the hypothesis is the assumption of a genuine causal connection between the independent and dependent variables: issuing bonuses will lead to improved school performance. The description of how the study's sample (the schools participating in the program) was selected lacks clarity. After the program was announced, schools were invited to vote on participation, with each school needing approval from 55% of its UFT-represented staff each year. Participating schools were drawn from elementary, middle, K–8, and high schools in New York City (Marsh et al., 2011). Random sampling was a poor way to select participating schools because it leaves room for bias; low-performing schools could have been disproportionately selected, or the reverse. Suitable inclusion and exclusion criteria should have been applied to eliminate any chance of bias. For instance, selection could have been based on schools' prior performance, with equal numbers of high- and low-performing schools chosen to participate. Moreover, neither the locations where interviews were conducted nor the confidentiality of those sessions is addressed.
Soundness of the Study’s Findings and Conclusions
The findings showed no differential program effects by school size and no relationship between student performance and the compensation committees' arrangements for distributing bonuses among staff. The absence of any statistically significant difference in scores between program participants and control schools, or between participating schools (despite the random selection) and other eligible schools, could be attributed to the short implementation period and the poor selection of participants. This held for every component score of the Progress Reports, such as performance, environment, progress, and additional recognition (Marsh et al., 2011). The program's success would have been enhanced by thorough communication to all schools and their staff at least a year before implementation. This would have allowed enough time to prepare for the program and might have led more schools to vote for participation, which in turn could have reduced the chances of bias.
Although the CC process was implemented fairly and efficiently, a number of schools struggled with the decision-making process (Marsh et al., 2011). This arose from the lack of an oversight body for the program, which could have supervised and supported the committees' work at each participating school. In addition, the program did not produce overall improvement in student performance at any level. Nevertheless, the program should have been given at least five years before evaluation, so that participating schools could become accustomed to its every detail and any errors in its execution would have adequate time to be rectified.
Overall Assessment
Taken as a whole, the article contributes little regarding the application of the SPBP program and the improvement of school performance. Although it is valuable to explore how strongly educators' and other stakeholders' motivation drives improved performance, since excellent performance is hard to achieve without motivation, the article offers no generalizable results, because the program did not influence motivation as intended. Hence, rather than simply introducing cash incentives to enhance motivation, it is vital to first assess the expectations employees have formed under existing incentive schemes. Doing so would support adoption of the most appropriate, well-considered award program, offering the greatest value in motivating staff and improving performance.
Reference
Marsh, J. A., Springer, M. G., McCaffrey, D. F., Yuan, K., Epstein, S., Koppich, J., Kalra, N., DiMartino, C., & Peng, A. (2011). A big apple for educators: New York City's experiment with schoolwide performance bonuses: Final evaluation report. Santa Monica, CA: RAND Corporation.