An essential part of any change process is the evaluation phase. Assessing the effectiveness of the suggested initiative is crucial since the results of such an assessment indicate how successful the proposed change has been (Hopkins, Stringfield, Harris, Stoll, & Mackay, 2014). Additionally, evaluation measures help to predict future developments of the effort and to adjust its components to better satisfy stakeholders' needs. In general, it is important to determine a specific goal for the evaluation, which helps to set parameters for the data to be collected and the methods and tools to be used (Hallinger, Heck, & Murphy, 2014).
Each component, procedure, and tool of the evaluation must be defined clearly. Furthermore, each of these features needs to be supervised and assessed to make sure that it functions as intended and remains consistent with the goals. Finally, as Shaout and Yousif (2014) note, a comprehensive evaluation requires combining a variety of evaluation techniques and understanding the strengths and limitations of each.
According to Fulmer, Lee, and Tan (2015), a productive approach during evaluation is to measure the initiative at multiple levels rather than at a single one. For instance, the program can be rated at the individual level, the small-group level, and the grade and school levels. Each of these measurements yields crucial data from a different perspective, which strengthens the evaluation plan.
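To illustrate, the sketch below shows one way such multi-level measurement could be carried out; the student scores and the group, grade, and school labels are hypothetical, and the aggregation simply averages the same assessment scores at each successively wider level.

```python
# A minimal sketch of multi-level measurement, assuming hypothetical
# assessment scores (0-100) tagged with small-group, grade, and school labels.
import pandas as pd

scores = pd.DataFrame({
    "student": ["S1", "S2", "S3", "S4", "S5", "S6"],
    "group":   ["A", "A", "B", "B", "C", "C"],
    "grade":   [7, 7, 7, 8, 8, 8],
    "school":  ["North", "North", "North", "South", "South", "South"],
    "score":   [72, 85, 64, 78, 91, 69],
})

# Individual level: each student's score is examined on its own.
print(scores[["student", "score"]])

# Small-group, grade, and school levels: the mean score per unit
# provides the same data from progressively wider perspectives.
for level in ["group", "grade", "school"]:
    print(scores.groupby(level)["score"].mean())
```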
Research by Jonsson, Lundahl, and Holmgren (2015) indicates that the evaluation also needs to be performed through all the stages of implementing new teaching practices: the initial, the middle, and the final one. Monitoring and tracking the process helps to identify problems and make the necessary corrections early in the intervention, which makes it possible to avoid critical mistakes in the teaching practice.
The evaluation should also include evidence of the effectiveness of the initiative. Since any school change affects learners to a greater or lesser extent, it is essential to consider this stakeholder group's performance as a success factor. Jaggars, Hodara, Cho, and Xu (2015) remark that when any new teaching practice is introduced, student outcomes play an important role in determining its effectiveness.
A crucial component of the evaluation plan is arranging a thorough schedule. Organizing the evaluation in advance helps to guarantee that all the necessary information is gathered at the right time (Hopkins et al., 2014). More importantly, pre- and post-measurements need to be taken to better understand and determine the impact and effectiveness of the initiative (Guyadeen & Seasons, 2015). In this way, the predicted outcomes can be compared with the observed ones (Guyadeen & Seasons, 2015). If the results meet or exceed the goals set at the beginning, the change may be considered successful.
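As a rough illustration of such a pre- and post-measurement comparison, the sketch below applies a paired t-test to hypothetical matched test scores; the score values and the use of the scipy library are assumptions made for the example, not part of the plan itself.

```python
# A minimal sketch of a pre-/post-measurement comparison, assuming
# hypothetical matched scores for the same six students.
from scipy import stats

pre = [61, 70, 58, 74, 66, 63]    # scores before the initiative
post = [68, 76, 60, 81, 73, 70]   # scores after the initiative

# Paired t-test: did performance change significantly?
t_stat, p_value = stats.ttest_rel(post, pre)

# Mean gain as a simple measure of the size of the change.
mean_gain = sum(b - a for a, b in zip(pre, post)) / len(pre)

print(f"Mean gain: {mean_gain:.1f} points (t = {t_stat:.2f}, p = {p_value:.3f})")
```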
Furthermore, it is essential to collect data from multiple informants (Cleary, Horsfall, & Hayter, 2014). Apart from teachers and students, it will be useful to inquire about parents’ opinions on the initiative. Research indicates that collaboration and cooperation between schools and parents have a beneficial effect on the success of educational initiatives (Rodriguez, Blatz, & Elbaum, 2014). Therefore, a proper evaluation plan should include data from parents of the students participating in the school change initiative.
According to Sutton and Austin (2015), an important feature of the evaluation plan is the use of multiple data collection tools. Combining quantitative and qualitative tools helps to gather data from participants and to capture different perceptions as well as accurate information. Moreover, it is crucial to set clear objectives for each data collection tool so that it serves the purpose of the evaluation plan.
A sample evaluation plan for applying digital game-based learning includes several steps. The main goal of this plan is to assess teaching practices involving digital games and the effectiveness of digital games in improving teaching and learning. The evaluation committee will monitor the plan by collecting and analyzing data, and the evaluation will last one year. The first step is aimed at evaluating and understanding students' performance when digital games are applied.
This phase will be completed in two steps: collecting data before the digital games are introduced and after they have been used (pre-test and post-test) to compare students' performance. In addition, students' achievements will be evaluated at the end of the year to measure learning outcomes, which will help to indicate the success of the initiative.
Another step focuses on understanding and evaluating the teaching practice of digital games and teachers' opinions. According to Alshenqeeti (2014), interviews have many benefits, such as collecting data through direct communication, asking clarifying questions when needed, and gaining more detailed data. Teachers will be interviewed during the application of digital games and after it. Moreover, teachers and students will complete surveys in the middle and at the end of the year. The information collected at these different stages will support a comprehensive evaluation and allow the change to be monitored so that timely interventions and adjustments can be made if necessary, helping to avoid critical mistakes in the teaching practice.
One more element of the plan is aimed at gathering feedback and opinions from parents of students about the application of digital games. A survey is a helpful data collection tool since it captures personal opinions (Sutton & Austin, 2015). Parents' views play an important role in evaluating the success of any educational initiative (Goodall, 2016). The survey will be sent to parents after the digital games have been applied. Once the necessary data have been gathered from respondents, the evaluation committee will analyze them. If the results meet the goals of the change, the initiative will be considered successful. However, if the evaluation shows that the desired outcome has not been reached, measures aimed at improvement will need to be considered.
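A minimal sketch of this decision step is given below, assuming hypothetical five-point ratings from the parent survey and an illustrative success threshold; in practice, the criterion would be the goals set at the beginning of the change.

```python
# A minimal sketch of the committee's decision step, assuming hypothetical
# five-point ratings from the parent survey and an illustrative threshold.
parent_ratings = [4, 5, 3, 4, 4, 2, 5, 4]  # hypothetical survey responses
SUCCESS_THRESHOLD = 3.5                    # assumed target mean rating

mean_rating = sum(parent_ratings) / len(parent_ratings)

if mean_rating >= SUCCESS_THRESHOLD:
    print(f"Mean rating {mean_rating:.2f} meets the goal; "
          "the initiative can be considered successful.")
else:
    print(f"Mean rating {mean_rating:.2f} falls short; "
          "measures aimed at improvement should be planned.")
```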
A sample evaluation plan for using virtual reality (VR) is intended to assess teaching practices involving VR and to evaluate the effectiveness of this approach for improving teaching and learning. The evaluation plan will cover a period of one year. First, observations and surveys will be used to assess teachers' practices and to gather feedback about applying VR. Observation will be employed for both teachers and students and will take place twice: during the change initiative and at the end of it.
There are several principal advantages of using observation for data collection, including direct observation of and interaction with the participants, an informal setting, and regular meetings with the respondents (Sutton & Austin, 2015). To gather data through observation, many researchers take field notes, which allows them to remain involved in the communication process without distraction and to analyze the collected data later (Sutton & Austin, 2015). Observation also enables the researcher to obtain the most relevant information and to investigate the processes and initiatives in question directly.
Additionally, teachers and principals will complete a survey at the end of the year about the teaching practice of using virtual reality in the classroom. The information obtained during this phase will help to track the progress of the change initiative and to make a comprehensive assessment.
The next step is aimed at evaluating students' performance after the virtual reality method has been used. To obtain an objective assessment of this aspect, students' achievement will need to be measured before and after the change is implemented. Moreover, interviews will be conducted with parents after the change initiative has been applied in order to get feedback and opinions about the virtual reality tool. Once the evaluation committee has analyzed the collected data, the next steps will be identified, and decisions about the initiative will be made.
References
Alshenqeeti, H. (2014). Interviewing as a data collection method: A critical review. English Linguistics Research, 3(1), 39-45.
Cleary, M., Horsfall, J., & Hayter, M. (2014). Data collection and sampling in qualitative research: Does size matter? Journal of Advanced Nursing, 70(3), 473-475.
Fulmer, G. W., Lee, I. C. H., & Tan, K. H. K. (2015). Multi-level model of contextual factors and teachers’ assessment practices: An integrative review of research. Assessment in Education: Principles, Policy & Practice, 22(4), 475-494.
Goodall, J. S. (2016). Technology and school–home communication. International Journal of Pedagogies and Learning, 11(2), 118-131.
Hallinger, P., Heck, R. H., & Murphy, J. (2014). Teacher evaluation and school improvement: An analysis of the evidence. Educational Assessment, Evaluation and Accountability, 26(1), 5-28.
Hopkins, D., Stringfield, S., Harris, A., Stoll, L., & Mackay, T. (2014). School and system improvement: A narrative state-of-the-art review. School Effectiveness and School Improvement: An International Journal of Research, Policy and Practice, 25(2), 257-281.
Jaggars, S. S., Hodara, M., Cho, S.-W., & Xu, D. (2015). Three accelerated developmental education programs: Features, student outcomes, and implications. Community College Review, 43(1), 3-26.
Jonsson, A., Lundahl, C., & Holmgren, A. (2015). Evaluating a large-scale implementation of assessment for learning in Sweden. Assessment in Education: Principles, Policy & Practice, 22(1), 104-121.
Rodriguez, R. J., Blatz, E. T., & Elbaum, B. (2014). Parents’ views of schools’ involvement efforts. Exceptional Children, 81(1), 79-95.
Shaout, A., & Yousif, M. K. (2014). Performance evaluation – Methods and techniques survey. International Journal of Computer and Information Technology, 3(5), 966-979.
Sutton, J., & Austin, Z. (2015). Qualitative research: Data collection, analysis, and management. The Canadian Journal of Hospital Pharmacy, 68(3), 226-231.