Evaluation of Educational Programs (Assessment Report)

Introduction

The United States Environmental Protection Agency (EPA) was established in 1970, during a period of heightened environmental concern driven by high rates of pollution (EPA, 2013). The EPA became an important federal agency for research, standard setting, monitoring, and enforcement aimed at effective environmental protection. It seeks to promote a clean and healthy environment for Americans, with a strong focus on environmental results rather than cost. To that end, the EPA has adopted several programs with diverse goals, including educational, collaborative, and environmental programs.

The evaluation questions and criteria to be addressed

The purpose of the evaluation is to measure how successfully the environmental education program provided by the United States Environmental Protection Agency (EPA) creates public knowledge and awareness of prevailing environmental issues and problems.

The evaluation design (methodology)

This evaluation shall use a mixed methods approach, combining qualitative and quantitative research methods, to explore the impacts of the environmental education program among stakeholders in the US. A mixed methods design offers two main advantages. First, it strengthens validity through triangulation, i.e., the verification of evaluation results from multiple angles; for instance, the evaluator would use both qualitative and quantitative approaches to assess the program's impacts on the target group. Second, it improves the validity and interpretability of the evaluation process.

Qualitative study

A qualitative approach is an effective way of understanding a research issue when little is known about it (Creswell, 2008; Johnson & Christensen, 2010). It shall allow the evaluator to gather information for an in-depth analysis of the program's impacts and to formulate effective ways of developing appropriate programs. In other words, the evaluator would be able to provide a comprehensive view of the research participants (Steinberg, Bringle, & Williams, 2010).

The qualitative research approach is iterative. The evaluator would rely on discovery to establish effective instruments for gathering the needed information, reviewing current studies to identify general themes or patterns. This method also allows the evaluator to adjust the study to accommodate new points of view as themes emerge.

The evaluator could repeat this iterative cycle many times before analyzing the data and interpreting the findings, ensuring that only relevant data are gathered as the process unfolds. Patton (2002) suggests an open-ended approach for this type of study. Nonetheless, the evaluator must specify the areas of focus before the study begins, while noting that these areas may change as the study continues and new methods of inquiry develop.

The evaluation design shall be exploratory in nature. This approach is appropriate because the evaluator aims to explore and understand the impacts of the program in detail. The evaluator must therefore conduct a pilot study in order to become familiar with the state of the program among the following participants:

  • National environmental educators
  • K-12 students
  • Post-graduates and college students on program fellowship
  • On campus eco-ambassadors
  • Laboratory staff

This is important for developing an appropriate logic model grounded in inclusive research.

The qualitative study shall account for non-numerical factors in environmental education and thus provide descriptive and interpretative data. The quantitative approach, in turn, shall address the numerical data collected in the evaluation (Trochim, 2006; Howe & Eisenhardt, 1990) and allow the evaluator to place a high level of confidence in the study's inferences.

Logic Model

EPA has used educational programs to maximize its impact on environmental education and awareness by targeting institutions of learning. The evaluator shall use the logic model and its information to answer questions such as:

  1. What are the educator’s views concerning the success of the program?
  2. What have students achieved under the various identified programs?
  3. What is the effectiveness of laboratory work and assessments?
  4. How can several aspects of the program be improved?

Terms of reference

  • The educator’s views concerning the success of the program
  • Achievements for all students under the various identified programs
  • The effectiveness of laboratory work and assessments
  • Improving aspects of the program

The logic model shall assist the evaluator in defining specific activities for the evaluation. It shall help the evaluator understand the impacts of the educational program on research participants and formulate approaches that can improve the environmental education program so as to realize EPA's target of a clean and healthy environment.

Definitions of Logic Model Terms

  • Baseline: the current conditions of the education program with reference to its impacts or outcomes; it provides a reference point from which the evaluator can judge the program's achievement among the different target groups
  • Program challenges: any barrier, internal or external, to effective implementation of the program among the selected groups
  • Activities: what environmental educators do to realize the EPA mission
  • Outputs: the results generated by implementing the education programs with participants to create environmental awareness
  • Outcomes: the effects of a given output on the environment, measured against the EPA mission; these include both short-term and long-term elements of the program
  • Impacts: the overall effect of the education program on the EPA's core mission
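
To make these terms concrete, the following sketch (in Python, purely illustrative; the structure and field values are assumptions, not part of the EPA program) shows how one entry of the logic model could be represented for analysis:

```python
from dataclasses import dataclass

@dataclass
class LogicModelEntry:
    """One entry of the evaluation logic model (hypothetical structure)."""
    baseline: str          # current conditions used as the reference point
    challenges: list[str]  # internal or external barriers to implementation
    activities: list[str]  # what environmental educators do
    outputs: list[str]     # results generated by the activities
    outcomes: list[str]    # short- and long-term effects of the outputs
    impact: str            # overall effect against the EPA mission

# Hypothetical example entry for the K-12 strand of the program.
k12_entry = LogicModelEntry(
    baseline="Environmental topics rarely covered in K-12 lessons",
    challenges=["Limited teacher training", "Competing curriculum priorities"],
    activities=["Run games and activities", "Distribute lesson materials"],
    outputs=["Sessions held per month", "Lesson plans adopted"],
    outcomes=["Students demonstrate environmental knowledge and awareness"],
    impact="Greater public awareness of a clean, healthy environment",
)
print(k12_entry.outputs)
```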

Table 1: Term of Reference 1 – The educator's views concerning the success of the program.

How would you comment on the overall success of the program in the following areas?

  1. Entrenchment in the national curriculum in the teaching institution
     • Evidence: educators' feedback; content of the national curriculum
     • Method: interview with educators; review of the national curriculum; focus group discussion
  2. The level of practical involvement of games and activities directed to K-12 students (e.g., how many times per week or month)
     • Evidence: educators' feedback
     • Method: interview with educators
  3. Areas of success of the fellowship program targeted at and offered to selected post-graduate and college students from selected institutions of learning
     • Evidence: educators' feedback
     • Method: interview with educators
  4. The quality of educational publications directed to researchers and the public
     • Evidence: educators' feedback
     • Method: interview with educators
  5. On-campus eco-ambassadors who push for conservancy in their respective schools and learning places
     • Evidence: educators' feedback
     • Method: interview with educators
  6. Laboratory research and assessment procedures
     • Evidence: educators' feedback
     • Method: interview with educators

Table 2: Term of Reference 2 – Achievements for all students under the various identified programs.

  1. How have students used the content of the national curriculum to change the environment?
     • Evidence: educators' feedback; students' feedback; general comments from other stakeholders; sample of studies
     • Method: interview with educators; interview with students; review of published materials
  2. How effectively have K-12 students used games and other activities to enhance public awareness of a healthy and clean environment?
     • Evidence: educators' comments; students' comments
     • Method: interview with educators; interview with students
  3. To what extent have fellowships targeting post-graduate and college students from selected institutions of learning been effective for the EPA mission?
     • Evidence: educators' comments; students' comments
     • Method: interview with educators; interview with students
  4. How have campus eco-ambassadors who push for conservancy in their respective schools and learning places used their program to enhance EPA goals?
     • Evidence: educators' comments; students' feedback; observable program impacts
     • Method: interview with eco-ambassador students; interview with educators
  5. Have students effectively used educational publications to change the environment?
     • Evidence: comments from educators; review of publication contents; students' comments
     • Method: interviews with researchers and students; review of sampled publications concerning environmental issues
  6. To what extent do students use laboratory processes or results to create environmental awareness?
     • Evidence: evidence from laboratory studies; educators' comments; students' feedback
     • Method: interview with educators and students; analysis of laboratory evidence

Table 3: Term of Reference 3 – The effectiveness of laboratory work and assessments.

  1. Do you believe that laboratory research and assessment procedures have been effective in enhancing environmental education among learners and educators?
     • Evidence: review of available laboratory publications; feedback from laboratory staff and educators; students' feedback on the application of results in their programs
     • Method: interview with laboratory staff, educators, and students
  2. Do educators and learners apply findings from laboratory processes and procedures in their programs?
     • Evidence: learners' comments; feedback from laboratory staff; educators' comments
     • Method: interview with laboratory staff, educators, and students
  3. How has the public embraced research findings from the laboratory?
     • Evidence: evidence from published materials; comments from educators and laboratory staff; students' feedback
     • Method: interview with laboratory staff, educators, and students

Table 4: Term of Reference 4 – Improving aspects of the program.

  1. Please suggest how we can improve the following programs of environmental education:
     1. Entrenchment of the environmental education program in the national curriculum
     2. Games and activities directed to K-12 students
     3. Fellowship programs for post-graduate and college students
     4. The program of campus eco-ambassadors who push for conservancy
     • Evidence: analysis of the curriculum for critical but missing aspects; comments from learners, educators, and laboratory staff; review of laboratory study results; review of past studies in the field
     • Method: interview with students, laboratory staff, and educators; focus group discussion

Validity

The evaluator shall safeguard against possible bias and threats to the outcomes of the evaluation. A valid indicator must measure what the evaluator intends it to measure. There are a number of ways to ensure a valid study, but they all focus on threats and bias (Wolcott, 1994). The evaluator shall rely on the design of the evaluation instruments, which must measure a "specific purpose with a specific group of research participants" (Steinberg et al., 2010).

The evaluator shall measure the impacts of environmental education on EPA programs. From this perspective, the evaluator shall ensure that the study instruments and their scores match the theoretical concepts of the logic model, since the model promotes a multidimensional view of program development for environmental studies. The evaluation tools shall therefore account for the study process, advocacy among eco-ambassador students, laboratory outcomes, and K-12 students' participation in games and other activities. These indicators must show the impacts of the educational programs on the environment against the EPA mission.

The evaluator would ensure that the study instruments are clear, use simple terms, and avoid oversimplification, because the objective of the evaluation is to collect thorough and specific data on the study questions. Content validity will therefore assess the extent to which "the scale items are a representative sample of a clearly defined conceptual domain" (Steinberg et al., 2010).

The evaluator would also establish concurrent validity (Trochim, 2006) by contrasting scores on outcomes with an independent assessment of the curriculum against the EPA mission. The objective is to establish the association between the validity criteria and the scale in theory. For example, the study may measure the application of laboratory results and link it concurrently to learners' real achievements, because these two aspects are practically related.

The evaluator shall systematically choose study participants from among educators, learners, and laboratory staff in order to avoid sampling bias. Researchers have established that bias exists, particularly in self-reporting, and it can affect the final evaluation conclusions. The evaluator shall therefore strive to minimize self-reporting bias by using a focus group in instances where a high level of knowledge is necessary, and by using only participants with professional training and qualifications in environmental studies, teaching, and training. As a result, the evaluator would reduce social desirability bias among educators and other participants.

The cyclical and emergent qualities of "qualitative research may make it difficult to differentiate between data collection and analysis" (Steinberg et al., 2010). On this challenge, Patton (2002) observes that in the course of fieldwork, ideas develop, patterns assume their directions, and possible patterns become obvious. The evaluator may rely on the stages below to corroborate patterns discovered earlier in the evaluation.

The evaluator shall show the credibility of the evaluation outcomes by making sure that the findings are consistent with the responses from study participants. The evaluator would also avoid personal bias and establish his credibility by disclosing any personal or professional interests that could affect the findings of the study.

The evaluator would make sure that the evaluation results meet transferability requirements by providing direct assumptions and descriptions of the processes. Further, the evaluator shall make sure that readers can draw their own conclusions concerning the evaluation results in any given context (Patton, 2002).

The evaluator shall provide an account of "the stable, consistent elements of study findings and the contextual changes that will take place during the study" (Steinberg et al., 2010) to show the dependability of the study. The evaluator will also provide a detailed explanation of the study methodology to allow other evaluators to apply his approach in another setting.

The evaluator would provide a detailed description of the evaluation processes so that others can confirm the evaluation findings. In this respect, the evaluator will use the five ways Patton (2002) provides for the confirmability of a study:

  • Creating and reviewing various assumptions and conclusions
  • Finding and analyzing unusual findings that do not reflect earlier knowledge obtained
  • Triangulating by using many approaches, sources, or concepts to show consistency in results
  • Ensuring that evaluation approaches and gathered data are in context, accounting for design challenges and their possible effects on data analysis and evaluation outcomes
  • Analyzing emerging problems and developing suitable approaches for further studies

The information and the sources of information

The national curriculum on environmental studies shall provide valuable information from which the evaluator can develop indicators. Data from the curriculum shall form the basis for evaluating the programs and their impacts on the EPA mission.

Research publications shall allow the evaluator to gauge outputs based on activities and experiences from the field and the laboratory. On the same note, laboratory staff shall provide accounts of how laboratory processes and assessments have influenced environmental education among students and the public.

Educators shall provide information on implementation, inputs, outputs, and impacts of the programs on the learners and the environment.

All students would provide information based on their programs. From such data, the evaluator shall be able to understand how environmental education has affected students in both theoretical and practical terms, accounting for students' behaviors, attitudes, knowledge, and skills concerning the environment. Overall, the evaluator must focus on information and its sources with regard to EPA's mission of a clean and healthy environment.

The evaluator will select institutions that may provide in-depth information regarding environmental education.

Credibility of interviews and focus groups

  • The evaluator shall develop familiarity with the area of evaluation through prolonged engagement with respondents
  • Triangulation of both focus group and interviews shall ensure credibility of data collected
  • Only willing participants shall take part in the study
  • Evaluator shall use iterative questioning to detect anomalies or ‘lies’
  • Recognition of challenges in the study design and their potential effects
  • The evaluator shall ensure that findings can be generalized to other areas
  • Evaluator shall ensure accuracy of measurement instruments

The measures that will be developed

The evaluator shall provide the background account, objectives, and questions of the study. He will also determine the audience, their background, and study levels (Gotch & Hall, 2004). The evaluator will use purposive sampling in this study (Cordano, Welcomer & Scherer, 2003). Steinberg and fellow authors observe that qualitative researchers “purposefully sample the specific documents, artefacts, people, and programmes that are illuminative and that they believe will provide a great deal of information related to the purpose of the research” (Steinberg et al., 2010).
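
As a minimal sketch of how purposive sampling might be operationalized (the participant records and screening criterion below are hypothetical), the evaluator could filter a candidate roster against the professional-training criterion described under Validity:

```python
# Hypothetical participant records; in practice these would come from
# institutional rosters of educators, learners, and laboratory staff.
candidates = [
    {"id": 1, "role": "educator", "trained_in_env_studies": True},
    {"id": 2, "role": "K-12 student", "trained_in_env_studies": False},
    {"id": 3, "role": "laboratory staff", "trained_in_env_studies": True},
    {"id": 4, "role": "eco-ambassador", "trained_in_env_studies": True},
]

# Purposive sampling: keep only candidates who meet the stated training
# criterion, the group expected to be most information-rich.
sample = [c for c in candidates if c["trained_in_env_studies"]]
print([c["id"] for c in sample])  # -> [1, 3, 4]
```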

The evaluator shall generate measures or questionnaires for the evaluation, drawing on content from the literature review and the national curriculum on the environment to develop effective measures (Zimmermann, 1996).

The evaluator shall link the study instruments with the study purpose, i.e., to measure how successfully the environmental education program provided by the EPA creates public knowledge and awareness of prevailing environmental issues and problems. The evaluator shall define what the study shall measure, such as learners' knowledge, skills, attitudes, opinions, behavior changes, and perceptions in relation to the environment (Vaske & Kobrin, 2001; Mayer & Frantz, 2004). The evaluator will also define his study variables at this stage (Morrone, Mancl & Carr, 2001).

The evaluator shall use a Likert scale to quantify some aspects of the study instruments.
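
As a brief sketch of how Likert responses might be quantified (the 1-5 coding below is a common convention, assumed here rather than specified by the evaluation):

```python
# Conventional 1-5 coding for a five-point Likert item (assumed, not prescribed).
LIKERT = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

# Hypothetical responses to one questionnaire item.
responses = ["agree", "strongly agree", "neutral", "agree", "disagree"]
scores = [LIKERT[r] for r in responses]
print(f"Item mean: {sum(scores) / len(scores):.2f} on a 1-5 scale")
```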

The evaluator shall establish the validity of the questionnaire by reviewing the instruments with a panel of experts in the field of study and conducting a test (Norland, 1990). The evaluator shall ensure that the questionnaire is valid, i.e., that it will "measure what it is intended to measure" (Norland, 1990). It must also represent the study content, be appropriate for the various participants, and be comprehensive enough to gather sufficient data for the study (Clayton & Opotow, 2003; Leeming, Dwyer & Bracken, 1995).

The evaluator must also obtain approval from the Institutional Review Board (IRB) and ensure that the study meets all ethical standards (Pimple, 2006). He will then conduct a test with different participants and make changes to the questionnaire based on the test outcomes (Norland, 1990).

The evaluator shall use pilot test results to establish the reliability of the study questionnaires. The aim is to ensure that the questionnaire is consistent with what it is designed to measure (Thompson & Barton, 1994). Overall, the evaluator shall follow "appropriate and systematic procedures in questionnaire development, testing, and evaluation to avoid undermining the quality and utilization of data" (Esposito, 2002).
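
The source does not name a specific reliability statistic; one common choice for checking the internal consistency of pilot questionnaire data is Cronbach's alpha, sketched below with hypothetical scores:

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = item_scores.shape[1]                         # number of items
    item_var = item_scores.var(axis=0, ddof=1)       # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_var.sum() / total_var)

# Hypothetical pilot data: 5 respondents x 4 Likert items.
pilot = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 4, 5],
])
# Values above roughly 0.7 are often taken as acceptable consistency.
print(f"alpha = {cronbach_alpha(pilot):.2f}")
```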

The data collection methods

The evaluator shall use interviews to collect information from participants because they are an effective method for exploratory qualitative research. He will carry out the interviews in person. Interviews offer the researcher the chance to collect in-depth information. The study shall use open-ended questions, and the evaluator shall make sure that respondents are "free to say anything" (Steinberg et al., 2010). The evaluator will record responses with a tape recorder and afterwards "transcribe data for analysis" (Patton, 2002).

The study would also rely on focus groups to collect data. A focus group offers the advantages of interaction among respondents and group review of comments, and the evaluator shall guard against problems of uneven contribution. In addition, focus group discussion saves data collection time because members contribute as a team. However, Steinberg and colleagues point out that focus groups have some challenges: certain members may dominate and keep others from active participation, and in some instances the design may be time-consuming because of comprehensive content and subsequently lengthy discussions.

The evaluator shall keep data safe. All collected data shall remain confidential. The results of the study shall be published in the evaluation report. The evaluator shall destroy data after three years.

The methods of data analysis

The evaluator shall clean the collected data and determine their accuracy. He will then code the data into emerging patterns or themes based on participants' feedback.

The evaluator shall use both quantitative and qualitative methods to analyze the collected data (Bazeley, 2002). For quantitative analysis, he will use frequency counts; SPSS software is suitable for this process (Pallant, 2005). For qualitative analysis, he will apply methods such as content analysis to review the environmental curriculum and to identify themes and patterns (Miles & Huberman, 1994). Content analysis is the hallmark method in "social science methodology that deals with the understanding of the content of the message" (Steinberg et al., 2010).
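
The source names SPSS for the frequency analysis; as an illustrative stand-in, the short pandas sketch below produces the same kind of frequency table, alongside a simple tally of content-analysis codes (the data and theme labels are hypothetical):

```python
from collections import Counter

import pandas as pd

# Quantitative: frequency counts of responses to a coded survey item.
answers = pd.Series(["yes", "no", "yes", "yes", "unsure", "no", "yes"])
print(answers.value_counts())

# Qualitative: tally the thematic codes assigned to interview excerpts
# during content analysis.
codes = ["curriculum", "games", "curriculum", "fellowship", "games", "curriculum"]
print(Counter(codes).most_common())
```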

Table 5: Analysis Plan.

Term of Reference 1 – How would you comment on the overall success of the program in the following areas?

  1. Entrenchment in the national curriculum in the teaching institution
  2. The level of practical involvement of games and activities directed to K-12 students (e.g., how many times per week or month)
  3. Areas of success of the fellowship program offered to selected post-graduate and college students
     • Data analysis method (items 1-3): quantitative (frequency counts)
     • Answering the question: charts, tables, or figures for comparison against targets
  4. The quality of educational publications directed to researchers and the public
  5. On-campus eco-ambassadors who push for conservancy in their respective schools and learning places
  6. Laboratory research and assessment procedures
     • Data analysis method (items 4-6): qualitative (content analysis)
     • Answering the question: in-depth account of the environmental education programs against the EPA mission

Term of Reference 2 – Achievements for all students under the various identified programs:

  1. How have students used the content of the national curriculum to change the environment?
  2. How effectively have K-12 students used games and other activities to enhance public awareness of a healthy and clean environment?
     • Data analysis method (items 1-2): qualitative (content analysis)
     • Answering the question: in-depth account of the environmental education programs against the EPA mission
  3. To what extent have fellowships targeting post-graduate and college students been effective for the EPA mission?
  4. How have campus eco-ambassadors used their program to enhance EPA goals?
  5. Have students effectively used educational publications to change the environment?
     • Data analysis method (items 3-5): quantitative (frequency counts) and qualitative (content analysis)
     • Answering the question: levels of achievement; the role of participants in the EPA mission
  6. To what extent do students use laboratory processes or results to create environmental awareness?
     • Data analysis method: quantitative (frequency counts)
     • Answering the question: charts, tables, or figures for comparison against targets

Term of Reference 3 – The effectiveness of laboratory work and assessments:

  1. Have laboratory research and assessment procedures been effective in enhancing environmental education among learners and educators?
     • Data analysis method: qualitative (content analysis)
     • Answering the question: in-depth account of the environmental education programs against the EPA mission
  2. Do educators and learners apply findings from laboratory processes and procedures in their programs?
     • Data analysis method: quantitative (frequency counts)
     • Answering the question: charts, tables, or figures for comparison against targets
  3. How has the public embraced research findings from the laboratory?
     • Data analysis method: qualitative (content analysis)
     • Answering the question: in-depth account of the environmental education programs against the EPA mission

Term of Reference 4 – Improving aspects of the program:

  1. Please suggest how we can improve the following programs of environmental education: entrenchment of the environmental education program in the national curriculum; games and activities directed to K-12 students; fellowship programs for post-graduate and college students; and the program of campus eco-ambassadors who push for conservancy.
     • Data analysis method: qualitative (content analysis)
     • Answering the question: in-depth suggestions for improving the EPA education programs

References

Bazeley, P. (2002). Computerized data analysis for mixed methods research. Thousand Oaks, CA: Sage.

Clayton, S., & Opotow, S. (2003). Identity and the Natural Environment: The psychological significance of nature. Cambridge, MA: MIT Press.

Cordano, M., Welcomer, S.A. & Scherer, R.F. (2003). An analysis of the predictive validity of the New Ecological Paradigm Scale. The Journal of Environmental Education, 34(3), 22-28.

Creswell, J. W. (2008). Research Design: Qualitative, Quantitative, and mixed methods approaches (2nd ed.). Thousand Oaks, CA: Sage Publications.

EPA. (2013). EPA History. Web.

Esposito, J. (2002). Interactive, multiple-method questionnaire evaluation research: A case study. Charleston, SC: QDET.

Gotch, C. & Hall, T. (2004). Understanding nature-related behaviors among children through a Theory of Reasoned Action approach. Environmental Education Research, 10(2), 157-177.

Howe, K. R., & Eisenhardt, M. (1990). Standards for qualitative (and quantitative) research: A prolegomenon. Educational Researcher, 19(4), 2-9.

Johnson, B., & Christensen, L. (2010). Educational Research: Qualitative, Quantitative and Mixed Approaches (4th ed.). Thousand Oaks, CA: Sage.

Leeming, F.C., Dwyer, W.O. & Bracken, B.A. (1995). Children’s Environmental Attitude and Knowledge Scale: Construction and validation. The Journal of Environmental Education, 26(3), 22-31.

Mayer, F.S. & Frantz, C.M. (2004). The connectedness to nature scale: A measure of individuals’ feeling in community with nature. Journal of Environmental Psychology, 24, 503-515.

McCawley, P. (n.d.). The Logic Model for Program Planning and Evaluation. Web.

Miles, M. B., & Huberman, M. A. (1994). Qualitative Data Analysis: An Expanded Sourcebook. London: Sage Publications.

Morrone, M., Mancl, K. & Carr, K. (2001). Development of a metric to test group differences in ecological knowledge as one component of environmental literacy. The Journal of Environmental Education, 32(4), 33-42.

Norland, E. (1990). Controlling error in evaluation instruments. Journal of Extension, 28(2), 1.

Pallant, J. (2005). SPSS Survival Manual. Sydney: Ligare.

Patton, M. Q. (2002). Qualitative research and evaluation methods (3rd ed.). Newbury Park, CA: Sage.

Pimple, K. (2006). Protection of human subjects in non-biomedical research: A tutorial. Web.

Steinberg, K., Bringle, R., & Williams, M. (2010). Service-learning research primer. Scotts Valley, CA: National Service-Learning Clearinghouse.

Thompson, S. & Barton, M.A. (1994). Ecocentric and anthropocentric attitudes toward the environment. Journal of Environmental Psychology, 14, 149-157.

Trochim, W. (2006). The Research Methods Knowledge Base (2nd ed.). Cincinnati, OH: Atomic Dog Publishing.

Vaske, J. J., & Kobrin, K. C. (2001). Place attachment and environmentally responsible behavior. The Journal of Environmental Education, 32(4), 16-21.

W.K. Kellogg Foundation. (2004). Logic Model Development Guide. Michigan: W.K. Kellogg Foundation.

Wolcott, H. (1994). Transforming qualitative data: Description, analysis, and interpretation. Thousand Oaks, CA: Sage.

Zimmermann, L. (1996). The development of an environmental values short form. The Journal of Environmental Education, 28(1), 32-37.
