Identifying Issues and Formulating Questions for Program Evaluation
Appropriate question formulation is the most important step in program evaluation, because the question establishes the measures by which the program will be assessed. Rossi et al. (2004) argue that decision makers and stakeholders should take the leading role in formulating questions. At the same time, they caution that program evaluation should not rest solely on stakeholders' points of view, since those views may be biased.
The format of an evaluation question should depend on the specific functions the question is meant to perform. The general logic of evaluation therefore covers the following aspects: establishing criteria of merit, constructing standards, measuring performance, and comparing performance against the accepted standards.
An evaluation question should be reasonable and appropriate, it should be answerable, and it should convey the performance criteria (Rossi et al., 2004). Beyond these characteristics, a number of other issues and techniques bear on formulating questions for evaluation programs.
Alvesson and Sandberg (2011) offer the problematization technique, which aims at coming up “with novel research questions through a dialectical interrogation of one’s own familiar position, other stances, and the domain of literature targeted for assumption challenging” (Alvesson and Sandberg, 2011, p. 252).
The problematization technique is one of the best means of formulating evaluation questions for a number of reasons. First, it corresponds to the principles discussed by Rossi et al. (2004).
In addition, it identifies a domain of literature, articulates the assumptions within the identified domain, evaluates those assumptions, develops alternative assumptions, relates the assumptions to the audience, and finally evaluates the alternative assumptions against the requirements of the evaluation program (Alvesson and Sandberg, 2011).
Key Concepts in Evaluation Research
Considering the key concepts in evaluation research, Berk and Rossi (1999) point to the following aspects: policy concerns, stakeholders, validity, effectiveness, and theories. Examining each of these concepts helps clarify the nature of evaluation research. Policy concerns derive from the information policymakers want the evaluation to provide.
Thus, evaluation research is organized around the questions that are in the focus of policymakers (issues and policies that remain in the public domain). The evaluation also attracts the attention of stakeholders interested in its outcomes, and those interests may vary with the nature of the research.
Another key concept in evaluation research is program effectiveness. Even when a program's goals are vague, evaluators can measure marginal effectiveness (the effect of the intervention), relative effectiveness (the contrast between the program and its absence), and cost-effectiveness (the cost per unit of outcome). The concept of validity concerns the credibility of the evaluation research itself.
Theory matters before programs are developed, when the evaluation design is formulated, and when the data are analyzed; it is therefore an important concept for evaluation research. Further concepts may also be included in evaluation research: a program's environment and its intended and observed outcomes.
The environment greatly affects all programs and the processes that occur in society; no program can escape the social tendencies of its environment. The comparison of the program's intended and observed outcomes should serve as the main hypothesis for evaluation.
This information helps predict evaluation results and compare the predictions with the results actually obtained, thereby assessing the evaluation's credibility (McDavid and Hawthorn, 2006).
Bounded Rationality and Evaluation Validity
According to Herbert Simon, bounded rationality refers to the limitations of human beings arising from the following factors: the inability to know everything and to foresee future consequences correctly, the inability to assess the worth of future decisions because their effectiveness and importance cannot be measured, and the inability to consider all alternative decision outcomes.
Together, these limitations make fully rational research impossible, which is why the notion of bounded rationality applies (Simon, Egidi, and Viale, 2008). It should also be noted that evaluation validity depends on the commitments of the program evaluator, which means that the attitude toward the evaluation is inevitably biased.
Moreover, the evaluation is shaped by the actions and ideas, research questions, and other specific choices made by the researcher. Research validity therefore rests on the choices of the person conducting the evaluation. No matter how unprejudiced and fair that person tries to be, bounded rationality is a concept that cannot be ignored.
Given the key concepts discussed above and the issues involved in forming an evaluation question, the researcher plays a dominant role in the evaluation outcome. A research evaluation program cannot be rational in the full sense of the word; there are always limitations and concerns that must be taken into account.
An evaluator’s approach to delivering an exhaustive evaluation with constructive recommendations is a product of personal experience and practice, an idea close to bounded rationality. The evaluator chooses the evaluation criteria, describes the program under study, and makes judgments on the basis of the information considered (Gigerenzer and Selten, 2002, p. 117).
Reference List
Alvesson, Mats and Jörgen Sandberg. 2011. Generating research questions through problematization. Academy Of Management Review 36(2): 247-271.
Berk, Richard A. and Peter Henry Rossi. 1999. Thinking about program evaluation. New York: SAGE.
Gigerenzer, Gerd and Reinhard Selten. 2002. Bounded rationality: the adaptive toolbox. Cambridge: MIT Press.
McDavid, James C. and Laura R. L. Hawthorn. 2006. Program evaluation & performance measurement: An introduction to practice. New York: SAGE.
Rossi, Peter H., Mark W. Lipsey, and Howard E. Freeman. 2004. Evaluation: A systematic approach. Thousand Oaks, CA: Sage Publications.
Simon, Herbert A., Massimo Egidi, and Riccardo Viale. 2008. Economics, bounded rationality and the cognitive revolution. New York: Edward Elgar Publishing.