Content Summary
The article deconstructs the prevailing assumptions about what counts as scientifically representative and acceptable research. It argues that the requirements relevant to the arts and humanities differ from those of the hard sciences. The goal of the study is to build a strong case for using contextualized data as evidence in arts and humanities research assessment rather than relying on standardized criteria. Researchers in the arts and humanities should be encouraged to think imaginatively about the kinds of data they might create to demonstrate the worth of their work instead of depending on predefined measures. Because the current system tends to translate social and cultural benefits into commercial terms, arts and humanities scholars are understandably wary of any attempt to force metrics upon them. When applied to curriculum implementation, this humanistic and less standardized approach focuses on delivering personalized options to as many individuals as possible. It assesses the complex context at hand even though no fully accurate metric system is available. Overall, this approach is consistent with the broader humanistic orientation of research in the field.
Response
I believe that humanistic evaluation, like research within the humanities in general, is often perceived unfairly. In the current understanding of the term, the humanistic approach prioritizes the personal over the mathematical. Its data are considerably more qualitative and sometimes lack solid metrics, so I understand why certain scholars view it skeptically. And yet, I believe that the qualitative nature of such analyses allows them to stand out and, in many common scenarios, to perform better than their scientific counterparts. The reason is that life offers many different viewpoints, whereas quantitative data delivers only one. Humanistic evaluators are interested in learning about the complexity of the environment they are evaluating. This entails interviewing people and holding focus groups to record their stories. Observation is used to note what is happening in the classroom rather than to count frequencies. Once again, the rejection of quantitative metrics is notable and often allows a personal connection with the class to be established.
The smaller sample size it requires is one of the primary drawbacks of the purely humanistic approach, making it harder to sustain at scale. It is not feasible to interview 400 students, but it is possible to interview 20. A scientific evaluator, on the other hand, should have no trouble surveying 400 pupils. Furthermore, there are concerns about the objectivity of the results. Because qualitative data is interpreted by the researcher, their own point of view may color what they report of the respondents' perspectives. Scientific techniques, by contrast, are more objective, since the results are processed and reported by a computer. I would nevertheless argue that the humanistic approach is superior in individual cases, particularly for teachers aiming to build lasting and respectful relationships with their pupils. It treats every person in the class as a special case rather than as a code number stripped of most of their characteristics. Still, the benefits of the scientific approach should not be discounted, particularly for large-scale educational research. As is frequently the case, a compromise between the two would be the ideal resolution of the dilemma.
Bibliographical Information
Thelwall, M. and Delgado, M.M. (2015), “Arts and humanities research evaluation: no metrics please, just data”, Journal of Documentation, 71(4), pp. 817-833.