Norm-referencing is a method of interpreting a student’s test performance by comparing it to that of other learners in a well-defined group who took the same test. Norm-referenced scores are typically derived from the raw scores of an assessment. In contrast, criterion-referenced interpretation describes a student’s test performance in terms of the domain of achievement sampled by the assessment. Converting a raw score to a percentage based on the total points available is a common criterion-referenced point of reference (Gronlund and Waugh, 2008); for example, a raw score of 42 on a 50-point test corresponds to 84 percent. The goal is to determine how well an individual has met the program’s objectives. In other words, the technique essentially seeks to answer the question, “How much of the intended learning did the learners accomplish?” Accordingly, such tests are administered either before or after instruction to determine how much a student understands, and they must be designed so that the relevant knowledge and skills can be assessed. Criterion-referenced grading is most appropriate when instructors have a well-defined domain of competence for students to master. The recent educational effort to establish state-level standards has put pressure on school districts to use these standards to develop precise learning goals in the classroom.
Norm-referenced scores can inform a variety of instructional decisions. The raw score is the score a student receives when a test is scored according to its directions; it is simply the number of questions a learner answers correctly on a classroom assessment (Gronlund and Waugh, 2008). To be interpreted, the raw score is compared with the scores of a norm group, which could include students in the same grade across the country as well as those with special needs or disabilities; norm-referenced assessments almost always use a nationwide peer group. The main purpose of these evaluations is to compare the performance of one individual to that of others in a predefined cohort. In other words, an educator can use the technique to determine how a pupil compares with others in a comparable group. Standardized exams such as the SAT are norm-referenced assessments. Their purpose is to rank examinees so that decisions can be made about their chances of success, such as college admission.
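To make the comparison concrete, the minimal Python sketch below computes a percentile rank, one common norm-referenced score, for a raw score against a hypothetical norm group. The function name, the tie-handling convention, and the sample scores are illustrative assumptions, not anything prescribed by Gronlund and Waugh (2008).

from bisect import bisect_right

def percentile_rank(raw_score, norm_group_scores):
    # Percent of the norm group scoring at or below the raw score
    # (one common convention; others count only half of the tied scores).
    ordered = sorted(norm_group_scores)
    at_or_below = bisect_right(ordered, raw_score)
    return 100.0 * at_or_below / len(ordered)

# Example: a raw score of 46 against a small, hypothetical norm group.
norm_group = [31, 35, 38, 40, 42, 44, 46, 47, 49, 52]
print(percentile_rank(46, norm_group))  # 70.0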
Criterion-referenced scores can likewise inform instructional decisions. Some test authors have supplemented multiple-choice items with open-ended performance tasks that reflect real-life situations, making standardized testing more useful for instructional purposes (Gronlund and Waugh, 2008). The percentage-correct score is one of the most common ways of reporting criterion-referenced scores; it indicates the proportion of items in a cluster or subtest that were answered correctly. The report may cover individual pupils, classes, schools, or the entire district (Gronlund and Waugh, 2008). As one framework for evaluating schools, percentage-correct scores for individual institutions are compared with national norm sets, so the approach has long been used to evaluate student achievement relative to that of a group of learners at the state or regional level. This strategy can be valuable in the classroom, but it must be used with caution.
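As a simple illustration, the short Python sketch below converts a count of items answered correctly into a percentage-correct score; the function and the 18-of-24 figures are hypothetical examples rather than values taken from the source.

def percentage_correct(items_correct, items_total):
    # Criterion-referenced score: share of the sampled item domain answered correctly.
    return 100.0 * items_correct / items_total

# Example: 18 of 24 items correct on a hypothetical subtest.
print(round(percentage_correct(18, 24)))  # 75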
A stanine is a normalized standard score that places a student’s performance in one of nine segments of the normal distribution. Stanines are always single-digit figures, have roughly equal units throughout the score scale, and do not suggest more precision than the assessment warrants (Gronlund and Waugh, 2008). Not everyone agrees that stanines should be used for norm-referenced interpretations; some argue that stanines are more difficult to interpret than percentile ranks.
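For illustration only, the Python sketch below converts a raw score into an approximate stanine using the common rule that the group mean maps to stanine 5 and each stanine spans half a standard deviation. The function name and the sample mean and standard deviation are assumptions; published tests assign stanines from their own norm tables rather than from this formula.

def stanine(raw_score, group_mean, group_sd):
    # Approximate stanine: scale the z-score so the mean maps to 5,
    # each unit spans half a standard deviation, and the result is clamped to 1-9.
    z = (raw_score - group_mean) / group_sd
    return min(9, max(1, round(2 * z + 5)))

# Example: a score one standard deviation above a hypothetical group mean.
print(stanine(60, 50, 10))  # stanine 7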
Reference
Gronlund, N. E., & Waugh, C. K. (2008). Assessment of student achievement (9th ed.). Pearson.