Psychological tests that assist in the admission process are widely used in teaching institutions, clinics and hospitals, and industrial-organizational firms. Among the factors to consider in creating a test for such a purpose, the reliability and validity of test results are of pronounced importance.
A test is defined by Anastasi (1988, as cited in Domino and Domino, 2006) as an objective and standardized measure of a sample of behavior. Psychological tests can be used at schools and universities prior to admission in order to evaluate intelligence, individual abilities (according to the level of education), and possible behavior disorders. In recruitment for organizations, these tests are used mainly for the purposes of selection and classification. In clinics and hospitals, they may be used to diagnose a disorder properly and to design a specific treatment (Kazdine, 2006).
Factors to consider in creating a test to assist in admission
Two basic questions define the aim of such a test: what does it measure, and how does it measure it? (Burke, 1995). First, the purpose of the test should be established. Second, a long list of candidate questions should be drafted, two to three times as many as are actually needed. These questions should then be given to two groups of people so that the responses can be compared and statistically analyzed. Internal consistency, as a measure of reliability, can be maximized by determining the correlation between each item in the test and the sum of the other items (item subscales are to be analyzed separately). Finally, the final draft of the designed test is tried out on a representative sample.
Test validity and reliability should be achieved. Such a test should enable the examiner to determine the abilities and skills, personality, interests, attitudes, and possible disorders of the examinee. Before constructing a test, the examiner should decide what to measure and how to elicit it: should the test be written, scored on a rating scale, or answered on a yes/no basis, and does the construct of interest have one dimension or multiple dimensions (e.g. intelligence) (Domino, 2007)?
Reliability of the test
Test reliability is the consistency of measurements when the testing procedure is repeated on a population of individuals or groups. Reliability indicates the degree to which generalizations can be made beyond the specific testing event and quantifies the confidence that can be placed in the score assigned to any performance (American Educational Research Association, 1999). To obtain a reliable test, the following should be fulfilled:
- Inter-observer reliability: Results are consistent among testers or coders who are rating the same information. A common rule of thumb is to measure agreement with the inter-observer reliability coefficient: if total agreements / total observations > 0.80, the data have acceptable inter-observer reliability.
- Test-retest: A measure at two different times with no treatment in between should yield the same results.
- Parallel forms: Two tests of different forms that supposedly test the same material should yield the same results.
- Split-half reliability: If the items are divided in half (e.g. odd versus even questions) the two halves give the same result.
For all forms of reliability, a quantitative reliability coefficient can be computed; it should be 0.80 or higher. The coefficient can, however, be lower for group averages, because individual scores vary. All reliability data are obtained by repetition.
Validity of the test
A test is valid when it measures what it was designed to measure. How valid a test is depends on its purpose. Reliability is a prerequisite for measuring validity. A proper test should satisfy several types of validity; these are:
- Face validity: Does the test appear to measure what it is supposed to measure? Face validity will be low if the examiner deliberately disguises the test's intentions.
- Content validity: Whether the full content of a concept’s definition is included in the measure. The test should include a broad sample of what is being tested, emphasize important material, and require appropriate skills. A conceptual definition can be thought of as a space containing the ideas and concepts the measure must cover.
- Criterion validity: Is the measure consistent with what is already known and what is expected? There are two subcategories: A) Predictive validity: the measure predicts a known association between the construct being measured and something else. B) Concurrent validity: the measure is associated with pre-existing indicators that already measure the same concept.
- Construct validity: The measure relates to a variety of other measures as specified in theory.
- Discriminant validity: The measure does not associate with constructs to which it should not be related.
Construct and criterion validity sometimes seem to overlap; what matters is that scores on the measure behave as expected in relation to other measures. Techniques to ensure validity are:
- The best-approach method, in which content and criterion validity assessment data are incorporated. It is used at the stage of developing the test.
- The comparison method: Comparing the results of the designed test to those of a known test. It is used to validate the end result of the suggested test.
- The averages method, which is used to validate individual items of the test.
There is no right or wrong test, only a process for the appropriate and meaningful use of test instruments for a well-defined purpose. A test that is well designed and fulfills the criteria of reliability and validity will assist in an admission process.
References
Domino, G. and Domino, M. (2006). Psychological testing: An introduction. Part 1. Cambridge University Press. P. 1. Web.
Kazdine, P.K.S. (2006). Psychological testing. An article from Funk and Wagnalls Encyclopedia. World Almanac Education Group. Web.
Burke, E. (1995). Psychological testing: A user’s guide. The Steering Committee on Test Standards, The British Psychological Society.
Domino, G. (2007). Psychological testing: An introduction. Part 2. Cambridge University Press. Pp.6-15.
American Educational Research Association, American Psychological Association, and National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association. Pp. 25-27.
Georgetown University, Department of Psychology. Validity and reliability. Web.