Questionnaires Creation: Validity and Reliability Report (Assessment)


Introduction

Validity and reliability are integral parameters in the creation and development of research instruments, particularly questionnaires. For a research instrument to collect appropriate data, it must meet the criteria of validity and reliability. Essentially, validity is the capability of a research instrument to measure the specific construct of interest, whereas reliability is the consistency and stability of a research instrument (Davenport, Davison, Liou, & Love, 2015). Determining the validity and reliability of a research instrument is a rigorous process that entails numerous steps. Since special education assesses diverse constructs, Rumrill (2014) recommends instruments with high validity and reliability for gauging them. Researchers usually employ reliability tests, such as inter-rater reliability, internal consistency, and test-retest reliability, and validity tests, such as face validity, content validity, convergent validity, and discriminant validity. Depending on the nature of the research and the type of research instrument, studies in the realm of special education employ different methods of evaluating validity and reliability. To illustrate these methods, this assessment examines the steps that different authors followed in their works.


The Active Empathic Listening Scale

In their study, Kourmousi et al. (2017) assessed the validity and reliability of the Active Empathic Listening Scale (AELS) in measuring active empathic listening among Greek teachers. The first step in the assessment of validity and reliability entailed the creation of a questionnaire with items derived from demographic questions, the Active Listening Attitude Scale (ALAS), and the AELS. Overall, the AELS comprises 11 Likert items on a seven-point scale, which measure the ability of Greek educators to listen actively and empathically. According to Kourmousi et al. (2017), the AELS has subscales that measure processing ability (three items), sensing ability (four items), and responding ability (four items). The ALAS has 31 Likert items on a four-point scale reflecting the level of agreement with statements. Similarly, the ALAS has three subscales: listening skill with 11 items, listening attitude with 13 items, and conversation opportunity with seven items.

The data were collected from 3,955 Greek educators (1,108 males and 2,847 females) who participated in the study. The researchers determined the construct validity of the AELS using exploratory factor analysis and confirmatory factor analysis (Kourmousi et al., 2017). Construct validity measures the extent to which the Likert items in the AELS gauge the ability of Greek teachers to listen actively and empathically to their students. In undertaking the exploratory factor analysis, the researchers selected principal component analysis as the extraction method and set thresholds of 0.4 for significant factor loadings and 1.0 for eigenvalues. Furthermore, the researchers used confirmatory factor analysis to establish whether the three extracted factors account for the variation in active empathic listening. The analysis demonstrated that the extracted factors explain 67.7% of the variance, and thus the AELS has high construct validity.
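
The article does not publish its analysis code, so the following sketch only illustrates the kind of extraction described above: a principal-component-based exploratory factor analysis with the reported thresholds (loadings of at least 0.4, eigenvalues above 1.0). The data, random seed, and variable names are hypothetical, and the exact way the authors applied the cutoffs is an assumption.

```python
# Sketch of a PCA-based exploratory factor analysis, as described for the AELS.
# Thresholds (0.4 for loadings, 1.0 for eigenvalues) follow the article's text;
# the data are simulated and the code is illustrative, not the authors' own.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical responses: 3,955 educators x 11 seven-point Likert items.
responses = rng.integers(1, 8, size=(3955, 11)).astype(float)

# Standardize items so the PCA effectively operates on the correlation matrix.
z = (responses - responses.mean(axis=0)) / responses.std(axis=0)

pca = PCA()
pca.fit(z)

# Kaiser criterion: retain components whose eigenvalue exceeds 1.0.
eigenvalues = pca.explained_variance_
retained = np.where(eigenvalues > 1.0)[0]

# Component loadings = eigenvectors scaled by sqrt(eigenvalue);
# loadings with absolute value below 0.4 are treated as non-significant.
loadings = pca.components_.T * np.sqrt(eigenvalues)
significant = np.abs(loadings[:, retained]) >= 0.4

print("Retained components:", len(retained))
print("Variance explained by retained components: "
      f"{pca.explained_variance_ratio_[retained].sum():.1%}")
print("Items loading on each retained component:\n", significant.astype(int))
```

With simulated data the retained components and explained variance will not match the 67.7% reported in the study; the sketch only shows how the two cutoffs interact during extraction.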

In assessing the reliability of the AELS, researchers measured the internal consistency using Cronbach’s alpha. The reliability test shows that all subscales of the AELS exhibit high reliability. Kourmousi et al. (2017) report that Cronbach’s values were 0.76 for processing ability, 0.82 for sensing ability, and 0.82 for responding ability. So, since Cronbach’s values are greater than 0.7, they indicate that AELS is a reliable research instrument. Zhou (2017) avers that Cronbach’s alpha that is greater than 0.7 does not only make the research instrument acceptable but also appropriate in the study of a given construct. Also, the correlation was employed in evaluating the extent to which subscales in AELS correlate with dimensions in ALAS. The existence of a positive correlation, which is statistically significant, confirms that AELS has robust construct validity.
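
For a k-item scale, Cronbach's alpha is computed as alpha = k/(k - 1) * (1 - sum of item variances / variance of the summed scale). The minimal sketch below shows this computation for one hypothetical subscale; the data, sample size, and function name are illustrative assumptions, not material from the study.

```python
# Minimal Cronbach's alpha computation for a single subscale (illustrative data).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix for a single subscale."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(1)
# Hypothetical 4-item "sensing ability" subscale on a seven-point scale:
# items share a common component so they correlate, as real items would.
base = rng.integers(1, 8, size=(3955, 1)).astype(float)
noise = rng.normal(0, 1.0, size=(3955, 4))
subscale = np.clip(np.round(base + noise), 1, 7)

# Values above 0.7 are conventionally read as acceptable reliability.
print(f"Cronbach's alpha: {cronbach_alpha(subscale):.2f}")
```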

The Inclusive Teachers Competency Questionnaire

Increasing reforms in the education sector and the need to meet the special needs of diverse students require teachers to provide inclusive education. In this view, Deng, Wang, Guan, and Wang (2016) undertook a study to develop and validate the Inclusive Teachers Competency Questionnaire (ITCQ), which measures inclusive competency among teachers in China. The first step in the assessment of the validity and reliability of the research instrument comprised the selection of appropriate variables that capture the competency of teachers. A pool of relevant Likert items was created through an extensive literature review of various electronic databases, such as Springer, ProQuest, ERIC, and PsycINFO. The keywords searched in the databases to improve content validity were standards, competency, teacher preparation, knowledge, skills, and efficacy (Deng et al., 2016). During the development of the ITCQ, two professors and four doctoral candidates assessed the face validity of the drafted ITCQ and suggested appropriate changes. The completed ITCQ had 40 Likert items on a five-point scale and demographic questions.

After the collection of data from teachers, item analysis and exploratory factor analysis were performed to determine the construct validity of the ITCQ. Item analysis excluded two of the 40 items, making the ITCQ a 38-item research instrument. Subsequently, exploratory factor analysis was undertaken to evaluate construct validity using principal component analysis as the extraction method. The exploratory factor analysis reduced the number of items in the ITCQ from 38 to 18: items that loaded on a single factor, clustered with related items, and obtained loading values greater than 0.50 were retained in the research instrument. Principal component analysis extracted four factors with eigenvalues greater than one from the 18 items that remained after the exclusion of 22 items. The scree plot and factor analysis confirmed that the construct validity of the 18 items is acceptable for predicting the competency of teachers.
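
The retention rule described above (a primary loading of at least 0.50 on a single factor, without cross-loading) can be sketched as follows. The loadings matrix, the 0.40 cross-loading ceiling, and the function name are assumptions made for illustration; the study reports only the 0.50 primary-loading rule, not this code.

```python
# Sketch of the item-retention rule described for the ITCQ: keep items whose
# largest absolute loading is at least 0.50 and that load cleanly on one factor.
import numpy as np

def retain_items(loadings: np.ndarray, primary=0.50, cross=0.40) -> np.ndarray:
    """Return indices of items meeting a primary-loading rule without cross-loading.

    loadings: items x factors matrix of (rotated) factor loadings.
    primary:  minimum absolute loading on the item's main factor (0.50 in the study).
    cross:    assumed ceiling for loadings on every other factor (not reported).
    """
    abs_l = np.abs(loadings)
    main = abs_l.argmax(axis=1)
    keep = []
    for i, f in enumerate(main):
        others = np.delete(abs_l[i], f)
        if abs_l[i, f] >= primary and (others < cross).all():
            keep.append(i)
    return np.array(keep)

# Hypothetical rotated loadings for 6 items on 4 factors.
example = np.array([
    [0.72, 0.10, 0.05, 0.12],   # retained: strong, clean loading on factor 1
    [0.48, 0.08, 0.11, 0.03],   # dropped: primary loading below 0.50
    [0.61, 0.55, 0.02, 0.10],   # dropped: cross-loads on factors 1 and 2
    [0.06, 0.09, 0.66, 0.14],   # retained
    [0.12, 0.70, 0.08, 0.05],   # retained
    [0.30, 0.28, 0.25, 0.31],   # dropped: no clear primary factor
])
print("Retained item indices:", retain_items(example))   # -> [0 3 4]
```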

The authors also established the reliability of the ITCQ by evaluating its internal consistency with Cronbach's alpha. Items with coefficients of 0.5 and above were retained and considered satisfactory; among the reported values, the highest is 0.89 and the lowest is 0.531 (Deng et al., 2016). Overall, the Cronbach's alpha of the whole scale was 0.95, which implies that the ITCQ has an excellent reliability coefficient. Thus, the reliability test shows that the ITCQ has reliable Likert items that explain a substantial level of variance and correlation.
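
The article reports the coefficients above without describing the item-level procedure. One common diagnostic for screening items in this way, shown here purely as an assumed illustration and not as the authors' method, is "alpha if item deleted": recomputing Cronbach's alpha with each item removed to see which items weaken the scale. The data and sample size are hypothetical.

```python
# "Alpha if item deleted": recompute Cronbach's alpha with each item removed.
# Illustrative simulated data; the study reports only the resulting coefficients.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                            / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(2)
# Hypothetical 18-item, five-point-scale responses from 500 teachers.
trait = rng.normal(0, 1, size=(500, 1))
items = np.clip(np.round(3 + trait + rng.normal(0, 0.8, size=(500, 18))), 1, 5)

overall = cronbach_alpha(items)
for i in range(items.shape[1]):
    reduced = np.delete(items, i, axis=1)
    # If alpha rises when the item is removed, the item weakens the scale.
    flag = "weakens scale" if cronbach_alpha(reduced) > overall else ""
    print(f"item {i + 1:2d}: alpha if deleted = {cronbach_alpha(reduced):.3f} {flag}")
print(f"overall alpha = {overall:.3f}")
```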


Functional Behavior Assessments and Intervention

Challenging behaviors, for example, aggression and stereotypy, are common among individuals with disabilities. Since teachers interact with children most of the time, they have the important duty of ameliorating challenging behaviors. The Skills and Needs Inventories in Functional Behavior Assessments and Intervention (SNI-FBAI) is a comprehensive questionnaire that measures the ability of teachers to manage challenging behaviors among children. The SNI-FBAI was used to collect demographic information (10 items), current skills (14 items), training needs (10-point scale), and preferred method of teaching (10-point scale) (Dutt, Chen, & Nair, 2015). The SNI-FBAI was used to gather data from 366 personnel who are experts in special education.

Content and face validity were used in the assessment of the validity of the SNI-FBAI. The researchers asked seven experts with doctoral-level education to review the SNI-FBAI and determine its content and face validity (Dutt et al., 2015). These experts extracted common themes and suggested changes to improve the validity of the research instrument. Additionally, the researchers requested 11 psychology trainees to review the SNI-FBAI and provide feedback to enhance the validity of the findings. Feedback from the experts and psychology trainees allowed the refinement of the SNI-FBAI and the improvement of its content and face validity. Factor analysis shows that the SNI-FBAI has sufficient construct validity because its loadings account for 54.14% of the variance. The internal consistency of the SNI-FBAI was evaluated using Cronbach's alpha. According to Dutt et al. (2015), the current skills inventory (13 items) and the current training needs inventory (6 items) have Cronbach's values of 0.91 and 0.81, respectively. Therefore, the reliability test demonstrates that the SNI-FBAI has high internal consistency, while the validity test shows that the SNI-FBAI has acceptable face and content validity.
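
The article describes the expert review qualitatively. A common way to quantify such relevance ratings, shown here only as an illustration and not as the authors' procedure, is the item-level content validity index (I-CVI): the proportion of experts who rate an item as relevant. The ratings below are invented for a seven-expert panel, and the 0.78 cutoff is the conventionally cited criterion for panels of this size.

```python
# Item-level content validity index (I-CVI): the proportion of expert reviewers
# who rate an item as relevant (3 or 4 on a 4-point relevance scale).
# Hypothetical ratings from seven experts; the study reports no such index.
import numpy as np

ratings = np.array([          # rows: 5 example items, columns: 7 experts
    [4, 4, 3, 4, 4, 3, 4],
    [3, 4, 4, 4, 3, 4, 4],
    [2, 3, 4, 3, 2, 3, 3],
    [4, 4, 4, 4, 4, 4, 3],
    [1, 2, 3, 2, 2, 1, 2],
])

relevant = ratings >= 3            # a rating of 3 or 4 counts as "relevant"
i_cvi = relevant.mean(axis=1)      # proportion of experts endorsing each item
s_cvi = i_cvi.mean()               # scale-level average of the item indices

for idx, value in enumerate(i_cvi, start=1):
    note = "acceptable" if value >= 0.78 else "revise or drop"
    print(f"item {idx}: I-CVI = {value:.2f} ({note})")
print(f"S-CVI/Ave = {s_cvi:.2f}")
```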

Conclusion

The analysis of the three articles examining the validity and reliability of research instruments provides meaningful insights. The three articles show that the development of a research instrument commences with the creation of the questionnaire and the evaluation of the content and face validity of its items. Experts review research instruments and provide feedback, which helps in their refinement to ensure that they have appropriate content and face validity. Subsequently, exploratory factor analysis and confirmatory factor analysis aid in the assessment of construct validity. Finally, Cronbach's alpha is the standard parameter used in the evaluation of the internal consistency of a research instrument.

References

Davenport, E., Davison, M., Liou, P., & Love, Q. (2015). Reliability, dimensionality, and internal consistency as defined by Cronbach: Distinct albeit related concepts. Educational Measurement: Issues and Practice, 34(4), 4-9. Web.

Deng, M., Wang, S., Guan, W., & Wang, Y. (2016). The development and initial validation of a questionnaire of inclusive teachers’ competency for meeting special educational needs in regular classrooms in China. International Journal of Inclusive Education, 21(4), 416-427. Web.

Dutt, A., Chen, I., & Nair, R. (2015). Reliability and validity of skills and needs inventories in functional behavior assessments and interventions for school personnel. The Journal of Special Education, 49(4), 233-242. Web.

Kourmousi, N., Kounenou, K., Tsitsas, G., Yotsidi, V., Merakou, K., Barbouni, A., & Koutras, V. (2017). Active Empathic Listening Scale (AELS): Reliability and validity in a nationwide sample of Greek educators. Social Sciences, 6(4), 1-13. Web.


Rumrill, P. (2014). Research in special education: Designs, methods, and applications. Springfield, IL: Charles C Thomas Publisher.

Zhou, C. (2017). Handbook of research on creative problem-solving skill development in higher education. Hershey, PA: Information Science Reference.
