Self-reported data always carries a significant risk of bias because of the subjective nature of the information a researcher aims to gather (namely, personal perceptions, estimations, and thoughts), as well as the individual characteristics of study participants. For instance, Araujo, Wonneberger, Neijens, and de Vreese (2017) note that when a person is more interested in a survey, he or she will likely strive to be more accurate when reporting his or her own behaviors.
It is also important to take into account the possibility of various cognitive and psychological biases linked to one's self-perceptions, one's relationship with the studied topic, the desire to adhere to social expectations and norms, and so forth (Demetriou, Essau, & Özer, 2015). Ideally, a data collection tool should minimize these risks of bias and account for as many factors affecting a person's responses as possible.
It may seem that questionnaires with open-ended questions would be the most accurate measures because they allow respondents to explain themselves fully. Nevertheless, the researcher may misinterpret data provided in free form, which can ultimately introduce greater inaccuracy. Thus, it is valid to assert that structured surveys with closed-ended, bipolar rating scales (for instance, a Likert scale) can increase the precision of responses.
Likert-scale questions require participants to match their responses, observations, or emotions to specific points on a scale (for example, from agree to disagree, or from 1 to 5). The researcher can construct such a survey so that it facilitates the interpretation of information by making answers more on-target, while still letting respondents evaluate the full spectrum of their experiences.
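One practical consequence of this design is that verbal scale points map directly to numbers, which makes structured responses easy to aggregate. The following is a minimal illustrative sketch (not part of the original study; the scale labels and function names are assumptions) of how a 5-point Likert item could be coded:

```python
# Illustrative sketch: encoding 5-point Likert answers as integers.
# The label set and scoring below are hypothetical, not from the essay.
LIKERT_5 = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def encode_responses(responses):
    """Map verbal Likert answers to their numeric scale points."""
    return [LIKERT_5[r.lower()] for r in responses]

answers = ["Agree", "Neutral", "Strongly agree", "Disagree"]
scores = encode_responses(answers)          # [4, 3, 5, 2]
mean_score = sum(scores) / len(scores)      # 3.5
```

Because every answer lands on a fixed point of the scale, the researcher avoids the free-form interpretation problem noted above.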
The researcher, however, should also pay close attention to the overall design of a self-report tool: the clarity of questions, the match between research purposes and questionnaire items, and so forth. For example, the question "How many times did you use your mobile phone to send or receive text messages/SMS in the last seven days?" is the least accurate in the survey because the word "or" makes it ambiguous: respondents may count sent messages, received messages, or both, and so can easily be confused.
For this assignment, respondents were recruited via Facebook and through personal contacts. Individuals who were expected to agree to participate were prioritized during sampling. The recruitment technique can therefore be regarded as non-probability convenience sampling. This method is frequently used in research because it allows researchers to find respondents quickly and easily; however, compared to probability (randomized) sampling, it is less representative of the whole population (Hanif, Shahbaz, & Ahmad, 2017).
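The difference between the two approaches can be sketched in a few lines. This hypothetical example (the population of 1,000 units is invented for illustration) contrasts convenience selection, which takes whoever is easiest to reach, with simple random sampling, where every member of the frame has an equal chance of selection:

```python
import random

population = list(range(1000))  # hypothetical sampling frame of 1,000 people

# Convenience sampling: take whoever is easiest to reach
# (here, simply the first 50 on the list, e.g. one's Facebook friends).
convenience_sample = population[:50]

# Simple random (probability) sampling: every member of the frame
# has an equal chance of being selected.
random.seed(42)  # fixed seed only so the sketch is reproducible
random_sample = random.sample(population, 50)
```

The convenience sample systematically over-represents the reachable part of the frame, which is exactly why its findings generalize poorly.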
In other words, convenience sampling (as well as the small sample size) increases the chance that the demographic characteristics of the participants are not diverse enough and, therefore, that the findings obtained by interpreting their answers cannot be generalized. At the same time, the generalizability of results is often considered an indicator of their credibility.
As expected, many of the self-reported responses did not match the objective logged data. Notably, the answers about behavior from a week earlier contained more errors than those about behavior from the previous day. In fact, none of the retrospective self-reported responses was accurate, whereas the "yesterday" group of questions had an accurate-to-inaccurate response ratio of 6 to 4.
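The accuracy comparison above amounts to counting exact matches between self-reports and logs. The sketch below illustrates that calculation with invented data; only the 6-to-4 and zero accuracy ratios come from the essay, while the individual (reported, logged) pairs are hypothetical:

```python
# Hypothetical (reported, logged) pairs for each question group.
# The values are invented; only the resulting ratios mirror the essay.
yesterday = [(3, 3), (10, 10), (1, 1), (5, 4), (0, 0),
             (2, 2), (7, 9), (4, 4), (6, 5), (1, 2)]   # 6 accurate, 4 not
last_week = [(20, 35), (5, 12), (15, 22), (8, 3)]      # 0 accurate

def accuracy(pairs):
    """Share of self-reports that exactly match the logged value."""
    matches = sum(1 for reported, logged in pairs if reported == logged)
    return matches / len(pairs)

accuracy(yesterday)  # 0.6  (the 6-to-4 ratio)
accuracy(last_week)  # 0.0  (no retrospective report was accurate)
```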
The main reason for this is the natural inability to remember everything over a prolonged period. Thus, it is possible to conclude that retrospective self-reports tend to be especially biased due to inherent cognitive limitations. Presumably, some participants were also not motivated enough to provide accurate answers and did not take the time to think thoroughly, so the objective and subjective results were mismatched due to this psychological factor as well.
Araujo, T., Wonneberger, A., Neijens, P., & de Vreese, C. (2017). How much time do you spend online? Understanding and improving the accuracy of self-reported measures of internet use. Communication Methods and Measures, 11(3), 173-190.
Demetriou, C., Essau, C. A., & Özer, B. U. (2015). Self-report questionnaires. Web.
Hanif, M., Shahbaz, M. Q., & Ahmad, M. (2017). Sampling techniques: Methods and applications. New York, NY: Nova Science.