
Collecting Online Data With Usability Testing Software Essay

Updated: Sep 10th, 2022


The rapid changes taking place in all aspects of society are reflected in theoretical developments across various disciplines. In that regard, the emergence of new research methods can be seen as driven by changes in social life, in which new questions are posed for investigation. Accordingly, as new questions emerge, new evidence is required, the gathering of which leads to the emergence of new data collection techniques (Hesse-Biber & Leavy, 2008, p. 1). New forms of data collection and analysis are being conceived, and existing data collection methods are being adapted to study new literacies in several disciplines, including communications, media, education, and public relations. They are being adapted for different reasons, including consistency with the data collected. Although the internet can hardly be described as an emergent phenomenon, the growth of its social significance, as well as the emergence of new social interactions, can be considered a major factor in the emergence of new research methods.

The web environment is constantly changing, shifting from a read-only state toward read/write and cooperative states. An illustration of such a shift can be seen in the transition through Web 1.0, Web 2.0, and the emerging Web 3.0. In the attempt to keep up with these shifting technologies, and consequently literacies, new methods of data collection and analysis are constantly developing. The array of methods can be seen in a diverse set of qualitative and quantitative methodological approaches, ranging from comparative ethnographic studies to mixed approaches encompassing “surveys, interviews, diaries, focus groups, observation, documentary and historical analysis, and experimental or comparative studies” (Hesse-Biber & Leavy, 2008, p. 532). While the latter can be seen as a return to traditional methods of research, the newly proposed approaches imply a combination of methods, in which ethnography is one of the essential parts, e.g. Howard’s (2002) network ethnography, Hine’s (2000) connective ethnography, and Constable’s (2003) combination of online participation, observation, and exploration of context (cited in Hesse-Biber & Leavy, 2008, p. 533). In that regard, following the pattern of finding the most suitable strategy for studying the internet, this paper explores the development of participant observation, linked with usability testing, to conduct online research. Advantages, limitations, ethical considerations, and future directions are discussed.


The usage of participant observation as a method of conducting online research can be seen in the increased emphasis on the social aspect. The latter can be seen in virtual ethnography continuing to explore the complex connections between online and offline spaces (Fielding, Lee, & Blank, 2008, p. 258). The traditional model of ethnography, in its most characteristic form, involves participation, overt or covert, with the key activities being watching, listening, and collecting information that will “throw the light on the issues that are the focus of the research” (p. 259). In that regard, the acceptance of participant observation as a method of research in cultural anthropology implies that the significance of information collected through this method is no less than that of other techniques, such as interviewing, structured observation, and the use of questionnaires and formal elicitation techniques (DeWalt & DeWalt, 2002).

Challenges surround the observation and data collection of new uses of computers, and it is difficult to analyze findings using our current methods (Nixon, 2003). One of the essential factors of participant observation is attendance. In that regard, linking the traditional approach of participant observation to online research, it can be seen that attendance need not be interpreted in physical terms. The latter can be specifically outlined considering the purposes of attendance. The attendance of the observer is related to experiencing the setting as a participant, along with “the particular values and biases she/he brings to the setting (reflexivity)” (DeWalt & DeWalt, 2002, p. 68). In that regard, the virtual travel to the field site consists of “experiential rather than physical displacement” (Fielding, et al., 2008). An example of such virtual travel can be seen in taking active participation in the online community, an approach adopted in early studies of internet communications. With researchers taking the role of observers, they have used different observational methods, including the naked eye and video.

The proposed method to be combined with participant observation, usability testing, is used frequently in research studies where companies attempt to find out how well purchasers/users are able to use the product they intend to sell. It differs from beta testing in that products are tested before being put out on the market, whereas beta testing releases an early version of a product to users and gathers feedback, based on which companies hope to make changes. Usability testing has become much simpler and cheaper to conduct with new software developments. Laboratory settings are no longer necessary. Using screen capture software with its enclosed analytical components may be one way to capture and analyze data more productively, conveniently, and proficiently. Components of usability testing combined with screen capture software are quickly being included in participant observation of online social behaviors. Programs like TechSmith’s Morae software can be used to capture users working with computer software. Because of these new applications, usability testing software now holds great potential for conducting other types of research as well.

The following are the five characteristics of usability testing as described by Dumas and Redish (1999):

  1. The purpose of the test is to improve the usability of the product being tested.
  2. The participants are actual or potential users of the product.
  3. The participants engage in authentic tasks.
  4. The participants’ actions and words are recorded.
  5. The collected data is analyzed, problems are identified, and changes are recommended to address these problems.

Paralleling participant observation with usability testing, observation as virtual travel largely carries the same meaning, where observing and recording individual participants’ behaviors is a distinguishing characteristic of usability testing (Dumas & Redish, 1999, p. 24). Accordingly, doing real tasks in a real environment can be seen as a representation of participation, one that resembles actual participation to a larger extent than, for example, online surveys and interviews.

In conducting usability tests, the tasks assigned must be ones that participants would normally do in natural settings, such as home, school, or work. This means that the researcher must understand the user’s environment and the tasks that they may want to accomplish (Dumas & Redish, 1999). This, too, is true of participant observation research using usability testing software. Accordingly, the combination of usability testing with participant observation can be seen as a suitable approach to solving the dilemma of the latter, namely presence. In online ethnography, a shift can be observed in the status of the researcher as participant and as observer. Usability testing, on the other hand, regardless of whether the researchers are participants or observers, involves becoming the real user, whose quantitative experience can later be added to the qualitative aspect of reflective observation (Fielding, et al., 2008).

Even when choosing naturalistic settings like the World Wide Web to conduct research, researchers sometimes choose to modify the natural setting for their research purposes or take an existing setting and add boundaries to it that would not normally exist. For example, Lawless, Schrader, & Mayall (2007) disabled external links and advertisements on a website that they used for their research. They left active only links that pertained to the topic of interest.
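Pruning a page in this way could be automated on an offline copy of a study site along the following lines. This is only a rough sketch of the general idea, not the procedure Lawless et al. used; the function name, regex, and domain are illustrative assumptions, and the naive substring check would need tightening for a real study.

```python
import re

def disable_external_links(html: str, allowed_domain: str) -> str:
    """Replace off-domain anchors with their plain text, keeping on-topic links.

    Naive sketch: keeps any link whose href is relative or mentions the
    allowed domain; everything else is reduced to its visible text.
    """
    def repl(match: re.Match) -> str:
        href, text = match.group(1), match.group(2)
        if href.startswith("/") or allowed_domain in href:
            return match.group(0)  # keep internal / on-topic link intact
        return text                # drop the anchor, keep the visible text
    return re.sub(r'<a\s+[^>]*href="([^"]+)"[^>]*>(.*?)</a>', repl, html)
```

For example, running the function over a saved page with an off-topic advertisement link would leave the ad's text visible but no longer clickable, while topic links stay active.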

Much of usability testing focuses on identifying which features of a program users do not use, either because they are not aware of their existence or because their functionality is too difficult for users (Dumas & Redish, 1999). This is similar to what communication or public relations researchers may want to learn about the social behavior practices of the public, or what educational researchers may want to know about the digital practices of students. It is as important to know how participants are using the internet as it is to know how they are not using it. The latter closely resembles traditional participant observation strategies, ranging from pure observation to pure participation, in the natural setting of the observed phenomena. Reaction and interaction are viewed in similar contexts, although the questions posed might differ: in participant observation they are abstract, anticipating only a small proportion of situations (DeWalt & DeWalt, 2002, p. 17), while in usability testing they are more precisely formulated. Questions such as which features participants never use, which sites they do not visit, and which things they are not aware of, although similarly containing little prediction of situations, are narrowed down to the range of activities available on the internet.

Usability tests include visual observation of participants while they are using a product, auditory observation of participants commenting on their performance, and the compilation of questionnaire or interview data about their experiences (Dumas & Redish, 1999). Again, these practices are shared with participant observation.

In Dumas and Redish (1999), the authors state that usability tests are only effective if they manage to improve upon a particular product. Similarly, in participant observation, a researcher aims not only to report but also to expand knowledge by formulating new questions and new hypotheses; accordingly, the analysis and interpretation allow the creation of social constructs (DeWalt & DeWalt, 2002, p. 32), in which the description of relationships serves particular purposes.

With usability testing, usually only one user is tested at a time (Dumas & Redish, 1999). Intervention in usability testing, similarly to participant observation, is defined by the inclination of the researcher’s role: to participate or to observe. In participant observation, intervention is not a usual part of the research. The researcher does not intervene, because the goal is to find out what the user would naturally do next. Nevertheless, ethical considerations might imply intervening. Accordingly, a theoretical model by Nash defined researchers as participants with informants, as a justification for intervening when necessary (DeWalt & DeWalt, 2002, p. 30).

Similarly, in usability testing, active intervention is one technique that might be used when the researcher’s participation is necessary to solicit information about participants’ intentions and actions. This helps give the researcher insight into the cognitive processing of the participant (Dumas & Redish, 1999).

Nevertheless, the best method would be to allow the recording to happen with no intervention, and later to go through the video with the participants, at which point they verbally explain their process while the researcher probes them with appropriate questions. The latter, however, is a procedure that would require a high time commitment from both the researcher and the participants and, as a result, may not be worth the added advantage of having an untouched recording analyzed first. Moreover, relying on participants to recollect past decisions may not be as reliable as having them explain the process as they are experiencing it.

When conducting usability tests, it is important to know what can be measured. Performance measures can capture the behaviors and actions of users, whereas subjective measures can capture their views, feelings, and reasoning (Dumas & Redish, 1999). Performance measures provide quantitative data, while subjective measures can provide both quantitative and qualitative data. For example, information on beliefs and feelings can be collected using Likert-type scales and analyzed quantitatively. Some components that can be easily recorded and analyzed include the number of errors a user makes, the length of time it takes to complete a task, and the number of times a user gets frustrated.
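As a sketch of this distinction, performance measures such as time on task and error count can be computed directly from a session event log, while Likert ratings supply the subjective side. The log format and all values here are hypothetical, invented purely for illustration:

```python
from statistics import mean

# Hypothetical event log from one recorded session: (seconds, event_type).
events = [(0.0, "task_start"), (14.2, "error"), (41.5, "error"), (73.8, "task_end")]

# Performance measures: time on task and number of errors.
start = next(t for t, e in events if e == "task_start")
end = next(t for t, e in events if e == "task_end")
time_on_task = end - start
error_count = sum(1 for _, e in events if e == "error")

# Subjective measure: 5-point Likert ratings of perceived ease of use,
# collected by questionnaire but still analyzable quantitatively.
likert_ratings = [4, 5, 3, 4, 4]
mean_rating = mean(likert_ratings)
```

The same log could be extended with "frustration" events or per-subtask markers without changing the analysis pattern.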

For example, in one study, I used Morae software with a think-aloud protocol to capture adolescents’ school internet use. I was able to capture students’ comments and facial expressions by using the software and webcams, and gained a rich understanding of students’ views and feelings towards internet use for communication, socializing, and learning. Using the analytical and graphing tools of the software, I was also able to conduct quantitative analysis of many features, such as the time youth spent on social networks, the time it took them to accomplish a task, the average time they spent on web pages, and other representations of online social behavior.
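Aggregate measures of this kind could be derived from an exported page-visit log along the following lines. The log format and URLs are invented for illustration; the actual export formats of tools like Morae will differ:

```python
from collections import defaultdict
from urllib.parse import urlparse

# Hypothetical page-visit log: (url, seconds spent on the page).
visits = [
    ("https://socialnet.example/profile", 120),
    ("https://school.example/homework", 300),
    ("https://socialnet.example/feed", 240),
]

# Total time per site, e.g. time spent on a social network vs. schoolwork.
time_per_site = defaultdict(int)
for url, seconds in visits:
    time_per_site[urlparse(url).netloc] += seconds

# Average time spent per web page across the session.
average_per_page = sum(s for _, s in visits) / len(visits)
```

Grouping by domain in this way gives the "time on social networks" figure directly, while the per-page average characterizes overall browsing pace.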

Likewise, Donald Leu et al. (2009) conducted a study on the online reading comprehension of students using different types of think aloud protocols linked with screen capture software (Camtasia), albeit without the analytical tools. They imported their data into other analytical software and were able to analyze the recordings both qualitatively for affective data and quantitatively for statistical information. This research tool fits well within most disciplines interested in studying online behavior.


Advantages

One advantage of usability software is that it often comes with presentation and analysis components. Researchers can therefore use this software to conveniently organize and manage their data. The software also assists in creating graphs, charts, and other practical figures from the collected data. The latter can be helpful in both analysis and presentation. For example, video clips and graphs can be created and imported into presentation software to be shown later at conferences. Audience members can much more clearly appreciate and understand a particular phenomenon if they view it through a video clip rather than only being told about the occurrence. The visual example gives a lasting impression and shows that it is indeed a truthful observation. As a result, the validity of the data is increased.

Some regular problems that existed in the past with capturing data using a video camera have been solved with newly available technology. The following problems arose frequently with video cameras: difficulty viewing the computer screen as a result of light glare reflecting off the monitor, the angle of the shot distorting the image on the screen, and the participant’s head or body blocking the screen (Dumas & Redish, 1999). Other problems included capturing synchronized video of both the screen and the user’s facial expressions. Although those issues are now solved with screen capture software, other issues have arisen.


Limitations

One major limitation of usability testing paired with screen capture programs is the system requirements needed to run the software. New versions of operating systems and fast hard drives are often needed to run such programs. Without them, the researcher runs the risk of excluding participants whose personal or work computers do not meet the technical requirements. Furthermore, compatibility issues are likely to arise between the needed software and users’ computers. For example, Morae software is presently not compatible with Macintosh operating systems. This raises the concern of excluding Macintosh users from the research, or creates an unrealistic environment in which participants are recorded using computers they are not familiar with. Providing the needed equipment to those who do not meet the requirements would take the study a step closer to a laboratory setting, which is not advisable.

Another challenge relates to the potential cost of purchasing appropriate equipment and software, such as webcams and hard drives, as well as their installation on different computer systems. This can become costly and time consuming for both the researcher and the participant. The time commitment is further increased as the researcher must learn how to operate the software in terms of capture, analysis, and presentation. Participants must be taught to use a number of functions as well, e.g. recording using the software. During this process, audio levels must be adjusted so that the recording is clearly audible for accurate transcription and understanding. Moreover, updating the software might pose challenges in terms of compatibility with data that was already collected.

In fact, the greatest challenge I faced in a recent study was installing the software on the school’s computers. Even though I had received approval from the school district and the school principal, I still faced major challenges in getting clearance to install the software on school computers. In many institutions, technical work is not allowed to be done by outsiders, a restriction that applies not only to researchers but to the school’s own staff as well. The work must first be passed to the technical staff for testing and approval, and then for installation. The latter can be a lengthy process and can cause conflicts within the institution between the technical staff and the administrative body that originally granted the permission. Therefore, it is as important to get permission and recommendations from the technical staff of an institution as it is from the administrators, as union rules may be another obstacle.

Lastly, a major problem with capturing video data is the large amount of storage space it occupies on computers. This affects both the participant and the researcher. With screen capture software, the participant needs to save the recording on his/her computer hard drive or on an external drive. This takes up a large portion of space and can slow down the computer and other applications running on it. This could infringe on the participants’ computer use outside of the research, as well as on other computer users such as family members or colleagues. Therefore, it would be advisable for the researcher to provide an external drive to each participant on which to store the video data, which also makes it easier for researchers to remove the data from participants’ computers afterwards.

The researchers will face even greater space issues on their computers for two reasons: first, they will have to store all participants’ data, and second, they will be using the software for collecting, analyzing, synthesizing, and preparing presentations. To avoid space concerns, it would be worthwhile for the researcher to have a large external storage drive on which to store the videos and from which to work when analyzing the data and creating presentations.
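A back-of-envelope estimate makes the scale of the storage problem concrete. All of the figures below are assumptions chosen for illustration; actual recording sizes depend heavily on resolution, frame rate, and codec:

```python
# Rough storage estimate for a study's screen-capture recordings.
mb_per_minute = 10            # assumed recording size; varies with settings
participants = 20
sessions_per_participant = 3
minutes_per_session = 45

total_minutes = participants * sessions_per_participant * minutes_per_session
total_gb = total_minutes * mb_per_minute / 1024   # ~26 GB for this scenario
```

Even under these modest assumptions, a single study accumulates tens of gigabytes before any analysis copies or derived clips are made, which is why a dedicated external drive is advisable.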

Ethical Considerations

Researchers may face several challenges while observing participants in online environments. They become invisible lurkers, as the people communicating with the participants will not be aware of their presence. Incidental data will be collected, and consent cannot be secured from participants’ “friends” or random people who happen to navigate onto the page/site/network being studied. Should participants send a message to everyone on their email, IM, or social network contact list to let them know that they are partaking in the study and that, if contacted, they too may be recorded? This may be a critical step when observing through programs like Skype or Facebook, where even faces and voices can be recorded and recognized. In such instances, should the recording be turned off to preserve ethics? Leander (2008) allowed participants to review and delete portions of the screen capture data that they did not want to share with the researchers. However, it can be assumed that it would be an unrealistic task to contact everyone who falls into the incidental data.

Social relationships may be jeopardized or come under the pressure of the research if participants’ friends realize that they are being observed. Friends may cease communication with the participant or only communicate with them at a superficial level.

Also, issues surrounding intellectual property rights may arise depending on what types of information were captured (US Department of Education, 2002). Does the researcher have the right to include in reports, or show in presentations, video footage that may include copyrighted material (Voithofer, 2005)? The capability of new technology to store larger amounts of information at lower costs may tempt researchers to collect an abundance of data (Voithofer, 2005), and including raw data such as video footage or full transcriptions in reports can further endanger the ethics of a study.

Looking Forward

By considering the new democratic, participatory, and social web, my understanding of the relationship between the adult researcher and the participant has shifted. Advantages lie in equipping participants with adequate tools to collect the data they deem relevant, instead of the researchers choosing what data to collect. Putting video or still cameras in the hands of participants and allowing them to capture their surroundings or a particular phenomenon would give insight into their perceptions and understanding of the world. It may also be interesting to ask participants to come up with the interview questions and even to interview one another. Finally, it may make sense to include the names of participants as co-authors of a study, in recognition of their contributions.


Conclusion

Internet research in social sciences covers two main areas: 1) people’s competencies to locate and retrieve information from data sources, and 2) how the internet is used for communication purposes (Costigan, 1998). As social science researchers studying new technologies, we must consider our question first and understand how it is framed within the social sciences. Steve Jones (1998) warns that “simply applying existing theories and methods to the study of Internet-related phenomena is not a satisfactory way to build our knowledge of the Internet as a social medium” (p. x). It may then be necessary to build new theories, combine existing theories, or embrace multiple theories.

Only at this point can we take the methodology under consideration and ask ourselves the following questions: how might traditional methodological tools be used in the study of online social behavior, and how might these traditional tools be altered to work in new “online social spaces” (Leander, 2008, p. 33)? The current development of the social aspect of the internet already provides a level of interactivity and collaboration that partly resembles offline existence, e.g. MMORPGs (Massively Multiplayer Online Role-Playing Games). Nevertheless, the differences in context, as well as the attributes provided by the internet, such as anonymity, knowledge sharing, expanded scale, and global presence, imply a new research method; a method that will prevent the separation of the internet, as a technological tool, from the social context in which it is used.

Proposing usability testing as a method, it can be stated that it will allow answering quantitative questions, such as those related to frequencies and patterns of internet use, as well as qualitative questions, the answers to which will shed light on the nature of social interactions, their prerequisites, motives, and consequences.

The internet itself is a bricolage, pulling from old materials to create the new. Researchers studying this medium will have to be bricoleurs too, pulling from existing theories and methods to create new research paths.


References

Costigan, J. T. (1998). Forests, trees, and internet research. In S. Jones (Ed.), Doing internet research (pp. 57-74). London: Sage.

DeWalt, K. M., & DeWalt, B. R. (2002). Participant observation: A guide for fieldworkers. Walnut Creek, CA: AltaMira Press.

Dumas, J. S., & Redish, J. (1999). A practical guide to usability testing. Portland, OR: Intellect Books.

Fielding, N., Lee, R. M., & Blank, G. (2008). The SAGE handbook of online research methods. London: SAGE.

Hesse-Biber, S. N., & Leavy, P. (2008). Handbook of emergent methods. New York: Guilford Press.

Jones, S. (Ed.). (1999). Doing internet research. London: Sage.

Lawless, K.A., Schrader, P.G., & Mayall, H.J. (2007). Acquisition of Information Online: Knowledge, Navigation and Learning Outcomes. Journal of Literacy Research, 39(3), 289-306.

Leander, K.M. (2008). Toward a Connective Ethnography of Online/Offline Literacy Networks. In J. Coiro, M. Knobel, C. Lankshear, & D. J. Leu (Eds.), Handbook of Research on New Literacies (pp. 33-65). New York, NY: Lawrence Erlbaum Associates.

Leu, D. J., McVerry, J. G., O’Byrne, W. I., Zawilinski, L., Castek, J., & Hartman, D. K. (2009). The new literacies of online reading comprehension and the irony of No Child Left Behind: Students who require our assistance the most, actually receive it the least. In L. M. Morrow, R. Rueda, & D. Lapp (Eds.), Handbook of research on literacy instruction: Issues of diversity, policy, and equity. New York: Guilford.

Luke, C. (2003). Pedagogy, connectivity, multimodality, and interdisciplinarity. Reading Research Quarterly, 38(3), 397-403.

Nixon, H. (2003). New research literacies for contemporary research into literacy and new media? Reading Research Quarterly, 38(3), 407–413.

Pressley, M. (2000). What should comprehension instruction be the instruction of? In M.L. Kamil, P.B. Mosenthal, P.D. Pearson, & R. Barr (Eds.), Handbook of Reading Research (3rd ed., pp. 545-561). Mahwah, NJ: Erlbaum.

U.S. Department of Education. (2002). Legal and ethical issues in the use of video in education research. Web.

Voithofer, R. (2005). Designing new media education research: The materiality of data, representation, and dissemination. Educational Researcher, 34(9), 3-14.


