What is Augmented Reality?
Augmented reality (AR) is a type of perception technology that integrates digital and real-life content into a single, coherent system. Amin and Govilkar (2015) identify AR as an instrument that allows the user to directly access attached information in real time, in order to fully understand the context or purpose of a certain process or object. AR can therefore enrich our visual and auditory perception of the surrounding environment. At the same time, a distinction must be made between augmented and virtual reality (VR), as the two terms are wrongly considered interchangeable. The purpose of this paper is to explore the concept of AR, compare and contrast it with VR, and highlight the technology as well as the benefits and opportunities of its application.
History of Augmented Reality
The idea of enhancing and augmenting reality through artificial means is not new. The first attempts were made in the 17th century, using mirrors, light, and shadows to create illusory images. One of the most famous applications was "Pepper's Ghost" – a popular technique used in European theaters (Billinghurst, Clark & Lee 2015). The principles behind modern AR are different, however, as they rely on digital technology rather than physical special effects to enhance and augment existing perceptions of reality.
The first application of AR in the modern sense of the word dates back to 1963, when Ivan Sutherland developed Sketchpad. He later continued his work at Harvard University and created a prototype AR system, which became operational in 1968. The invention "combined a CRT-based optical see-through head-mounted display, with a ceiling mounted mechanical tracking system connected to the PDP-11 computer and custom graphics hardware," enabling it to place 3-dimensional graphics in the real world (Billinghurst et al. 2015, p. 85).
Further development of AR and VR was closely tied to the military. One of the prime applications of both technologies was the training of American pilots. VR could simulate artificial environments while keeping the training module on the ground, whereas AR was useful in the cockpit of an actual plane, overlaying artificial information onto real-life flight experiences (Billinghurst et al. 2015). These developments took place in the 1970s, resulting in the creation of the Ultimate Display and the Super-Cockpit by Thomas Furness (Billinghurst et al. 2015). Scientific and commercial endeavors in AR continued in parallel. In 1974, Myron Krueger established the Videoplace laboratory at the University of Connecticut, which created an augmented environment using on-screen silhouettes and video cameras to support user immersion (Stanovic 2015).
Boeing saw the potential of augmented and virtual reality constructs as a method of training not only military but also commercial pilots, so in the 1990s the company invested heavily in creating training modules with realistic human-computer interfaces and ways of altering and augmenting images to suit its training needs (Frigo, da Silva & Barbosa 2016). These efforts were followed by other companies, such as Embraer, which used augmented reality not only in training but also in the production and construction of new planes (Frigo et al. 2016).
While these systems were functional for several decades, most of them shared a single flaw: they were bulky and took up a lot of space, limiting the user's mobility. None of them was portable enough to let a person use AR outside of a room or a specialized cabin. That started to change with virtual fixtures, developed in 1992 by Louis Rosenberg, who created an AR rig for the Air Force equipped with manipulators and a helmet that projected augmentations onto the perceived reality (Billinghurst et al. 2015). Two years later, Julie Martin staged a large-scale theatrical AR production that allowed the audience to watch acrobats dance within and around virtual objects on a physical stage (Billinghurst et al. 2015).
In 2003, AR came to sports broadcasting, significantly enhancing the audience's perception of an event by offering a first-person view of the field from the popular Skycam (Stanovic 2015). As a result, viewers could watch the field from different angles combined into a single picture. By the end of 2009, AR had been adopted by artists and magazines, which used special codes to enrich readers' perceptions via phone scanners (Stanovic 2015). After 2010, however, AR and VR technology made greater strides toward availability and everyday applicability. Following the success of magazine-related AR, Volkswagen and other companies produced augmented service manuals and catalogs to assist with self-repair and guide users with visual instructions (Stanovic 2015). The technology also enabled technicians to demonstrate how a procedure would be carried out, greatly increasing customer understanding and satisfaction.
Since Google pioneered portable AR devices in 2014, investment in AR technology has grown steadily. By 2015, the total amount invested in the technology exceeded $700 million, with the largest share going to Magic Leap (Barfield 2015). This illustrates the understanding of, and desire to exploit, the technology's potential in many areas, from training to customer visualization. By the end of 2016, these investments had grown to over $1.1 billion, with comparable growth continuing into 2017 and 2018 (Porter & Heppelmann 2017). AR is therefore not only a testament to technological progress but also a marketable business opportunity.
Differences Between AR and VR
We see products and services labeled as virtual reality all around us, yet very few of them fit the definition. 360-degree immersive video is marketed as VR, which it is not, just as "surround movies" and "VR storytelling" are not structurally VR. Second Life is not VR either, yet it is marketed as such. The confusion is understandable, as the processes behind AR and VR are similar in certain ways: both technologies create artificial images and sounds for humans to perceive. However, the differences between the two are greater and more profound than they initially seem.
Augmented reality is classified by the following parameters (Barfield 2015):
- Real-time interactivity. The device typically uses sensors and cameras, which record real-time imagery and sound and superimpose digital enhancements on the existing environment, allowing real-world and virtual objects to share the same scene.
- Integration of real-world and virtual-world information. AR typically uses some form of user observation point – the screen of a mobile phone, smart glasses, a computer screen, or something else – to alter the experience of reality.
- Supplementation and location of objects in 3D space. Augmented reality is widely used in training, military, tourism, production, and marketing for demonstration, mapping, and design.
- User presence. Users are physically present at the location they experience and can interact with real objects. While interaction with virtual objects is sometimes supported, this is not always the case.
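The first two parameters – real-time capture and integration of virtual content into the real scene – boil down to a compositing step: virtual pixels are drawn over the captured frame wherever the overlay is non-transparent. The following is a minimal sketch of that idea, not a real AR API; the frames are toy 2D arrays and all names are illustrative.

```python
# Toy sketch of the core AR compositing step: superimposing virtual
# content onto a camera frame so both share the same scene.

def composite(frame, overlay):
    """Return frame with non-transparent overlay pixels drawn on top."""
    result = [row[:] for row in frame]   # copy the captured frame
    for y, row in enumerate(overlay):
        for x, pixel in enumerate(row):
            if pixel is not None:        # None marks a transparent pixel
                result[y][x] = pixel
    return result

# A 3x4 "camera frame" of background pixels and a virtual object
frame = [["bg"] * 4 for _ in range(3)]
overlay = [
    [None, "obj", "obj", None],
    [None, None, "obj", None],
    [None, None, None, None],
]

augmented = composite(frame, overlay)
print(augmented[0])  # → ['bg', 'obj', 'obj', 'bg']
```

A real pipeline would run this step per video frame, with the overlay rendered from tracked 3D geometry, but the principle is the same.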
In comparison, VR uses certain technological principles different from AR. Here is the summary of its characteristics and requirements (Porter & Heppelmann 2017):
- Complete immersion. The user must be able to look in any direction and see a completely digital and visualized environment.
- Depth perception and motion parallax. While AR typically has only one point of focus (the camera), VR must actively determine the convergence point of the user's eyes by tracking eye movements and muscle contractions in order to deliver a realistic digital picture.
- Spatial audio. Sound must be rendered relative to the position of the user's head in the digital environment. AR is simpler in this respect, as the sound usually just comes from the phone's speakers.
- Motion control. AR is tied to the user's gaze, whereas VR must allow the user to interact with the environment even while their eyes are focused on something else.
- A limited number of viewpoints. While AR can accommodate many cameras to display an object from different directions, VR is limited to a single sensorium; otherwise the purpose of VR is defeated, as a multitude of visual points breaks the immersion.
- Users are not at the location of the experience. VR allows people to travel virtually to locations they have never visited.
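The depth-perception requirement above has a simple geometric core: for a pinhole stereo pair (or a pair of eyes), depth is recovered from binocular disparity as depth = focal length × baseline / disparity. The sketch below illustrates that standard relation; the specific pixel and baseline values are invented for the example.

```python
# Depth from binocular disparity — the geometric principle behind the
# depth cues a VR headset must reproduce for each eye.

def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Pinhole-stereo depth: Z = f * b / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Example: 800 px focal length, 6.3 cm interpupillary distance,
# a point whose projections differ by 25.2 px between the two views.
d = depth_from_disparity(800, 0.063, 25.2)
print(round(d, 2), "metres")
```

Points closer to the viewer produce larger disparities, which is exactly the cue a VR renderer manipulates by drawing a slightly different image for each eye.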
As can be seen, there are significant differences between AR and VR in terms of purpose, effects, and the technologies involved.
Technology Behind AR
To understand how AR works, it is necessary to examine the technology behind it and the principles of its application. The primary components built into an AR module include cameras and sensors, which capture the user's interactions with the surrounding environment so that they can be enhanced and magnified by the system (Schmalstieg & Hollerer 2016). Since AR relies on phone cameras and other sensors to superimpose additional layers over reality, this is one of the most important parts of the system.
The process of information projection involves a screen or another kind of surface capable of presenting the surrounding environment, as perceived by the camera, to the user (Schmalstieg & Hollerer 2016). In many cases this is the screen of a phone, but it can also take the form of glass lenses, TV screens, goggles, and others. It serves as a necessary intermediary between the sensor and the person, converting digital data into imagery.
The third part of the AR module is the processing unit, a compact computer built into the device. An AR system in a smartphone, for example, typically relies on a CPU, a GPU, flash memory, RAM, a Bluetooth module, and a GPS chip (Schmalstieg & Hollerer 2016). Not all of these components are critical, however: some AR installations can work without GPS or an Internet connection, with all of the necessary augmentations stored in flash memory. Advanced systems feature all of them to provide real-time information about a person, place, or event.
The final part of the system includes reflector modules that help the user's eye observe the augmented reality (Schmalstieg & Hollerer 2016). They either improve the quality of the perception or add extra effects to enhance the user experience. Many systems, such as phone-based ones, have no reflectors at all, with the screen being the final point of image projection. Advanced modules worn around the user's eye, however, may include reflector systems to guide and focus the image.
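The hardware stages described above form a per-frame loop: sensors capture the scene, the processing unit attaches the relevant augmentations, and the display presents the result (with reflector optics, where present, guiding it to the eye). The toy pipeline below models each stage as a function; the scene data, coordinates, and augmentation table are all hypothetical.

```python
# Toy per-frame AR pipeline mirroring the hardware stages above.

def capture(sensor):                 # cameras and sensors
    return {"image": sensor["scene"], "gps": sensor.get("gps")}

def process(frame, augmentations):   # processing unit: look up overlays
    overlay = augmentations.get(frame["gps"], [])
    return frame["image"] + overlay

def display(pixels):                 # screen / lenses; reflectors would
    return "|".join(pixels)          # further guide the image to the eye

sensor = {"scene": ["street", "cafe"], "gps": (48.86, 2.35)}
augmentations = {(48.86, 2.35): ["cafe-rating:4.5"]}

frame = capture(sensor)
print(display(process(frame, augmentations)))
# → street|cafe|cafe-rating:4.5
```

Real systems replace the dictionary lookup with tracking, pose estimation, and 3D rendering, but the capture–process–display loop is the common skeleton.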
AR Software Types
Different types of AR are used in mobile and computer-based technologies, and each software type has different requirements and mechanisms of application. The first is marker-based AR, which recognizes certain patterns and images and modifies them based on specific instructions for each object type (Datcu, Lukosh & Brazier 2015). A visual marker could be a QR or 2D code, typically placed on the object of interest. This type of marker is often used in museums, picture galleries, and at historical monuments, effectively removing the need for a tour guide. Journals and books also use this system to animate pictures and display information.
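Conceptually, marker-based AR is a lookup: a decoded marker payload (e.g. the contents of a QR code) keys into a table of rendering instructions. The sketch below shows just that mapping step; the payload strings and content descriptions are invented for illustration, and real systems would decode the marker from camera imagery first.

```python
# Marker-based AR in miniature: a decoded marker payload selects the
# augmentation to render. Payloads and content here are hypothetical.

AUGMENTATIONS = {
    "museum/exhibit-42": "Play audio guide for Exhibit 42",
    "gallery/painting-7": "Show painter biography overlay",
}

def on_marker_detected(payload):
    """Return the instruction attached to a recognized marker."""
    return AUGMENTATIONS.get(payload, "No augmentation for this marker")

print(on_marker_detected("museum/exhibit-42"))
# → Play audio guide for Exhibit 42
```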
The second most common type of AR software does not use an external marker to interact with the surrounding environment. This so-called GPS-based or marker-less technology utilizes pre-programmed augmentations attached to certain locations. For this type of software to work, the device needs an internal GPS, digital compass, velocity meter, or accelerometer to determine the user's relative position and process data based on location (Datcu et al. 2015). This type of AR is typically more advanced and complex than external-marker software, since it has to hold its database of modifications internally.
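The location step in marker-less AR amounts to finding pre-programmed augmentations near the user's GPS fix. A minimal sketch using the standard haversine great-circle distance follows; the points of interest and their content are hypothetical examples.

```python
# Marker-less AR sketch: select augmentations within a radius of the
# user's GPS position using the haversine great-circle distance.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    r = 6371000.0                      # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical points of interest with attached content
POIS = [
    ((51.5007, -0.1246), "Big Ben info card"),
    ((48.8584, 2.2945), "Eiffel Tower history overlay"),
]

def nearby(lat, lon, radius_m=200):
    return [label for (plat, plon), label in POIS
            if haversine_m(lat, lon, plat, plon) <= radius_m]

print(nearby(51.5010, -0.1240))  # user standing near Big Ben
# → ['Big Ben info card']
```

A production system would additionally use the compass and accelerometer to decide where on the screen the augmentation should be drawn, not just whether it is in range.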
The third type of AR is 360-degree panoramic vision, a system that revolves around modifying external light to alter the image and appearance of the object (Datcu et al. 2015). This type of projection requires no extra gadgets from the user, as the perceived object is effectively an optical illusion that can be observed from any angle. Some of the first AR systems in existence were panoramic.
Lastly, there is the superimposed augmented reality method. This type of software relies on acquiring the image through a camera or a sensor and then partially or completely altering it using the program and the CPU capabilities of the device. In other words, the program acts as a graphic applicator, allowing the user to change the form and appearance of the image captured by the recording device (Datcu et al. 2015). Many phone-based AR applications ship such an option as an add-on alongside other AR software.
Examples of AR in Practical Application
There are numerous AR programs currently available on the market, serving functions ranging from training and practice to sightseeing and entertainment. One of the most famous modern AR programs is Pokémon Go. This software is a GPS-connected superimposition program that generates a digital image of a Pokémon in the real world based on its location relative to the user. The purpose of the game is to simulate an adventure from the original Game Boy Pokémon series, with the person able to travel and experience the quests in real life (Scholz & Smith 2017).
Social media platforms make good use of AR to simplify or improve customer experiences. Facebook and Snapchat feature all sorts of filters – superimposition software that lets users alter videos and photos taken with the phone camera by adjusting gradient, color, and tone and adding decorative elements such as cat ears, funny noses, and colorful eyes (Scholz & Smith 2017). AR is relatively simplistic in this regard, as it provides little value beyond entertainment.
The third type of AR is frequently implemented by producers of various goods and services, such as Volkswagen or Ikea, who use it to display virtual 3D objects in real time and in real space (Scholz & Smith 2017). The purposes of such applications are numerous: a customer can not only see the object of interest but also determine how well it would fit into a given place or space. For example, an individual buying furniture through IKEA's AR application can visualize the furniture inside an otherwise empty apartment.
Beauty companies benefit greatly from using AR to demonstrate the quality of their products or to illustrate the potential changes to a person's appearance resulting from cosmetics, hairstyles, or fashion choices (Scholz & Smith 2017). Examples of such AR software come from Sephora, NikeID, and Shiseido. Sephora's application lets artists alter and augment images in real time, much like a graphic design program. NikeID is a 360-degree panoramic application that lets shoppers review Nike products in real time as if they were physically present. Shiseido offers an electronic mirror that is, in many respects, similar to Sephora's tool (Scholz & Smith 2017). The camera records the customer's face and enables the beauty artist to work on the image, demonstrating the potential effect of colorants and cosmetics on a person's body.
Lastly, food-related industries actively use AR to create interactive menus for customers. An example of such software is Yeppar, a marker-based AR application that displays a dish in 3D on the screen of a smartphone or tablet when the camera picks up the QR code in the menu (Scholz & Smith 2017). This program serves several purposes. First, it shows the dish to the customer by linking the stored image to the QR code. Second, it makes the food seem more appealing to the buyer, thus promoting better sales.
Revenue Categories for AR and VR
As the history of AR and VR development and the plethora of products currently on the market demonstrate, the field presents a significant opportunity for sales, development, and investment. There are three main ways of generating revenue from AR- and VR-related industries (Orlosky, Kiyokawa & Takemura 2017):
- AR-related content. There are numerous areas of service that could benefit from AR content. Some examples include education, healthcare, military, gaming, and films optimized for the use of AR technology.
- Supporting hardware. AR software requires specialized hardware to work. This includes headsets, controllers, graphic cards, sensors, cameras, and video-capturing technology, as well as online marketplaces.
- Platform and delivery services. These include content creation tools, capturing and production software, delivery utilities, videogame engines, file hosts, clouds, and other useful aids that may either complement or benefit from AR technology.
The market for virtual reality products is growing exponentially and is expected to reach $150 billion in revenue by 2020, with even more in expenditures (Orlosky et al. 2017). One of the primary studios working on AR projects is Magic Leap, which has received over $850 million in funding from companies such as Google, Lensar, and Nantmobile. Large entertainment companies, including Disney, Legendary, and 20th Century Fox, as well as various venture capital firms, have also expressed interest in shaping and developing new VR- and AR-related products (Orlosky et al. 2017). In 2019, revenue from the AR industry is expected to nearly double, with the market reaching temporary saturation by 2020.
Leaders and Tech Developers
There are four kinds of developers present in the market, dedicated to content, hardware distribution, software development, or some combination of the above. Depending on the market specialization, different companies are claiming the majority of the shares. The examples of these companies are as follows (Peddie 2017):
- Content creation. Some of the leading companies include Blippar, Beloola, Deepstream VR, River Studios, Vantage TV, Secret Location, and Altspace VR, among others. These companies specialize in content that could be optimized for AR solutions.
- Hardware and distribution. Known representatives include Nokia, Samsung, Kinect, Leap, PlayStation, VRidio, and Sixense. These companies develop hardware that can be deployed alongside VR software.
- AR software companies are represented by Unity, Nvidia, Amazon, and Unreal Engine. They produce AR-processing programs that are utilized with the relevant content and hardware.
The top tech companies developing software and hardware for AR-related services include Facebook, Microsoft, Google, Canon, GoPro, Sony, Samsung, and HTC (Peddie 2017). All of them are multinational giants with a wide range of hardware and software products. The notable exceptions are Facebook and Canon, as the former specializes in social media platforms while the latter is a famous name in digital scanning and printing devices. These companies not only develop their own technologies but also sponsor smaller companies with promising developments in certain areas of technology.
Benefits of AR
AR offers significant advantages in four major sectors: manufacturing, education, healthcare, and retail. The manufacturing industry would benefit from AR in planning, building, and personnel training (Pedersen et al. 2017). AR is an excellent visualization tool that can be used during the design stage of process or product development to present a concept in a more tangible, perceivable way. Visualization is also critical in architecture and design: AR can convey the construction of a facility from a first-person point of view, something modern blueprints cannot do. Specialized software, such as CAD packages and 3ds Max, can facilitate the implementation of AR and VR in engineering and project management (Pedersen et al. 2017).
Education is another great field of application for AR technologies. Modern education largely relies on written and pre-recorded material, which does not provide an appropriate level of interactivity and understanding for learners (Pedersen et al. 2017). Students are expected to use their imagination to fill in the blanks, but the imagined version of reality often differs from its factual representation. With AR technology, students will be able to experience many fields of knowledge first-hand without ever leaving the confines of the educational facility.
Healthcare is another potential beneficiary of VR and AR technologies. Medical students require many hours of practice to perform complex procedures and operations on human patients (Pedersen et al. 2017). At the same time, using living patients for practice is unethical and illegal, whereas cadavers are in short supply. VR and AR technologies could be used together to simulate an authentic medical experience while remaining an economically viable solution. A student would be able to repeat a procedure as many times as needed to develop the motor skills required to perform it with a high degree of accuracy.
Finally, the retailing industry can also use VR and AR technology to its benefit. With e-commerce advancing and brick-and-mortar stores slowly becoming obsolete, customers need a way to see and visualize a product before they buy it (Pedersen et al. 2017). Most online stores currently do not offer such an option, which presents an opportunity for expansion. Vendors of large products, such as furniture, cars, kitchens, or similarly spacious goods, could use VR and AR to help the customer perceive the product as if it were already in place. All of these options increase convenience in a cost-efficient way: instead of purchasing many square meters of trading space, a store could become much smaller and use AR and VR to present its products to the customer.
Challenges to the Implementation of AR and VR
Despite the enormous potential of visualization technology, AR and VR face several obstacles that slow their growth and pose potential issues further down the line. These problems include privacy protection, implementation requirements, the costs of use, and the difficulties of reality alteration. The following sections explore them in greater detail.
Privacy protection is one of the most pressing technology-related issues. Many AR modules require an Internet and GPS connection to function. This creates a potential avenue for criminals and hackers to determine an individual's location and compromise the safety and security of their personal and digital assets (Bastug et al. 2017). As it stands, there are no cybersecurity standards applied to AR, making its use an inherent security risk. The last decade has demonstrated the potential scope of cyberthreats targeting individuals, banks, databases, and infrastructure. Smartphones can be attacked using the connection AR requires as a conduit for infiltration. To ensure the safety and security of all customers, AR-related services will be expected to address these issues by adopting an array of universal safety standards.
Implementation requirements are a different type of issue, often faced by small businesses and institutions. Professional, specialized AR often requires not only technological investment but also custom designs to fit organizational demands (Bastug et al. 2017). These are often out of reach for small companies and non-governmental organizations that lack the necessary expertise in managing AR and VR technology. The market could overcome these limitations by simplifying, standardizing, and streamlining the professional requirements for equipment and software, thus opening the market to smaller firms and individual entrepreneurs.
The third issue revolves around costs, as professional AR is expensive to procure and even more expensive to develop. Even large multinational companies, such as Microsoft and Google, do not have the specialists and facilities to dedicate to a wide array of different technological products (Bastug et al. 2017). To cover their bases, these companies are forced to invest in and work with a great number of smaller companies specializing in one or several areas of expertise. Because AR and VR are complex technologies, the number of specialists who can work on their development is also limited and highly contested, which only increases the cost of the end product (Bastug et al. 2017). Further development and broader education in AR and VR may reduce these costs, though this is unlikely to happen any time soon.
Lastly, there is the issue of reality-altering technologies, which, while possessing great potential for education and entertainment, can also be used for deception and can cause addiction (Bastug et al. 2017). Videogames are already notorious for causing all kinds of psychological issues in children and adults. The more immersive they become, the more likely they are to affect players negatively by being more attractive than the real world outside the screen. Dealing with these issues may be outright impossible, as the existence of AR and VR as a replacement for traditional media would cause the problems mentioned above by default.
Potential for Improvement
The majority of existing AR applications are based on smartphones. While smartphones are convenient and always at their users' side, they present a series of inconveniences that prevent many users from integrating AR and VR into their daily lives. The primary complaint about AR is that it takes too long to use: the user has to activate the phone, open the program, point it at the marker or object of interest, and then view the alterations through the screen, which feels cumbersome (Azuma 2019). The majority of user complaints about AR revolve around this issue.
Another significant issue is the need to use one or both hands to operate the device. This distracts users and prevents them from doing anything else with their hands while using AR, which significantly inhibits mobility and autonomy and presents certain physical risks (Azuma 2019). A climber, for example, cannot use AR to look up information or enhance their view of the route for fear of falling or dropping the phone. In addition, certain events that might call for AR can pass so quickly that a person cannot react in time.
Glasses-based AR modules have plenty of potential for solving some of the major problems of phone-based AR and VR. They do not require hands to operate, are not tied to flat screens, and occupy less space. However, Google's experiment with glasses proved a relative failure due to the limitations glasses have as a carrier (Azuma 2019). The size of glasses significantly reduces the capacity of the device, as existing levels of miniaturization are not enough to make the product remotely viable. In addition, a control system based on scanning eye movements causes significant problems with concentration: having to guide the glasses by looking at different parts of the lens creates more issues than doing so with one's hands (Azuma 2019). A prospective solution would involve humans controlling the device with their thoughts alone. Hyper-miniaturization and thought control, however, remain in the distant future.
Conclusion
AR and VR technologies are two distinct yet mutually complementary spheres of visualization. Whereas AR enhances and transforms existing images in real time, VR creates a completely different experience by building an artificial digital environment around the user. Both VR and AR are used in military training, construction, design, retail, and entertainment. Computer games using this type of technology are likely to become a big hit on the market. The investment and revenue associated with the industry are growing significantly, as more and more businesses realize the benefits of visualization in attracting customers.
At the same time, there are plenty of obstacles to overcome before AR and VR become widely and commercially available to everyone. The issues regarding costs and security are the primary concerns to both individual users and large-scale investors. The technological limitations prevent certain prospective technologies, such as Google glasses, from emerging and becoming popular among the populace. The prediction for the future is the following:
- AR and VR will become more and more integrated into our everyday lives;
- Smartphones will remain the primary AR platform for at least 10-15 years;
- The dedicated market size and revenues will continue to grow.
Thus, it can be concluded that AR and VR constitute a promising area of research that is likely to attract even more attention from customers and companies alike.
References
Amin, D & Govilkar, S 2015, ‘Comparative study of augmented reality SDKs’, International Journal on Computational Science and Applications, vol. 5, no. 1, pp. 11-26.
Azuma, RT 2019, ‘The road to ubiquitous consumer augmented reality systems’, Human Behavior and Emerging Technologies, vol. 1, no. 1, pp. 26-32.
Barfield, W 2015, Fundamentals of wearable computers and augmented reality, CRC Press, New York, NY.
Bastug, E, Bennis, M, Medard, M & Debbah, M 2017, ‘Toward interconnected virtual reality: opportunities, challenges, and enablers’, IEEE Communications Magazine, vol. 55, no. 6, pp. 110-117.
Billinghurst, M, Clark, A & Lee, G 2015, ‘A survey of augmented reality’, Foundations and Trends in Human-Computer Interaction, vol. 8, no. 2-3, pp. 73-272.
Datcu, D, Lukosh, S & Brazier, F 2015, ‘On the usability and effectiveness of different interaction types in augmented reality’, International Journal of Human-Computer Interaction, vol. 31, no. 3, pp. 193-209.
Frigo, MA, da Silva, EC & Barbosa, GF 2016, ‘Augmented reality in aerospace manufacturing: a review’, Journal of Industrial and Intelligent Information, vol. 4, no. 2, pp. 125-130.
Orlosky, J, Kiyokawa, K & Takemura, H 2017, ‘Virtual and augmented reality on the 5G highway’, Journal of Information Processing, vol. 25, pp. 133-141.
Peddie, J 2017, Augmented reality: where we will all live, Springer, New York, NY.
Pedersen, I, Gale, N, Mirza-Babaei, P & Reid, S 2017, ‘More than meets the eye: the benefits of augmented reality and holographic displays for digital cultural heritage’, Journal on Computing and Cultural Heritage, vol. 10, no. 2, pp. 1-15.
Porter, ME & Heppelmann, JE 2017, ‘Why every organization needs an augmented reality strategy’, Harvard Business Review, vol. 95, no. 6, pp. 46-57.
Schmalstieg, D & Hollerer, T 2016, Augmented reality: principles and practice, Addison-Wesley Professional, Boston, MA.
Scholz, J & Smith, AN 2017, ‘Augmented reality: designing immersive experiences that maximize consumer engagement’, Business Horizons, vol. 59, no. 2, pp. 149-161.
Stanovic, S 2015, ‘Virtual reality and virtual environments in 10 lectures’, Synthesis Lectures on Image, Video, and Multimedia Processing, vol. 8, no. 3, pp. 1-197.