Encoding Object Affordances and Geometrical Features

Artificial intelligence first emerged as a term in 1956, when John McCarthy coined it in the proposal for the Dartmouth conference; he later defined it in his article "What Is Artificial Intelligence?" as "the science and engineering of making intelligent machines". Ever since, AI has been a domain of intense research and study. What researchers have sought in intelligent machines is knowledge and the abilities to communicate, predict, and reason, along with traits such as planning, learning, and perception.

Artificial intelligence was largely a matter of enhancing computer intelligence until it found a massive and dynamic field of application in robotics. Robots are machines designed to execute certain defined actions. They have arms and hands that follow instructions from one or more computers. The idea behind this is the researchers' aim to design machines that can move and manipulate objects. Strong AI, commonly known as general intelligence, is a long-term goal of researchers. General intelligence encompasses the full range of human capabilities; action, perception, thinking, decision making, and execution are some of its primary features.

The field of robotics now stands in close integration with AI. The bottom line is: robots ought to be human-like, and the foremost requirement for being human-like is near-to-human intelligence. Intelligence is required, first, for robots to handle tasks such as object manipulation and navigation, and second, for localization (knowing one's location), mapping (learning what is around), and motion planning (figuring out how to move and reach the target) (Russell and Norvig 815-905).

The biggest challenge scientists face is handling the object affordances of robots. An affordance is a quality of an object that allows a robot to perform an action on it. In the context of human-machine interaction, affordances are the action possibilities that machines or robots perceive when interacting with humans or other objects (Norman 34). The underlying theory is similar to how humans perceive an object through sight. Humans obtain information in two ways: speech and visual communication. These two channels are parallel and independent. The perception, communication, and decision-making of the human brain work through a process based on visual language.

A visual language is a set of practices by which images can be used to communicate concepts. When a human brain hears or sees something, a visual image with its rough characteristics is formed in the brain. The brain responds and acts accordingly. For instance, to hold an apple, the brain readily estimates how many fingers should open up and what the approximate weight of an apple is; i.e., the affordance of the apple is predefined.
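
The idea can be made concrete with a small sketch. The table below maps each recognized object to rough, predefined grasp parameters, much as the brain "knows" an apple's approximate weight and the required finger opening; the names and values are illustrative assumptions, not a real robot's API.

```python
# A minimal sketch of a predefined affordance table: each known object
# maps to rough grasp parameters. All names and values are illustrative.
from dataclasses import dataclass

@dataclass
class GraspAffordance:
    aperture_cm: float        # how wide the gripper should open
    expected_weight_kg: float # rough mass estimate for the object
    grip_force_n: float       # force needed to hold without slipping

AFFORDANCES = {
    "apple":  GraspAffordance(aperture_cm=8.0, expected_weight_kg=0.18, grip_force_n=4.0),
    "mug":    GraspAffordance(aperture_cm=9.5, expected_weight_kg=0.30, grip_force_n=6.0),
    "pencil": GraspAffordance(aperture_cm=1.5, expected_weight_kg=0.01, grip_force_n=0.5),
}

def plan_grasp(obj_name: str) -> GraspAffordance:
    """Return the predefined grasp parameters for a recognized object."""
    return AFFORDANCES[obj_name]

print(plan_grasp("apple"))
```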

Similarly, machines act on the basis of object affordances. The signals they receive in various forms are encoded into digital signals and fed to the computing unit of the robot. There are various methods for motion planning and responsiveness in robots. Simple robots work on response signals from sensors: the sensors emit signals such as sound and infrared rays, and the echoes of those signals determine the response actions.
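
As a rough illustration of such echo-based sensing, the sketch below converts a measured round-trip echo time into a distance and a simple stop/go response; the sensor interface and the safety margin are assumptions made for the example.

```python
# A hedged sketch of echo-based range sensing: the robot emits a pulse,
# times the echo, and converts time of flight to distance. Real hardware
# APIs differ; this only shows the underlying arithmetic.
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 degrees C

def distance_from_echo(time_of_flight_s: float) -> float:
    """Convert a round-trip echo time into a one-way distance in metres."""
    return SPEED_OF_SOUND_M_S * time_of_flight_s / 2.0

def should_stop(time_of_flight_s: float, safety_margin_m: float = 0.3) -> bool:
    """Simple reactive response: stop if an obstacle is inside the margin."""
    return distance_from_echo(time_of_flight_s) < safety_margin_m

print(distance_from_echo(0.01))  # ~1.7 m for a 10 ms round trip
print(should_stop(0.001))        # True: obstacle ~0.17 m away
```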

In other words, they 'see' their surroundings based on rays and sounds echoing back. Such robots are inexpensive to make and are widely used in universities. More advanced and expensive robots now use visual imaging, just like humans. The digital eyes of a robot recognize an object by its size, geometry, and color in a three-dimensional representation of the world. Most such robots use CCD (charge-coupled device) cameras as vision sensors (Arkin 250). The image is then sent to a computing unit where the geometry is matched with its affordance. The robot then manipulates, handles, grips, or transforms the object according to that affordance.
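
A simplified sketch of this matching step is given below: perceived size and color features are compared against stored object models to retrieve the corresponding affordance. The feature set and the crude weighting are assumptions made for illustration, not a real vision pipeline.

```python
# An illustrative sketch of the pipeline described above: a vision sensor
# yields rough geometric features (size and dominant colour), which are
# matched against stored object models to look up an affordance.
import math

OBJECT_MODELS = {
    # name: (approx. diameter in cm, dominant colour as RGB)
    "apple":       (8.0, (200, 30, 30)),
    "tennis_ball": (6.7, (220, 255, 60)),
}

def match_object(diameter_cm, rgb):
    """Return the stored model whose size and colour are closest."""
    def score(name):
        d, c = OBJECT_MODELS[name]
        # Crude weighting: 1 cm of size error counts like 10 colour units.
        return abs(d - diameter_cm) * 10 + math.dist(c, rgb)
    return min(OBJECT_MODELS, key=score)

print(match_object(7.8, (190, 40, 35)))  # -> "apple"
```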

One such method is referred to as visual motion planning. The method skips the step of transforming image features back into the robot pose and instead makes motion plans directly in the image plane (Zhang and Ostrowski 199-208). For object characterization for manipulation, object affordances are encoded according to shape, geometry, color, size, and weight. For certain robots, visual imaging is the most important factor. In their experiments, Pavese and Buxbaum observed that, in the presence of similar targets and distracters, the target object was selected almost always on the basis of color (Pavese and Buxbaum 559).
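
The sketch below conveys the spirit of planning directly in the image plane; it is an illustration, not Zhang and Ostrowski's actual algorithm. The pixel-space error between the target's current and desired image coordinates drives the commanded motion, with no reconstruction of the robot pose in world coordinates.

```python
# A minimal sketch of image-plane control: each step is computed purely
# from pixel error, never converting back to a world-frame robot pose.
def image_plane_step(current_px, target_px, gain=0.5):
    """One proportional control step in pixel coordinates."""
    ex = target_px[0] - current_px[0]
    ey = target_px[1] - current_px[1]
    return gain * ex, gain * ey  # commanded motion, pixels per step

pos = (40.0, 100.0)     # where the feature currently appears in the image
goal = (320.0, 240.0)   # where we want it to appear (e.g. image centre)
for _ in range(20):
    dx, dy = image_plane_step(pos, goal)
    pos = (pos[0] + dx, pos[1] + dy)
print(pos)  # converges toward the goal pixel coordinates
```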

Although the major way to encode affordances is language-based information, imitation has lately also become a way of learning affordances. In this approach, the robot not only carries out actions according to predefined affordances but also learns from its environment. The algorithm works on the actions and motions perceived in the surroundings. The robot recognizes objects and motions in its environment, interacts with an object, and learns about its motion about the principal axis. The robot is then able to repeat the observed action (Kopicki 14-15).
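
One ingredient of such imitation can be sketched in code: estimating an object's principal axis from observed 3-D points, so that a rotation about that axis can be reproduced. This NumPy sketch is an illustration of the general idea, not Kopicki's method.

```python
# A hedged sketch: find the axis of largest variance of an (N, 3) point
# set, e.g. a tracked point cloud of a manipulated object.
import numpy as np

def principal_axis(points: np.ndarray) -> np.ndarray:
    """Return the unit direction of largest variance (up to sign)."""
    centered = points - points.mean(axis=0)
    cov = np.cov(centered.T)                 # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, np.argmax(eigvals)]    # eigenvector of largest one

# Points roughly along a rod tilted in the x-y plane.
pts = np.array([[t, 0.5 * t, 0.01 * t] for t in np.linspace(0, 1, 50)])
print(principal_axis(pts))  # ~parallel to (1, 0.5, 0.01), normalized, up to sign
```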

Although imitation enhances movement and interaction with the environment, a major part of the movements and object affordances remains predefined. Another scenario is learning affordances through contact: the robot reaches toward the object from different directions and learns about its physical features through touch (Kopicki 26). Other methods commonly used for encoding affordances include listening through microphones and explicit programming of actions and perceptions. Machine language is used to define two- or three-dimensional structures of target objects. A near-human, generally intelligent robot would embrace all methods of learning object affordances and would have all possible reactions defined. Such is the long-term goal of robotics.
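
A toy version of contact-based exploration is sketched below: contact points recorded from probes in several directions yield a rough bounding extent of the object. The contact readings here are simulated stand-ins for real touch-sensor data.

```python
# An illustrative sketch of learning through contact: probe toward the
# object from several directions, record where contact is felt, and
# derive rough extents. Points are (x, y, z) in metres, simulated here.
contacts = {
    "+x": (0.05, 0.00, 0.0),
    "-x": (-0.05, 0.00, 0.0),
    "+y": (0.00, 0.04, 0.0),
    "-y": (0.00, -0.04, 0.0),
}

xs = [p[0] for p in contacts.values()]
ys = [p[1] for p in contacts.values()]
print("extent along x:", max(xs) - min(xs))  # 0.10 m
print("extent along y:", max(ys) - min(ys))  # 0.08 m
```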

Bibliography

  1. Russell, Stuart J., and Peter Norvig. Artificial Intelligence: A Modern Approach. 2nd ed., Prentice Hall, 2003.
  2. Norman, Donald A. The Design of Everyday Things. MIT Press, 2000.
  3. Pavese, Antonella, and Laurel J. Buxbaum. "Action Matters: The Role of Action Plans and Object Affordances in Selection for Action." Visual Cognition, vol. 9, 2002, pp. 559-590.
  4. Zhang, Hong, and J. P. Ostrowski. "Visual Motion Planning." Dept. of Mechanical Engineering, Rowan University, Glassboro, NJ. Web.
  5. Arkin, Ronald C. Behavior-Based Robotics. MIT Press, 1998.
  6. Kopicki, Marek. "Learning Object Affordances by Imitation." Research Report 3, University of Birmingham. Web.