Being Alive
People often do not have time to think about life’s essence as they follow their daily routines, focusing on one necessary task after another. Meanwhile, technology advances: computers are no longer mechanical machines meant for mathematical calculations but complex devices capable of memorizing, entertaining, and even drawing independent conclusions. Inventors have constructed robots to help humans and to establish a connection between person and machine. Modern androids display character, react to their environment, and even mimic human emotions. Trying to draw the borderline between a human and a machine can lead to logical fallacies when analyzing how the two interact.
Confiding in a Robot
Without thinking, most people would attribute some human characteristics to a robot that can respond to questions. This happened when MIT students started interacting with the ELIZA computer program described by Turkle (458). After just a few minutes of communicating with it, they moved on to personal questions, expecting life advice from an object. When people engage in interaction, even an imaginary one, the brain fills in the missing emotion and engagement with familiar reactions learned from human contact. The students began to feel emotions toward the machine simply because it acted like a human.
People’s minds jump to this conclusion even more readily when the robot resembles a person. This happened with Nexi, an android with a woman’s torso, when one of the students saw the robot blindfolded and left behind a curtain (Turkle 478). Further observations showed that people referred to the robot as “she” and felt distress when Nexi was neglected. Of course, the students realized that robots cannot become real people with nervous systems and feelings, but the android’s shape and behavior made their minds see a person, however strange or limited. Nexi talked, reacted, and looked like a human, so the logical fallacy made her seem alive.
Toys and Feelings
Children have vivid imaginations even when they play with static toys, so it is to be expected that they would consider interactive figures like Tamagotchi or Furby alive. Turkle describes her own experience with a Tamagotchi as well as several cases of children and their electronic pets (462). Feeding the creature, playing with it, and cleaning up after it lead to a subconscious emotional attachment to the toy. Some children could not reset a Tamagotchi after its death, feeling that no pet should be resurrected once it has passed away (Turkle 466). One of the key triggers for the logical fallacy in this case is the assumption that a toy that cannot simply be turned off must be alive.
Furby was another popular interactive figure that could learn to speak English. It also had a language of its own, demanded attention, and even said “I love you!”, which the human brain read as gratitude. Some children were genuinely worried about their Furby when hearing about the toy’s fears (Turkle 476). Taking the little robot apart also made them uneasy, even though some realized that changing the batteries or cleaning it would do no damage (Turkle 472). The fact that Furby talks, learns, reacts to noise, sleeps, and demands hugs tricks people’s brains into considering him alive enough to be a friend.
Conclusion
The human brain seeks interaction and familiar patterns in order to choose the correct reaction. People may understand that toys and robots cannot truly think and sympathize, but certain words, reactions, and demands can trigger emotions similar to those felt between humans. This is a natural process; however, replacing interpersonal communication with robots and machines may harm a person’s psychological well-being.
Works Cited
Turkle, Sherry. “Alone Together: Why We Expect More from Technology and Less from Each Other.” The New Humanities Reader, edited by Richard Miller and Kurt Spellmeyer, 5th ed., Cengage Learning, 2015, pp. 458-79.