At Least We Care
In a 2013 study, a team sought to find out whether humans show empathy toward robots. Amar Toor’s article, “The Robots Are Coming, but Will We Love Them?”, recaps that study. Participants were shown four videos: a human coddling a robot, a human coddling another human, a human mistreating a robot, and a human mistreating another human.
Third video. The effect is strange.
The results showed that those tested produced the same neurological patterns for each matching pair of scenarios. In other words, they empathized with the robot being mistreated much as they did with the human subject.
To what degree was the empathy similar? Who knows. But the test showed that we, or at least those tested, feel empathy, however little, toward robots. Why show any empathy at all? It seems to be one of those psychological phenomena, or neurological tics, that humans have.
“It can only be attributable to human error.” – HAL 9000
I highly doubt that our empathy readings spike when Leonard from sales loses it and smashes his computer to bits (we may empathize with him, but not with the computer). I think this tic has a lot to do with our ability to project ourselves onto robots, especially the ones that resemble us, even a little. If it moves like us and speaks our language, verbal and nonverbal, then we may see ourselves in it, as if some ancient hex kept us from distinguishing the line that separates us from things that look like us. But that goes for almost anything that resembles us: dolls, action figures, and so on. Those examples may be geared toward children’s fancies, but the point stands.
If the empathy test were done on something “smart” but not humanlike, would we react the same way? A robot vacuum (the kind shaped like a CD player) being smashed repeatedly and violently with a mallet probably would not elicit the same readings as watching George Jetson bludgeon Rosie the Robot until her circuit board was showing. The two machines perform essentially the same task, cleaning, but Rosie has human characteristics and the robot vacuum doesn’t. I don’t think we have a choice about whether we show empathy toward the former; as for the latter, we’ll just buy a new one.
Closing The Gap
Although Rosie the Robot may be an automaton clear of the “uncanny valley,” we wouldn’t sacrifice our dog for her.
While robots and androids fake human consciousness and characteristics, plants and animals actually have one, or are thought to, because they operate, and have been operating, without human actors. A parrot repeats whatever we teach it, but its mimicry is different from Siri’s. A dog sits because it has background knowledge; a robotic dog, like the Poo-Chi, sits because it was programmed to react to that command.
I don’t think we feel real empathy toward robots and “smart” objects. I think it’s just our brains firing unauthorized signals, a primordial alert we haven’t yet evolved past. We can hold full conversations with Watson, HAL 9000, GERTY, the Iron Giant, or any other real or imagined object of human imitation, but, still, it would be like talking to a wall in an empty room.
Works Cited

Mufson, Beckett. “Could You Empathize with a Robot?” The Creators Project. The Creators Project, 25 July 2014. Web. 03 Oct. 2014.
Toor, Amar. “The Robots Are Coming, but Will We Love Them?” The Verge. The Verge, 26 Apr. 2013. Web. 03 Oct. 2014.
Wolchover, Natalie. “Why CGI Humans Are Creepy, and What Scientists Are Doing about It.” LiveScience. TechMedia Network, 18 Oct. 2011. Web. 03 Oct. 2014.