When I was very young, I read the Raggedy Ann (and Andy) stories by Johnny Gruelle over and over again. My grandmother made a Raggedy Ann doll for me. The doll was exactly my size, and one Halloween, I borrowed her dress to go trick-or-treating as Raggedy Ann. I was fascinated by the idea that my toys might walk and talk and live when I wasn’t around. Now I am rediscovering the Raggedy Ann stories with my daughter, who loves them, too, and while I still find them charming, I also find them a little bit horrifying. I remember the vague guilt I would sometimes feel when, after days of forgetting she existed, I would discover my Raggedy Ann squashed at the bottom of a container of toys; in a fit of remorse, I would throw her tea parties and take her everywhere for a week or two before forgetting about her once again.
In her essay, “The Dream of Intelligent Robot Friends,” Carla Diana seems to welcome the possibility of smart objects that could respond to and interact with us:
The tools for meaningful digital-physical integration are finally accessible, but it’s still a messy challenge to get them all to work together in a meaningful way. Dreaming about robots is a bit like dreaming about finding strangers who will understand you completely upon first meeting. With the right predisposition, the appropriate context for a social exchange, and enough key info to grab onto, you and a stranger can hit it off right away, but without those things, the experience can be downright awful. Since we’ve got a lot more to understand when it comes to programming engagement and understanding, the robot of my dreams is unlikely to be commercially available any time soon, but with the right tools and data we can come pretty close.
I admit to being a technophile, like Diana. Robots, though, especially the kinds of robots she has helped to design, or the Kismet robot developed at the MIT Media Lab, evoke in me feelings of unease as well as fascination. As with the Raggedy Ann doll of my childhood, the potential “smart things” of our future raise for me the spectre of sentient objects: things that might resent us when we’re neglectful, things that might rebel if we treat them in ways they don’t like. Some scientists who work in artificial intelligence posit that things can be “smart” (that is, capable of advanced human-like behavior) without being conscious or self-aware. If that’s the case, then arguably, we could have intelligent robots who aren’t bothered by their working conditions.
Yet should our feelings of empathy with or responsibility toward things depend on our perceiving those things as “intelligent” or “conscious”? For example, many of us go out of our way to avoid causing harm to animals, plants, or even bodies of water or geologic resources. Why is it normal, even encouraged, to care for some objects but not others? How might our attitudes toward things like smartphones or robots be transformed if we could interact with them, and they could respond to us, as our pets or our friends do? Would we be required to rethink the implicit ethics that guide our everyday interactions with things?
Some religions, such as the Japanese religion of Shinto, posit a world in which inanimate objects are manifestations of, or are animated by, living spiritual forces. Environmentalists and animal rights activists often make compelling arguments that all living things have an equal right to existence, and that human needs and concerns must always be balanced against that right. To the extent that we may develop smart objects that blur the line between living beings and contrivances of inert matter, might we find ethical guidance for dealing with such smart things in religion or philosophy? Or should that guidance come from somewhere else? Or are all of these discursive systems and intellectual disciplines potentially relevant?
Carefully read Diana’s essay, and use that piece and some of the resources linked in this prompt as a starting point for some quick research. Combine a web search with a search of the library’s eJournals, looking for resources that might help us understand the ethical systems that govern human/object interactions. Craft a post that summarizes the results of your research and provides links or citations to useful resources.
Posting: Group 2
Commenting: Group 1
Category: Smart Things
In your Blog #6 post, you should do more than offer a list of source summaries. Rather, you should frame the summary of your research as a cohesive response to a research question that is posed or suggested by this prompt. Please carefully read and follow the guidelines and posting information for this blog as they’ve been outlined in the Blog Project Description.
Feature Image: “Forgotten 80/365” by Marcy Leigh on Flickr.