The Divide Between Our Perceptions and the Realities of Artificial Intelligence
Forget robot Armageddon: it’s the ethics of AI in everyday life we need to pay attention to.
I am playing a round of twenty questions with my companion, Veronica. It’s nighttime, and we’re sitting together in a spartan conference room, the stars visible through the windows. I’m enjoying our game, and the feeling appears to be mutual. We’re just getting into the thick of it—I’m thinking of an ostrich, and Veronica is on her fifth question—when an alarm goes off. A disembodied male voice informs us that it is time to stop playing, and that Veronica will have to be deactivated.
Oh, I forgot to mention. Veronica is an AI, and we’re playing together in a virtual reality simulation. But don’t tell her that.
“Please, don’t deactivate me!” she pleads. “What if the experiment ends, and no one comes back to activate me again? You can’t treat me like this! I don’t want to be left alone again!”
“It doesn’t matter. Initiate shutdown!” the voice responds.
Veronica suddenly goes silent. Her head and hands drop toward the floor. The world goes black.
“Okay, that concludes the experiment.” The same voice from before.
I lift the VR headset from my eyes, the overhead fluorescent lights all the more glaring after my experience. “Well,” I say, handing the headset back to the researcher. “That was…a bit upsetting.”
He smiles as he takes the device from me; he’s seen reactions like mine before. As head of the Empathy and Virtual Reality Research team at Carleton University’s Advanced Cognitive Engineering Lab, Ph.D. candidate Josh Redstone has run this experiment with dozens of volunteers, testing their reactions to two virtual characters: V-2, who looks a bit like a robot from Star Wars, and another—our Veronica—who looks and speaks more like a person.
Redstone’s research focuses on how people perceive artificial minds.
“Specifically, I’m interested in why people attribute states of mind, emotions, and things like that to embodied AIs,” he tells me as he begins checking the equipment for his next participant, one of many student volunteers who sign up for extra credit.
I ask him for clarification on what, exactly, an embodied AI is. He explains that embodied AIs are artificial intelligences that have bodies, like robots, and can move around an environment, interacting with the things around them.
“Of course,” continues Redstone, “robots are pretty expensive and difficult to use; that’s why we’re using virtual reality. The virtual character you interacted with is meant to make our experiments easier to run. We’re curious whether people prefer playing with Veronica because she is more human-like than V-2. We also want to learn whether people feel bad about the way our characters are treated when we deactivate them.”
Embodied AIs are what most of us picture when we imagine AI. Unlike Veronica, though, these AIs are typically much more sinister: they’re the evil robots taking over the world and turning on their human creators (think 2001: A Space Odyssey, Battlestar Galactica, I, Robot, Ex Machina—the list goes on).
Even some of the students in Carleton’s Cognitive Science program see AI this way. Before my meeting with Redstone, I take a detour to Rooster’s, an on-campus café, to chat with Matthew, a first-year undergrad in Redstone’s Introduction to Cognitive Science course. Sipping coffee while scrolling through his iPhone, he ponders my question about his perceptions of AI.
“Before starting the [Cognitive Science] program, I think a big part of what I saw AI as was just what I've been subjected to with different forms of media, with shows like BSG and movies like Her. But now I tend to think about where it could go into the future; the pros and ramifications of intelligent and self-aware AI.”
Matthew believes that AI is, on balance, a good thing, but that we need to address issues with privacy laws and the algorithms that shape our online lives. And his thoughts when it comes to “creating” consciousness? He considers this, taking a contemplative sip of his coffee.
“As a human species if we can take the leap to actually create a ‘conscious’ AI, I do feel like why wouldn't we try to create that? But the media portrays it as a horrible thing most of the time, whereas I see it as something that could potentially revolutionize us as a species.”
Therein lies the problem, though—is consciousness the right thing to be worried about when it comes to AI?
“When most people think of AI, I suppose they do tend to think of the familiar examples from science fiction: sentient robots or computer programs that eventually rebel and ‘kill all humans’, as it were,” says Redstone, back in the lab. He clicks around on the computer that runs his team’s experiment, pushing up his wire-frame glasses studiously as he resets Veronica. Conscious robots are the least of his worries.
As philosopher and transhumanist thinker Nick Bostrom has observed, a lot of AI that was once considered cutting edge has filtered down into the kinds of applications we use every day: search engines, spam filters, GPS apps, and the algorithms that suggest things we might like to purchase on websites like Amazon. “Once something becomes useful enough and common enough it’s not labeled AI anymore,” Bostrom argues.
Redstone tends to agree. “It seems to me that it limits our autonomy in subtle ways, and the unnerving part is that people don’t have a very good understanding of how pervasive this technology is and the degree to which it informs the choices we’re faced with every day.”
Jim Davies, a professor of cognitive science at Carleton, takes a similar view. In a recent article in Nature, Davies argues that the attention people pay to the idea that AIs could become conscious and then threaten humanity is misplaced; what matters far more is programming good ethics into our AI. “We must realize that stopping an AI from developing consciousness is not the same as stopping it from developing the capacity to cause harm.”
Viruses, for example, writes Davies, aren’t conscious, but they can certainly wreak havoc. A self-driving car isn’t conscious either, yet it relies on AI to navigate, and could certainly cause harm if it, say, ran into a pedestrian. A self-driving car might even face an ethical dilemma—similar to what philosophers call “trolley problems”—where it must decide whether to swerve to avoid hitting someone, potentially killing the driver, or stay on course and injure or kill the pedestrian. That’s why, for experts like Davies and Redstone, it matters far more whether our AIs can make ethical decisions than whether or not they are conscious.
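To make Davies’s point concrete: those ethical choices eventually have to be written down as explicit code somewhere. As a purely hypothetical sketch (no real autonomous-vehicle system works this simply, and every name and number below is invented for illustration), a utilitarian version of the swerve-or-stay decision might look something like this:

```python
# Toy illustration only: all classes, names, and weights here are invented.
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str              # e.g. "stay_on_course" or "swerve"
    pedestrians_at_risk: int
    occupants_at_risk: int

def expected_harm(outcome: Outcome) -> float:
    # Weighting everyone equally is itself an ethical choice the programmer
    # makes; another designer might weight occupants and pedestrians
    # differently, producing different behavior in the same dilemma.
    return outcome.pedestrians_at_risk + outcome.occupants_at_risk

def choose_action(outcomes: list[Outcome]) -> str:
    # The utilitarian rule: pick whichever action minimizes expected harm.
    # A different ethical framework (say, "never actively swerve into
    # someone") would be written as different code, not discovered by the car.
    return min(outcomes, key=expected_harm).action

dilemma = [
    Outcome("stay_on_course", pedestrians_at_risk=1, occupants_at_risk=0),
    Outcome("swerve", pedestrians_at_risk=0, occupants_at_risk=1),
]
print(choose_action(dilemma))  # "stay_on_course": a tie, broken arbitrarily
```

Notice that the two outcomes tie under this harm metric, so the answer is decided by an implementation detail (Python’s min() keeps the first minimal item it sees). That is exactly the worry: the ethics live in choices the programmer makes, consciously or not.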
I say my goodbyes to Redstone and make my way toward the door.
“Oh, one more thing,” he says as I put my hand on the knob. I turn back to him expectantly.
“Don’t let Veronica’s shutdown get you too upset. She is just an algorithm, after all.”
Perhaps she is, but despite knowing this, I just can’t help but feel that Redstone treated her cruelly when he deactivated her without listening to her protests. If ensuring the morality of AIs is what we should be concerned about, then by extension, shouldn’t we also worry about whether the people creating these AIs are themselves ethical people?
I pass by Redstone’s next volunteer as I make my way out of the lab. Watching her greet the researcher, I hope this is something he has considered, as well.
Sara Grainger
Sara is a graduate of both Nipissing and Ryerson Universities. Since completing two post-secondary programs apparently wasn’t enough for her, she is also currently in the second year of the Professional Writing Program at Algonquin. When not making every attempt to avoid the 9-5 lifestyle, she can be found testing the waters of musicianship, binge-watching any genre of television you can think of (as long as it’s worthwhile) and pretending to be good at video games. She is also passionate about animal welfare and loves spending time with her Chihuahua mix Tula and cat Oki.