Alexa controls the music system, robots vacuum the apartment, bots answer calls. Intelligent machines make everyday life easier. And we love them for it – but how much can we rely on them without endangering ourselves?
When video chatting with Hiroshi Ishiguro, you have to look closely to avoid confusing him with the machine next to him. It blinks, nods and tilts its head, and it not only looks human, it looks exactly like Ishiguro himself. Ishiguro, black hair parted on the side, a serious demeanor, runs his index finger over the cheek of his robot double. "The skin is made of silicone, and the hair is real hair. And his head is always moving slightly, just like ours," says the 57-year-old.
You can immediately tell that Geminoid HI 5 has no heart, because he has no torso. But his predecessor Geminoid HI 4 does have a body, and it's leaning against the wall behind Ishiguro. It too is an automated twin of the Japanese professor. Ishiguro even lets it deliver lectures for him at Osaka University; afterwards, the students have the opportunity to put their questions to the "real" professor.
"It's really practical, I don't even need to be there. My students like him better anyway," Ishiguro says. "They find his face softer, not as stern as mine." Ishiguro, who has also appeared in advertising for Gucci, has experimented with some 30 androids during his career as a robotics engineer. One of his stars is called Erica.
A brown-haired female android, Erica is more advanced than Ishiguro's geminoids, which are remote-controlled by humans. Erica can already carry on a conversation for up to ten minutes, "completely autonomously." Her chest rises and falls; her camera eyes record her conversation partners; she registers faces and voices. Erica responds to emotions with different expressions and gestures. She smiles, nods, asks questions.
"Visitors in hotel lobbies like to talk to her," Ishiguro says. Erica has more than 150 conversation topics. Some people find it hard to believe that Erica is not receiving help from a computer specialist, he adds with pride. Like other robotics researchers around the world, Ishiguro is fascinated with how we react to machines that look like us. Or similar to us, because they look like people.
By now this is no longer a merely academic question. Androids and humanoid robots – machines controlled by software and equipped with intelligence and a human appearance – are making inroads into our everyday lives. They help companies provide services; they clean rooms, take care of nursing home patients, prepare meals. Not only do they work as we do, often with better math skills; they are also quickly acquiring another ability we generally attribute only to humans: expressing emotions like joy, sadness or fear. And this presents us with a challenge: How do we deal with creatures that think and act like us humans, and that we might come to develop positive feelings for, because of their appearance and the many favors they do for us?
Robot Erica. Photo: Getty
Humanoid vs. Android
Ruth Stock-Homburg, professor of marketing and personnel management at the Technical University of Darmstadt, estimates that "we will begin integrating social robots in our offices within four to nine years." That includes robots that look and act like humans: humanoids like a plastic robot called Pepper, already in service in clinics and reception areas around the world, and androids like Erica, robots whose skin, hair and voices are similar to those of human beings.
This development has been accelerated by the coronavirus pandemic, which Stock-Homburg says has "radically" increased our acceptance of robots. This view is supported by two studies conducted by TU Darmstadt since the outbreak of Covid-19, investigating the use of robots in customer service and other departments. According to the findings, more than two-thirds of survey respondents saw clear benefits in the use of service robots, particularly in retail settings, where they could reduce the risk of infection. And a majority of respondents in commercial companies sees a role for interactive robots.
Until recently, the 48-year-old economist would not have thought that possible. After all, robots are awkward machines and often misunderstand what humans are saying, especially when there's background noise or when people mumble or talk through a mask. But in view of ongoing developments in the areas of hardware and computing capacity, as well as giant leaps in quantum computing, Stock-Homburg sees as-yet-undreamed-of possibilities for interactive robots. Their mechanics are also improving. A robot trained by Stock-Homburg's colleagues plays ping-pong with an accuracy rate of 95%.
To get people ready for these new co-workers, Stock-Homburg is investigating what happens when Pepper or a blond android called "Elenoid" is introduced to employees as a new team member. Compared to humanoids, androids are even more human-like. Stock-Homburg's team has equipped them with algorithms and sensors so that, like Erica, they can autonomously converse with humans and sense their mood. With ten options for facial movement, they can also express emotions like happiness and surprise, because people need "consonance": a companion who returns a smile with a smile.
Pharmaceuticals company Merck has already tested Elenoid in its personnel department. According to the company's website, employees' reactions were predominantly positive. One major finding for Stock-Homburg was that androids are better suited than humanoids like Pepper for demanding tasks, such as when an expert opinion is needed on a complex topic.
With its big, childlike eyes, Pepper is too cute to be taken seriously. Moreover, robots that are immediately recognizable as machines lack credibility; they are better suited for simple conversation and tasks like checking in guests at a reception desk. But when people face a friendly and competent Elenoid, acceptance is nearly as high as with a human, Stock-Homburg says.
Too cute to be taken seriously: Pepper. Photo: Getty
Welcome to Uncanny Valley
Will Jackson, head of the British android builder Engineered Arts, also describes how differently we react to human-like robots. On a robotics panel, he recounts how employees gathered around a switched-off android as if it could talk to them. "They treated it like a human being," the developer says.
Elenoid also elicits strong reactions. But she tends to be a bit disturbing when she's not doing anything, or when she looks serious. "She can be pretty uncanny," says Stock-Homburg.
Here she is referring to the "Uncanny Valley" hypothesis of Japanese robotics researcher Masahiro Mori, which holds that acceptance of an artificial entity initially increases with its similarity to humans, rising to a high, but not complete, degree of acceptance. Beyond that point the effect reverses: the more human-like, the more sinister. The reaction does not improve again until the entity is nearly identical to a human being. Ishiguro is convinced that his androids have long since emerged on the other side of the Uncanny Valley.
That doesn't hold true for Telenoid, which he now takes onto his lap. White and about the size of a large toddler, it has hardly any facial features. With its stubby arms, silicone body and bald head, it is, Ishiguro claims, the perfect projection surface for dementia patients, helping them regain access to their memories. They need the vagueness. Telenoid is currently being tested in care facilities. But on the Internet, some people are disgusted by it. YouTube comments call the robot a "disgusting baby," a "misshapen fetus." One wrote simply, "Burn it!"
At Morals & Machines 2018, robot Sophia met German Chancellor Angela Merkel.
The vulnerable human
Unlike the cute robot Pepper and the androids Erica and Elenoid, entities like Telenoid fail the acceptance test. People don't want scary robots; they shouldn't be freaks. But they also shouldn't be all too similar to human beings. At least, that is the judgment of people who deal with questions of ethics, like Alan Winfield. "As soon as robots look like humans, we become vulnerable. We can't help but respond emotionally," says the professor of robot ethics at the University of the West of England. It is in our nature to "anthropomorphize" them, ascribing human qualities to them, especially when they look similar to us or offer strong "social stimuli" such as displays of fear or sadness.
Barbara Müller, junior professor at the Behavioural Science Institute of Radboud University in Nijmegen, has seen first-hand that such concerns are not unfounded. Together with researchers at Ludwig Maximilian University of Munich, she studied the extent to which test subjects were willing to sacrifice humans rather than robots when faced with a dilemma. The more human-like the robot was, they found, the less willing test subjects were to destroy it.
That alone is "absurd," Müller finds, because people should know that robots don't have feelings like we do. However, what the 39-year-old psychologist finds "shocking but not surprising" is that some people's empathy went so far that they would rather sacrifice a group of people than the poor robot. "Anyone familiar with human-robot interaction knows that we respond especially empathetically when machines simulate emotions," Müller says.
But what about the other way around? Can machines experience emotions? Artificial intelligence experts like Jürgen Schmidhuber of the Swiss AI Lab IDSIA say so (see the interview published with this article), but most of his colleagues take a critical view. Either way, most people don't really make the distinction, because humanoid machines are "socially contagious," as researchers in human-robot interaction put it. When they look happy, we feel better, and when they suffer, so do we. This has been underscored by experiments conducted by Kate Darling, a researcher at the Massachusetts Institute of Technology. She had female test subjects play with a dinosaur robot that made cute squeaking noises. Afterwards, most of them refused to decapitate the robot on Darling's instructions. One woman even hugged her dinosaur and removed its batteries to "spare him pain."
Studies have shown that lonely people in particular are prone to anthropomorphizing. That leaves room for abuse, cautions Joanna Bryson, professor of ethics and technology at the Hertie School in Berlin, echoing the sentiments of Winfield. Bryson helped develop the principles of robotics in Great Britain and was the expert nominated by Germany to the Global Partnership on Artificial Intelligence. "Empathy works better the more similar someone is to you," Bryson says.
So should we leave "anthropomorphic" avatars in the laboratory? As empathetic companions, they might be used to spy on consumers or to persuade them to buy certain products, especially when they feign friendship with unsuspecting people. It is imperative, Bryson says, for manufacturers to be completely transparent about what machines are programmed to do: about everything they can do, and especially about what they are not capable of.
In order for us to accept them without falling prey to them, researchers advocate designing social robots with a focus on "cognitive" rather than "affective" trust: acceptance through understanding rather than emotion. That, after all, is something machines should be able to do better than humans.
Cover Image: Osaka University