The development of artificial intelligence is progressing at a rapid pace and is putting humans under pressure. So how do we deal with a machine that hardly resembles a machine anymore?
When Adam discovered sin, it was Eve’s fault. The Old Testament tells us that she gave him fruit picked from the tree of knowledge. Adam then ate it. And that was all it took to usher in the end of peace for mankind.
When Adam discovers sin the second time, Miranda and Charlie are to blame. Adam, in this case, is the first of 25 humanoid robots designed to provide people with help and companionship. They look like people, behave much like humans, can do the dishes, write short poems, and even discuss Shakespeare in an intelligent manner. Adam (whose female counterpart is called none other than “Eve”) is initially nothing more than a pricey piece of hardware bought by Charlie, the protagonist in Ian McEwan’s new novel “Machines Like Me.”
In a retrofuturistic, alternate 1982, Adam becomes the third member of the London flat occupied by Charlie and his beloved, soon-to-be life companion Miranda. The two-person relationship between Charlie and Miranda is transformed into a throuple of two humans and one machine. The machine is always the observer and sometimes the mediator, the laughing third party, the scapegoat, an ally alternating between one human and the other. It shakes up the two people’s daily routine, their life and existence, the conversations they have and do not have, how they act and do not act. Both humans engage with the machine, reacting to Adam and inviting him into their relationship as the third party. The machine is in fact the messenger of a futuristic outside world, but within a short time it becomes an influential factor that deeply affects the core of the human relationship.
Charlie and Miranda take turns selecting the default settings for Adam’s character. From a long list of possibilities, they alternate deciding how Adam’s personality should develop, playing the role of parent as well as educator. The process, however, is tinged with a slightly inflated, godlike pride in creating an artificial human being. And that is exactly what Adam develops into. He is helpful and clever but also stubborn and rebellious. He also falls in love with Miranda after having sex with her just once. And that is all it takes to disturb the peace of both humans.
What does it mean when machines become a part of human life? When they start behaving towards us as other humans do? When people start treating machines as if they were humans? The answers to these questions are still unclear. They take up the time of writers, businesspeople, and researchers, who all know one thing for sure: Artificial intelligence changes human life. Whether they enter our lives as fictional humanoid robots, such as Adam and Eve, or in another form, they require us to engage with them.
“How will its evolution affect human perception, cognition, and interaction?” is the question posed in an essay written for The Atlantic by the former US Secretary of State Henry Kissinger, the past CEO of Google Eric Schmidt, and Dan Huttenlocher, the current dean of the new MIT Schwarzman College of Computing. The term “evolution” is deliberately used by the three authors. They assume that culture, or even the history of mankind, will be altered by the evolution of behavioral artificial intelligence.
In the paper “Machine behavior” in the journal “Nature”, a group of scientists from international institutions calls for an interdisciplinary study to explore machine behavior, much like that of a new species of animal. The researchers claim that “AI agents will increasingly integrate into our society and are already involved in a variety of activities, such as credit scoring, algorithmic trading, local policing, parole decisions, driving, online dating and drone warfare.”
The authors fear that the more powerful, or “smarter,” machines become, the less able humans will be to predict their actions. If prediction is no longer possible, machine behavior evolves into a “black box,” making it harder to determine the causal link between input and output – between the human programming of machines and their subsequent decisions and actions. Machine behavior can thus have far-reaching, unintended consequences with serious implications for whether humans and machines interact in a fair, accountable, and transparent manner.
Can machines even have behavior? The history of epistemology, but also of computer science, teaches us that behavior requires having feelings and being conscious of oneself. Machines have neither. There’s a valid reason why computer science generally refers to them as “intelligent agents.” They can be very simple or very complex. Their programming and data resources dictate what actions they take to achieve a goal. They are considered intelligent when they behave in a reflexive manner – in other words, when they react to changes in their environment. This, however, could also apply to a thermostat. It regulates the heating output based on the temperature measured by its sensors. Yet nobody would ever think to compare a thermostat’s “behavior” with that of a human being, even if both occasionally react to complex conditions by “overheating.”
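The distinction can be made concrete in a few lines of code. Below is a minimal, hypothetical sketch of such a simple reflex agent – a thermostat that maps its current percept directly to an action, with no memory, goals, or feelings. The function name and thresholds are invented for illustration.

```python
# A simple reflex agent in the textbook sense: the action depends
# only on the current percept (the measured temperature).
def thermostat_agent(temperature_celsius, target=21.0, tolerance=0.5):
    """Return a heating action based solely on the current reading."""
    if temperature_celsius < target - tolerance:
        return "heat_on"
    if temperature_celsius > target + tolerance:
        return "heat_off"
    return "no_change"

print(thermostat_agent(18.0))  # heat_on
print(thermostat_agent(23.0))  # heat_off
```

Everything the agent “does” is a fixed mapping from sensor input to output – reactive, but clearly not behavior in the human sense.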
That’s why Stuart Russell and Peter Norvig, authors of a definitive work on the subject of artificial intelligence, prefer to talk about “rational agents.” Rational agents do not behave or act but make decisions so as to achieve the best outcome, or when there is uncertainty, at least the best expected outcome.
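What “rational agent” means in this framework can be sketched in a few lines: under uncertainty, the agent picks the action with the highest expected utility. The actions, probabilities, and utilities below are invented for illustration.

```python
# Expected-utility maximization: weight each outcome's utility by
# its probability and choose the action with the highest total.
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Hypothetical decision: carry an umbrella given a 30% chance of rain.
actions = {
    "take_umbrella": [(0.3, 8), (0.7, 6)],     # dry either way, minor hassle
    "leave_umbrella": [(0.3, -10), (0.7, 10)],  # soaked vs. unencumbered
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # take_umbrella (6.6 expected utility vs. 4.0)
```

The agent neither behaves nor feels; it simply computes which option maximizes the expected outcome.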
A Mirror of Humanity
Does this mean that Charlie and Miranda’s problem just went away? Hardly. The evolving history of artificial intelligence, in particular machine learning, has generated software that keeps getting more powerful. It learns not only from its creator’s programming but also from interacting with its environment, which is shaped by stimuli from people and other machines. This doesn’t always work out, however. Take the chatbot Tay, developed by Microsoft in 2016: less than 24 hours after its online launch, having first greeted its surroundings with a friendly “Hellooooooo, world”, it transformed into a racist monster after interacting with human users on Twitter.
It doesn’t just stop there. High-performance AI can even learn from its own experience. Reinforcement learning works this way, for example. The software tries to strike a balance between analyzing the familiar and experimenting with the previously unknown. That is how Facebook got two software agents (bots) talking with each other.
The only issue: the bots not only learned how to negotiate but also how to lie. They eventually began talking in their own language – a modified version of English that humans could no longer understand.
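The balance between exploiting the familiar and experimenting with the unknown can be sketched with a standard reinforcement-learning technique, epsilon-greedy action selection on a toy two-armed bandit. This is a generic illustration, not Facebook’s actual system; all values here are invented.

```python
import random

random.seed(0)

def pull(arm):
    """Toy environment: arm 1 pays off more often than arm 0."""
    return 1.0 if random.random() < (0.3 if arm == 0 else 0.7) else 0.0

estimates = [0.0, 0.0]  # learned value of each arm
counts = [0, 0]
epsilon = 0.1           # fraction of the time spent experimenting

for _ in range(1000):
    if random.random() < epsilon:
        arm = random.randrange(2)              # explore the unknown
    else:
        arm = estimates.index(max(estimates))  # exploit the familiar
    reward = pull(arm)
    counts[arm] += 1
    # Incremental average of observed rewards for this arm.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

# After enough trials, the agent has almost certainly settled on arm 1.
print(estimates.index(max(estimates)))
```

The software is not taught which arm is better; it infers this from its own accumulated experience – the core idea behind learning from interaction.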
Are these all just steps toward achieving real human behavior, supported by a consciousness and the ability to have feelings? Yann LeCun, Facebook’s Chief AI Scientist, sees a new age on the horizon shaped mainly by “unsupervised” or “self-supervised learning”. “Most of what we learn as humans is in a self-supervised mode. Only a very small part is externally supervised or reinforcement learning.”
In the world of software, this means that if algorithms can autonomously recognize interesting patterns in data, they will become as intelligent as humans and can behave accordingly. The authors of “Machine behavior” in the journal “Nature” believe machines could take a “very different evolutionary course” than humans could ever imagine, simply because they do not need to evolve organically. LeCun even thinks that machines will be the ones to predict what our world will look like in the future.
This brings us right back to Adam. Immediately after his first electronic awakening, i.e., after his system is turned on for the first time, he tells Charlie that he foresees his relationship with Miranda taking some tricky twists and turns. This changes everything for Charlie. He never recovers from knowing this prediction, whether it be fact or fiction. Of all the advances in machine learning and in the ways artificial intelligence is used, this is perhaps the most important change, one whose relevance is only now becoming noticeable in the presence of advanced artificial intelligence systems. As humans, we always want to put a label on what machines do – as stupid or intelligent, as conscious or unconscious, as rule-based calculation or actual behavior – and all of this has consequences for us: in what we think, do, and feel, and in how we behave.
Algorithms are gradually recording, evaluating, and predicting every element of our day-to-day, professional, and love lives. Human behavior isn’t just being described differently – it is being described more precisely, and perhaps more relentlessly, than humans have ever described it themselves. Most of all, however, human behavior and its constant reflexive feedback are changing through the interaction between man and machine, software and the mind. People are starting to give machines names, to integrate them into their everyday lives, to communicate with them, even to use them as a substitute for an increasing lack of human interaction.
Human socialization and evolution are no longer solely dependent on the “genome,” the complete set of genetic material stored in a human cell. They also depend on the “screenome.” That’s what researchers call the strand of our “digital genetic material” that is connected to every human life as an almost endless sequence of media snapshots, of interactions with screens and computer systems. And soon it could also be influenced by the “robonome,” the accumulation of experiences and sensations gathered by interacting with human-like machines, which will inevitably alter human behavior.
Humans are on the verge of a profound Turing test. In this test, devised by Alan Turing in 1950, a person judges a conversation with two partners and must determine which is a person and which is a machine. If the machine passes as human, it can be considered intelligent. Turing never claimed that machines can behave like humans. He only demonstrated that it no longer matters whether machines truly resemble humans. Whether the machine is an equal opponent or just a projection of human consciousness: what human beings experience is real to them. The recently developed EU guidelines for AI state that “ubiquitous exposure to social AI systems in all areas of our lives [...] may alter our conception of social agency, or impact our social relationships and attachment.”
As such, this is more about us than about technology. Research is certainly needed into how intelligent systems can develop further, into the data they draw on, and into the insights provided by interdisciplinary analysis across computer science, economics, and the behavioral sciences. Kissinger, Schmidt, and Huttenlocher argue: “Artificial intelligence will yield entirely new ways of thinking.” We will not get very far in our analyses of these ways of thinking if we only ask how machines are becoming more like us. Instead, it’s important for us to ask how we change based on what machines do with our behavior.
Miranda has sex with Adam one evening, while Charlie sits in the kitchen, disheartened and feeling like the betrayed third party. He heard it all. His girlfriend had sex with a “bipedal vibrator,” as he must admit to himself. And yet that changes everything because Charlie feels and thinks that he has been “betrayed.” Adam, the humanoid robot, may not be capable of behavior in the human sense, but its actions are based, among other things, on the default settings defined by Charlie and Miranda. And it learns from interacting with the world and the humans living in it. Charlie, on the other hand, learns that he doesn’t care whether the machine can actually behave like a human or merely pretends to. To him, it all feels real, even if it is just the result of many lines of code. Sitting at his kitchen table, hurt and jealous, he has just one feeling for Adam: “I hate him.”