  • 04.06.2021
  • Miriam Meckel

A Good Face for a Bad Game

Large corporations as well as tech companies are betting on emotion recognition – and so is the Chinese state. A lot of it is mumbo jumbo. But it has its uses.

Mention the name Adam Smith, and people immediately think of the “invisible hand” that supposedly allows a great, good whole to emerge from the accumulation of individual egoisms. This idea has shaped our economic order for more than 250 years. Smith used the metaphor prominently in "The Wealth of Nations" (1776). Less well known is that the Scottish political economist also thought a lot about emotions.

His "Theory of Ethical Emotions" (1759) postulates that human use moral rules as a means for locating themselves in reality. Humans are constantly observing themselves as they interact and communicate with each other. It’s how we discern when we ourselves and others are suffering or experiencing pleasure, and make the perceived similarities and differences the moral standard for interacting with one another.

Adam Smith would have had no use for "emotion recognition." Emotion recognition is the “next big thing” for all those prediction fanatics who believe nothing in the world should be spared from data-based prediction.

And so schoolchildren in Hong Kong sit in front of a screen, learning remotely, while an artificial intelligence constantly analyzes their facial features. The AI aggregates and evaluates micro-movements of their facial muscles, sometimes even minimal changes in their skin pores, to determine how happy, motivated and attentive the learners are.

Coca-Cola and Intel use marketing service providers specializing in facial recognition technologies to test the emotional response to their advertising. And tech companies like HireVue and Human analyze the facial expressions of job candidates as early as the initial application stage to assess whether they are conscientious and reliable. Those who think this is mumbo jumbo are right.

Emotion recognition endangers the human core of humanism

Every individual can be identified by his or her facial features. A lot can also be read from them – though unfortunately not, with any reliability, the emotional state a person is in. AI systems are trained on data, and how correct their results are depends on that data.

How often these technologies misidentify people or, for example, fail to detect the faces of black people in the first place is by now abundantly documented. What the systems do recognize, on the other hand, is good looks.

When I recently tried out a newer emotion recognition system, it reported "poker face" several times. Over the course of a lifetime, people learn not only to express emotions, but also to withhold or disguise them.

There is no one standard for how joy or fear are expressed. It varies individually, but also culturally. The theory of six universal emotions (fear, anger, joy, sadness, disgust and surprise) developed by U.S. psychologist Paul Ekman has long been widely challenged.

So much of emotion recognition is indeed mumbo jumbo. But it’s profitable. The market for AI-powered emotion recognition is expected to grow from just under $20 billion to $37 billion in the next five years.

So there's a lot of money at stake – and extensive control. The BBC recently published research showing that China is using algorithmic analysis to monitor the emotions of the Uighur minority in Xinjiang and to further restrict their rights. Alleged suspects are strapped into chairs in police stations, questioned, and filmed to determine whether they are in a "negative" or "fearful" emotional state, which is taken as an indication of guilt.

This "emotion detection" threatens the human core of humanism. Adam Smith saw it as a given that man takes an interest “in the fortune of others, and renders their happiness necessary to him, though he derives nothing from it except the pleasure of seeing it."

If we want to preserve that, caution is advised in the field of emotion recognition. Emotions allow us to balance in-betweens, explore trade-offs, and endure ambiguity. AI-based emotion recognition turns emotions into analytics, and moral understanding into computational principles. It is an example of uncreative destruction.

Miriam Meckel

Prof. Dr. Miriam Meckel is the Co-founder and CEO of ada Learning GmbH and professor of Communication Management at the University of St. Gallen, Switzerland. In this column, Miriam Meckel writes biweekly about ideas, innovations and interpretations that yield progress and improve our lives. Because what the caterpillar calls the end of the world, the rest of the world calls a butterfly.

