07.09.2021 · Thuy-An Nguyen

How to rid AI of bias

Awareness that algorithms can be biased has grown over the past few years. Nevertheless, developers are still struggling to solve the problem – not only for technical reasons, but also for social ones.

 

Kenza Ait Si Abbou is one of the most prominent AI experts in Germany. She's the leading manager for artificial intelligence and robotics at Deutsche Telekom, has received numerous awards, and is a writer and speaker who advocates for more diversity in the tech industry. But when she applied for a job in Germany in 2008, a recruiting program threw her application out. 

Because Ait Si Abbou did not have an internship on her resumé at the time, a computer system rejected her application as not qualified enough, even though she already had several years of professional experience. If a human being had done the pre-selection, they might have overlooked the missing internship after noting the candidate's other outstanding qualifications. But the rigid screening program insisted on ticking off the "internship" box.

This case is just one example of the problems computer-generated decisions can cause. And it shows why companies will have to deal more intensively with the question of ethically fair artificial intelligence in the future.

In any case, there is an enormous need for guidance in the industry. According to a study by the British think tank Doteveryone, 59% of developers have reservations about the social consequences of their AI applications. Respondents feared possible negative social impact from applications that have not been adequately tested or that have potential security vulnerabilities. And 78% of respondents said they wished they had more time to think about the impact of their technology.

For this reason, knowledge about "de-biasing" techniques is becoming increasingly important. The word itself is ambiguous: in colloquial usage, "bias" generally stands for "prejudice" or "discrimination" and usually refers to social conditions. But there is also algorithmic bias – a distortion that can be caused by an AI system's technical design. Experts agree that de-biasing needs to take place on both the technical and the social level.

Wanted: Non-Discriminatory Data

For de-biasing from a technical perspective, it is important to start at three levels, according to Kenza Ait Si Abbou: What data forms the basis? What is the logic behind the AI model? And what is the composition of the development team? An AI system is a model based on mathematics and statistics, says Ait Si Abbou. "However, it learns from the data we train it with. That means any discrimination that arises has its origin in people. The data is a reflection of society."

That's why people need to ensure the data remains non-discriminatory. In general, data scientists are responsible for carefully selecting data and avoiding potential distortions. Recognizing potential discrimination within data sets, however, is a relatively new skill, and one that they need to learn, Ait Si Abbou points out. "Before we train an AI model we have to ensure we have a sufficient, representative data set. Because the more diverse the data set is, the better the results will be."
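What such a representativeness check might look like in practice is sketched below, assuming the training data sits in a pandas DataFrame. The column names, values and thresholds are purely illustrative assumptions, not a prescription from Ait Si Abbou.

```python
import pandas as pd

# Hypothetical applicant data; column names and values are illustrative only.
applications = pd.DataFrame({
    "gender": ["f", "m", "m", "m", "f", "m", "m", "f"],
    "hired":  [0,   1,   1,   0,   1,   1,   0,   0],
})

# How is each group represented in the data?
group_share = applications["gender"].value_counts(normalize=True)

# How often does each group receive the positive label?
hire_rate = applications.groupby("gender")["hired"].mean()

print(group_share)
print(hire_rate)

# A crude red flag: one group is barely present, or the label distribution
# differs sharply between groups (the thresholds here are arbitrary examples).
if group_share.min() < 0.2 or (hire_rate.max() - hire_rate.min()) > 0.2:
    print("Warning: data set may not be representative; review before training.")
```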

However, filtering the data set for possible discrimination can also bring complications. That's why Tobias Krafft, a social computer scientist, approaches de-biasing data sets with caution, especially when the AI application is already running and the data is changed after the fact. If you simply remove the criterion of gender, for example, this does not rule out proxy variables that are closely associated with gender. That might be shoe size, for example, or membership in a soccer club – criteria that are usually associated with males. "The new data set is still representative of previous decisions," emphasizes Krafft, who conducts research on de-biasing technologies in Katharina Zweig's Algorithmic Accountability Lab at the Technical University of Kaiserslautern. Because of this complexity, Krafft considers de-biasing data sets in this way to be a method with only limited promise.
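The proxy problem Krafft describes is easy to reproduce. The sketch below is a hypothetical illustration: the gender column is dropped from the feature set, yet the two stand-in proxies he mentions, shoe size and soccer-club membership, still correlate with it almost perfectly.

```python
import pandas as pd

# Illustrative data: gender is dropped from the features,
# but the two proxy variables from Krafft's example remain.
df = pd.DataFrame({
    "gender":             ["m", "m", "f", "f", "m", "f", "m", "f"],
    "shoe_size":          [44,  45,  38,  37,  46,  39,  43,  38],
    "soccer_club_member": [1,   1,   0,   0,   1,   0,   1,   0],
})

features = df.drop(columns=["gender"])  # naive "de-biasing": remove the attribute

# The remaining columns still encode gender almost perfectly,
# so a model trained on `features` can effectively reconstruct it.
is_male = (df["gender"] == "m").astype(int)
for col in features.columns:
    print(col, round(features[col].corr(is_male), 2))
```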

Instead, AI experts are increasingly attempting to understand better how AI systems work – and thereby de-biasing their underlying logic. The mechanisms within the AI decision-making process are considered something of a "black box." To solve this problem, research around "explainable AI" or "interpretable machine learning" has emerged in the past few years. "It's important to understand how the neural networks make their decisions," says Ait Si Abbou. 

"Neural networks are fed data and gain insights by recognizing patterns within the data. This enables them to see subtleties that we humans cannot discern," she says. "But sometimes they find correlations that don't yield causality. This became clear in an AI application designed to recognize horses. The application's performance was considered excellent. But scientists subsequently studied the AI and noticed that all the pictures with a horse were made by a single photographer, and the photographer had included a copyright sign on them. The AI had solved the problem not by recognizing the animals shown, but on the basis of the "copyright sign" correlation.

Understanding algorithmic decisions

This was the conclusion of a study by Dr. Wojciech Samek, head of the artificial intelligence department at the Fraunhofer Heinrich Hertz Institute (HHI), and Klaus-Robert Müller, professor of machine learning at TU Berlin. They developed an analysis method called layer-wise relevance propagation (LRP), which allowed them to investigate which pixels within an image were crucial for the AI's decision. 

Using LRP, it is possible to track a neural network's "thinking process": a heat map highlights the parts of the picture the AI uses to make its decision. The scientists noticed that the copyright sign was always highlighted, but not the horses in the pictures. This method uses saliency maps, a technique from the field of computer vision. Applying saliency maps to deep-learning processes is just one of many approaches to making AI more explainable.
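LRP itself distributes relevance scores backwards through the network layer by layer. As a rough illustration of the same idea, the sketch below computes a simpler gradient-based saliency map with PyTorch; the untrained resnet18 and the random image are placeholders, and this is not the HHI team's implementation.

```python
import torch
import torchvision.models as models

# Placeholder model and input; in practice this would be the trained
# classifier and a real photograph.
model = models.resnet18(weights=None).eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Forward pass, then backpropagate the score of the predicted class.
scores = model(image)
scores[0, scores.argmax()].backward()

# Gradient magnitude per pixel: large values mark the image regions the
# decision depends on – a watermark would light up just like a horse.
saliency = image.grad.abs().max(dim=1)[0]   # shape: (1, 224, 224)
print(saliency.shape, saliency.max())
```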

Other de-biasing methods come from software development. Social computer scientist Tobias Krafft, for example, focuses on testing the software used in AI models as a way to check a model's functionality. Software-testing procedures enable computer scientists to compare and optimize system results. The underlying idea is that AI can always learn new things and correct errors, so possible discrimination can be discovered and the model modified. Ideally, this should happen during the testing phase, i.e. before the AI model is put to use. "That way, the application arrives in society with as few errors as possible," says Krafft.
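What such a test could look like is sketched below as a simple demographic-parity check written in the style of a unit test. The model interface, the test data and the ten-percentage-point threshold are all assumptions for illustration, not Krafft's actual test suite.

```python
# A minimal, illustrative fairness test in the style of a unit test.
# `model` and `test_data` are placeholders for the real system under test.

def acceptance_rate(model, rows):
    """Share of positive decisions the model makes for the given test cases."""
    decisions = [model.predict(row) for row in rows]
    return sum(decisions) / len(decisions)

def test_no_large_gap_between_groups(model, test_data):
    # test_data maps a group label (e.g. "f", "m") to its test cases.
    rates = {group: acceptance_rate(model, rows) for group, rows in test_data.items()}
    gap = max(rates.values()) - min(rates.values())
    # Fail the test run if one group is accepted far less often than another;
    # the 0.10 threshold is an arbitrary example value.
    assert gap <= 0.10, f"Acceptance rates diverge too much: {rates}"
```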

As research around the traceability of AI grows, the technological possibilities to de-bias it are improving. But the composition of the development team also plays a role in any successful de-biasing effort. A diverse team is better placed to identify potential discrimination, Ait Si Abbou points out. However, diversity is not the only decisive factor; more important is how sensitive the team is to discrimination. "People developing AI need to be trained and learn more about unconscious bias," says Ait Si Abbou. One way to do this is to bring external experts on board, such as sociologists.


The illusion of an entirely fair AI

Because designing ethically just AI is so complex, the Bertelsmann Stiftung has developed a practical guide called the Algo.Rules project. The guide was developed with the help of more than 400 experts in the field. It is aimed at executives, data collectors, programmers and software designers, and is intended to provide assistance at various stages of development. "With all the hype, many AI applications have been developed without those involved in the process having the necessary skills," says Lajla Fetic, co-author of Algo.Rules. 

The basic idea of the Algo.Rules is to build in "ethics by design." That means opportunities for reflection must be built into the development process from the very beginning. "For one thing, companies that want to develop AI applications need to ask themselves up front what problem the AI is intended to solve, and what skills it needs," says Fetic, who was named one of 100 Brilliant Women in AI Ethics by the nonprofit Social Good Fund. 

For another, the process itself must be transparent. This includes defining responsibilities, setting goals for the use of AI, and considering the impact on those affected. Finally, thought should also be given to what happens beyond the development phase. Once the application is running, for example, the AI needs ongoing optimization and users need a way to register complaints. All of these principles are aimed primarily at AI technologies that have an impact on people.

Despite all the measures and technologies, however, Fetic cautions against believing that AI can ever be entirely discrimination-free. "Algorithmic systems reflect societal biases, which is why they cannot be truly discrimination-free until society reaches that state – an illusory goal," says the social scientist. The objective, she says, is therefore not to make AI completely discrimination-free, but to create ethical spaces for reflection during the development process.

Images: Jr Kopa/Unsplash
