Carla Hustedt researches the impact of digitalization on society. In this interview she explains how police hope to prevent crime with the help of algorithms – and what the consequences may be.
Ms Hustedt, it's common knowledge that using artificial intelligence can lead to bias. How are companies working to ensure their algorithms don't discriminate?
It differs from case to case. Certainly the problem has been receiving more attention. But facial recognition software still suffers from a dearth of representative data. When you train your AI system using only pictures of white men, it has more trouble recognizing women and people of color.
People have known about this problem for more than 20 years. Why haven't the data sets been expanded?
Many companies definitely lack the necessary sensitivity for diversity issues. More attention is being paid to gender equality these days, but intersectional approaches are for the most part non-existent: black women and the differently abled, for example, fall completely through the cracks. Companies also often don't understand how to improve the quality of their training data; beyond testing for accuracy, they should be testing holistically for prejudice. Even a technically flawless system can lead to discrimination if it is used for problematic purposes. So we need more people who have a broad understanding not only of the technology but also of the social context within which it is used. Currently, we often don't notice errors until they have already occurred.
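To make the point about testing beyond aggregate accuracy concrete, here is a minimal sketch of a disaggregated evaluation: the same predictions are scored separately for each demographic group so that gaps of the kind Hustedt describes become visible. The data, group labels, and numbers are illustrative assumptions, not drawn from the interview or any real system.

```python
# Minimal sketch: evaluate a classifier per demographic group, not just overall.
# All data and group labels below are illustrative assumptions.
from collections import defaultdict

def disaggregated_accuracy(y_true, y_pred, groups):
    """Return overall accuracy and accuracy broken down by group."""
    per_group = defaultdict(lambda: [0, 0])  # group -> [correct, total]
    for truth, pred, group in zip(y_true, y_pred, groups):
        per_group[group][0] += int(truth == pred)
        per_group[group][1] += 1
    overall = sum(correct for correct, _ in per_group.values()) / len(y_true)
    return overall, {g: correct / total for g, (correct, total) in per_group.items()}

# Toy outputs of a hypothetical face-matching system.
y_true = [1, 1, 1, 1, 1, 1, 1, 1]
y_pred = [1, 1, 1, 1, 1, 0, 0, 1]
groups = ["white_men"] * 4 + ["women_of_color"] * 4

overall, by_group = disaggregated_accuracy(y_true, y_pred, groups)
print(f"overall accuracy: {overall:.2f}")
for group, acc in by_group.items():
    print(f"  {group}: {acc:.2f}")  # a large gap here signals the kind of bias discussed above
```

In this toy example the overall accuracy looks acceptable (0.75) while one group is recognized only half the time, which is exactly what an aggregate-only test would hide.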
Would companies be better advised to examine their real-world processes for fairness and inclusion before automating them?
Absolutely. And this is particularly true for the recruiting process. When algorithmic systems are trained using data from current employees, they tend to reproduce the discrimination that is present in existing, analog processes. Prior to automating a process you have to re-examine the problem you want to solve. For example, you need to precisely define what makes a good employee, which traits and characteristics are required, and how they can be quantified. This also presents an opportunity to revamp processes and employ people more on the basis of talent and competence rather than subconsciously relying on gender and ethnicity.
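One common way to audit a recruiting step, whether analog or automated, is to compare selection rates across applicant groups, for example with the "four-fifths rule" used in US employment practice as a rough screen. The sketch below is purely illustrative; the applicant data and the 0.8 threshold are assumptions, not something described in the interview.

```python
# Illustrative sketch: compare selection rates across applicant groups.
# The "four-fifths rule" is used here only as a rough screen; data are made up.

def selection_rates(decisions):
    """decisions: list of (group, was_selected) pairs -> selection rate per group."""
    counts = {}
    for group, selected in decisions:
        hired, total = counts.get(group, (0, 0))
        counts[group] = (hired + int(selected), total + 1)
    return {group: hired / total for group, (hired, total) in counts.items()}

def adverse_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate."""
    return min(rates.values()) / max(rates.values())

# Hypothetical outcomes of a screening step, before or after automation.
decisions = ([("group_a", True)] * 40 + [("group_a", False)] * 60
             + [("group_b", True)] * 20 + [("group_b", False)] * 80)

rates = selection_rates(decisions)
ratio = adverse_impact_ratio(rates)
print(rates)                          # {'group_a': 0.4, 'group_b': 0.2}
print(f"impact ratio: {ratio:.2f}")   # 0.50 -- below the 0.8 rule of thumb, worth investigating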
Unlike in the USA and China, AI is still rarely used in socially sensitive areas in Germany. How will algorithms be used here to make personnel decisions in the future?
One algorithmic system most Germans are already affected by is the one that calculates our credit score. Many people don't realize that this is done by software, and exactly how it works is not public knowledge. Personnel recruitment is another huge market where we can expect more and more processes to be automated. Fortunately, the General Data Protection Regulation (GDPR) sets limits on the use of personal data, for example on the use of algorithms in the judiciary. But numerous security agencies are already using AI for "predictive policing." That's allowed because they only collect location data to predict break-ins.
But you are still concerned about their use?
We should at least keep in mind that such systems can also have an impact on various groups within society, for example by increasing police patrols in neighborhoods where many migrants live. Increased police presence can result in certain demographics being policed more aggressively, which in turn has a negative impact on the atmosphere of the neighborhood. These effects have hardly been studied yet; it is currently unclear how effective AI systems are in anticipating criminality in the first place.
Issues such as this one have led the state criminal police office of North Rhine-Westphalia (NRW) to install a simplified AI system instead of a complex neural network for predictive policing. The aim is to make decisions easier to understand.
I think that makes a lot of sense for socially relevant decision-making systems. I am even in favor of requiring agencies or companies to test beforehand whether a less complicated system would serve their purposes just as well as a neural network. This is conducive to transparency, because with neural networks you can never be certain what an algorithm has learned, and how it reaches its decisions.
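The idea of testing beforehand whether a simpler model is good enough can be made concrete with a baseline comparison. The sketch below, using scikit-learn on synthetic data as an assumed stand-in for a real decision task, pits an interpretable logistic regression against a small neural network and treats the more opaque model as justified only if it performs clearly better; the 1-percentage-point margin is an arbitrary illustrative choice.

```python
# Sketch: check whether an interpretable model performs about as well as a neural network.
# Synthetic data and the 1-percentage-point margin are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

simple = LogisticRegression(max_iter=1000).fit(X_train, y_train)
opaque = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                       random_state=0).fit(X_train, y_train)

acc_simple = accuracy_score(y_test, simple.predict(X_test))
acc_opaque = accuracy_score(y_test, opaque.predict(X_test))

print(f"logistic regression: {acc_simple:.3f}")
print(f"neural network:      {acc_opaque:.3f}")

# If the gap is small, the transparent model may serve the purpose just as well.
if acc_opaque - acc_simple < 0.01:
    print("The simpler, explainable model appears good enough for this task.")
```

The point of such a check is procedural rather than technical: it documents that the agency or company at least asked whether the opacity of a neural network was necessary before accepting it.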
Should neural networks be banned from certain areas entirely? After protests against police use of facial recognition in the USA, IBM abandoned that technology completely.
It depends on the magnitude of the threat to our fundamental rights and our democracy, and whether individuals are able to escape the system. A fitness app may siphon sensitive information, but I don't have to download it. The situation is different for governmental applications, such as their use by the police, or the French algorithm that decides which university a student may attend.
Are there any applications in Germany you are worried about?
Absolutely. Automatic facial recognition is already being tested at the Südkreuz train station in Berlin. We can't even be sure yet how the various surveillance systems, which may have been tested individually, will interact with one another, potentially resulting in restrictions to our civil liberties. Then there's an AI system in the state of NRW that aims to reduce the rate of suicide among prisoners. A combination of AI and cameras monitors prisoners around the clock, sounding an alarm if suspicious activity is detected. This case involves a stark trade-off between protecting bodily integrity and inmates' freedom, which is already restricted to the maximum. Such value trade-offs must be disclosed and discussed in a democratic process.
Photo: Marzena Skubatz