Racist prejudice still lurks in AI systems. The reason: Artificial intelligence is also made by people.
Refugees are registered with fingerprints in Europe as if they were criminals. Photo: Anja Weber-Decker / plainpicture
Until recently, anyone applying for a visa in Great Britain had their data screened by an algorithm. The software assigned a green, yellow or red risk rating to every applicant wishing to enter the country for study or tourism. In early August, the Home Office, which is responsible for the procedure, suspended the automated system.
The reason: the algorithm was racist. The agency is said to have kept a secret list of “suspicious nationalities” that were automatically given a red risk rating. The civil rights organization Foxglove subsequently spoke of “speedy boarding for white people”: while white applicants were waved through by the algorithmic border guards, black applicants apparently had to undergo a security check. A brutal selection.
It is not the first time that algorithms have discriminated against black people. In 2015, for example, Google’s photo app tagged an African American and his girlfriend as “gorillas”. As if humans were monkeys. Algorithms still have trouble recognizing African American faces – the error rate is up to ten times higher than for white faces, as numerous studies show. But instead of optimizing its models, Google simply removed the categories “gorilla”, “chimpanzee” and “monkey” – and thus “solved” the problem. The technology is not advanced enough to be unbiased, so it is simply blinded. An epistemological sleight of hand.
The problem, of course, is not solved – it is structural. Machines trained by humans on racially skewed data reproduce stereotypes. When prejudiced police officers disproportionately stop African Americans in certain neighborhoods, predictive policing software keeps dispatching patrols to those same neighborhoods, because the models are fed with that skewed data – and racial profiling is perpetuated. Automated systems cement stereotypes. A vicious circle.
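To make that mechanism concrete, here is a minimal, purely illustrative sketch in Python (not taken from any real system; the numbers and names such as true_rate, recorded and patrols are hypothetical): if patrols are allocated in proportion to past recorded incidents, and more patrols produce more recorded incidents, an initial skew in the record reproduces itself year after year even when the underlying rates are identical.

```python
# Illustrative toy model of a predictive-policing feedback loop.
# All figures and variable names are made up for the sketch.

true_rate = {"A": 0.05, "B": 0.05}   # both neighborhoods have the same actual incident rate
recorded = {"A": 10.0, "B": 5.0}     # but the historical record is already skewed toward A

for year in range(5):
    total = sum(recorded.values())
    # The "model" allocates 100 patrols in proportion to past recorded incidents.
    patrols = {n: 100 * recorded[n] / total for n in recorded}
    # More patrols in a neighborhood mean more incidents get recorded there,
    # even though the underlying rate is identical in A and B.
    for n in recorded:
        recorded[n] += patrols[n] * true_rate[n]
    print(year, {n: round(p) for n, p in patrols.items()})

# The allocation never corrects itself: despite identical true rates, neighborhood A
# keeps receiving twice the patrols of B, because the skewed record reproduces itself.
```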
Decolonization of AI
The South African AI researcher Shakir Mohamed, who works at the Google subsidiary DeepMind, argues that the colonial legacy of the past still slumbers in the Western scientific tradition. In a recent paper he therefore calls for a “decolonization of AI”: models should become more diverse and take into account philosophical traditions other than those of the West or China.
Admittedly, when such theses are put forward by a Google developer, one initially suspects product advertising wrapped in social criticism, along the lines of: We at Google are working through history! Still, there is something valid about the diagnosis. AI is at its core a Western construct, built on particular Western moral concepts and values (such as individualism), and, for all its fuzziness, it conveys a particular worldview.
The organization AlgorithmWatch, which regularly puts automated systems to the test, has shown in an experiment that the object-recognition algorithm of Google Vision Cloud labels a clinical thermometer in a white hand as a monocular, but in a black hand as a weapon. The erratic neural network holds up a mirror to humans: we often see first what we want to see. The combination of “black and gun” is apparently a mental short-circuit that has been coded into AI systems.
If you search for “men” in Google’s image search, only white men appear – which is of course not representative, since reality is far more colorful and heterogeneous, but ultimately a product of our ideas, which in turn shape consciousness.
Do all human lives count equally?
Admittedly, the criticism is not new. The film manufacturers Fuji and Kodak were already confronted with the accusation that the skin tones of black people and people of color did not come out with as much contrast in their photographs as “Caucasian” faces. In view of the triumph of digital photography and numerous filter technologies, that criticism may now seem outdated. Nevertheless, there are still numerous deficits in the field of machine vision.
A study by the Georgia Institute of Technology last year found that the sensor systems of autonomous vehicles recognize pedestrians with lighter skin tones more reliably than those with darker skin tones. Where the technology looks closely in one respect, it looks away at the decisive point. For practical use in traffic, this means that a black person runs a greater risk of being hit by a robotic vehicle than a white person. Do black human lives count the same for machines?
Machine ethicists like to claim that one merely has to clean up the training data – that is, “feed” the algorithms enough photos of black people – and the models will produce valid results. But the problem is not the data basis; it is the design itself. The patterning, classification and sorting of human features is an old anthropometric technique that is returning in a new guise in supposedly objective procedures such as facial or fingerprint recognition.
In India, the colonial administration began fingerprinting soldiers in the 1860s to prevent pension fraud. Anyone who unlocks an iPhone with a fingerprint scan today is echoing this colonial practice – even if they feel not oppressed but rather superior. Facial recognition, which has its origins in the identification technique of Bertillonage – the criminologist Alphonse Bertillon had the body parts of criminals measured at the end of the 19th century – is at its core a racist registry.
Biometric processes colonize the body
These technologies are still being tested today on society’s weaker groups. When refugees arrived in Europe, they were fingerprinted as if they were criminals. And in UNHCR refugee camps, people have to authenticate themselves with iris and face scans to receive food rations. The colonial framing of this technology would not change even if the error rate were zero. Biometric processes colonize the body and subjugate the data subject.
The media theorist Ariana Dongus argues that such camps are “test laboratories for biometric data acquisition”: new technologies are tried out in the Global South before they are deemed safe and marketable in the Western world. Anyone who argues that all we need is a broader data basis is not only reducing racism to a technical problem but also misunderstanding the underlying power structures.
Jacob Levy Moreno, the forefather of the social network analysis used today by intelligence services and police authorities, wrote in his work “The Basics of Sociometry” that “race” is a determining factor in the group behavior of people. The advocates of social physics still proceed from the crude premise that the “aggregate” of society consists of social atoms that relate to one another like molecules – as if it were a law of nature that a black man will commit a crime.
One shouldn’t be surprised when these tools produce racist results. Perhaps in the future we will need not only more diverse development teams but also more flexible models that do justice to the complexity of reality. Because in the end it is not machines that stigmatize and criminalize people, but people themselves.