Deep neural networks can determine a patient's ethnicity from X-ray images, even though data from which this could easily be inferred were explicitly excluded. This has been shown by an international research group from Australia, Canada and the USA.
With their work, the researchers want to draw attention to the danger that self-learning algorithms in medicine treat people of different ethnic backgrounds differently. Such a bias can occur even if the AI is not explicitly designed to recognize the ethnic group. The researchers describe the technical details in a paper they have published on the preprint server arXiv.
Disadvantage to certain groups
Previous studies have shown that this fear is quite real. In 2019, for example, it was found that an algorithm widely used in the US to prioritize care for the sickest patients puts African Americans at a disadvantage. To rule out such algorithmic discrimination, the corresponding data is increasingly being deleted from training sets. The new study clearly shows, however, that this is not enough, especially since the researchers themselves cannot explain which features the classification is based on: owing to the study design, they rule out indirect visual cues pointing to BMI, bone density, age, gender or specific diagnoses.
Research teams around the world are working on the increasingly pressing question of how such biases can be corrected, and not only in medicine, as Eva Wolfangel reports in the cover story of the current issue 6/21 of Technology Review (available from Thursday at well-stocked kiosks or by ordering online).
Synthetic data should help
Some researchers rely on synthetic training data for AI: first, a model learns the properties of a real data set. This then serves as the basis for a generative model that learns to produce data sets with the same statistical properties. If it later turns out that a data set has gaps and the AI therefore reaches discriminatory decisions, those gaps can be filled with artificially generated data from the generative model; in other words, the data sets can be changed in a targeted way so that they acquire different properties.
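The article does not say which generative technique is meant. As a minimal sketch of the idea, assuming a simple tabular data set and scikit-learn (both illustrative assumptions, not taken from the reported research), an underrepresented group could be filled up with samples drawn from a model fitted to its real data:

```python
# Minimal sketch (not the researchers' method): fill an underrepresented group
# with synthetic samples drawn from a generative model fitted to real data.
# The column layout and the choice of a Gaussian mixture are illustrative only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy "real" data: two features, group A heavily overrepresented vs. group B.
X_a = rng.normal(loc=[0.0, 1.0], scale=1.0, size=(900, 2))
X_b = rng.normal(loc=[1.5, -0.5], scale=1.2, size=(100, 2))

# 1) Learn the statistical properties of the underrepresented group's real data.
gen_b = GaussianMixture(n_components=2, random_state=0).fit(X_b)

# 2) Sample synthetic records until both groups are equally represented.
n_missing = len(X_a) - len(X_b)
X_b_synth, _ = gen_b.sample(n_missing)

# 3) Build the augmented training set with the gap filled.
X_balanced = np.vstack([X_a, X_b, X_b_synth])
print(X_balanced.shape)  # (1800, 2)
```

In practice, more powerful generative models (such as GANs or variational autoencoders) would likely replace the Gaussian mixture used here, but the fit-then-sample-then-augment structure stays the same.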
Sandra Wachter of the Oxford Internet Institute, however, considers this a “wonderful Techbro solution”, as she puts it: a technical fix based on a one-sided, often equally discriminatory worldview, because “where do I get the data from?” Often enough there is not even enough real data from which to model artificial data. Instead, she suggests using AI as an “agnostic tool that shows us where the injustice lies”.
(wst)