Part 3 of the TED Radio Hour episode Warped Reality
Data, numbers, algorithms are supposed to be neutral … right? Computer scientist Joy Buolamwini discusses the way biased algorithms can lead to real-world inequality.
About Joy Buolamwini
Joy Buolamwini is a graduate researcher at the Massachusetts Institute of Technology who researches algorithmic bias in computer vision systems. She founded the Algorithmic Justice League to create a world with more ethical and inclusive technology.
Buolamwini serves on the Global Tech Panel convened by the vice president of the European Commission to advise world leaders and technology executives on ways to reduce the harms of AI. In late 2018, in partnership with the Georgetown Law Center on Privacy and Technology, she launched the Safe Face Pledge, the first agreement that prohibits the lethal application of facial analysis and recognition technology.
She holds two master's degrees, from Oxford University and MIT, as well as a bachelor's degree in computer science from the Georgia Institute of Technology.
MANOUSH ZOMORODI, HOST:
It’s the TED Radio Hour from NPR. I’m Manoush Zomorodi. On the show today, technology, deception and our changing sense of reality. And so far, we’ve been talking about deepfakes, conspiracy theories and other kinds of misinformation. But data and algorithms – they can warp our reality, too.
JOY BUOLAMWINI: We can deceive ourselves into thinking they’re not doing harm, or we can fool ourselves into thinking, because it’s based on numbers, that it is somehow neutral. AI is creeping into our lives. And even though the promise is that it’s going to be more efficient, that it’s going to be better – if what’s happening is we’re automating inequality through weapons of math destruction and we have algorithms of oppression, this promise is not actually true, and certainly not true for everybody.
ZOMORODI: Weapons of math destruction, algorithms of oppression, which basically means bias and human error can be encoded into algorithms, leading to inequality. And to keep those algorithms in check, the Algorithmic Justice League is on the case.
BUOLAMWINI: My name is Joy Buolamwini. I’m the founder of the Algorithmic Justice League, where we use research and art to create a world with more equitable and accountable AI. You might have heard of the male gaze or the white gaze or the post-colonial gaze. To that lexicon, I add the coded gaze. And we want to make sure people are even aware of it because you can’t fight the power you don’t see, you don’t know about.
ZOMORODI: Joy hunts down the flaws in the technology that’s running every part of our lives, from deciding what we see on Instagram to how we might be sentenced for a crime.
BUOLAMWINI: What happens when somebody is harmed by a system you created? You know, what happens if you’re harmed? Where do you go? And we want that kind of place to be the Algorithmic Justice League so you can seek redress for algorithmic harms.
ZOMORODI: You are a lot of things. You’re a poet. You’re a computer scientist. You are a superhero. Like…
(LAUGHTER)
ZOMORODI: Kind of hard to put into a box. Can you just explain why you created the Algorithmic Justice League?
BUOLAMWINI: Yes. So the Algorithmic Justice League is a bit of an accident. When I was in graduate school, I was working on an art project that used some computer vision technology to track my face.
(SOUNDBITE OF ARCHIVED RECORDING)
BUOLAMWINI: Hi, camera. I’ve got a face. Can you see my face?
So at least that was the idea.
(SOUNDBITE OF ARCHIVED RECORDING)
BUOLAMWINI: You can see her face. What about my face?
And when I tried to get it to work on my face, I found that putting a white mask on my dark skin…
(SOUNDBITE OF ARCHIVED RECORDING)
BUOLAMWINI: Well, I’ve got a mask.
…Is what I needed in order to have the system pick me up. And so that led to questions about, wait; are machines neutral? Why do I need to change myself to be seen by a machine? And if this is using AI techniques that are being used in other areas of our lives – whether it’s health or education, transportation, the criminal justice system – what does it mean if different kinds of mistakes are being made? And also, even if these systems do work well – let’s say you are able to track a face perfectly. What does that mean for surveillance? What does it mean for democracy, First Amendment rights, you know?
ZOMORODI: Joy continues from the TED stage.
(SOUNDBITE OF TED TALK)
BUOLAMWINI: Across the U.S., police departments are starting to use facial recognition software in their crime-fighting arsenal. Georgetown Law published a report showing that 1 in 2 adults in the U.S. – that’s 117 million people – have their faces in facial recognition networks. Police departments can currently look at these networks unregulated, using algorithms that have not been audited for accuracy.
Machine learning is being used for facial recognition, but it’s also extending beyond the realm of computer vision. So who gets hired or fired? Do you get that loan? Do you get insurance? Are you admitted into the college that you wanted to get into? Do you and I pay the same price for the same product purchased on the same platform?
Law enforcement is also starting to use machine learning for predictive policing. Some judges use machine-generated risk scores to determine how long an individual is going to spend in prison. So we really have to think about these decisions. Are they fair? And we’ve seen that algorithmic bias doesn’t necessarily always lead to fair outcomes.
When I think about algorithmic bias – and people ask me, well, what do you mean machines (laughter) are biased? It’s just numbers. It’s just data. I talk about machine learning, and it’s a question of, well, what is the machine learning from?
ZOMORODI: Well, what is the machine learning from? Like, what’s the information that it’s taking in?
BUOLAMWINI: So an example of this – what I found was that for face detection, the ways in which systems were being trained involved collecting large datasets of images of human faces. And when you look at those datasets, I found that many of them were pale and male, right? You might have a dataset that’s 75% male faces, over 80% lighter-skinned faces. And so what it means is the machine is learning a representation of the world that is skewed. And so what you might have thought should be a neutral process is actually reflecting the biases that it has been trained on. And sometimes what you’re seeing is a skewed representation, but other times what machines are picking up on are our own societal biases that are actually true to the data.
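To make that point concrete, here is a minimal sketch of the kind of audit Joy describes: counting how a face dataset breaks down by gender and skin-type labels before any model is trained on it. The metadata, field names and percentages below are entirely hypothetical, chosen only to mirror the "75% male, over 80% lighter-skinned" composition she mentions; they are not drawn from any real benchmark.

```python
from collections import Counter

# Hypothetical metadata for a face dataset; in practice this would be
# loaded from the dataset's annotation files.
faces = (
    [{"gender": "male", "skin": "lighter"}] * 60
    + [{"gender": "male", "skin": "darker"}] * 15
    + [{"gender": "female", "skin": "lighter"}] * 20
    + [{"gender": "female", "skin": "darker"}] * 5
)

total = len(faces)
gender_counts = Counter(f["gender"] for f in faces)
skin_counts = Counter(f["skin"] for f in faces)

# Report the composition of the training data before training anything on it.
for label, count in {**gender_counts, **skin_counts}.items():
    print(f"{label:>7}: {count / total:.0%}")
```

Run on this toy metadata, the audit reports roughly 75% male and 80% lighter-skinned faces – the skewed "representation of the world" a model would then learn from.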
ZOMORODI: For example, Amazon was building a hiring tool.
BUOLAMWINI: You need a job. Somebody in your life needs a job (laughter), right? You want to get hired.
ZOMORODI: And to get hired, you upload your resume and your cover letter.
BUOLAMWINI: That’s the goal. It starts off well.
ZOMORODI: But before a human looks at your resume, it gets vetted by algorithms written by software engineers.
BUOLAMWINI: So we start off with an intent for efficiency. We have many more applications than any human could go through. Let’s create a system that can do it more efficiently than we can.
ZOMORODI: And how to build that better system?
BUOLAMWINI: Well, we’re going to gather data of resumes, and we’re going to sort those resumes by the ones that represented candidates we hired or who did well. Your target is who you think will be a good long-term employee.
ZOMORODI: And now the system gets trained on the data.
BUOLAMWINI: And the system is learning from prior data. So I like to say the past dwells within our algorithms. You don’t have to have the sexist hiring manager in front of you. Now you have a black box that’s serving as the gatekeeper. But what it’s learning are the patterns of what success has looked like in the past. So if we’re defining success by what it has looked like in the past and the past has been one where men were given opportunity, white people were given opportunity and you don’t necessarily fit that profile, even though you might think you’re creating this objective system, it’s going through resumes, right? This is where we run into problems.
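As a rough illustration of "the past dwells within our algorithms," here is a toy scoring sketch with made-up resumes and made-up keywords. It "learns" keyword weights purely from which past resumes led to hires; because the invented history skews male, terms associated with women end up penalized even though gender is never an explicit input. Nothing here is Amazon's actual system – it is only a sketch of the general mechanism.

```python
from collections import Counter

# Hypothetical historical data: (resume keywords, was the candidate hired?).
# The skew mirrors a past in which mostly men were hired.
history = [
    ({"engineer", "chess club", "captain"}, True),
    ({"engineer", "rugby", "lead"}, True),
    ({"developer", "chess club"}, True),
    ({"engineer", "women's chess club"}, False),
    ({"developer", "women's college", "lead"}, False),
]

# "Learn" a weight per keyword: how often it appears in hired vs. rejected resumes.
hired = Counter(w for words, ok in history if ok for w in words)
rejected = Counter(w for words, ok in history if not ok for w in words)
weights = {w: hired[w] - rejected[w] for w in hired | rejected}

def score(resume_words):
    # Higher score = ranked higher by the automated screen.
    return sum(weights.get(w, 0) for w in resume_words)

print(score({"engineer", "chess club"}))          # ranked well
print(score({"engineer", "women's chess club"}))  # penalized via a proxy term
```

The second resume is ranked lower only because "women's chess club" appeared among past rejections – the black box reproduces the old pattern without ever being told anyone's gender.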
ZOMORODI: So here’s what happened with Amazon’s hiring tool.
BUOLAMWINI: What happened was, as the model was being built and it was being tested, what they found was a gender bias where resumes that contained the word women or women’s – or even all-women’s colleges – right? – so any indication of being a woman – were categorically being ranked lower than those that didn’t. And try as they might, they were not able to remove that gender bias. So they ended up scrapping the system.
(SOUNDBITE OF RECORD SCRATCHING)
ZOMORODI: They scrapped the system, and that’s a big win. But it’s one win compared to the thousands of platforms that use skewed algorithms – algorithms that could warp reality.
BUOLAMWINI: It has not been the case that we’ve had universal equality or absolute equality, in the words of Frederick Douglass. And I especially worry about this when we think about techno benevolence in the space of health care, right? We’re looking at, let’s say, a breakthrough that comes in talking about skin cancer. Oh, we now have an AI system – right? – that can classify skin cancer as well as the top dermatologists, a study might say, a headline might read. And then when you look at it, it’s like, oh, well, actually, when you look at the data set, it was for lighter-skinned individuals. Then you might argue, well, you know, lighter-skinned people are more likely to get skin cancer. And when I was looking into this, it actually – darker-skinned people who get skin cancer – usually, it’s detected in stage 4 because there are all of these assumptions you’re not even…
ZOMORODI: Ah.
BUOLAMWINI: …Going to get it in the first place. So these assumptions can have meaningful consequences.
ZOMORODI: You know, we are talking just before the U.S. presidential election. Have you seen any examples of artificial intelligence being used in voting or politics?
BUOLAMWINI: Yeah. So Channel 4 News just did this massive investigation showing that the 2016 Trump campaign targeted 3.5 million African Americans in the United States and labeled them “deterrence” in an attempt to actually keep people from showing up to the polls.
ZOMORODI: They used targeted ads?
BUOLAMWINI: Yes. And we know from Facebook’s own research – right? – that you can influence voter turnout based on the kinds of posts that are put on their platform. And they did this in battleground states. And so in this way, we’re seeing predictive modeling and ad targeting – right? – being used as a tool of voter suppression, which has always been the case to disenfranchise, right? You might say Black lives don’t matter, but it’s clear Black votes matter because of so much…
ZOMORODI: Right.
BUOLAMWINI: …Effort used to rob people of what blood was spilt for, you know, for generations. So it should be the case – right? – that any sort of algorithmic tool that is intended to be used has to be verified for nondiscrimination before it’s even adopted.
ZOMORODI: So as a Black woman technologist, you know, there are not that many of you, frankly. Why not, you know, go work at Google or Amazon and make these changes to the algorithms directly? Why act as sort of a watchdog?
BUOLAMWINI: Well, I think there are multiple ways to be involved in the ecosystem. But I do think this question you pose is really important because it can be an assumption that by changing who’s in the room, which is important and needs to happen, we’re going to then change the outcome and the outputs of these systems. So I like to remind people that most software developers, engineers, computer scientists – you don’t build everything from scratch, right? You get reusable parts. And so if there’s bias within those reusable parts or large-scale bias in the datasets that have become standard practice or the status quo – right? – changing the people who are involved in the system without changing the system itself is still going to reproduce algorithmic bias and algorithmic harms.
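One way to picture Joy's point about reusable parts: a sketch built around an invented "pretrained" face detector with made-up, per-group detection rates. Any application assembled on top of it inherits the same skewed behavior, no matter who writes the downstream code. The component, the product, and the numbers are all hypothetical.

```python
import random
from collections import Counter

random.seed(0)

# Stand-in for a reusable, pretrained component. The per-group accuracy
# numbers are invented to illustrate inherited bias, not real measurements.
PRETRAINED_DETECTION_RATE = {"lighter": 0.99, "darker": 0.65}

def pretrained_face_detector(skin_type):
    """Pretend detector: returns True if the face is detected."""
    return random.random() < PRETRAINED_DETECTION_RATE[skin_type]

def badge_checkin_app(skin_type):
    # A new team builds a new product but reuses the same component,
    # so the same people get missed at the door.
    return "door opens" if pretrained_face_detector(skin_type) else "access denied"

for group in ("lighter", "darker"):
    results = Counter(badge_checkin_app(group) for _ in range(1000))
    print(group, dict(results))
```

Changing who writes badge_checkin_app does nothing to the detection rates baked into the reused component – which is the gap between changing who is in the room and changing the system itself.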
ZOMORODI: So how do we build systems that are more fair? Like, if there’s no data for the artificial intelligence to sort of, you know, process to start to pump out recommendations, then how do we even change that?
BUOLAMWINI: Yeah. Well, it’s a question of what tools do you use towards what objectives. So the first thing is seeing if this is the appropriate tool. Not every tool – not every decision needs to be run through AI. And oftentimes you also need to make sure you’re being intentional. And so…
ZOMORODI: Right.
BUOLAMWINI: The kinds of changes you would need to make systematically for even who gets into the job pool in general – it means you do have to change society to change what AI is learning.
ZOMORODI: What do you say, Joy, to people who might be listening and thinking, like, you know, let’s take a step back and look at the bigger picture? We – in many ways, things are way better than they were thanks to technology because, you know, here we are in a pandemic, and anyone can work from anywhere because we have the Internet and we have Zoom and all of these platforms. Equality and access have, on the whole, improved. Why – let’s not, like, be Debbie Downers about it.
BUOLAMWINI: Yeah. I mean, I always ask, who can afford to say that? – because I can tell you the kids who are sitting in McDonald’s parking lot so they can access the Internet to be able to attend school remotely – that has never been their reality. And so oftentimes, if you are able to say technology on the whole has done well, it probably means you’re in a fairly privileged position. There’s still a huge digital divide. Even – there are billions of people who don’t have access to the Internet.
I mean, I was born in Canada. I moved to Ghana and then grew up in the U.S. So I had very Western assumptions, you know, about what tech could do and very much excited to use the tech skills I gained as an undergrad at Georgia Tech, you know, to use tech for good, tech for the benefit of humanity. And so when I critique tech, it’s really coming from a place of having been enamored with it and wanting it to live up to its promises. I don’t think it’s being a Debbie Downer to show ways in which we can improve so the promise of something we’ve created can actually be realized. I think that’s even a more optimistic approach than to believe in wishful thinking that is not true.
ZOMORODI: You know, one thing that you’ve said that I find so – I love this idea that – you say there’s a difference between potential and reality and that we must separate those two ideas.
BUOLAMWINI: Yes. So it’s so easy to fixate on our aspirations of what tech could be. And I think in some ways, it’s this hope that we can transcend our own humanity – right? – our own failures. And so, yes, even if we haven’t gotten society quite right, ideally, we can build technology that’s better than we are. But we then have to look at the fact that technology reflects who we are. It doesn’t transcend who we are. And so I think it’s important that, when we think about technology, we ask, what’s the promise? What’s the reality? And not only what’s that gap but who does it work for? Who does it benefit? Who does it harm and why? And also, how do we then step up and stand up to those harms?
ZOMORODI: That’s Joy Buolamwini, founder of the Algorithmic Justice League. You can watch her full talk at ted.com.
(SOUNDBITE OF MUSIC)
ZOMORODI: Thank you so much for listening to our show this week about technology, deception and our warped reality. To learn more about the people who were on it, go to ted.npr.org. And to see hundreds more TED talks, check out ted.com or the TED app.
Our TED radio production staff at NPR includes Jeff Rogers, Sanaz Meshkinpour, Rachel Faulkner, Diba Mohtasham, James Delahoussaye, J.C. Howard, Katie Monteleone, Maria Paz Gutierrez, Christina Cala and Matthew Cloutier with help from Daniel Shukin. Our intern is Farrah Safari. Our theme music was written by Ramtin Arablouei. Our partners at TED are Chris Anderson, Colin Helms, Anna Phelan and Michelle Quint. I’m Manoush Zomorodi, and you’ve been listening to the TED Radio Hour from NPR. Transcript provided by NPR, Copyright NPR.