WIUM Tristates Public Radio

Colette Pichon Battle: How Can We Prepare For The Next Hurricane Katrina?

Feb 26, 2021

Part 2 of the TED Radio Hour episode Black History ... And The Future

Sea level rise will displace millions by 2100 — and the Louisiana bayous, where Colette Pichon Battle lives, may disappear entirely. She describes how we can avert the worst when disaster strikes. A version of this segment was originally heard in the episode Our Relationship With Water.

About Colette Pichon Battle

Colette Pichon Battle is the founder and executive director of the Gulf Coast Center for Law & Policy, a non-profit, public interest law firm and justice center with a mission to advance structural shifts toward climate justice and ecological equity in communities of color.

At GCCLP, Pichon Battle develops programming focused on equitable disaster recovery, climate migration, community economic development, and climate justice. She also works with local communities, national funders, and elected officials in the post-Katrina/post-BP disaster recovery, work for which she received the U.S. Civilian Medal of Honor from the state of Louisiana in 2008. Pichon Battle is also a practicing attorney, and manages GCCLP's legal services for immigration and disaster law.

She was named an Echoing Green Climate fellow in 2015 and in 2019 was named an Obama Foundation Fellow.

Copyright 2021 NPR. To see more, visit https://www.npr.org.

MANOUSH ZOMORODI, HOST:

It's the TED Radio Hour from NPR. I'm Manoush Zomorodi. In this hour, we have been honoring Black History Month with some of our favorite interviews from the past year, conversations about history and climate justice. And this next speaker is from our show about deception and misinformation, explaining how data and algorithms can warp our reality and discriminate, too.

(SOUNDBITE OF ARCHIVED NPR BROADCAST)

JOY BUOLAMWINI: We can deceive ourselves into thinking they're not doing harm, or we can fool ourselves into thinking, because it's based on numbers, that it is somehow neutral. AI is creeping into our lives. And even though the promise is that it's going to be more efficient - it's going to be better - if what's happening is we're automating inequality through weapons of math destruction and we have algorithms of oppression, this promise is not actually true and certainly not true for everybody.

(SOUNDBITE OF MUSIC)

ZOMORODI: Weapons of math destruction, algorithms of oppression, which basically means that bias and human error can be encoded into algorithms, leading to inequality. To keep them in check, the Algorithmic Justice League to the rescue.

BUOLAMWINI: My name is Joy Buolamwini. I'm the founder of the Algorithmic Justice League, where we use research and art to create a world with more equitable and accountable AI. You might have heard of the male gaze or the white gaze or the post-colonial gaze. To that lexicon, I add the coded gaze. And we want to make sure people are even aware of it because you can't fight the power you don't see, you don't know about.

(SOUNDBITE OF MUSIC)

ZOMORODI: Joy hunts down the flaws in the technology that's running every part of our lives, from deciding what we see on Instagram to how we might be sentenced for a crime.

BUOLAMWINI: What happens when somebody is harmed by a system you created? You know, what happens if you're harmed? Where do you go? And we want that kind of place to be the Algorithmic Justice League, so you can seek redress for algorithmic harms.

ZOMORODI: You are a lot of things. You're a poet. You're a computer scientist. You are a superhero. Like...

(LAUGHTER)

ZOMORODI: Kind of hard to put into a box. Can you just explain why you created the Algorithmic Justice League?

BUOLAMWINI: Yes. So the Algorithmic Justice League is a bit of an accident. When I was in graduate school, I was working on an art project that used some computer vision technology to track my face.

(SOUNDBITE OF ARCHIVED RECORDING)

BUOLAMWINI: Hi, camera. I've got a face. Can you see my face?

So at least that was the idea.

(SOUNDBITE OF ARCHIVED RECORDING)

BUOLAMWINI: You can see her face. What about my face?

And when I'd try to get it to work on my face, I found that putting a white mask on my dark skin...

(SOUNDBITE OF ARCHIVED RECORDING)

BUOLAMWINI: Well, I've got a mask.

...Is what I needed in order to have the system pick me up. And so that led to questions about, wait - are machines neutral? Why do I need to change myself to be seen by a machine? And if this is using AI techniques that are being used in other areas of our lives - whether it's health or education, transportation, the criminal justice system - what does it mean if different kinds of mistakes are being made? And also, even if these systems do work well - let's say you are able to track a face perfectly - what does that mean for surveillance? What does it mean for democracy, First Amendment rights, you know?

ZOMORODI: Joy continues from the TED stage.

(SOUNDBITE OF TED TALK)

BUOLAMWINI: Across the U.S., police departments are starting to use facial recognition software in their crime-fighting arsenal. Georgetown Law published a report showing that 1 in 2 adults in the U.S. - that's 117 million people - have their faces in facial recognition networks. Police departments can currently look at these networks unregulated, using algorithms that have not been audited for accuracy.

Machine learning is being used for facial recognition, but it's also extending beyond the realm of computer vision. So who gets hired or fired? Do you get that loan? Do you get insurance? Are you admitted into the college that you wanted to get into? Do you and I pay the same price for the same product purchased on the same platform?

Law enforcement is also starting to use machine learning for predictive policing. Some judges use machine-generated risk scores to determine how long an individual is going to spend in prison. So we really have to think about these decisions. Are they fair? And we've seen that algorithmic bias doesn't necessarily always lead to fair outcomes.

When I think about algorithmic bias - and people ask me, well, what do you mean machines (laughter) are biased? It's just numbers. It's just data. I talk about machine learning. And it's a question of, well, what is the machine learning from?

ZOMORODI: Well, what is the machine learning from? Like, what's the information that it's taking in?

BUOLAMWINI: So an example of this - what I found was that for face detection, the ways in which systems were being trained involve collecting large data sets of images of human faces. And when you look at those data sets, I found that many of them were pale and male, right? You might have a dataset that's 75% male faces and over 80% lighter-skinned faces. And so what it means is the machine is learning a representation of the world that is skewed. And so what you might have thought should be a neutral process is actually reflecting the biases that it has been trained on. And sometimes what you're seeing is a skewed representation, but other times what machines are picking up on are our own societal biases that are actually treated as data.
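The skew Buolamwini describes - a training set that is roughly three-quarters male and more than 80% lighter-skinned - is the kind of thing a simple audit can surface before any model is trained. The sketch below is illustrative only and not from the broadcast; it assumes a hypothetical set of annotated face records and simply tallies the share of each demographic label.

```python
# Illustrative sketch only (not from the broadcast): a quick audit of the
# demographic make-up of a face dataset before it is used for training.
# The records and attribute labels here are hypothetical.
from collections import Counter

dataset = [
    {"id": 1, "gender": "male",   "skin_tone": "lighter"},
    {"id": 2, "gender": "male",   "skin_tone": "lighter"},
    {"id": 3, "gender": "male",   "skin_tone": "lighter"},
    {"id": 4, "gender": "female", "skin_tone": "darker"},
    # ... in practice, many thousands of annotated records ...
]

def composition(records, attribute):
    """Return the share of each value of `attribute` across the dataset."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# A result like 75% male or 80% lighter-skinned is the kind of skew
# Buolamwini describes: the model learns a lopsided picture of the world.
print(composition(dataset, "gender"))
print(composition(dataset, "skin_tone"))
```

In practice such audits depend on reliable demographic annotations, which are themselves difficult and sensitive to collect at scale.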

ZOMORODI: For example, Amazon was building a hiring tool.

BUOLAMWINI: You need a job. Somebody in your life needs a job, right? You want to get hired.

ZOMORODI: And to get hired, you upload your resume and your cover letter.

BUOLAMWINI: That's the goal. It starts off well.

ZOMORODI: But before a human looks at your resume, it gets vetted by algorithms written by software engineers.

BUOLAMWINI: So we start off with an intent for efficiency. We have many more applications than any human could go through. Let's create a system that can do it more efficiently than we can.

ZOMORODI: And how to build that better system?

BUOLAMWINI: Well, we're going to gather data of resumes, and we're going to sort those resumes by the ones that represented candidates we hired or who did well. Your target is who you think will be a good long-term employee.

ZOMORODI: And now the system gets trained on the data.

BUOLAMWINI: And the system is learning from prior data. So I like to say the past dwells within our algorithms. You don't have to have the sexist hiring manager in front of you. Now you have a black box that's serving as the gatekeeper. But what it's learning are the patterns of what success has looked like in the past. So if we're defining success by how it's looked in the past and the past has been one where men were given opportunity, white people were given opportunity, and you don't necessarily fit that profile, even though you might think you're creating this objective system, it's going through resumes - right? - this is where we run into problems.

ZOMORODI: So here's what happened with Amazon's hiring tool.

BUOLAMWINI: What happened was, as the model was being built and it was being tested, what they found was a gender bias where resumes that contained the word "women" or "women's" or even all-women's colleges - right? - so any indication of being a woman - were categorically being ranked lower than those that didn't. And try as they might, they were not able to remove that gender bias, so they ended up scratching the system.

(SOUNDBITE OF RECORD SCRATCHING)

ZOMORODI: They scratched the system, and that's a big win. But it's one win compared to thousands of platforms that use skewed algorithms that could warp reality.
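To make the mechanism concrete, here is a small, purely hypothetical sketch - not Amazon's actual tool or any real model - of how scoring resumes against past hiring outcomes lets a word like "women's" become a negative signal without anyone writing a sexist rule: the bias is inherited from the historical outcomes themselves.

```python
# Illustrative sketch only (hypothetical data, not Amazon's system): a scorer
# that "learns" from past hiring outcomes and absorbs a gender proxy.
from collections import defaultdict

# Historical training data: (tokens appearing in a resume, was the candidate hired?)
history = [
    ({"captain", "chess", "club"},            True),
    ({"lead", "engineer", "rugby"},           True),
    ({"software", "chess", "captain"},        True),
    ({"women's", "chess", "club", "captain"}, False),
    ({"women's", "college", "engineer"},      False),
]

# Per-token statistics: token -> [times seen on a hired resume, times seen overall].
stats = defaultdict(lambda: [0, 0])
for tokens, hired in history:
    for token in tokens:
        stats[token][1] += 1
        if hired:
            stats[token][0] += 1

def score(tokens):
    """Average historical hire rate of the resume's known tokens."""
    rates = [stats[t][0] / stats[t][1] for t in tokens if t in stats]
    return sum(rates) / len(rates) if rates else 0.5

# "women's" only ever appeared on resumes that were not hired in the past,
# so it drags the score down - no one wrote a rule against it.
print(score({"chess", "captain", "club"}))             # ~0.61
print(score({"women's", "chess", "captain", "club"}))  # ~0.46
```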

BUOLAMWINI: It has not been the case that we've had universal equality or absolute equality, in the words of Frederick Douglass. And I especially worry about this when we think about techno-benevolence in the space of health care, right? We're looking at, let's say, a breakthrough that comes in talking about skin cancer. Oh, we now have an AI system - right? - that can classify skin cancer as well as the top dermatologists, a study might say, a headline might read. And then when you look at it, it's like, oh, well, actually, when you look at the dataset, it was for lighter-skinned individuals. And then you might argue, well, you know, lighter-skinned people are more likely to get skin cancer. And when I was looking into this, it's actually the case that for darker-skinned people who get skin cancer, it's usually detected at stage 4 because there are all of these assumptions that you're not even going to get it...

ZOMORODI: Ah.

BUOLAMWINI: ...In the first place. So these assumptions can have meaningful consequences.

ZOMORODI: Have you seen any examples of artificial intelligence being used in voting or politics?

BUOLAMWINI: Yeah. So Channel 4 News just did this massive investigation showing that the 2016 Trump campaign targeted 3.5 million African Americans in the United States, labeled them for deterrence in an attempt to actually keep people from showing up to the polls.

ZOMORODI: They used targeted ads.

BUOLAMWINI: Yes. And we know from Facebook's own research - right? - that you can influence voter turnout based on the kinds of posts that are put on their platform. And they did this in battleground states. And so in this way, we're seeing predictive modeling and ad targeting - right? - being used as a tool of voter suppression, which has always been used to disenfranchise, right? You might say Black lives don't matter, but it's clear Black votes matter because of...

ZOMORODI: Right.

BUOLAMWINI: ...So much effort used to rob people of what blood was spilt for, you know, for generations. So it should be the case - right? - that any sorts of algorithmic tools that are intended to be used, again, have to be verified for nondiscrimination before they're even adopted.

ZOMORODI: So as a Black woman technologist, you know, there are not that many of you, frankly. Why not, you know, go work at Google or Amazon and make these changes to the algorithms directly? Why act as sort of a watchdog?

BUOLAMWINI: Well, I think there are multiple ways to be involved in the ecosystem. But I do think this question you pose is really important because it can be an assumption that by changing who's in the room, which is important and needs to happen, we're going to then change the outcome and the outputs of these systems. So I like to remind people that most software developers, engineers, computer scientists - you don't build everything from scratch, right? You get reusable parts. And so if there's bias within those reusable parts or large-scale bias in the data sets that have become standard practice or the status quo - right? - changing the people who are involved in the system without changing the system itself is still going to reproduce algorithmic bias and algorithmic harms.

ZOMORODI: So how do we build systems that are more fair? Like, if there's no data for the artificial intelligence to sort of, you know, process to start to pump out recommendations, then how do we even change that?

BUOLAMWINI: Yeah. Well, it's a question of what tools do you use towards what objectives. So the first thing is seeing if this is the appropriate tool. Not every tool, not every decision, needs to be run through AI. And oftentimes, you also need to make sure you're being intentional. And so the kinds...

ZOMORODI: Right.

BUOLAMWINI: ...Of changes you would need to make systematically for even who gets into the job pool in general - it means you do have to change society to change what AI is learning.

ZOMORODI: What do you say, Joy, to people who might be listening and thinking, like, you know, let's take a step back and look at the bigger picture. In many ways, things are way better than they were thanks to technology because, you know, here we are in a pandemic, and anyone can work from anywhere because we have the Internet and we have Zoom and all of these platforms. Equality and access have, on the whole, improved. Why - let's not, like, be Debbie Downers about it.

BUOLAMWINI: Yeah. I mean, I always ask, who can afford to say that? Because I can tell you, the kids who are sitting in a McDonald's parking lot so they can access the Internet to be able to attend school remotely - that has never been their reality. And so oftentimes, if you are able to say technology on the whole has done well, it probably means you're in a fairly privileged position. There's still a huge digital divide. There are billions of people who don't have access to the Internet.

I mean, I was born in Canada, moved to Ghana and then grew up in the U.S. I had very Western assumptions, you know, about what tech could do, and I was very much excited to use the tech skills I gained as an undergrad at Georgia Tech, you know, to use tech for good, tech for the benefit of humanity. And so when I critique tech, it's really coming from a place of having been enamored with it and wanting it to live up to its promises. I don't think it's being a Debbie Downer to show ways in which we can improve so the promise of something we've created can actually be realized. I think that's even a more optimistic approach than to believe in wishful thinking that is not true.

ZOMORODI: You know, one thing that you've said that I find so - I love this idea - is that there's a difference between potential and reality and that we must separate those two ideas.

BUOLAMWINI: Yes. So it's so easy to fixate on our aspirations of what tech could be. And I think in some ways, it's this hope that we can transcend our own humanity - right? - our own failures. And so, yes, even if we haven't gotten society quite right, ideally, we can build technology that's better than we are. But we then have to look at the fact that technology reflects who we are. It doesn't transcend who we are. And so I think it's important that when we think about technology, we ask, what's the promise? What's the reality? And not only what's that gap, but who does it work for? Who does it benefit? Who does it harm and why? And also, how do we then step up and stand up to those harms?

(SOUNDBITE OF MUSIC)

ZOMORODI: That's Joy Buolamwini, founder of the Algorithmic Justice League. You can watch her full talk at ted.com.

Thank you so much for listening to our show this week celebrating Black History Month. To learn more about the people who were on it and for more powerful stories and ideas from Black speakers, check out our Black History Month Playlist at ted.npr.org. And, of course, to see hundreds more TED Talks, check out ted.com or the TED app.

Our TED Radio production staff at NPR includes Jeff Rogers, Sanaz Meshkinpour, Rachel Faulkner, Diba Mohtasham, James Delahoussaye, J.C. Howard, Katie Monteleone, Maria Paz Gutierrez, Christina Cala, Matthew Cloutier and Farrah Safari, with help from Daniel Shukin. Our intern is Janet Woojeong Lee. Our theme music was written by Ramtin Arablouei. Our partners at TED are Chris Anderson, Colin Helms, Anna Phelan and Michelle Quint. I'm Manoush Zomorodi. And you've been listening to the TED Radio Hour from NPR. Transcript provided by NPR, Copyright NPR.