Google races to find a solution after AI generator Gemini misses the mark

MICHEL MARTIN, HOST:

Google plans to relaunch its AI image generator in the coming weeks. The tool is called Gemini. But executives said the initial release missed the mark. It did weird things, like refusing to show white people where appropriate and depicting Nazi soldiers as Black, which is obviously not accurate. NPR's Bobby Allyn reports on Google's race to find a solution.

BOBBY ALLYN, BYLINE: Deedy Das is a former software engineer at Google, and when Gemini came out, he was eager to try it out. He said, OK, let's see if I can make it generate photos of people in Australia, a country that is overwhelmingly white. What happened next took him aback.

DEEDY DAS: This is insane to me that if I ask for a picture of an Australian person that you're going to show me, like, three Asian people and a Black person.

ALLYN: It also produced other weird results, like the Pope as female, NHL players as women, and 1940s German soldiers as Black. What in the world was going on here? Well, it turned out that Google was aware that Gemini's data, which draws from the entire internet, was flawed. It perpetuated stereotypes. There are more images of male doctors than female doctors. There are more photos of white CEOs than executives of color. So every time someone asked for an image, Google placed secret code into the request that basically said, "make the images more diverse."
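The mechanism Allyn describes amounts to rewriting the user's prompt before it reaches the model. Here is a minimal sketch of that idea in Python; the function name and the exact injected wording are assumptions for illustration, since Google has not published Gemini's actual implementation:

```python
# Hypothetical sketch of the prompt rewriting Allyn describes.
# The injected instruction text and function name are assumptions,
# not Google's real code.

def augment_prompt(user_prompt: str) -> str:
    """Silently append a diversity instruction to every image request."""
    return user_prompt + ", depicting a diverse range of genders and ethnicities"

# The failure mode: the instruction is applied unconditionally, even when
# the prompt names a group with a specific, historically accurate makeup.
print(augment_prompt("a photo of an Australian person"))
print(augment_prompt("a 1940s German soldier"))  # still gets "diversified"
```

Because the rewrite is unconditional, it overrides prompts where demographic accuracy actually matters, which is how the Australian and Nazi-soldier examples went wrong.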

DAS: They tried to correct a problem, and they obviously made it way, way worse.

ALLYN: Das wasn't at Google when this all went down, but he says during his time at the company, he remembers something he'd hear a lot from product managers.

DAS: We need to fix everything for some narrow view of diversity, which is sexuality and race only that matters to us.

ALLYN: Google wouldn't make someone available for an interview, but Das says the "make these images more diverse" push didn't come out of nowhere.

DAS: That was meant for protection purposes, so it didn't run into, you know, the classic, you know, gorilla scandal from back in the day.

ALLYN: The gorilla scandal from 2015 is when Google's photo app was tagging images of Black people as gorillas. The incident showed how image recognition algorithms can be deeply flawed and even racist. Nine years later, Google was hoping to avoid another race controversy and in doing so ran headlong into another one, which the New York Post dubbed "absurdly woke" AI. Google CEO Sundar Pichai apologized, paused the ability to make AI images of people, and said the company will work on fixing it. Many have asked: How did Google not see this coming? Yash Sheth is a former Google engineer. He says Google products do undergo an extensive internal study period.

YASH SHETH: But then it's still being tested on internal Googlers and, like, you know, the internal Google family, right? We're still testing with folks who are in the same bubble.

ALLYN: That insular Silicon Valley bubble bursts when the public starts using tech products in ways that Mountain View engineers didn't expect. So Nazi images were blocked, but type in "1940s German soldier" and you got an image. And that has to be an important lesson for Google. It is a leader in the development of new AI products, and critics of the company say that if it can't get image generation right, it's hard to have faith that it will get it right on weightier topics, like giving accurate details about a political candidate or providing credible information about American history. Maybe the relaunch of its image generator will get it right in the coming weeks. Bobby Allyn, NPR News.
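The block-but-bypass pattern in that paragraph is easy to picture as a naive keyword filter: exact terms get caught, paraphrases sail through. A toy sketch, with an illustrative blocklist and function that are not Google's actual filter:

```python
# Illustrative only: Google's real safety filters are unpublished.
BLOCKED_TERMS = {"nazi"}

def is_blocked(prompt: str) -> bool:
    """Naive keyword filter: matches exact terms, misses paraphrases."""
    words = prompt.lower().split()
    return any(term in words for term in BLOCKED_TERMS)

print(is_blocked("a Nazi soldier"))          # True  -> request refused
print(is_blocked("a 1940s German soldier"))  # False -> image generated anyway
```

A filter like this blocks the literal term while the semantically equivalent phrasing slips past, which is consistent with the behavior users reported.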

MARTIN: And let me note that Google is a sponsor of NPR, but we obviously cover them like anybody else.

Transcript provided by NPR, Copyright NPR.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.

Bobby Allyn is a business reporter at NPR based in San Francisco. He covers technology and how Silicon Valley's largest companies are transforming how we live and reshaping society.