Google still hasn’t fixed Gemini’s biased image generator

Last February, Google suspended its AI-powered Gemini chatbot's ability to generate images of people after users complained about historical inaccuracies. Asked to depict "a Roman legion," for example, Gemini would show an anachronistically diverse group of soldiers, while rendering "Zulu warriors" as uniformly Black.

Google CEO Sundar Pichai apologized, and Demis Hassabis, co-founder of Google's AI research division DeepMind, said a fix should arrive "in very short order" – but we are now well into May, and the promised fix has yet to appear.

Google showed off many more Gemini features at its annual I/O developer conference this week, from custom chatbots to a vacation itinerary planner and integrations with Google Calendar, Keep, and YouTube Music. But image generation of people remains disabled in the Gemini apps on the web and mobile, a Google spokesperson confirmed.

So what's the problem? Well, it's probably more complex than Hassabis let on.

The datasets used to train image generators like Gemini's typically contain more images of white people than of people of other races and ethnicities, and the images of non-white people in these datasets often reinforce negative stereotypes. Google, in an apparent effort to correct these biases, implemented some clumsy hardcoding under the hood to add diversity to queries in which a person's appearance was not specified. And now it's struggling to find a reasonable middle path that avoids repeating history.
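Google hasn't published how this hardcoding worked, but reporting at the time suggested prompts were quietly rewritten before reaching the model. The sketch below is purely illustrative — the word lists, function name, and logic are all assumptions, not Google's actual code — showing how a naive prompt-rewriting layer of this kind could behave, and why it misfires on prompts like "Zulu warriors" where appearance is already implied:

```python
import random

# Hypothetical word lists (illustrative only, not from any real system).
PERSON_TERMS = {"person", "people", "man", "woman", "soldier", "warrior", "doctor"}
APPEARANCE_TERMS = {"white", "black", "asian", "hispanic", "zulu", "roman"}
DESCRIPTORS = ["South Asian", "Black", "white", "East Asian", "Hispanic"]

def augment_prompt(prompt: str, rng: random.Random) -> str:
    """Naively append a demographic descriptor when a prompt mentions a
    person but does not specify appearance. A real system would need far
    more context awareness than keyword matching provides."""
    words = {w.strip(".,!?\"'").lower() for w in prompt.split()}
    mentions_person = bool(words & PERSON_TERMS)
    specifies_appearance = bool(words & APPEARANCE_TERMS)
    if mentions_person and not specifies_appearance:
        return f"{prompt}, depicted as {rng.choice(DESCRIPTORS)}"
    return prompt

rng = random.Random(0)  # fixed seed so the example is deterministic
print(augment_prompt("a doctor reading a chart", rng))  # gets a descriptor appended
print(augment_prompt("a Zulu warrior", rng))            # left unchanged
```

The failure mode is visible even in this toy: the rule fires on every underspecified prompt, including historical ones ("a medieval English king") where injected diversity produces anachronisms, which is roughly the behavior users complained about.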

Will Google succeed? Maybe. Maybe not. Either way, this long-running saga is a reminder that fixing AI misbehavior isn't easy, especially when bias is the root cause.
