Google found itself in hot water last week after its Gemini AI tool generated racially inaccurate historical images, including Black and Asian people portrayed as World War 2 Nazi soldiers.
Speaking out on the issue this week, CEO Sundar Pichai told Google employees that the images were ‘unacceptable’, and that the company would be working around the clock to resolve the issues.
It follows similar stories of AI going rogue, with ChatGPT recently spitting out alarming and threatening messages to users.
Gemini Goes Rogue with Insensitive Images
Last week, users of Google’s Gemini AI tool reported that it was generating factually inaccurate images, some of which were racially insensitive, such as depicting German Nazi soldiers as Black and Asian people. It also depicted the founding fathers of the US as being Black men.
The company was quick to respond to criticism, apologizing on X (formerly Twitter) and removing Gemini’s ability to generate images of people as a short-term measure. In a statement, Google said that Gemini was designed to generate a ‘wide range of people’, but in this instance, it had missed the mark.
We're aware that Gemini is offering inaccuracies in some historical image generation depictions. Here's our statement. pic.twitter.com/RfYXSgRyfz
— Google Communications (@Google_Comms) February 21, 2024
Sundar Pichai Responds to Gemini Criticism
In a memo to Google employees, obtained by Semafor, CEO Sundar Pichai spoke out on the furore around the historically inaccurate images.
Pichai stated that the images created were ‘unacceptable’, and that the Gemini team was working around the clock to rectify the issues, saying that progress had already been made.
“No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes.” – Statement from Sundar Pichai
Pichai went on to say that Google’s focus must be on continuing to build helpful products, and on building trust.
Learning Curve for AI Companies
As Pichai noted in his statement, ‘no AI is perfect’, and that’s certainly been a lesson that many tech companies have been quickly learning in their rush to create AI platforms.
Just last week, dozens of ChatGPT users reported strange behaviour from the chatbot, ranging from the fairly odd but inoffensive use of ‘Spanglish’, a mix of Spanish and English, to far creepier responses, such as suggesting it was sentient and in the room with the user.
Despite many of these AI solutions being relatively new, we have already seen numerous cases of the technology behaving recklessly, causing offense to the public and headaches for the companies behind it. Instances such as Gemini’s race-swapped images are likely just the tip of the iceberg. The real test will be how quickly these companies respond to, and fix, these anomalies.