Google announced on Thursday that it will temporarily halt the image generation feature of its Gemini artificial intelligence chatbot following public outcry over “inaccuracies” in historical depictions. Users shared screenshots on social media revealing racially diverse characters in historically white-dominated scenes, prompting concerns about potential over-correction for racial bias in the AI model.
Acknowledging the issues, Google stated on the X platform, “We’re already working to address recent issues with Gemini’s image generation feature. While we do this, we’re going to pause the image generation of people and will re-release an improved version soon.” The company had earlier admitted awareness of inaccuracies in historical image depictions and expressed a commitment to immediate improvements.
Concerns that AI image generators amplify racial and gender stereotypes are well documented: previous studies have shown that, without proper filters, such systems tend to depict lighter-skinned men more frequently, producing biased outcomes. Gemini was designed to generate a diverse range of people worldwide, but Google acknowledges the feature is currently “missing the mark.”
Notably, claims circulating on social media alleging “white erasure” and asserting that Gemini refuses to generate faces of white people are contradicted by research from University of Washington scholar Sourojit Ghosh. While Ghosh supports Google’s decision to pause face generation, he is conflicted about the speed of the response, given the existing body of literature documenting how similar models erase traditionally marginalized people.