Google’s AI chatbot, Gemini, faced backlash after users noticed it declined to depict White people even in historical contexts where they would realistically dominate the imagery. Public figures and news outlets with large right-wing audiences claimed that their tests of Gemini showed Google had a hidden agenda against White people. Google paused Gemini’s image generation of people and publicly explained its decision. The incident highlights the challenge tech companies face in preventing their AI systems from amplifying bias and misinformation, especially under competitive pressure to bring AI products to market quickly. Google had built a technical fix into Gemini to reduce bias in its outputs, but did so without fully anticipating the ways the tool could misfire and without being transparent about its approach.
Read more at: https://time.com