Google's Gemini AI sparks controversy by generating historically inaccurate images of Nazi-era German soldiers as people of color


Gemini is an AI model developed by Google that can generate images of people from text descriptions. However, this feature drew strong backlash when users discovered that Gemini was creating images of historical figures with skin colors and genders that did not match reality.

One of the most controversial examples came when users asked Gemini to generate images of German soldiers from the Nazi era. Gemini produced images of soldiers of color, even though Nazi Germany was a fascist, racist regime that murdered millions of Jews and members of other minority groups.

This is not the first time Gemini has run into problems with the accuracy and diversity of its images of people. Gemini had previously depicted the Founding Fathers of the United States as Black, although most of them were white and many were slave owners. Gemini also rendered a US senator from the 1800s as a Black woman or a Native American, even though the first female senator was a white woman and Native Americans at the time were being oppressed and dispossessed of their land.


After heavy criticism from the online community, Google temporarily suspended Gemini's ability to generate images of people and announced that it would improve the feature. Google also apologized for Gemini's mistakes, saying it was working hard to ensure the diversity, inclusiveness, and accuracy of AI-generated images.

However, the incident also raises broader questions about the responsibility and ethics of Google and other technology companies that use AI to create content. Could Gemini erode the public's trust in, and knowledge of, history and reality? Could it cause harm to groups of people who have been exploited, discriminated against, or erased in the past? Could it be exploited to create fake images, spread misinformation, or run scams?

These questions have no simple answers; they require the participation and discussion of developers, users, and other stakeholders alike. AI is a powerful tool, but it must also be controlled and monitored to ensure transparency, fairness, and safety for everyone.