The use of AI and deepfake technology could result in the spread of misleading information about the Holocaust, UNESCO has warned.
First reported by the Associated Press, the warning adds fuel to the fire for critics of the rapidly expanding and largely unregulated use of artificial intelligence.
The United Nations Educational, Scientific and Cultural Organization suggests that a combination of flaws in the AI tools themselves, together with misuse from hate groups and Holocaust deniers, could result in false information being spread.
Antisemitic Use of AI a Possibility
The most insidious aspect of UNESCO's warning is that AI may be used as a tool to generate and spread content that questions the 20th-century genocide of Jewish people and other minority groups at the hands of the Nazis.
UNESCO warns that deepfake technology could be used to produce realistic images and videos that call into question details of the Holocaust. Such images would then be used by people and groups wishing to promote antisemitic views in order to dilute, distort and falsify historical facts, it says.
“If we allow the horrific facts of the Holocaust to be diluted, distorted or falsified through the irresponsible use of AI, we risk the explosive spread of antisemitism and the gradual diminution of our understanding about the causes and consequences of these atrocities.” – Audrey Azoulay, Director-General of UNESCO
In addition to deliberate misuse, there are also concerns about AI inadvertently corrupting historically accurate information – a challenge to artificial intelligence that’s far from limited to responses pertaining to the Holocaust.
AI Mistakes, Mishaps and Failures
While the potential benefits of AI and its impact on the workplace are well documented, so too are the technology's downsides. The list of AI errors, mistakes and failures continues to grow, and it's easy to see why the technology's imperfections would concern UNESCO when it comes to online information about historical events like the Holocaust.
Only this week, fast food giant McDonald’s was reported to have shut down its testing of AI ordering in its restaurants. This followed a string of embarrassing errors experienced by diners and made public on social media.
In other examples from this year alone: Google was criticized in February when its chatbot, Gemini, was found to have generated false images of people of color wearing Nazi uniforms; Swifties were left speechless when explicit AI-generated images of Taylor Swift flooded the X social media platform; and reports emerged in April that an AI tool “falsely suggested it is legal for an employer to fire a worker who complains about sexual harassment, doesn’t disclose a pregnancy or refuses to cut their dreadlocks”, according to the Associated Press.
Ethical Rules for AI Required
Alongside its warning, UNESCO has called on tech companies to agree on and enforce ethical rules for the use of AI, to reduce the chances of unreliable information being distributed.
However, much like social media before it, the development of AI technology has been something of a Wild West. While a unified code of AI ethics would in theory be desirable to many, creating and policing one would be extremely difficult.
And others in the industry could be reluctant to adopt such a rulebook, arguing that it would stymie and slow the progress of AI development.