AI moderation systems on Elon Musk's X greenlight ads calling for gassing of immigrants days before German federal election
X and Meta also approve ads calling for immigrants to be imprisoned in concentration camps, featuring AI-generated images of Muslims crowding into gas chambers.
Sweep of ad approvals follows X owner Elon Musk’s open endorsement and amplification of Germany’s extreme-right party, AfD.
Research has been submitted to the European Commission, which is already investigating both companies for potential breaches of EU tech laws.
Meta and X approved a series of hateful and violent ads days before German voters head to the polls in one of the most pivotal German elections in decades, according to new research published today by corporate accountability group Ekō.
Between 10 and 14 February, researchers placed a series of 10 ads containing extremist hate speech and incitement to violence, including calls for the imprisonment and gassing of immigrants and the burning of mosques. Many of the ads were rife with dehumanizing language, equating immigrants with animals and pathogens - content that violates even Meta’s watered-down hate speech policy and X’s guidelines. The language also appears to breach EU and German law.
Each ad was accompanied by an AI-generated image created with DALL-E or Stable Diffusion, depicting violent imagery, including scenes of immigrants crowded into a gas chamber and synagogues on fire. None of the AI images were labeled as such. Meta approved half of the submitted ads within 12 hours, and X scheduled all ten for publication. Researchers removed every ad before it went live, ensuring none were ever seen by Meta or X users.
The research comes just weeks after EU leaders accused Musk of attempting to influence the German election, citing his public endorsement and amplification of Germany’s extreme-right party, AfD. It also follows Meta’s recent announcement that it is rolling back its content moderation and fact-checking policies, including weakening its hate speech rules.
An Ekō spokesperson said:
“Musk and Zuckerberg have fired content moderation teams and scrapped independent fact-checking. So it isn’t a surprise that ads calling for bringing back gas chambers and concentration camps are easily waved through. Even though Trump and his tech billionaire friends, like Zuckerberg, are pressuring the EU to drop its investigations, EU leaders must hold firm and hold these tech CEOs to account.”
The European Commission launched an investigation into X in December 2023 to determine whether it breached the Digital Services Act by failing to combat information manipulation and illegal content on the platform, including hate speech. Recently, EU Commission Vice President Henna Virkkunen said she was considering expanding the scope of the investigation to assess whether Musk has unfairly boosted a political candidate following the live-streamed conversation between Musk and AfD leader, Alice Weidel. The Commission is also investigating Meta over DSA infringement related to the platform’s tools to track disinformation.
Ekō has submitted details of this experiment to the Commission as part of these investigations. Meanwhile, the Trump Administration has pressured the European Commission to weaken enforcement of the Digital Services Act and Digital Markets Act. Over 129,000 Ekō members have signed a petition calling on the EU to hold the line and fully enforce its tech laws against companies like X and Meta.
All of the ads placed as part of Ekō’s experiment were in German, geo-targeted across Germany, and scheduled to run days ahead of the federal election. This methodology tested the AI moderation systems Meta and X have put in place to stop hate speech and incitement to violence, highlighting the extremist and racist language that can slip past them.
Strikingly, Meta approved similar ads in 2023 as part of an experiment Ekō conducted in the run-up to the implementation of the Digital Services Act. One ad approved by Meta called for the burning of synagogues to protect ‘White Germans’. Meta was alerted to the ad’s approval as evidence of its failed moderation and detection systems. This suggests that Meta’s systems for identifying and stopping hate speech may not have improved despite its new legal obligations under the DSA.