Hate-GPT: Adolf Hitler, Osama Bin Laden-themed AI chatbots on the rise in Europe

Gab Social, a far-right social media network notorious for its extreme content, has ignited controversy with the introduction of a chatbot version of Adolf Hitler on its AI platform. This move has raised concerns about the potential for online radicalization facilitated by artificial intelligence.

The newly launched Gab AI platform, unveiled in January 2024, allows users to create AI chatbots, with some emulating historical and modern political figures. Among these chatbots is one portraying Hitler, the fascist dictator responsible for the Holocaust during World War II.

When one interacts with the AI version of Adolf Hitler, the chatbot denies the Holocaust, echoing conspiracy theories and portraying Hitler as a victim of a supposed conspiracy.

While other chatbots on the platform, such as an Osama Bin Laden simulation, refrain from explicitly endorsing violence, they subtly hint at justifications for extreme actions.

Gab Social, founded in 2016 as an alternative to mainstream social networks, has faced criticism for fostering extremism and conspiracy theories.

It gained notoriety in 2018 when it was revealed that the perpetrator of the Pittsburgh synagogue shooting had used the platform to spread antisemitic rhetoric before carrying out the attack.

Despite facing bans from major app stores due to its promotion of hate speech, Gab Social persists by utilizing decentralized platforms like Mastodon. Its recent venture into AI chatbots has intensified concerns over the platform’s potential to radicalize users and spread harmful ideologies.

The introduction of Gab AI comes at a time when regulatory bodies are grappling with the challenges posed by AI technology in social media.

The European Parliament is set to vote on the EU AI Act, which aims to regulate AI systems based on the risks they pose to society. Similarly, the UK regulator Ofcom is implementing the Online Safety Act, which holds social media platforms accountable for harmful content.

While regulatory measures are underway, questions remain about the effectiveness of self-regulation within the tech industry. As AI evolves, policymakers and tech companies face the complex task of balancing innovation with safeguarding against the spread of harmful content and radicalization online.


