How hackers with $24 to spare are hijacking Bangladesh’s 2024 election


Policymakers worldwide are expressing concerns about the potential misuse of AI-generated disinformation to manipulate voters and intensify divisions in the lead-up to significant elections in 2024, according to a report by the Financial Times.


The focus of this worry has materialized in Bangladesh, a nation of 170 million people headed for an election in January 2024. The campaign so far has been marked by a contentious power struggle between incumbent Prime Minister Sheikh Hasina and her rivals.

Both pro-government and pro-opposition news outlets and influencers in Bangladesh have reportedly been promoting AI-generated disinformation, especially deepfakes, using affordable tools offered by US- and Israel-based AI startups.

This trend underscores the challenges in controlling the use of such tools in smaller markets that major American tech companies may overlook.

The Financial Times report quoted Miraj Ahmed Chowdhury, managing director of the Bangladesh-based media research firm Digitally Right, as saying that while AI-generated disinformation is still at an experimental stage, AI tools allow misinformation to be mass-produced and disseminated, posing a significant threat.

The Financial Times cites instances where politically motivated deepfakes, often styled as news clips, are created using tools like HeyGen, a Los Angeles-based AI video generator. The tool enables users to produce clips featuring AI avatars for as little as $24 a month.

The disinformation exacerbates Bangladesh’s already tense political climate before the upcoming elections.

Despite calls for action, tech platforms have been slow to respond when confronted with evidence that these videos are fake.

A primary challenge in identifying such disinformation is the lack of reliable AI detection tools, particularly for non-English language content.

Sabhanaz Rashid Diya, the founder of Tech Global Institute and a former Meta executive, noted that the solutions proposed by major tech platforms, primarily focused on regulating AI in political advertisements, may have limited efficacy in countries like Bangladesh, where ads play a minor role in political communication. She emphasized that the lack of regulation and the selective enforcement by both platforms and authorities exacerbate the problem.

Diya also highlighted a more significant threat: the possibility of politicians leveraging the mere potential of deepfakes to discredit information.

The ease with which a politician can dismiss a genuine piece of news as a deepfake, or claim "This is AI-generated" whenever questioned, adds a layer of confusion that undermines people's ability to distinguish truth from falsehood. As AI-generated content is weaponized, particularly in the global south, the challenge lies in addressing how it erodes the public's trust in information.
