AI-Generated Disinformation in Bangladesh Elections

Ahead of Bangladesh’s January elections, concerns are mounting as AI-generated disinformation spreads across the country, inflaming an already charged political atmosphere. The situation also highlights the challenge tech companies such as Google and Meta face in regulating such content in smaller markets.

Bangladesh, a South Asian nation of 170 million people, is already feeling the effects. The country is heading into a political contest marked by a bitter divide between Prime Minister Sheikh Hasina and her opposition rivals from the Bangladesh Nationalist Party (BNP).

In recent months, pro-government news outlets and influencers in Bangladesh have actively promoted AI-generated disinformation created with tools offered by AI start-ups. This development is a stark reminder of how AI can be used to mislead voters and deepen political divisions.

AI disinformation in Bangladesh elections

Sheikh Hasina’s government has been accused of silencing the opposition, including through the arrest of leaders and activists. Critics argue that these actions amount to an attempt to rig the upcoming polls in her favour, prompting public pressure from the United States to ensure free and fair elections.

One example of AI-generated disinformation emerged on BD Politico, an online news outlet, where a news anchor named “Edward” presented a studio segment accusing US diplomats of interfering in Bangladesh’s elections and inciting political violence. The clip is all the more concerning because it was created with HeyGen, a Los Angeles-based AI video generator that lets customers produce clips fronted by AI avatars for as little as $24 a month.


Global concerns and tech platform responses

Scrutiny of AI-generated misleading or false political content has intensified with the rise of powerful tools such as OpenAI’s ChatGPT and AI video generators. Earlier this year, the US Republican National Committee used AI-generated images in an attack ad depicting a dystopian future under President Joe Biden. Similarly, YouTube suspended several accounts in Venezuela that used AI-generated news anchors to spread disinformation favourable to President Nicolás Maduro’s regime.

Tech giants Google and Meta have announced policies requiring campaigns to disclose digitally altered political advertisements. However, the case of Bangladesh demonstrates not only how AI tools can be exploited but also how difficult it is to police their use in smaller markets that are often overlooked by American tech companies.

AKM Wahiduzzaman, a BNP official, said that his party had asked Meta to remove AI-generated disinformation but received little response.

Challenges in identifying disinformation

Identifying AI-generated disinformation is difficult because AI detection tools remain unreliable. Sabhanaz Rashid Diya, a founder of the Tech Institute and a former Meta executive, noted that off-the-shelf products are largely ineffective at identifying non-English content, underscoring the need for more comprehensive, language-agnostic solutions.

Moreover, the solutions proposed by tech platforms, which mainly focus on regulating AI in political advertisements, may have limited effectiveness in countries like Bangladesh, where ads play a smaller role in political communication. This highlights the need for region-specific strategies to combat AI-generated disinformation effectively.

The spread of AI-generated disinformation is further compounded by a lack of regulatory oversight and weak enforcement by local authorities.
