
Election AI Deepfakes: Will EU Regulators Cope?

Election AI deepfakes are expected to overwhelm EU regulators as more than 10 countries in the region go to the polls in 2024 against a backdrop of ill-prepared governments.

In 2024, the continent is expected to see an influx of politically motivated deepfakes in the form of videos, audio, and images. With tools like Midjourney version 6 delivering “incredible photorealism,” it will be easy to target real people or to fabricate scenes of riots and migrant movements.

Multiple EU elections

According to Brussels Signal, Austria, Finland, Croatia, Belgium, Lithuania, Iceland, Portugal, Romania, and Slovakia will hold nationwide elections in the coming year.

Elsewhere, Germany, Ireland, Malta, and Poland have local elections, while Spain has two regional elections.

Globally, about 25% of the world’s population lives in a country holding elections in 2024, the US among them.

The election period also comes at a time when the world is seeing a boom in generative AI following the success of ChatGPT, launched in November 2022.

As such, Henry Ajder, founder of a generative AI startup in Cambridge, says the year 2024 will see “the most people going to elections, and at the same time, we’re seeing a pretty lightning-quick evolution and deployment of AI.”


Poor preparation

Sara Ibrahim, a barrister at London’s Gatehouse Chambers who works and writes frequently on AI, argues that the upcoming elections will face AI threats because EU regulators are ill-prepared to deal with the use of AI during the election period.

She thinks the region is currently characterized by “a perfect storm of ill-prepared governments and a large-scale possibility for deception and fraud, spreading misinformation at speed.”

Already, courts in Europe have had a taste of the impact of generative AI, and this serves as an early warning sign.

The legal system, according to Ibrahim, is “already experiencing people citing made-up cases hallucinated by ChatGPT, so a real stress to public resources at a time when the economy is hardly robust.”

Stopping the spread of deepfakes using regulation may be an uphill task for the EU now.

“Regulators can’t stop a person on their computer using an incredibly accessible tool to generate audio or video, share it on Twitter at an opportune moment, or spread it on Discord groups,” says Ajder.

The EU does have the Digital Services Act, but its effectiveness is “yet to be proven,” according to Oxford Analytica.

The deceptive AI

Although generative AI has been touted as a game changer for its transformative abilities, the world has already witnessed its pitfalls. AI-generated images, videos, and audio are becoming near “perfect” and difficult to distinguish from the real thing, leaving audiences unsure of what to trust.

In September, ahead of the Bangladesh election, a news outlet known as BD Politico aired a video on the X platform in which a news anchor presented footage of rioting and claimed that US diplomats were meddling in the country’s elections.

But the video was created using HeyGen, an AI video generator. The news anchor in the footage, “Edward,” is one of the many avatars the video generator offers to users at a subscription fee of $24 a month.

Another example is an audio recording that appeared to capture Sir Keir Starmer verbally abusing his aides during the UK Labour Party’s autumn conference. The clip racked up 1.5 million views.

Around the same time, another clip of London Mayor Sadiq Khan apparently urging that Armistice Day be rescheduled because of a pro-Palestinian march was widely circulated.

According to Brussels Signal, both clips proved to be AI-generated.


Vulnerable politicians

Due to the positions they hold, politicians and high-profile individuals have been targets for AI deepfakes, especially with elections around the corner all over the world.

Ajder concurred that politicians “are especially vulnerable.”

“There is a lot of training data in the footage of them, and standing at podiums or sitting at desks is something that is especially easy to fake,” said Ajder.

Recently, an AI-generated clone of jailed Pakistani politician Imran Khan’s voice was used to call for support during a seven-hour virtual rally.

Video clips from the Gaza and Ukraine conflicts have also depicted bloodied and abandoned babies, but these were not real. A closer look revealed fingers curled in “anatomically impossible ways” and unnatural eye colors, according to Imran Ahmed, chief executive of the Washington, DC-based Center for Countering Digital Hate.

What can be done?

While producing and broadcasting extremist material can be costly, such content nevertheless appears frequently on platforms like X.

According to Ahmed, unregulated AI will “turbocharge hate and disinformation.”

While there is little regulators can do right now, politicians should at least be required to disclose any AI-generated content they use, according to Ajder.

“Or at very least only permitted to be used without disclosure when it’s incredibly clear to audiences it’s not real and is parody,” he says.
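As a concrete illustration of what a machine-readable disclosure label could look like (this is not a mechanism prescribed by the EU or proposed by Ajder), the minimal Python sketch below embeds a plain-text “ai_disclosure” tag into a PNG’s metadata using Pillow; the file name, key name, and label text are assumptions made for the example.

```python
# Minimal sketch: attach an AI-disclosure label to an image's PNG metadata.
# The key "ai_disclosure" and the label text are illustrative, not a standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Stand-in for an AI-generated campaign image.
image = Image.new("RGB", (640, 360), color="white")

metadata = PngInfo()
metadata.add_text("ai_disclosure", "This image was generated with an AI tool.")
image.save("campaign_image_labeled.png", pnginfo=metadata)

# A platform or fact-checker can read the label back from the saved file.
labeled = Image.open("campaign_image_labeled.png")
print(labeled.text.get("ai_disclosure"))
```

Metadata like this is trivially strippable, so it could only ever complement, not replace, the platform-level and regulatory measures discussed above.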
