AI Deepfakes Are Now a Threat to Elections

By Chelle Louren
Aug 23, 2024

As AI technology grows more sophisticated, creating realistic content has never been easier. When put to good use, AI can be man's best friend, but when abused, like creating fake videos to promote unregistered medical cures or spreading misinformation during an election campaign, it becomes dangerous. Just a few years ago, you'd need to invest time and effort mastering Photoshop or Blender to really fool someone online; now, anyone can spread misinformation and manipulate public opinion with just a few clicks.

Deepfakes, or AI-generated videos, audio clips, and images so good that you can’t tell them apart from the real thing, are becoming a huge threat to the security of elections worldwide. One common technique is the "lip-sync deepfake," where AI tools alter a subject's lip movements to match synthetic speech. Although these are not always perfectly executed, they can still be convincing enough to mislead viewers.

In fact, this year, there have been several election-related deepfake incidents that caused a stir both here in the Philippines and abroad.

The Growing Threat to Election Integrity

Last April, a video of President Bongbong Marcos urging the Armed Forces to attack China made the rounds online. The video began circulating during the Balikatan military exercises, which involved the Philippines, the U.S., Australia, France, and 14 observer nations. The Presidential Communications Office quickly identified the video as a deepfake, but many feared that the damage had already been done, and that the incident would affect our country’s foreign policy.

Image Source: ANC

This wasn't the only deepfake targeting President Marcos. In July 2024, just hours before his State of the Nation Address, another deepfake was released by his political opponents. This time, it showed the president in his younger years caught on cam inhaling illegal substances. The deepfake detection tool Sensity flagged the video as a faceswap in which Marcos's face had been digitally edited onto someone else's body. Analysts claim these deepfakes are part of a larger plot to destabilize the Marcos administration.

In Taiwan, a politician who bowed out of the election race was surprised to learn that an AI audio clip of him endorsing another candidate had been posted online on election day. He had never made such a statement. The clip, which had been posted by a group affiliated with the Chinese Communist Party, was later taken down by YouTube. Meanwhile, in South Africa, a deepfake of the famous rapper Eminem appeared to endorse the country's opposition party.

Image Source: Africa Check

Other cases aimed for character assassination. In Taiwan, an AI video clip featured a woman accusing a candidate of having several mistresses. In Bangladesh, which is known for its conservative society, a smear campaign against a female candidate used a deepfake of her wearing a bikini to tank her reputation online. In Slovakia and Nigeria, deepfakes featuring candidates conspiring to rig elections through ballot manipulation successfully swayed some voters.

The United States has also faced its share of deepfake threats. In February 2024, an audio deepfake mimicking President Joe Biden's voice was used in an automated phone call urging Democratic voters in New Hampshire not to participate in the state’s primary election. In this case, misinformation was used to attempt to suppress voter turnout. 

Another high-profile deepfake incident in the U.S. involved a manipulated campaign ad featuring Vice President Kamala Harris. The video, which was retweeted by Elon Musk, featured Harris verbally criticizing President Biden's mental state. The clip, originally posted on YouTube by a user named "Mr Reagan," lacked any label or caption indicating that it had been manipulated, which, ironically, violates the platform's policies on misleading content. In response, Harris' camp condemned Musk and former President Donald Trump for spreading lies.

And just recently, Trump blasted Harris for spreading allegedly AI-generated photos of a huge crowd that showed up for a Democratic campaign rally at an airport in Detroit. According to Trump, the crowd didn't exist. But according to Harris and company, the photo showed an actual crowd of 15,000 in Michigan. There are entire articles online written just to prove that the photos are genuine.

Trump claims this photo is a deepfake.
Image Source: CBS News

So now we have the two major political parties slamming each other with deepfake allegations while the rest of us are left trying to figure out who’s telling the truth.

Even celebrities are getting dragged into this political mess. Trump recently shared obviously AI-generated photos that appear to show award-winning singer Taylor Swift endorsing his campaign.

Actual AI-generated photos shared by Trump
Image Source: X

These incidents are just the tip of the iceberg of what can happen once deepfakes become the norm. They have already been used for extortion and kidnappings; now they’re being used to attack public personalities and manipulate public opinion. In fact, the World Economic Forum’s Global Risks Report 2024 ranks misinformation and disinformation as the number one global threat we’ll be facing over the next two years.

How Countries Are Dealing With the Deepfake Threat

Countries around the world are doing their best to respond to the issue. In Singapore, authorities took down deepfake videos of prominent politicians ahead of the elections. In some of these videos, former Prime Minister Lee Hsien Loong appeared to be discussing sensitive topics like investment opportunities and foreign affairs. Although Singapore has only had to deal with a few deepfakes compared to its neighboring countries, the potential impact on the elections led the government to consider temporarily banning AI-generated content during the election period.

South Korea also passed a law banning political deepfake videos within 90 days before an election. Anyone caught faces up to seven years in prison or a fine of up to 50 million won (about $37,600). Even creators posting political videos outside the 90-day window are required to inform their viewers whenever AI-generated content is present. South Korea caught 130 political deepfakes ahead of its parliamentary election this year.

India has also taken action against AI-generated misinformation. In March 2024, just weeks before the country's elections, India launched the Deepfakes Analysis Unit (DAU) to let WhatsApp users report audio and video content suspected to be AI-generated. Since its launch, the DAU has received and reviewed hundreds of cases, most of which targeted politicians, business leaders, and celebrities.

Locally, National Unity Party president and Camarines Sur Rep. Lray Villafuerte has urged lawmakers in the House of Representatives and the Senate to draft new laws to regulate the use of AI technology. This call follows a warning from Department of Information and Communications Technology (DICT) Secretary Ivan John Uy about the potential threats of deepfakes and generative AI tools when misused for political reasons. He used the term “scandemic” to describe the rise of such deepfakes.

In response, Villafuerte has introduced House Bill 10567, which aims to penalize producers or distributors of deepfake materials who do not disclose to the public that these are AI-generated. The bill, filed in July, supports a proposal by COMELEC chairman George Erwin Garcia to prohibit AI and deepfake technology in electoral campaigns. Garcia also proposed disqualifying and filing cases against candidates caught using such technologies in the upcoming 2025 elections.

As the technology behind deepfakes gets better, it will be even harder to tell the real thing apart from the fake. If left unchecked, deepfakes could disrupt true democracy on a global scale. Public awareness, regulatory measures, and technological solutions will all be crucial in addressing this emerging threat. The incidents we've seen so far are just the beginning, and without cooperation between the government and the tech sector, the consequences could be even more severe.

How Blockchain Technology Helps Combat Deepfakes

Blockchain technology offers several potential solutions to address the challenges posed by deepfakes and the manipulation of AI-generated content, especially in the context of elections and other high-stakes scenarios. Here are some ways blockchain can help:

1. Immutable Records and Content Verification

Probably the greatest benefit of blockchain technology is its ability to create immutable, tamper-proof records. This can be used to verify the authenticity of digital content such as videos, audio, and images: a cryptographic hash of the content is recorded on-chain, and any alteration can be detected by comparing the content's current hash against the blockchain-stored version. For example, if a political campaign video is registered on a blockchain at the time of creation, any subsequent modification, like a deepfake alteration, changes the hash so that it no longer matches the original blockchain record.

When content is created, it can be "stamped" with a unique identifier on the blockchain, which records details like the creator, time of creation, and any subsequent edits or distributions. This "digital fingerprint" can follow the content wherever it goes online, making it easier to track the spread of deepfakes or AI-generated misinformation and identify the source.
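The registration-and-verification flow described above can be sketched in a few lines of Python. This is a minimal illustration, not a real blockchain: the `registry` dictionary stands in for an on-chain record store, and the function names are hypothetical.

```python
import hashlib
import time

# Hypothetical in-memory registry standing in for an on-chain record store.
registry = {}

def fingerprint(content: bytes) -> str:
    """Compute a SHA-256 digest serving as the content's digital fingerprint."""
    return hashlib.sha256(content).hexdigest()

def register(content: bytes, creator: str) -> str:
    """Record the fingerprint with creator and timestamp, as a chain entry might."""
    digest = fingerprint(content)
    registry[digest] = {"creator": creator, "created_at": time.time()}
    return digest

def is_authentic(content: bytes) -> bool:
    """Content is authentic only if its hash matches a registered record."""
    return fingerprint(content) in registry

original = b"official campaign video bytes"
register(original, "campaign-hq")

tampered = b"deepfake-altered video bytes"
print(is_authentic(original))   # True
print(is_authentic(tampered))   # False
```

Even a one-byte change to the file produces a completely different hash, which is what makes the comparison against the registered fingerprint reliable.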

2. Decentralized Content Authentication

Blockchain can be used to build decentralized systems that can automatically verify the authenticity of digital content. By using a decentralized network, the verification process is distributed among multiple nodes. This makes it harder for one individual or group to manipulate the entire verification process.

News organizations and content creators could use blockchain to prove that their content is original and unaltered. Decentralized identity systems can ensure only verified individuals or entities can create or distribute certain types of content. For example, a social media platform could require individuals to verify their identities through a real KYC process before they can distribute campaign materials online. Once verified, viewers can rest assured that what they are seeing or hearing is genuine.
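The idea of distributing verification across multiple nodes can be sketched as a simple majority vote. This is a toy model under assumed names (`node_verify`, `decentralized_verify`), not an actual consensus protocol, but it shows why a single compromised node can't override the network.

```python
import hashlib

def node_verify(content: bytes, known_hash: str) -> bool:
    """One node checks the content against the hash it holds."""
    return hashlib.sha256(content).hexdigest() == known_hash

def decentralized_verify(content: bytes, node_records: list[str]) -> bool:
    """Accept content only if a majority of independent nodes agree."""
    votes = [node_verify(content, h) for h in node_records]
    return sum(votes) > len(votes) // 2

original = b"verified press release"
good_hash = hashlib.sha256(original).hexdigest()

# Five nodes hold the record; one has been tampered with.
records = [good_hash] * 4 + ["0" * 64]
print(decentralized_verify(original, records))  # True: one bad node is outvoted
```

Real networks use far more robust consensus mechanisms, but the principle is the same: manipulating the verification result requires corrupting a majority of nodes, not just one.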

3. Transparent Electoral Processes

Blockchain technology can be used to create transparent and immutable records of votes and other election-related activities. It can store voter identities and voting records and ensure that these cannot be tampered with or altered after the fact. This extends to campaign finance records, candidate qualifications, and more, making it harder for misinformation—such as deepfakes—to influence the electoral outcome.
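The tamper-evidence of such a ledger comes from hash-chaining: each record embeds the hash of the one before it, so altering any past entry breaks every link after it. A minimal sketch, with hypothetical helper names:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic hash of a ledger entry."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_vote(chain: list, voter_id: str, choice: str) -> None:
    """Each new entry records the hash of the previous one."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"voter": voter_id, "choice": choice, "prev": prev})

def chain_valid(chain: list) -> bool:
    """The chain is valid only if every link matches."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger = []
append_vote(ledger, "voter-001", "candidate-A")
append_vote(ledger, "voter-002", "candidate-B")
print(chain_valid(ledger))   # True

ledger[0]["choice"] = "candidate-B"  # attempt to rewrite a past vote
print(chain_valid(ledger))   # False: the tampering breaks the chain
```

Production voting systems layer identity, privacy, and consensus on top of this, but the hash chain is what makes after-the-fact alteration detectable.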

4. Smart Contracts for Automated Enforcement

Smart contracts—self-executing contracts with the terms of the agreement directly written into code—can be used to enforce content authenticity rules automatically. For example, a smart contract could be programmed to automatically flag or remove content that does not match its original blockchain record, or to penalize individuals who distribute deepfakes without proper disclosure. 
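The enforcement rule a smart contract would encode can be expressed as plain logic. Real smart contracts run on-chain (typically in a language like Solidity); this Python sketch only illustrates the decision rule, with an assumed registry and hypothetical function name.

```python
import hashlib

# Assumed set of hashes registered on-chain for verified original content.
REGISTERED = {hashlib.sha256(b"original campaign ad").hexdigest()}

def enforce(content: bytes, disclosed_as_ai: bool) -> str:
    """Contract-style rule: pass registered content, allow disclosed
    AI content, and flag undisclosed mismatches for review."""
    digest = hashlib.sha256(content).hexdigest()
    if digest in REGISTERED:
        return "verified"
    if disclosed_as_ai:
        return "allowed-with-label"
    return "flagged"

print(enforce(b"original campaign ad", False))   # verified
print(enforce(b"ai parody clip", True))          # allowed-with-label
print(enforce(b"undisclosed deepfake", False))   # flagged
```

Because the rule is code, it executes the same way for every piece of content, without waiting for a human moderator to act.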

Through these and more, blockchain technology can help stop the spread of AI-generated misinformation during election season. Of course, it’s still up to us to exercise due diligence and think twice before believing anything we see online.

Chelle Louren
Web3 writer

Chelle is a freelance writer exploring where emerging tech and real world problems converge. Everything is a story, and she’s here to show that.
