Seeing is not believing (part II) - AI videos spread during the 2024 presidential election in Taiwan

By Wei-Ping Li, PhD

(This article is part of an analysis series of disinformation trends during the 2024 Taiwanese presidential election.)

Last year, in this Taiwan FactCheck Center series on the 2024 Taiwanese presidential election, we examined deepfakes and cheap fakes that targeted presidential candidates or attempted to influence the election. In this piece, we focus on the content and dissemination routes of AI-generated fake videos that circulated during the election, particularly those distributed between late December 2023 and January 2024.

In the first “Seeing is not Believing” article, published on December 25, 2023, we examined the trend of audio and visual disinformation related to the presidential election before mid-December 2023, using three examples. We pointed out that, up to that point, AI-generated disinformation had appeared less often than audio-visual files manipulated with conventional editing techniques. Moreover, the deepfake cases mostly simulated politicians’ voices or likenesses to put statements in their mouths, such as presidential candidate Ko Wen-je criticizing another candidate or a fake video showing Chinese leader Xi Jinping commenting on the Taiwanese election.

However, in the month before the election, several new AI-created videos appeared on YouTube and Facebook and were further spread to Facebook fan pages and groups among Taiwanese users. Most of these AI-generated videos featured synthetic voices or images of politicians. For example, one fake video circulating on YouTube and Facebook included a voice recording of the Democratic Progressive Party (DPP) candidate and now president-elect, Lai Ching-te, in which Lai referred to himself as “immoral Lai” and claimed that the DPP was fraught with scandals.

The voice in the video did sound like Lai’s, but it was implausible that Lai would make such a remark, and no other media outlet had reported it. The Taiwan FactCheck Center sent the recording to experts for multimedia forensic analysis, which determined that the voice was synthesized.

In the battle against AI-generated disinformation, fact-checkers and the general public may find it fairly easy to sniff out signs of fabrication when they are familiar with the context. When a video involves foreign figures or events in other countries, however, detecting the flaws takes more effort and caution. One example was a video that emerged in late December in which U.S. House Representative Rob Wittman openly endorsed Lai and his running mate, Hsiao Bi-khim.

A screenshot of a Facebook post sharing a TikTok video titled "American online video on the 29th: Vice Chairman Rob Wittman of the House Armed Services Committee was interviewed and openly endorsed the Democratic Progressive Party.” The short video was based on a clip from a local U.S. news station, WUSA 9, dubbed with AI-synthesized voices to make a statement the congressman never said.

This video was most likely uploaded by a private TikTok account before being reposted on Reddit, Facebook, and PTT, a popular Taiwanese online forum. The Facebook posts that shared the video asserted that “the vice chairman Rob Wittman of the House Armed Services Committee openly endorsed Team Taiwan” and that “the vice chairman strongly supported Hsiao Bi-khim, and stated clearly that the Democratic Progressive Party is better than the other two political parties. This is the first time that the United States has spoken out to choose a side [in the election].”

The Taiwan FactCheck Center debunked the video and found that the footage was adapted from a 2022 TV interview by a local U.S. news station based in the Washington, D.C. area. In the original video, Wittman discussed the war in Ukraine and called for more economic sanctions on Russia. The creator of the fake video, however, dubbed over Wittman’s remarks with a synthetic voice expressing support for specific Taiwanese presidential candidates. Wittman’s voice in the fake video differed from his genuine voice in the 2022 interview, and his lip movements did not always match the audio. Yet because most Taiwanese people are unfamiliar with American politics and with Rep. Wittman, these flaws were hard for them to spot.

The AI videos circulated during this period were not limited to short video or audio clips in which politicians made suspicious remarks. The formats were more diverse, and some even resembled “storytelling.”

One example of these AI “storytelling” cases was a video attacking Lai, in which a female host alleged that Lai had mistresses who had borne children with him. The content was based on unsubstantiated rumors that had circulated a decade earlier. The AI elements were obvious: the narrator’s lip movements and facial features, as well as the blurred shapes of objects in the background, gave the video away as an AI-generated product.

A similar format, in which an AI-generated host relays information, appeared in another group of fake videos that popped up on social media before the weekend of the presidential election, telling “the secret history” of the outgoing Taiwanese president, Tsai Ing-wen. Drawing on a document that surfaced in late 2023 and collected baseless rumors about Tsai, multiple videos used virtual hosts to read the document aloud. The videos spread on X, Facebook, and a website primarily devoted to literary works.

The Taiwan FactCheck Center found that several fake accounts first shared these AI-generated videos on social media, and that the comments left under the posts were identical and came from inauthentic accounts. For example, the Center identified 101 accounts that posted, shared, or commented on the fake video claiming Lai had mistresses. Of the 101 accounts, 98 published their first posts between September and December 2022, indicating that they were deliberately created within a short period. Most of these accounts published almost no other content, and for the few that did, their pictures, comments, and hashtags were copied from other accounts.

Many of the accounts that spread the AI-generated videos were established in the same short period of time. The profile pictures of these accounts were copied from photos of other users or celebrities. 
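To make the coordination signals described above more concrete, the following is a minimal, hypothetical Python sketch of two checks an analyst might run over a set of accounts: whether their first posts cluster in a narrow time window, and whether identical comments are reused verbatim across accounts. The sample data, field layout, and date window are illustrative assumptions and do not represent the Taiwan FactCheck Center's actual tools or dataset.

# Hypothetical sketch of two simple coordination signals: accounts whose
# first posts cluster in a short window, and comments repeated verbatim
# across accounts. Sample data and thresholds are illustrative only.
from collections import Counter
from datetime import date

accounts = [
    # (account_id, date of first post, comment left under the video)
    ("user_a", date(2022, 9, 14), "Everyone should see this!"),
    ("user_b", date(2022, 10, 2), "Everyone should see this!"),
    ("user_c", date(2022, 11, 21), "Shocking truth about the candidate"),
    ("user_d", date(2019, 5, 3), "I doubt this is real."),
]

# Signal 1: share of accounts whose first post falls inside a short window.
window_start, window_end = date(2022, 9, 1), date(2022, 12, 31)
in_window = [a for a in accounts if window_start <= a[1] <= window_end]
print(f"{len(in_window)}/{len(accounts)} accounts first posted inside the window")

# Signal 2: comments repeated word-for-word by more than one account.
comment_counts = Counter(a[2] for a in accounts)
duplicates = {text: n for text, n in comment_counts.items() if n > 1}
print("verbatim duplicate comments:", duplicates)

Neither signal proves coordination on its own, but together they approximate the pattern reported here: a burst of newly created accounts amplifying the same video with copied comments.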

The Investigation Bureau of Taiwan also noted that the videos were first uploaded to YouTube, Facebook, and other platforms. They were then boosted by several Facebook fan pages, including ones operated by actors based in countries such as Cambodia and Myanmar, and shared into local Taiwanese Facebook groups that usually discuss topics such as entertainment, religion, and community affairs.

The AI-generated videos that spread online before the 2024 Taiwanese presidential election highlight a disturbing trend: AI will exacerbate the disinformation problem. Although media literacy and fact-checking can help combat disinformation, the rapid development of AI technology poses significant challenges to society. The Taiwan FactCheck Center tested readily available online AI tools and found that a user can create a reasonably polished AI video in less than thirty minutes.

Currently, Taiwan has no regulations governing AI-generated content, apart from guidelines instructing government agencies on how to use generative AI; the government is still drafting a basic AI law. To tackle disinformation disseminated during the election, the Taiwanese government has relied on laws that punish actors who intentionally spread false information to influence election results. In practice, however, it is difficult to locate the creators and spreaders of false information, let alone prove their intent. As AI has shown its potential to disrupt elections, politics, and society, it is imperative that Taiwanese society develop more tools to tackle the problem.

Wei-Ping Li is a research fellow at the Taiwan FactCheck Center.

Andy Chen (Fact-checker at Taiwan FactCheck Center) contributed to this analysis.