Seeing is not believing—deepfakes and cheap fakes spread during the 2024 presidential election in Taiwan

By Wei-Ping Li, PhD

With AI-driven technology making big strides, observers have projected that audiovisual disinformation produced with machine learning tools could interfere with upcoming elections. With the Taiwanese presidential election less than one month away, the Taiwan FactCheck Center has observed that most audiovisual disinformation still relies on “traditional” methods of photoshopping or repurposing videos in the wrong contexts. However, we have also discovered a few instances in which malicious actors produced visual or audio content, probably using deepfake technologies, to target presidential candidates or depict events that never happened.

Deepfake disinformation [1] circulated in Taiwan long before the current election, although earlier examples were mostly unrelated to politics. Much of the debunked content showed spectacles meant to awe audiences, such as “flower-shaped ice floating on the Songhua River in Northeast China” or a “gigantic lobster caught in Scotland.” During the Israel-Hamas war, several images and videos that spread worldwide also made their way onto social media platforms popular among Taiwanese users, such as an image of refugee tents built by the Israeli government (debunked as AI-generated) and a video in which U.S. President Joe Biden announced that the U.S. government was drafting men over the age of twenty to fight in the war (also debunked as AI-generated).

However, in the months before the 2024 election, we at the Taiwan FactCheck Center have seen deepfake disinformation attempting to interfere with the election. This analysis examines three cases the center has identified.

🔎 Case 1: A fabricated audio recording of TPP presidential candidate Ko Wen-je

Among the earliest pieces of deepfake disinformation targeting the 2024 Taiwanese presidential election was a fake audio recording of a talk by Taiwan People’s Party (TPP) presidential candidate Ko Wen-je. The audio file emerged in August 2023, when several media outlets received an email with an attached file that the sender claimed disclosed “an inside story” about Taiwanese Vice President Lai Ching-te’s visit to the U.S. In the file, an individual who sounds like Ko criticized Lai’s visit and asserted that Lai’s team paid each participant eight hundred dollars to attend a welcoming party during his brief stay in the U.S. The recording runs about fifty-eight seconds. The TPP quickly denied that Ko had ever made the comments and reported the fake audio file to Taiwan’s Investigation Bureau, whose findings indicated that the file was most likely synthesized using deepfake technology.

🔎 Case 2: A fake video of DPP presidential candidate Lai Ching-te’s remark on the alliance between KMT and TPP

Another deepfake video surfaced in November, when the Kuomintang (KMT) and the TPP were discussing forming an alliance and having their candidates, the KMT’s Hou Yu-ih and the TPP’s Ko Wen-je, jointly run in the presidential election. While the KMT and the TPP were wrangling over whether Hou or Ko should be the presidential or the vice-presidential candidate, an obscure YouTube account spread a fake video of Lai’s remarks on the alliance. The account, which mainly uploaded political news and commentary, disclosed no details about who ran it.

In this fake video, titled “Lai Ching-te’s Response about the Cooperation between the Blue Party (KMT) and the White Party (TPP),” Lai, the 2024 presidential candidate of the ruling Democratic Progressive Party (DPP), stated that he supported cooperation between the KMT and the TPP. Furthermore, Lai said, “Whether it’s the blue party or the white party, they do represent the mainstream view of Taiwanese people; no matter who will be the president or the vice president, either combination can be the right team.”

A screenshot of the fabricated video in which the DPP presidential candidate Lai Ching-te appeared to support the cooperation of the KMT and the TPP party in the 2024 Taiwanese presidential campaign. The Taiwan FactCheck Center has debunked this video as untrue.  

The Taiwan FactCheck Center found that this fake video was made with a combination of old image-manipulation techniques and new deepfake tools.

First, the manipulator started from an original video recorded on November 16, 2023, in which Lai mentioned the KMT-TPP coalition. The manipulator then edited out specific segments to make Lai sound as if he praised the alliance between Ko and Hou. In the original video, what Lai actually said was, “Whether it’s the blue party or the white party, they DO NOT represent the mainstream view in Taiwan.” He also stated that neither Ko nor Hou fits the role of Taiwan’s leader, and that regardless of whether Ko or Hou is the presidential candidate, they will NOT make a good combination. Moreover, the manipulator failed to match the voice with the movement of Lai’s lips, indicating that the visual portion of the video was created with unsophisticated editing techniques.

Nevertheless, the voice in the video resembled Lai’s to a certain extent, although it was lower in tone and, on careful listening, still carried a tinge of AI synthesis. Experts told the Taiwan FactCheck Center that Lai’s voice in the video could have been generated or altered by AI tools. In other words, the malicious actor who created this video tried to distort Lai’s remarks by editing the original footage and may have processed the visual and audio tracks separately.

🔎 Case 3: A fake video in which the Chinese leader Xi Jinping talked about the Taiwanese presidential election

Since December, a fake video in which the Chinese leader Xi Jinping commented on Taiwan’s presidential election has circulated on TikTok. In the video, titled “#Xi Jinping pointed out the direction for the #Taiwanese presidential election [#习近平 对 #台湾大选 指明方向],” Xi encouraged the Taiwanese to “cherish the right to vote, don’t mess up the voting.”

One of the earliest TikTok accounts to post this video constantly published videos satirizing or criticizing Xi. The video of Xi’s remarks on the Taiwanese election was also labeled "humiliate Bao [辱包]" (“Bao” refers to "Bao-tz," or “Bun,” a nickname for Xi Jinping invented by Chinese netizens) and appears to have originally been meant to mock Xi. Other users, however, repurposed the same video by adding various captions suggesting that Xi had expressed goodwill toward the upcoming Taiwanese election and supported the candidates of both the KMT and the TPP.

Verification by the Taiwan FactCheck Center revealed that Xi's expression and lip movements were most likely altered with deepfake technology. The center also found that the video reused news footage from a 2018 meeting Xi attended; in that footage, Xi mainly talked about talent and innovation in China, without any mention of Taiwan. A comparison of the 2018 footage with the altered video shows that Xi's face was modified: a thin line appeared on Xi’s forehead, and the edges of his face also showed signs of artificial alteration.

🔎 Cheap fakes remain the most common

Among audiovisual disinformation about the 2024 presidential election, “cheap fakes” remain the most common form. Many cheap fakes use photos or video clips as “evidence” to support false claims. In contrast to deepfakes, which mostly show politicians or presidential candidates making comments they never made, some of the most common cheap-fake disinformation pieces depict scenes such as the voting process.

A few false claims displayed images of photoshopped official documents, or even real documents that had little to do with the claim itself, tricking audiences who failed to scrutinize the pictures. Typically, text or commentary was posted alongside photos or videos to misrepresent the original content's meaning and mislead viewers. For example, an old, misleading video claiming the vote-counting process was rigged, which circulated in previous elections, has reappeared in this election with the claim slightly modified.

🔎 The challenge brought by deepfakes and AI-driven disinformation

As of this writing, deepfakes and other AI-generated audiovisual disinformation targeting the 2024 Taiwanese presidential election remain relatively rare. Most audiovisual disinformation is still based on existing videos or audio files repurposed with simple techniques. Nevertheless, this does not mean we can shrug off experts’ warnings about the challenges posed by deepfake and AI technology.

Among the disinformation pieces that fact-checkers have identified so far, the election-related deepfakes were mostly videos or audio files of candidates or politicians making statements. Although these altered videos and audio clips contained clues that they were not authentic, they can still arouse negative emotions or deceive those who fail to exercise caution. Audiences will need rigorous inspection and patience to avoid falling into the traps of deepfakes.


Wei-Ping Li is a research fellow at the Taiwan FactCheck Center.

[1] In this analysis, we define “deepfakes” as visual or audio files produced through machine learning processes to generate depictions of events that look real, while “cheap fakes” are manipulated images created with more conventional techniques such as photoshopping, recontextualizing footage, or adjusting the speed of videos. See also Paris, B., & Donovan, J. (2019, September 18). Deepfakes and cheap fakes. Data & Society.