DEMOCRACY ON THE LINE: Deepfakes in elections


IN THE chatrooms of the Dark Web, calls for synthetic media are now widespread. With the US national elections set for November 5, 2024, and elections of some form coming up in 2025 in countries including Bolivia, Chile, India, Japan, and the Philippines, both offers of and inquiries for deepfake services are proliferating.

Synthetic media refers to content that is created or modified using artificial intelligence (AI) and machine learning (ML) technologies. This includes images, videos, audio, and text generated or altered in such a way that they appear realistic even when they do not depict real events or individuals: AI-altered images and photos, doctored recordings, synthesized voices, and video or audio deepfakes.

Deepfakes, a type of synthetic media, use deep learning techniques to replace the likeness of one person with another in a video or audio recording, making it seem as if the person said or did something they never said or did. The technology can produce highly convincing forgeries, making it difficult to distinguish real from fake content.
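To make the mechanism concrete, the classic face-swap approach trains one shared encoder on face crops of two people and a separate decoder for each person; the swap happens when a face encoded from person A is decoded with person B's decoder. The sketch below is a deliberately simplified illustration of that architecture in PyTorch, with toy layer sizes and a random tensor standing in for a real face crop; it is not a working deepfake system, which would also need face alignment, far larger networks, and adversarial or perceptual losses.

```python
# Conceptual sketch of the shared-encoder / two-decoder architecture
# behind classic face-swap deepfakes (heavily simplified).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 face crop into a compact shared code."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face for one specific person from the shared code."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

# Training (not shown) teaches the single encoder on faces of both people,
# while each decoder learns to reconstruct only its own person.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

face_of_a = torch.rand(1, 3, 64, 64)     # stand-in for a real face crop of person A
swapped = decoder_b(encoder(face_of_a))  # A's pose and expression, B's likeness
print(swapped.shape)                     # torch.Size([1, 3, 64, 64])
```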


A recent virtual press conference with experts from IBM confirmed that there is evidence AI and deepfakes may be used to influence democratic processes. Already last January, an AI-generated imitation of US President Joe Biden's voice was used in a robocall that encouraged New Hampshire voters not to vote in the state's primary.

The Philippines ranks highly in social media use and web presence; with digital media consumption high and political engagement intense, the potential for deepfakes to influence public opinion and election outcomes is a critical issue. On platforms like X and Instagram, propaganda already circulates, fueled by long-disproven narratives about the West Philippine Sea and the drug war, and by tired memes churned out by paid trolls and their managers.

Deepfakes can be used in various ways to influence an election.

First, they can be used for character assassination: a deepfake video could show a political candidate engaging in illegal or immoral behavior, damaging their reputation and electoral prospects. They can also be used to create videos or sound bites of opponents making controversial statements or promises, misleading voters about their positions.

Perhaps the most potent use of deepfakes is fake-news proliferation, since they can be used to create convincing fake news stories, worsening misinformation and shaping public opinion. They can likewise fabricate false statements and endorsements, as with the Biden robocall. Now imagine a video circulating that shows a popular celebrity or public figure endorsing a candidate. If believed, it could sway a significant number of votes, especially in a country where celebrity endorsements carry substantial weight.

The primary risk of deepfakes in politics is the erosion of trust in the media and democratic institutions. When voters cannot trust what they see or hear, it undermines the democratic process and can lead to polarization, confusion, and a disenfranchised electorate.

Combatting the threat of deepfakes requires several strategies. One is using AI detection tools to identify and flag synthetic content; a minimal sketch of what such a screening step might look like follows this paragraph. Another is establishing laws that penalize the creation and distribution of malicious deepfakes, with clear definitions and standards.
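As an illustration of the detection-tool idea, the sketch below assumes a pretrained image classifier fine-tuned to label video frames as real or fake; the model name is a placeholder rather than a specific published checkpoint, and any real deployment would combine several such signals with human review.

```python
# Minimal sketch of an AI-based deepfake screening step.
# The model checkpoint below is hypothetical; substitute a real
# detector fine-tuned to output "real" / "fake" labels.
from transformers import pipeline
from PIL import Image

detector = pipeline(
    "image-classification",
    model="your-org/deepfake-frame-detector",  # placeholder checkpoint name
)

def flag_if_synthetic(frame_path: str, threshold: float = 0.8) -> bool:
    """Return True when the top prediction labels the frame as fake."""
    frame = Image.open(frame_path)
    predictions = detector(frame)  # list of {"label": ..., "score": ...} dicts
    top = max(predictions, key=lambda p: p["score"])
    return top["label"].lower() == "fake" and top["score"] >= threshold

if __name__ == "__main__":
    if flag_if_synthetic("suspect_frame.jpg"):
        print("Frame flagged for human review.")
```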

Extensive media literacy campaigns are already being run by various groups globally, encouraging critical thinking and skepticism toward suspicious media content. Finally, nations can collaborate with one another and with international organizations to share best practices, technologies, and strategies for combating malicious synthetic media.
