AI-generated misinformation poses a growing challenge to democracies founded on the open exchange of ideas, especially when it comes from hostile foreign powers.
Every day, we seem to confront more fake or manipulated content online: bots, trolls, and influence operations. Thanks to greater processing power, smarter machine-learning algorithms, and larger data sets, we will soon share digital space with a dizzying variety of AI-generated news stories and podcasts, along with deepfake images and videos, all produced at a scale and speed that was previously unimaginable. One study found fewer than 10,000 deepfakes online as of 2018; today there are almost certainly millions.
It is hard to foresee every use people will find for this new synthetic media, but what we have already seen is worrying. Students can have ChatGPT write their essays. Stalkers can generate pornographic videos from photographs of the people they target. Criminals can synthesize a boss's voice and instruct an employee to wire money.
Beyond crime, deepfakes pose national security hazards. In 2020, Russia used traditional propaganda techniques to sow discord in the United States, spreading false information about vaccines alongside real, but carefully selected, images of destruction from Black Lives Matter protests. Deepfake technology will elevate such efforts, enabling the construction of a credible alternate reality. In 2022, for instance, Russia circulated a clumsy deepfake of Ukrainian President Volodymyr Zelensky pleading with his countrymen to lay down their arms.
Consider the potential applications as the technology advances. Jihadists could rally recruits with convincing videos of French President Emmanuel Macron disparaging Islam. A Chinese invasion of Taiwan might open with a deepfake of a Taiwanese navy commander ordering his subordinates to let Chinese forces pass unimpeded. Soldiers fighting a war might grow demoralized after reading thousands of contentious or provocative Facebook posts that purport to come from fellow soldiers but were actually generated by ChatGPT. The scale, speed, and realism of such information warfare threaten to outpace the capacity of militaries and intelligence services to defend against it.
At home, deepfakes risk making people skeptical of all information. Troops might distrust real orders, and the public might dismiss genuine scandals and outrages as fabrications. In a culture of pervasive distrust, politicians and their supporters can wave away any unfavorable story about them as false or exaggerated.
China's powerful cyberspace regulator has already anticipated such worries. In January, Beijing began imposing ambitious new controls on deepfake content, ranging from stringent rules requiring that synthetic images of individuals be used only with those individuals' consent to more Orwellian prohibitions on "disseminating fake news."
Democratic societies must also begin to grapple with the risks deepfakes pose, but they cannot approach the problem as China has. What is needed is a response that protects freedom of speech and expression and sustains the flow of information that lets people tell real news from fake. Disinformation is harmful because it attacks the very concept of truth. Bans like Beijing's worsen the problem by turning the power to distinguish truth from deception into a political, and coercively enforced, government prerogative.
Democracies will need to combine technical, regulatory, and social measures to succeed. On the technical front, work has already begun at Intel. In November of last year, the company's researchers unveiled a system called FakeCatcher that claims 96 percent accuracy in identifying deepfakes. That figure is impressive, but given the enormous volume of synthetic material that can be produced, even a detector that was 99 percent accurate would let through an unacceptable amount of false content. Governments, moreover, will have access to highly skilled programmers, making state-produced deepfakes among the hardest to detect. And because advances in detection are routinely used to train the next generation of deepfake algorithms, even the most brilliant detectors will have their limits.
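To see why even a highly accurate detector falls short at scale, consider some back-of-the-envelope arithmetic. The sketch below is purely illustrative; the daily volume, the share of content that is fake, and the error rates are all hypothetical numbers chosen to show how errors compound, not measurements of any real platform or detector.

```python
# Illustrative base-rate arithmetic (all volumes and rates are hypothetical):
# why a 99%-accurate detector still struggles at platform scale.

daily_items = 100_000_000   # assumed items screened per day on a large platform
fake_share  = 0.001         # assumed: 1 in 1,000 items is a deepfake
sensitivity = 0.99          # detector catches 99% of fakes
specificity = 0.99          # detector correctly clears 99% of genuine items

fakes   = daily_items * fake_share
genuine = daily_items - fakes

missed_fakes = fakes * (1 - sensitivity)      # false negatives: fakes that slip through
false_alarms = genuine * (1 - specificity)    # false positives: genuine items flagged

print(f"Deepfakes slipping through each day: {missed_fakes:,.0f}")
print(f"Genuine items wrongly flagged each day: {false_alarms:,.0f}")

# Precision: of everything flagged, how much is actually fake?
flagged = fakes * sensitivity + false_alarms
print(f"Share of flagged items that are real fakes: {fakes * sensitivity / flagged:.1%}")
```

Under these assumptions, a thousand fakes still get through every day, and roughly nine out of ten flagged items are actually genuine. That is the arithmetic behind the claim that detection alone cannot carry the burden.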
One workaround, tied to a method social media companies are already exploring, could help detectors stay ahead of this cycle. Instead of analyzing only the video or image itself, detection algorithms can focus on how it is being used. Platforms already rely on such tools to identify fake accounts engaged in what they call "coordinated inauthentic behavior": campaigns by Iran, Russia, and other malicious actors to spread misinformation or discredit particular public figures. A context-aware deepfake algorithm might, for example, distinguish a Renoir-style deepfake portrait of a loved one from a deepfake showing a celebrity in lingerie or a political figure appearing drugged.
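As a rough illustration of what "focusing on use rather than content" can mean in practice, here is a minimal sketch of one such usage-based signal: flagging clusters of accounts that post identical media within a narrow time window, a classic marker of coordinated amplification. Everything in it is hypothetical (the data, the thresholds, the account names); real platform systems combine many such signals.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical post records: (account_id, media_hash, timestamp).
# In practice, media_hash would come from perceptual hashing of the image/video.
posts = [
    ("acct_1", "hash_A", datetime(2023, 3, 1, 12, 0, 5)),
    ("acct_2", "hash_A", datetime(2023, 3, 1, 12, 0, 9)),
    ("acct_3", "hash_A", datetime(2023, 3, 1, 12, 0, 14)),
    ("acct_4", "hash_B", datetime(2023, 3, 1, 15, 30, 0)),
]

WINDOW = timedelta(seconds=60)   # assumed coordination window
MIN_ACCOUNTS = 3                 # assumed minimum cluster size

def flag_coordinated(posts):
    """Flag media shared by many distinct accounts within a short window:
    a usage-based signal that works regardless of whether the media
    itself 'looks' fake."""
    by_media = defaultdict(list)
    for account, media, ts in posts:
        by_media[media].append((ts, account))

    flagged = []
    for media, events in by_media.items():
        events.sort()  # order posts of this media by time
        for start_ts, _ in events:
            window_accounts = {a for t, a in events
                               if start_ts <= t <= start_ts + WINDOW}
            if len(window_accounts) >= MIN_ACCOUNTS:
                flagged.append((media, sorted(window_accounts)))
                break
    return flagged

print(flag_coordinated(posts))
# -> [('hash_A', ['acct_1', 'acct_2', 'acct_3'])]
```

The design point is that nothing in this check inspects pixels: the same deepfake image is innocuous in one usage pattern and suspect in another, which is what lets behavior-based detection sidestep the detector-versus-generator arms race.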
The U.S. government and other democracies cannot tell their citizens what is or is not true, but they can require companies that produce and distribute synthetic media at scale to be more transparent about their algorithmic decisions. The public should know what a platform's rules are and how they are enforced. Platforms that disseminate deepfakes might even be obliged to let independent outside researchers study the effects of this media and verify that their algorithms operate in accordance with their stated rules.
Deepfakes will force many democratic institutions to change how they operate. The military will need exceptionally secure systems to verify orders and to ensure that automated systems cannot be triggered by potential deepfakes. Political leaders responding to crises will need to build in delays so they can confirm that the information before them is accurate and has not been distorted, even partially, by an adversary. Editors and journalists will have to maintain the discipline of fact-checking against multiple sources and treat surprising stories with caution. When uncertainty remains, an outlet might label a story with a prominent "this information not verified" warning.
Ultimately, it will fall to the public to distinguish trustworthy information sources from deceptive ones. Media literacy is a weakness in many democracies, but Finland is a notable exception. There, media literacy is woven into the curriculum starting in preschool, and libraries have become hubs for teaching media literacy to adults. Finland now ranks first in the world in resilience to misinformation.
By its nature, democracy cannot be protected by any single measure. Rather than adopt China's Ministry of Truth approach, free societies should place a premium on keeping conversation open and trusting the judgment of their citizens. The key is to begin this work before deepfakes infiltrate and overwhelm our information ecosystems. Once they do, suspicion and uncertainty will be far harder to contain.