China is implementing new rules to restrict the production of deepfakes: media generated or edited by artificial-intelligence software that can make people appear to say and do things they never did. Regulators say the rules are intended to curb the spread of false information and protect people's privacy.
Starting Tuesday, Beijing's internet regulator, the Cyberspace Administration of China, will begin enforcing a regulation on "deep synthesis" technology, which includes AI-powered image, audio and text-generation software. This marks the world's first comprehensive attempt by a major regulatory agency to curb one of the most explosive and controversial areas of AI advancement.
The same technologies that power popular applications like ChatGPT and Lensa also pose new challenges: they can generate increasingly deceptive media, which could fuel misinformation and cast doubt on the veracity of anything in the digital realm.
The new regulations prohibit the use of AI-generated content for spreading "fake news," or information deemed disruptive to the economy or national security. They also require providers of deep synthesis technologies, including companies, research organizations and individuals, to prominently label images, videos and text as synthetically generated or edited when they could be misconstrued as real.
The new rules, first published on December 11, follow a similar set of regulations unveiled in August governing the algorithms that underpin China's most powerful internet platforms.
The new rules on deepfake-generation tools and algorithms will offer a test of Beijing's ability to manage a fast-evolving set of new technologies that is befuddling regulators the world over, technology policy analysts say.
U.S. lawmakers have sought to address the proliferation and potential abuse of deepfakes, but those efforts have stalled over free-speech concerns. In the European Union, regulators are further along but have taken a more cautious approach than China, strongly recommending that platforms find ways to mitigate the ability of deepfakes to spread disinformation, without banning them outright, says Matthias Spielkamp, executive director of Berlin-based AlgorithmWatch, a nonprofit research and advocacy organization. Spielkamp notes that the EU's approach is more in line with protecting freedom of expression, while China's approach is more focused on preventing the spread of misinformation.
China's attempt at regulation indicates that Beijing is heavily influenced by the global debate surrounding the technology, said Graham Webster, a Stanford University research scholar who runs the DigiChina Project, which tracks China's digital-policy developments.
While much of the world is still assessing the technology's potential harms, China is moving forward with mandatory rules and enforcement, giving observers elsewhere an opportunity to see how such measures work in practice.
Early deepfakes were synthetic images that could pass for photographs of people or objects that don't exist. The underlying technique emerged from an open-source algorithm published in 2014 and has since been used for both research and abuse. For example, some people have used deepfakes to create pornographic videos without the consent of the people whose faces are used.
Generative AI technologies have continued to advance and expand into different mediums, including illustrations, video, voice, text and chat conversations. These technologies have been driven by their growing utility in commercial and entertainment applications.
There are a variety of apps that use generative algorithms to create face-swaps and other effects. A 2021 documentary about the late celebrity chef Anthony Bourdain used a deepfake re-creation of his voice to bring to life words that he had written but not spoken.
Generative technologies have the potential to create deepfakes, which can sow chaos during elections or wars. The mountains of data used to train newer-generation AI software such as ChatGPT and Lensa have sparked widespread concerns about data privacy and consent.
While the U.S. has been at the forefront of generative AI advancements, several Chinese companies and institutes have also been working to develop their own algorithms or repurpose existing ones for entertainment or commercial purposes. These include Baidu Inc., Tencent Holdings Ltd. and the government-backed Beijing Academy of Artificial Intelligence. Their efforts have resulted in the creation of Chinese-style paintings and digital avatars, among other things.
As face-swapping technology has become more advanced, there have been growing reports of it being used to defraud and scam people in China. In one case, a man in the port city of Wenzhou alleged that criminals used face-swapping technology to pose as a friend and swindle him out of roughly $7,200.
After an AI consumer app went viral in China for animating still photos of people’s faces into comedic videos, the internet regulator summoned 11 companies, including Alibaba Group Holding Ltd., Tencent and TikTok operator ByteDance Ltd., to better understand security issues around deep synthesis technologies, according to a statement from the agency.
The challenge of regulating deepfakes in the U.S. and EU has been to find a balance between mitigating the technology’s negative effects and preserving legitimate forms of speech such as political satire, according to Sam Gregory, program director of New York-based human-rights nonprofit Witness. He notes that the Chinese regulations take a different approach, extending Beijing’s already-restrictive controls on speech to the new medium.
"The new regulations are clearly framing a vast range of use cases where satirical speech would not be acceptable," said Mr. Gregory, who has studied them.
According to Mr. Gregory, some aspects of China's regulations are in line with emerging norms in other parts of the world. For example, Beijing's new rules require the visible labeling of AI-generated content for users as well as digitally watermarking them. These measures are seen as among the most effective ways to counter the deceptive impact of deepfakes and enable internet platforms to address those deemed to violate rules.
Beijing's new rules will provide a valuable case study for observers outside of China on how such rules can work in the real world, said Mr. Webster, the Stanford research scholar. "It's one of the world's first large-scale efforts to try to address one of the biggest challenges confronting society."