A draft update to the rules would dramatically increase the power of China’s censorship machine, but platforms will pay the price.
China is fine-tuning its censorship machine, this time proposing changes to how it regulates the billions of online comments posted in the country every day.
On June 17, the internet regulator Cyberspace Administration of China (CAC) published a draft update on the responsibilities of platforms and content creators in managing online comments. One line stands out: all online comments would have to be pre-reviewed before being published. Users and observers are worried that the move could be used to further tighten freedom of expression in China.
The new changes affect the Provisions on the Management of Internet Post Comments Services, a regulation that first came into effect in 2017. Five years later, the Cyberspace Administration wants to bring it up to date.
“The proposed revisions primarily update the current version of the comment rules to bring them into line with the language and policies of more recent authority, such as new laws on the protection of personal information, data security, and general content regulations,” says Jeremy Daum, a senior fellow at Yale Law School’s Paul Tsai China Center.
The provisions cover many types of comments, ranging from forum posts and replies to messages left on public message boards and “bullet chats” (an innovative way that video platforms in China display real-time comments on top of a video). All formats, including text, symbols, GIFs, pictures, audio, and video, fall under the regulation.
There’s a need for a stand-alone regulation on comments because their sheer volume makes them difficult to censor as rigorously as other content, like articles or videos, says Eric Liu, a former censor for Weibo who’s now researching Chinese censorship at China Digital Times.
“One thing everyone in the censorship industry knows is that nobody pays attention to the replies and bullet chats. They are moderated carelessly, with minimum effort,” Liu says.
But recently, there have been several awkward cases where comments under government Weibo accounts went rogue, pointing out government lies or rejecting the official narrative. That could be what has prompted the regulator’s proposed update.
Chinese social platforms are currently on the front lines of censorship work, often actively removing posts before the government and other users can even see them. ByteDance famously employs thousands of content reviewers, who make up the largest single group of employees at the company. Other companies outsource the task to “censorship-for-hire” firms, including one owned by China’s party mouthpiece People’s Daily. The platforms are frequently punished for letting things slip.
Beijing is constantly refining its social media controls, closing loopholes and introducing new restrictions. But the vagueness of the latest revisions makes people worry that the government may ignore practical challenges. For example, if the new rule mandating pre-publish reviews is to be strictly enforced—which would require reading billions of public messages posted by Chinese users every day—it will force the platforms to dramatically increase the number of people they employ to carry out censorship. The tricky question is that no one knows whether the government intends to enforce this immediately.
One specific change, concerning “先审后发” (“review before publishing”), a censoring practice some Chinese social media platforms use to vet content before it’s even published, has particularly caught people’s attention. On Weibo, the popular Twitter-like service, such stricter control measures are currently applied only to accounts that have violated content censorship rules before, or when there’s an ongoing heated discussion about a sensitive topic. The 2017 version limited such reviews to “comments under news information,” so the practice didn’t need to be applied universally. But the new update removes that restriction.
On social media, some Chinese users are worried that this means the practice could be expanded to cover every single comment online. Under one Weibo post about the change, the most-liked comment reads, “Is this restriction necessary? If only there’s a guarantee it won’t be abused.”
That is an extreme interpretation of the proposed change, says Liu, because censoring every comment would impose astronomical costs on social media platforms. It’s unlikely Beijing will go so far as to enforce blanket pre-publish censorship; Liu says the revisions are more likely intended to force platforms to take more responsibility for moderating the comments section, which has traditionally been neglected.
Whether there is a pre-publish censorship system in place can determine where online social protests break out. In April, a video about the Shanghai covid lockdown went viral on WeChat Channels but not Douyin, the Chinese version of TikTok—partly because the latter platform reviews every video before it’s published, while the former didn’t at the time.
The regulator is now seeking public comments on the proposed revisions until July 1, 2022, and they may not take effect for many months. Right now, discussions about how strictly they will be enforced are only speculative. But it’s clear that China is identifying the Great Firewall’s loopholes and updating its regulations to address them. The most recent changes are “unapologetically part of China’s continued expansion of content regulations beyond mainstream media to now cover user content generated through comments and other interactive features,” says Daum.
The changes would also expand who can censor online comments. The CAC now asks that platforms share the power of censoring comments with content creators—in Chinese internet lingo, “public account operators.” Government-affiliated accounts are already empowered to do this on sites like Weibo. If this revision becomes law, creators will also become part of the censorship machine, responsible for identifying “illegal or negative” content and reporting it.
“Although China’s internet is one of the most censored in the world, there is still some space for discussing sensitive topics. People can play a clever cat-and-mouse game with censors and make creative adjustments once posts are censored,” says William Nee, research and advocacy coordinator at Chinese Human Rights Defenders.
“However, the new system could make that next to impossible and tighten the already limited space for freedom of expression on sensitive topics even further.”