TikTok, a short-video platform owned by Beijing-based ByteDance, has attracted a huge, mainly young audience around the world, for whom it is currently serving as a vital source of distraction and connection amid the COVID-19 pandemic. It has been described in The Washington Post as “the first Chinese app to truly pierce the global Internet mainstream,” and in a recent New Yorker profile as “the last sunny corner on the Internet.” The former distinction has prompted concern that the service could provide a channel both for Chinese censorship to shape the information diets of young people elsewhere, and for private user data to flow back to Beijing. These fears have been fueled by a series of leaks including, as The Guardian reported in September, rules for handling political content that appear to place Chinese censorship priorities “in a context designed to make the rules seem general purpose, rather than specific exceptions.” The company has repeatedly dismissed such leaks as crude and outdated provisional measures from the platform’s early days.

This week, The Intercept’s Sam Biddle, Paulo Victor Ribeiro, and Tatiana Dias reported on two new TikTok documents which “sources indicated […] were in use through at least late 2019,” and one of which was apparently created only last year.

These previously unreported Chinese policy documents, along with conversations with multiple sources directly familiar with TikTok’s censorship activities, provide new details about the company’s efforts to enforce rigid constraints across its reported 800 million or so monthly users while it simultaneously attempts to bolster its image as a global paragon of self-expression and anything-goes creativity. They also show how TikTok controls content on its platform to achieve rapid growth in the mold of a Silicon Valley startup while simultaneously discouraging political dissent with the sort of heavy hand regularly seen in its home country of China. Any number of the document’s rules could be invoked to block discussion of a wide range of topics embarrassing to government authorities: “Defamation … towards civil servants, political or religious leaders” as well as towards “the families of related leaders” has been, under the policy, punishable with a terminated stream and a daylong suspension. Any broadcasts deemed by ByteDance’s moderators to be “endangering national security” or even “national honor and interests” were punished with a permanent ban, as were “uglification or distortion of local or other countries’ history,” with the “Tiananmen Square incidents” cited as only one of three real world examples. A “Personal live broadcast about state organs such as police office, military etc,” would knock your stream offline for three days, while documenting military or police activity would get you kicked off for that day (would-be protestors, take note). 
[…] Multiple TikTok sources, who spoke with The Intercept on the condition of anonymity because they feared professional and legal reprisal, emphasized the primacy of ByteDance’s Beijing HQ over the global TikTok operation, explaining that their ever-shifting decisions about what’s censored and what’s boosted are dictated by Chinese staff, whose policy declarations are then filtered around TikTok’s 12 global offices, translated into rough English, finally settling into a muddle of Beijing authoritarianism crossed with the usual Silicon Valley prudishness. [Source]

The Intercept’s report describes guidelines similar to some previously reported by Germany’s Netzpolitik.org, which involved limiting the audiences of overweight, disabled, and other groups of users, purportedly to avoid exposing them to potential bullying. That rationale itself drew criticism, but in The Intercept’s documents, the policy is explained instead as intended to help the platform “retain an aspirational air to attract and hold onto new users.”

While these and other aspects of TikTok’s content management have sparked criticism, suspicions of political censorship and abuse of user data pose the greatest risk to the company, having already drawn sharp scrutiny from U.S. legislators and regulators. The firm has responded with an array of measures, including hiring lobbyists and commissioning outside reviews of its practices. On Sunday, The Wall Street Journal’s Yoko Kubota, Raffaele Huang, and Shan Li reported the company’s latest attempt at reassurance: the completion of its shift away from using China-based moderators to review content from other countries:

The decision will result in the transfer of more than 100 China-based moderators to other positions within the company, according to people familiar with the matter. The move is the latest effort by TikTok’s owner Bytedance Inc. to distance itself from concerns about it being a Chinese-operated company. The soaring popularity of TikTok has attracted the attention of some American lawmakers worried about its Chinese roots. TikTok is Bytedance’s short-video app for markets outside of China. While much of TikTok’s content moderation procedures have been localized over the past year or two—including in the U.S., where none of its videos are monitored by moderators in China, according to a TikTok spokesman—some markets such as Germany still rely on human moderators in China to review content. […] It is unclear how the shift will affect the level of content-monitoring by the company. [Source]

In the U.S., TikTok this week announced the first members of a new panel of content moderation advisors. From Reuters’ Elizabeth Culliford:

The council, which it announced in October, will meet every few months to give “unvarnished views” and advice on content moderation policies and evaluate the company’s actions. […] TikTok said its ‘Content Advisory Council’ will grow to about a dozen members. The council’s first meeting at the end of March will focus on topics around “platform integrity, including policies against misinformation and election interference.” The group will be chaired by Dawn Nunziato, a professor at George Washington University Law School and co-director of the Global Internet Freedom Project. The other six founding members include Hany Farid, an expert on deepfakes and digital image forensics, tech ethicist David Ryan Polgar, and experts on issues from child safety to voter information. [Source]

TechCrunch’s Sarah Perez reported last week on another new measure, the launch of a “transparency center” at the platform’s North American headquarters in Los Angeles.