Google, Facebook and LINE claim to have toolkits to stop the spread of disinformation on their platforms. In reality, content farms and propaganda peddlers are way ahead of the game.

By Jason Liu (劉致昕), Ko Hao-hsiang (柯皓翔) and Hsu Chia-yu (許家瑜)

Photography by Su Wei-ming (蘇威銘)

Design by Brittany Myburgh

Translation by Harrison Chen

This piece first appeared in The Reporter (報導者) and is published under a creative commons license.

In part one of this series, we met the uncles and aunties who fall prey to Chinese disinformation on messaging app LINE. In parts two and three, we learned about the scammers and agitprop hucksters who produce the misleading content. In part four, we talk to the social media companies that host the fake news and hear about their solutions to quash the problem.

BUSINESS OPPORTUNITIES IN THE INFORMATION WARFARE WHACK-A-MOLE

From a single Google Analytics ID and a private Telegram group of 481 people, a mysterious figure named “Boss Evan” was able to rule a content farm empire that spanned Taiwan, Malaysia, Singapore, Hong Kong and China. He abetted the careers of fellow content farm operators, like the owner of the Ghost Island Mad News site, who set up their own series of fan pages and websites.

There are also different kinds of gold prospectors working the disinformation streams, from self-employed operators to teams of 500 or more. But Boss Evan's massive network of websites, which we learned about in part three, is only the tip of the iceberg; there are at least six other major content farm networks. Add in the political parties who contract operators to produce content during election campaigns, and it's nearly guaranteed that content farms will keep competing for our attention.

HOW POLITICS SLIPS THROUGH THE THIN LINE BETWEEN INFORMATION AND NEWS

Amidst this chaos, many are amazed there’s even a market for this business, and wonder what kind of impact these operations really have in Taiwan.

Cheng Yu-chung (鄭宇君), a professor at National Chengchi University, points to trends from the past ten years to explain the rise of content farms. First, in the past it was easy to distinguish between news reports and advertisements on a newspaper page. But as media outlets embraced advertorials and sponsored content, the line gradually thinned, and readers’ ability to tell the two apart diminished.

Second, when people use social media to receive their news, whether or not they decide to read something usually depends on who shared it. That is, they pay less attention to which media company published it or which author wrote it. This is a boon for content farms and their anonymous authors. Cheng stresses that an accountability mechanism must be established for media outlets and key opinion leaders; without one, there is no way to hold them responsible for what they spread. In an environment where one cannot distinguish between news and information, “it becomes a loophole that Chinese party-state media can exploit.”

The final reason is the way platform algorithms work. “Google’s algorithms put a high value on links and click rates. Content farms create articles with SEO (search engine optimization) in mind, but news is created with the intention of reporting facts... When people use these methods to game Google, the platform has no way to tell what is real and what is fake.”

On this point, the web companies insist they are constantly looking for solutions, including tweaking their algorithms and suspending violators. Taiwan's top three platforms for disinformation -- Google, Facebook and LINE -- have all introduced policies to improve transparency and educate users. At the same time, in the name of protecting freedom of speech, the task of separating real from false information has been mostly left to volunteers from a handful of third parties and nonprofits.

FACT-CHECKERS CAN’T KEEP UP

These third-party fact-checkers have few resources and little manpower. In the battle against disinformation, this frontline is the most thinly stretched and fatigued.

A worker at one of these third-party organizations told us that in the worst-case scenario, it can take up to two months to verify a suspected piece of false information, and that verification is even harder when the content contains images or video. The list of content that Facebook sends them is ranked by priority, but they are not told how this priority is determined nor where the content comes from. As for LINE, the fact-checkers now suspect that producers of disinformation have found a way to stuff the fact-checking queues with actual news, making their work even more difficult.

One nonprofit told us that cutting off the flow of Google AdSense money is the key to controlling disinformation, but their attempts to use Google’s reporting mechanism have so far received no response. They warn that even though YouTube has become more proactive in fighting disinformation in Taiwan, the platform cannot keep up with the volume and the evolving methods of disinformation.