Oh, great. Bots are buying ads now.

Facebook said this week that an internal investigation uncovered 470 fraudulent Facebook accounts and pages linked to a Russian "troll farm" that bought 3,300 Facebook ads costing around $100,000 during the recent U.S. presidential election.

Facebook found another 2,200 ads, costing $50,000, that appear to have originated in the U.S., but with the user language set to Russian.

These ads reached between 23 million and 70 million Facebook users, according to one report, and weren't discovered until more than a year after the first ones were placed.

Facebook refused to share the content of the ads with Congress and the media, saying the disclosure would violate its privacy rules (despite its admission that the accounts were fake - Facebook is protecting the privacy of users who don't exist).

The ads "appeared to focus on amplifying divisive social and political messages across the ideological spectrum," according to Facebook.

Wait, what's a 'troll farm'?

The organization that spent all that money and placed all those ads was formerly known as the Internet Research Agency. It's a Russia-based company profiled two years ago in The New York Times that has, according to a source quoted in the story, "industrialized the art of trolling." The Agency bases its pay on troll performance, and even offers English grammar classes so employees' posts can be more easily accepted as coming from Americans.


After the U.S. election, it appears that the Internet Research Agency changed its name to Glavset and may have spun out a sister company called Federal News Agency, which reportedly publishes 16 propagandistic news websites and employs more than 200 full-time journalists and editors.

Lost in the chatter about this "troll factory" and its connection to the Kremlin is the fact that it's a private company owned by businessman and restaurateur Evgeny Prigozhin.

His "product" is literally disinformation as a service (DaaS).

Another investigative report published Thursday by The New York Times, researched with help from cybersecurity firm FireEye, found that Russian bots, users or organizations created hundreds or thousands of "fake Americans" in the form of fake accounts. These non-existent Americans then tried to influence online conversation about the election. Many of the phony accounts fired off "identical messages seconds apart - and in the exact alphabetical order of their made-up names."
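The giveaway patterns described in that report - identical text posted seconds apart, in the alphabetical order of the account names - are simple enough to check for programmatically. Here's a minimal sketch in Python; the post records, field layout and thresholds are hypothetical, not FireEye's actual method:

```python
from collections import defaultdict

# Hypothetical post records: (account_name, unix_timestamp, text)
posts = [
    ("amelia_usa", 100.0, "Vote for change!"),
    ("bobby_patriot", 101.5, "Vote for change!"),
    ("carol_freedom", 103.0, "Vote for change!"),
    ("dana_real", 200.0, "Nice weather today"),
]

def flag_coordinated(posts, window_seconds=10, min_accounts=3):
    """Flag groups of accounts that post identical text within a short
    window, and note when they fire in alphabetical account order -
    the pattern the Times/FireEye report describes."""
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text].append((ts, account))

    flagged = []
    for text, items in by_text.items():
        items.sort()  # chronological order
        timestamps = [ts for ts, _ in items]
        accounts = [acct for _, acct in items]
        if (len(items) >= min_accounts
                and timestamps[-1] - timestamps[0] <= window_seconds):
            alphabetical = accounts == sorted(accounts)
            flagged.append((text, accounts, alphabetical))
    return flagged

for text, accounts, alphabetical in flag_coordinated(posts):
    print(text, accounts, "alphabetical order:", alphabetical)
```

Real detection pipelines are far more involved, but even this toy heuristic would catch the behavior quoted above: three accounts blasting the same message within seconds, in exact alphabetical order.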

Welcome to the world of 'computational propaganda'

Computational propaganda is the use of automation, botnets, algorithms, big data and artificial intelligence (A.I.) to sway public opinion via the internet.

Disinformation campaigns using computational propaganda techniques emerged globally in 2010, and reportedly went mainstream last year during the U.S. presidential campaign.

"The overall political goal of disinformation is to confuse and to erode your trust in information altogether," according to University of Washington professor and researcher Kate Starbird.

The main targets have so far been governments and the media. The technique used by actual fake news outlets is to consistently attack real news outlets as fake news. The objective is to get as many people as possible to throw up their hands and conclude, "It's all fake news; no source can be trusted." That confusion weakens the power of the press to hold politicians accountable and also weakens public trust in all democratic institutions.

The phrase "computational propaganda" was coined by, and is closely associated with, the Computational Propaganda Project at Oxford University.

The future of fake news

The computer-enhanced disinformation campaigns launched by Russia and others are fairly crude, and the effort to cover their tracks has been limited. The future of disinformation is likely to be much more sophisticated and harder to defend against.

Disinformation is rapidly going multimedia, for example. Advances in A.I. and CGI will enable convincing audio and video that can make it appear that anyone is saying or doing anything.

University of Washington researchers used A.I. to create a fake video showing former president Barack Obama saying things he never actually said. And Stanford researchers developed something they call Face2Face, which creates real-time faked video, so basically anybody can be shown to say anything in a live video chat.

These techniques aren't perfect. But given time and better technology, they will be.

Adobe, as well as a Canadian startup called Lyrebird, has demonstrated convincing fake voices of famous people, which can be made to say anything at all.

The Stanford and Adobe techniques could enable real-time spoofing of people on the phone or through video chat. That could be a new way to plant fake news in real media, by tricking real journalists with imposter sources. It could also be used for a social engineering technique called "CEO fraud," where an important and well-known person in a company calls an underling and asks them to do something - like transfer funds to an offshore account or send sensitive documents to someone.

Another glimpse at the future of DaaS comes from Cambridge Analytica, which was used to help elect President Trump. The company reportedly performs psychological profiles of individual social media users, then serves them custom ads that appeal to their particular obsessions, fears and aspirations. In the future, every political ad could be unique to each voter and sway public opinion under the radar and beyond scrutiny.

Get ready for Antidisinformation Technology

This publication is all about IT, which of course stands for "information technology."

Enterprise technology is all about managing, protecting, storing, processing, using, accessing, sharing and generally taking advantage of what is assumed to be true, factual information. This true information comes under threat from user error, hackers, natural disasters and other calamities. Security is paramount, because rivals and criminals want to profit from stealing true information - or holding it for ransom.

In the very near future, it's likely that the focus of IT security will be forced to shift from keeping information safe to keeping information true.

Let me give you two dark scenarios.

Industrial espionage - the theft of trade secrets - costs the U.S. economy between $225 billion and $600 billion per year, according to the Commission on the Theft of American Intellectual Property.

Trade secret thieves steal information because they're lagging behind, and want to level the playing field quickly and cheaply. But what if they could level it by scrambling the trade secrets of competitors, and falsifying data internally? What if they could manufacture evidence of criminal wrongdoing by planting email conversations that never occurred or use fake information to blackmail corporate officers? What if they could dry up investment by feeding false information to investors?

And disinformation could be massively useful for negative marketing.

In fact, this is what's happening in politics. During the Cold War, the West said "Democracy is better!" And the Soviet Bloc said "No, Communism is better!"

But now that's been reversed. Russian citizens know that Russia has major problems, such as official corruption, and are inclined to believe that America and Western Europe are better governed. So rather than try to convince Russians that Russia is better, the Kremlin's propagandists have set out instead both to demonstrate that the West is in chaos and to actually throw it into chaos through computational propaganda that undermines democracy.

The hit on business

The same could work with marketing. Let's say some unscrupulous company in Kleptocrastan makes smartphones that compete with Apple's iPhone. Instead of touting the benefits of their phone, they could burn Apple's reputation globally through computational propaganda. Using Facebook's analytics, they could find every potential iPhone buyer, and do a psychological profile on each person. They could then serve up relentless posts from fake users sharing fake news about Apple's environmental abuses to this user, and electrical hazards to that user, and the high cost of repairs to yet another user - exploiting each individual person's most pressing concerns.

They could fabricate data on iPhone reliability, plant stories about violent attacks against iPhone users and create fake videos showing Apple executives abusing workers. All this false information could be guided along by A.I. and delivered invisibly, behind the scenes and under the radar.
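Mechanically, the per-user targeting described above is just a lookup: map each person's dominant concern to the message crafted for it. A toy sketch in Python - all names, profiles and messages are hypothetical, invented purely to illustrate the matching step:

```python
# Hypothetical psychological profiles: each user's dominant concern.
profiles = {
    "user_a": "environment",
    "user_b": "safety",
    "user_c": "cost",
}

# One tailored (fabricated) story per concern, echoing the examples above.
messages = {
    "environment": "story about environmental abuses",
    "safety": "story about electrical hazards",
    "cost": "story about high repair costs",
}

def targeted_feed(profiles, messages):
    """Pair every user with the message matching their top concern."""
    return {user: messages[concern] for user, concern in profiles.items()}

print(targeted_feed(profiles, messages))
```

The hard (and expensive) part is building the profiles in the first place; once they exist, delivering a different lie to every user is trivial to automate.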

All this could be accomplished by a growing industry of disinformation-as-a-service providers that specialize in stealth and client confidentiality.

The DaaS company that bought those Facebook ads on behalf of its client isn't some weird Russian anomaly that will go away. It's just ahead of the curve.

Disinformation is about to become big business - and one that targets big business.