Other bloggers (dolboeb, man_with_dogs, and aiden-ko, to name just a few) got to the core of the issue and came up with research-like posts [ru] on their LiveJournals. They talked about a peculiar posting on Free-lance.ru, a Russian website with job ads for people working in the IT field. The posting has since been deleted, but man_with_dogs has a saved screenshot [ru] of the original.



“I need 5 people,” the ad says. “Each of them will leave 70 comments a day from 50 different accounts (the accounts need to be live). Urgently. The job is 5 days a week. The duration of this project is 3 months. The payment is every 10 days (Webmoney, Yandex Money [methods of payment - G.V.]). Total: 12,000 rubles [around $400 - G.V.] a month.”



The author of this ad, someone named Vladimir Alekseev (probably a fake name, since it sounds too conventional), also provided the details of the “job.” The human bots need to target the blog of navalny.



The task is to create a maximally believable wave of comments to degrade the rating of the journal's author and to form a negative attitude toward him. You need to comment on each new post correctly and persuasively. It is also important to create a positive image of the “United Russia” party [the ruling party in Russia - G.V.]. Can you do it?

Human bots in Russia are more effective than good old automatic spam bots. They have a soul and a brain. They react logically to blog posts, and their strength is in their numbers. Evgeny Morozov's idea of the “spinternet” applies well here, as the practice of promoting certain points of view online becomes more and more prevalent in Russia.

The existence of China’s 50-cent party is well known. But now it seems Russia is attempting to form its own army of online contributors, who are paid a small sum to comment on articles or in forums critical of the ruling elite. Vadim Isakov at Global Voices has a post shedding some light on this new wave of “human bots,” after a number of Russian opposition bloggers noticed numerous critical comments coming from users with accounts created on the fly.

The key to controlling the narrative is in the word “believable.” Automated comment bots do exist and have a lot of success, particularly on websites without CAPTCHAs, those little boxes of numbers or letters you enter to confirm you are human. (Some smarter comment bots can get around CAPTCHAs.) They’re also increasingly sophisticated -- instead of just a semi-literate line peddling porn or penis enlargement, they can give the impression of being tangentially on-topic -- until, that is, you click on the included link and end up with porn or homeopathic remedies for gout. But even at their best, bots just seem too anomalous, a gigantic non sequitur of a conversation partner.

Thus, it looks like the Russian authorities want something more meaningful and realistic to appear in the comments of critical articles. As Isakov writes:

The Russian Internet is remarkably free in terms of filtering, with the authorities preferring to shape the narrative instead of banning dissent (for instance, by calling for Facebook and other online forums to be regulated during election time, or by setting up schools for bloggers and hackers). The Kremlin also likes to create the experience, but not the reality, of a democratic process via the Internet. And if that fails, they help create a climate of impunity in which crusading journalists get their legs broken. If it plays out, “human bots” are also a pretty typical public-private partnership in Putin-Medvedev’s Russia.
In addition to the paid commentators, Russia’s 30-ruble army, there are hordes of volunteers -- those Russians who trawl forums and articles because they have an axe to grind and genuinely feel the need to speak up for their motherland. It is hard to prove, but the Russian authorities’ approach to cyberwarfare seems to be much the same: outsource some of it to professionals and allow nationalist script kiddies to do a lot of the heavy lifting in the DDoS attacks.