Advertising revenue is soaring at Facebook and Twitter as consensus grows that people can profitably be influenced by promotional messages woven in between updates from their friends.

But not every commercial enterprise exploiting the persuasive power of social media has set up a corporate account or pays for ads. Fake accounts operated by low-paid humans or automated software have become good business, too. They are used to inflate follower counts, to push spam or malware, and even to skew political discourse. The tactic appears to be pervasive and growing in sophistication.

On Twitter, as many as one in 20 active accounts are fake; on Facebook, the equivalent figure is a little more than one in 100 active users. Software tools that help create new accounts in bulk can easily be found or bought online, says Christo Wilson, an assistant professor at Northeastern University who has studied the problem of fake accounts.

One of his students recently tested some of those tools and set up 40 Twitter accounts and 12 Facebook accounts in a single day before the companies blocked new registrations from that Internet connection. Simple evasive measures would probably have allowed many more accounts to be made. Investors closely scrutinize active-user counts to gauge the value and potential of social networks. That scrutiny encourages sites to make sure their security systems don’t block legitimate users, says Wilson, which in turn makes it easier for fake accounts to flourish.

Fake accounts are given a veneer of humanity by copying profile information and photos from elsewhere around the Web. They can gain fake friends by exploiting human nature and the fact that people on a social network are often looking for new connections and content. “Choose a picture of a beautiful woman, and all of a sudden people accept your friend request,” says Wilson. Celebrities often have large numbers of fake followers because following famous accounts is something many real users do, and aping that behavior is an easy way to make a fake account look legitimate.

Once a fake account is established, the simplest way to make money with it is by quickly inflating the numbers of things like followers or “likes.” It is easy to find sites offering 100,000 new Twitter followers for as little as $70. Instagram and Facebook “likes” and Pinterest “pins” are also easily bought. Having more followers or likes helps people and businesses look good. It can also influence the algorithms used by social networks or other companies to recommend influential accounts.

Fake accounts have also been used in more sophisticated ways: to manufacture the appearance of social support for a cause or product, and to draw real users into joining in. The accounts are controlled either by software or by Internet users in developing countries who are paid a few cents per action.

In 2010, a conservative group in Iowa used automated accounts to send messages supporting Republican candidate Scott Brown’s attempt to win a Massachusetts seat in the U.S. Senate. Thanks to retweets by some real users, the messages reached an audience of 60,000. In Mexico’s 2012 general election, the Institutional Revolutionary Party used more than 10,000 automated accounts to swamp online discussion. Brown and the PRI both won their races, although it’s not clear what impact these social-media manipulations had.

Recently, automated accounts have been seen staging more commercial campaigns. A 2014 study of 12 million users of China’s influential Weibo social network, which is similar to Twitter, found 4.7 million accounts involved in campaigns that try to manufacture word-of-mouth support for particular products. Most were automated accounts that amplified messages mentioning products or services posted by people with large followings (messages likely paid for by the brands behind them). Also in 2014, automated tweets were part of a scam that inflated the market value of the penny-stock tech company Cynk to $5 billion in just a few days.

Filippo Menczer, a professor at Indiana University, says more sophisticated “social bots” that engage with other users are probably active on Twitter and other networks but escaping detection. Research experiments with such bots have shown that they can successfully gain social capital and even shape the social connections humans make with one another, says Menczer.

As social networks become more tightly coupled to personal spending and wider economic activity, the incentives to use them grow stronger, Menczer says.

In 2014, the security company Bitdefender picked up a social bot using names including “Aaliyah” that was stalking men on the casual-dating app Tinder. Aaliyah would start a simple, scripted conversation, then ask the victim to play a particular social game, offering her phone number in exchange. The scam didn’t have a clear business model, but Bogdan Botezatu, a senior threat analyst at Bitdefender, believes it was “a test run for something much bigger.”

The Pentagon’s research agency DARPA, which has its own concerns about what it calls “deception or misinformation campaigns” in social media, sponsored a contest in which teams of researchers competed to detect social bots at work in a Twitter-style social feed. Menczer, who took part, hopes the contest will lead to tools that are better at policing real social networks. “It’s kind of scary that we don’t know how to detect these kind of bots and campaigns if they are out there,” he says.