
Whoever said, “Money can’t buy you friends,” clearly hasn’t been on the Internet recently.

This past week, I bought 4,000 new followers on Twitter for the price of a cup of coffee. I picked up 4,000 friends on Facebook for the same $5 and, for a few dollars more, had half of them like a photo I shared on the site.

If I had been willing to shell out $3,700, I could have made one million — yes, a million — new friends on Instagram. For an extra $40, 10,000 of them would have liked one of my sunset photos.

Retweets. Likes. Favorites. Comments. Upvotes. Page views. You name it; they’re for sale on websites like Fiverr and countless others.

Many of my new friends live outside the United States, mostly in India, Bangladesh, Romania and Russia — and they are not exactly human. They are bots, or lines of code. But they were built to behave like people on social media sites.

Bots have been around for years and they used to be easy to spot. They had random photos for avatars (often of a sultry woman), used computer-generated names (like Jen934107), and shared utter drivel (mostly links to pornography sites).

But today’s bots, to better camouflage their identity, have real-sounding names. They keep human hours, stopping activity during the middle of the night and picking up again in the morning. They share photos, laugh out loud — LOL! — and even engage in conversations with each other. And there are millions of them.
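The "human hours" trick the article describes is simple to picture in code. The sketch below is purely illustrative — the function names and time windows are my assumptions, not taken from any real bot software — but it shows how a script can stay silent overnight and space its posts at irregular intervals so its activity looks less machine-regular.

```python
import random
from datetime import datetime, time

# Illustrative sketch only: names and values are assumptions,
# not drawn from any actual bot-management tool.

WAKE = time(7, 30)   # bot "wakes up" in the morning
SLEEP = time(23, 0)  # bot "goes to sleep" at night

def is_awake(now: datetime) -> bool:
    """Post only during plausible human waking hours."""
    return WAKE <= now.time() <= SLEEP

def next_post_delay_seconds() -> float:
    """Random gap between posts, so activity is not perfectly periodic."""
    return random.uniform(60, 1800)  # anywhere from 1 to 30 minutes

if __name__ == "__main__":
    print(is_awake(datetime(2014, 4, 20, 3, 0)))   # middle of the night
    print(is_awake(datetime(2014, 4, 20, 10, 0)))  # mid-morning
```

A scheduler built this way never posts at 3 a.m. and never posts at exact intervals — two of the telltale patterns spam filters look for.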

These imaginary citizens of the Internet have surprising power, making celebrities, wannabe celebrities and companies seem more popular than they really are, swaying public opinion about culture and products and, in some instances, influencing political agendas.

“I’ve been working with these social bots for a really long time, and now they look like real people online — even though they are not,” said Tim Hwang, chief scientist at the Pacific Social Architecting Corporation, a research group that explores how bots and technologies can shape social behavior.

There are a number of different ways to build bots. One of the most popular bot management tools is a program called Zeus, which sells for $700 and offers a simple dashboard from which you can control your bot army. (In addition to creating social media bots, the program is used for more nefarious purposes, like identity theft.) More advanced programmers build their own bot farms from scratch.

Bots often carry the hashtags — online road signs for a particular discussion — of viewpoints that their owners actually oppose, to try to confuse people or muffle or redirect discussions.

During the 2012 presidential elections in Mexico, the Institutional Revolutionary Party, or PRI, was accused of using tens of thousands of bots to drown out opposing parties’ messages on Twitter and Facebook. The PRI is said to have employed a little trickery, parsing and twisting language enough to confuse people about what the opposition really meant to say online.

Over the years in Syria, a number of bot groups have cursed, browbeaten and threatened anyone tweeting favorably about protests or opposition leaders.

In Turkey, where Twitter was briefly banned not long ago, an investigation found that every political party controlled bots that tried to push topics favoring one political ideal over another into the trending lists of social sites. The bots would also use a political group’s slogan as a hashtag, with the intent of fooling people into believing it was more popular than it really was.

A man I spoke with who would identify himself only as “Simon Z” operates Swenzy, which he says is based in the United States. It sells followers, likes, downloads, views and comments on social sites.

He says his company is using artificial intelligence and other digital maneuvers to stay ahead of the bot hunters at big Internet companies like Google, Facebook and Twitter, which spend plenty of time trying to scrub bots from their sites. Sometimes it works — at least for a while.

Before Twitter’s public stock offering, the company scrubbed millions of bots from the service. Over the years, Google has removed hundreds of millions of video views on YouTube attributed to bots.

“There’s an evolutionary process at work where companies have built better spam filters, which has led to better bots,” said Mr. Hwang.

Simon Z’s bots act like people by acquiring information from real users, including avatars, pictures and other conversations. With all of these tricks, he said, they appear to “emulate human behavior.”

He said he now operates 100,000 very advanced bots that are active on numerous networks, including YouTube, Facebook, Twitter, Vine, Instagram and SoundCloud, an audio sharing site. When buyers make significant orders of bots, he said he can go to “underground suppliers” who operate larger bot farms. (It is not illegal to own or make bots; the legality depends on how people use them. Their use often violates social media sites’ terms of service.)

His clients include celebrities, musicians and politicians who want to seem more popular than they really are. Governments also use his bots, he said.

“This is all about power and control, the same thing it’s always been, but now it’s digital and you can do a lot more of it,” said Rick Wesson, chief executive of Support Intelligence, a computer security consulting firm based in San Francisco.

For now, these bots are simply deceptive, tricking people into thinking something is popular or pushing an agenda. But as bots become more sophisticated, Mr. Wesson said, they could become nastier.

In March, two students at the Technion, the Israel Institute of Technology, created a swarm of bots that caused a phony traffic jam on Waze, the navigation software owned by Google.

The project, which was a class demonstration, was so sophisticated that the students were able to make bots that imitated Android cellphones that accessed fake GPS signals and were operated by fake humans in fake cars. The Waze software, believing that the bots were on the road, started to redirect actual traffic down different streets, even though there was no traffic jam to avoid.

So be careful which bots you befriend. If it’s a bot with a different political viewpoint, your digital buddy may turn on you. Or even try to get you lost.

Email: bilton@nytimes.com Twitter: @nickbilton