One day a few months ago, a "suggested friend" showed up on my Facebook news feed: my old college roommate, a good friend named Paul. It was a little baffling. Had Paul unfriended me at some point? At a glance, it was definitely the Paul I knew: same profile photo, our common alma mater, a bunch of mutual friends (although, weirdly, most of them were my high school buddies).

As you might guess, it wasn't Paul. Someone had replicated his account, complete with his most basic info. It wasn't immediately obvious why someone would want to create a second Paul. Was it a prank? Were the intentions more malicious?

Rami Essaid has a theory. The co-founder and chief product officer of security company Distil Networks has been fighting off bots for years. As the public is beginning to learn, these bots can be used to rapidly perform tasks across the internet, like posting on social media, breaking into accounts, or publicizing politically charged posts.

On Facebook, Essaid says, hackers can create a profile that's a replica of a real person. Then, using bots, they'll send requests to friends of the real person's friends--which explains why, when I spotted Fake Paul, most of the friends we had in common were my high school pals. Once the fraudulent profile builds out a network of friends, it will begin--again, using bots--Liking and sharing posts to get them to appear in other people's news feeds.

And this is far from the only route a hacker can take to get as many eyes on propaganda as possible. They could create a new person from scratch--often with an attractive photo--then seek out people with similar hometowns or college backgrounds with the hope that a few suckers accept.

It's even easier on Twitter, where people are far less selective about who they follow. There, hackers can create bots--complete with convincing bios and stock photo avatars--to follow people with similar profiles, knowing that a sizable fraction will follow back. Then the bots can get to work, retweeting stories that align with their programmed inclinations and using authentic-sounding language to criticize ones that don't.

It might seem like a lot of work just to influence a handful of people at a time. But it's not a lot of work--and therein lies the purpose of the bots. All hackers have to do is write the initial code and the bots do the rest, spreading information, or misinformation, far and wide. In an election decided by 80,000 votes in three states, that can be meaningful.

And manipulating public opinion isn't all that bots can do; they can spam review sites, assist ticket scalpers, or test out stolen username and password combinations on hundreds of websites at once.

The bot fighters

That's where companies like Distil Networks come in. The San Francisco-based startup works to determine which actors on the internet are people and which are not. It uses a set of criteria to establish whether a user is behaving the way a human would, then blocks the ones determined to be bots from performing an activity or subjects them to tests that let them prove they're real.

It's a big task, and one that the startup has been at since 2011, long before most people had heard the word "bot" in this context. The company has high-profile clients--StubHub, Yelp, Bank of America, to name a few--and it's working to prevent the bots from crippling businesses and undermining everyday people.

But some of its hardest work is still ahead.

Essaid had a revelation while working for a cloud security company six years ago. "I kept hearing that bots were impacting our customer base in a way that we weren't helping our customers with," he says, "like Web scraping, account takeovers, fake news, and artificial social-media posts." He did some research and couldn't find any solutions that specifically targeted the Web robots behind those security issues. "There just really wasn't anybody in the market," he says, "that could defend you against bots."

He called up Andrew Stein and Engin Akyol, two of his friends since 7th grade. The trio had gone to high school together, then college at North Carolina State, and were now each doing their own thing. He pitched them on teaming up to create a bot-fighting solution.

They agreed. Essaid quit his job and emptied his savings account, and the three went to work on building a system. The next year, they were accepted into startup accelerator TechStars and received a grant that helped them get off the ground.

Who's real and who's fake?

Just a few years ago, online bots looked more or less like you might expect them to: lines of code scrolling across a terminal screen somewhere. "Now," Essaid says, "they're running on real laptops and on real mobile devices. They're moving a mouse, they're simulating keyboard clicks, they're simulating touches on a device." Essentially, they behave like humans do.

To determine who on the internet is real and who is fake, Distil's system studies a range of variables. Does the actor's cursor wander around the page like a person's would, for example, or does it make a beeline for its target? When it clicks a button, does it press right in the center every time? Do its browsing patterns across other websites resemble those of a real person? After quickly collecting that data, the system uses a series of algorithms, strengthened over time by machine learning, to determine whether the actor is real or a bot.
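The approach Essaid describes boils down to scoring behavioral features. Here is a minimal sketch of that idea in Python; the feature names, thresholds, and weights are illustrative assumptions on my part, not Distil's actual system, which uses many more signals and learned models.

```python
import math

def path_straightness(points):
    """Ratio of straight-line distance to total path length (1.0 = perfect beeline)."""
    if len(points) < 2:
        return 1.0
    total = sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))
    direct = math.dist(points[0], points[-1])
    return direct / total if total else 1.0

def bot_score(cursor_points, click_offsets_px):
    """Combine two simple behavioral features into a 0..1 bot-likelihood score.

    cursor_points: (x, y) samples of the cursor path.
    click_offsets_px: distance of each click from the button's center, in pixels.
    Thresholds below are hypothetical, chosen only to illustrate the idea.
    """
    straight = path_straightness(cursor_points)
    avg_offset = sum(click_offsets_px) / len(click_offsets_px)
    score = 0.0
    if straight > 0.98:   # near-perfectly straight path: bots beeline to targets
        score += 0.5
    if avg_offset < 1.0:  # every click within ~1px of dead center: too precise
        score += 0.5
    return score

# A human-like session: wandering cursor, clicks scattered around the center.
human = bot_score([(0, 0), (40, 10), (30, 60), (90, 80)], [6.0, 3.5, 9.0])
# A bot-like session: straight-line cursor, pixel-perfect clicks.
bot = bot_score([(0, 0), (50, 50), (100, 100)], [0.0, 0.0, 0.0])
print(human, bot)  # prints: 0.0 1.0
```

In a production system, hard-coded thresholds like these would be replaced by a classifier trained on labeled traffic, which is where the machine learning Essaid mentions comes in.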

Distil, which brought on former Symantec and FireEye exec Tiffany Olson Jones to be its new CEO this October, now has $60 million in funding and 170 employees. The company's client list includes institutions with huge swaths of sensitive information, like banks and health care providers.

For clients like Bank of America and Aetna, Distil's job is to block bots attempting to log in with stolen username and password combos. StubHub, another client, uses the startup's services to thwart scalpers, who can use bots to buy and resell tickets by the thousands, earning a several-dollar profit each time. And Yelp employs the company to fight off spammy, computer-generated positive reviews, as well as negative ones meant to drag down rivals' ratings--critical for the site, since the authenticity of its reviews is its lifeblood.
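The credential-stuffing attacks mentioned above are often slowed down with rate limiting: throttling a source after too many failed logins in a short window. The sketch below shows that one common defense in Python; the window size and failure threshold are illustrative assumptions, and real systems layer many more signals on top.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60   # illustrative sliding window
MAX_FAILURES = 5      # illustrative threshold before throttling

_failures = defaultdict(deque)  # ip -> timestamps of recent failed logins

def _prune(q, now):
    # Drop failures that have aged out of the sliding window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()

def record_failure(ip, now=None):
    """Log one failed login attempt from the given IP."""
    now = time.monotonic() if now is None else now
    q = _failures[ip]
    q.append(now)
    _prune(q, now)

def is_blocked(ip, now=None):
    """True once an IP has hit the failure threshold inside the window."""
    now = time.monotonic() if now is None else now
    q = _failures[ip]
    _prune(q, now)
    return len(q) >= MAX_FAILURES

# A bot hammering stolen credential pairs trips the limit in seconds:
for t in range(6):
    record_failure("203.0.113.7", now=float(t))
print(is_blocked("203.0.113.7", now=6.0))   # prints: True
print(is_blocked("198.51.100.2", now=6.0))  # prints: False (no failures recorded)
```

Per-IP throttling alone is easy for distributed botnets to evade, which is why companies like Distil combine it with the behavioral fingerprinting described earlier.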

"Bots allow bad actors to operate at high speed and at scale," Essaid says. "They've changed the equation."

Hijacking politics

While Twitter bots didn't enter everyday conversation until after the election was over, Essaid says he tried to warn people in the months leading up to it. On October 4, 2016, he published a guest post on VentureBeat entitled, "Are political bots stacking the deck in the presidential race?" In it, he wrote that "political bots are being used to exaggerate a candidate's popularity on Twitter and manipulate public conversation." The scale of their influence in the presidential election, he wrote, "is unprecedented, and it shouldn't be taken lightly."

It's important to note that both sides of the political spectrum make use of bots on social media. But that's not to say they use them equally. A study by Oxford University's Computational Propaganda Project found that, in the hours and days following last September's first presidential debate, automated accounts produced more than four times as many pro-Trump tweets as pro-Clinton tweets.

Essaid has tried to sound the alarm again since then. After the FCC began accepting public comments on Net Neutrality this spring, he noticed a large number of seemingly computer-generated comments on the organization's site--almost all of which were railing against Net Neutrality.

"We went to the FCC and said, 'Hey, for free we would love to help you solve your bot problem,'" Essaid says. "The FCC was like, 'We're not sure that we're allowed to filter out free speech.' I'm like, 'This isn't free speech. These aren't real people.'" The FCC did not respond to Inc.'s request for comment.

In October, Essaid's warning was lent some credence: A report from data analytics firm Gravwell found that 80 percent of the more than 22 million Net Neutrality comments on the FCC's site were computer-generated. For the entrepreneur, the experience was familiar.

"People just have this aversion to trying to tackle this," he says. "It's crazy. It was a fight to give them free service. Sometimes we want to help, and we can't even help these companies help themselves."

Facebook in recent months has publicly discussed its own internal initiatives to combat fake accounts, including the deletion of 30,000 fake accounts in France in the weeks before the country's presidential election. Twitter's bot-fighting systems are, as Essaid says, "less sophisticated." A June blog post from the company says that Twitter is expanding its efforts to fight automated accounts. (Although, as a Bloomberg report highlighted last month, the company has little financial incentive to remove the bots, since more active accounts means better numbers to report to Wall Street.)

Essaid says he has tried to sign Twitter as a client, but to no avail. The company did not respond to Inc.'s request for comment.

"Bots are really putting their thumb on the scale of a lot of online conversations," Essaid says. "We've been trying to shine a light on this for two years now," he adds, pointing to the talks he's given and the email campaigns the company has run.

While companies like Akamai and ShieldSquare offer similar bot-fighting services, Essaid doesn't point to the competition as the startup's biggest obstacle. "We've had to do a lot of educating," he says. "This has been a long-standing problem of ours. We're always trying to teach our potential customers that bots are doing the things they're doing."

As the bots continue to become more sophisticated, Essaid thinks that a wide-scale solution is only possible if the public shifts its mentality.