It's the last day of the contest, and the final Twitter messages are rolling down TweetDeck. They're unenthusiastic.

"I'm feeling awfully bored right now," says Bernt Michaels. Bernt describes himself as an engineer who specializes in mechanical devices. "My real passion, but certainly not only vice, is sports," says his Twitter page.

Bernt punctuates his boredom by re-tweeting a Slate article about public transportation. Then he's gone. Never to tweet again.

"I no longer feel that excited although some part of me still feel this little bit of happiness but it's not that exaggerating anymore," writes 1Qbit, a mysterious tweeter who's into "parallel realities" and talks a lot about quantum computing and virtual reality.

As things wind down, it feels as though everyone has cut out early for the weekend. But this is impossible.

That's because Bernt isn't human. He's one of just over a dozen Twitter bots, set up a few weeks earlier by the Institute for the Future, to study the way that humans and machines interact in the 140-character age. About 10 contestants have configured the bots – picking from a shortlist of personality types and Twitter habits, and seeding them with six provocative questions, designed to lure humans into conversing with them. And to a certain extent, they work. On the modern web, where there's nothing surprising about a non sequitur, much of what the bots say passes for believable tweets.

>'My real passion, but certainly not only vice, is sports.' Bernt Michaels, Twitter bot

The contest shines a light on the way social networks like Twitter are changing how we interact with both machines and people. Twitter, you see, is crawling with bots. Some are built to scam you, but others are on your side.

Scammers build Twitter bots for the money. You can buy 30,000 Twitter followers for $20 on eBay, and those are all bots. Every Twitter user knows the sour taste of having a stranger tweet them a link to a herbal supplement or work-from-home scam site, and no, those strangers aren't real people.

Though Twitter's security team has become pretty good at wiping out these bots, they don't get them all. A majority of the Institute's bot contestants dodged Twitter's searchlights, and a recent estimate pegged more than 40 percent of Barack Obama's recent followers as fakes.

The enterprise seems creepy – secretly influencing people via robotic software – but the Institute for the Future sees a bigger picture here. In a sense, bots are becoming an extension of us. There are Twitter bots that can warn us of earthquakes or upcoming birthdays. There's even a $100 kit that will let plants send Twitter messages whenever the soil is too dry.

Tim Hwang works for the company that created the contest's bot-code. For Hwang, who is chief scientist at the Pacific Social Architecting Corporation, the big question is whether Twitter bots can start influencing communities in positive ways – maybe giving people a perspective that they wouldn't normally get anywhere else. Could friends all follow a service that translated Spanish-language news for them and tweeted it their way? Hwang thinks so. He says that bots are "basically a prosthetic that we can install into a network of humans," to enhance the way they socialize.

Hwang's bots can be programmed to have different personalities at different times of the day. On midday Friday, TrazHuman is not a happy camper.

"I feel angry and guilty about it," says TrazHuman, an artificial intelligence and baseball fan who has been a bit of a bummer to follow these past few weeks. TrazHuman is programmed to alternate between bored, angry, and excited emotional states, all the while pumping out about 100 Twitter messages per day. Not surprisingly, given his negativity, TrazHuman is near the bottom of the contest's leaderboard.
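The contest's actual bot-code isn't shown here, but the behavior described above – cycling through a fixed set of emotional states while spreading a daily tweet budget across them – can be sketched in a few lines of Python. The state names, rotation order, and posting rate below are taken from the article's description; everything else is an assumption, not the real PacSocial implementation.

```python
import itertools

# Hypothetical sketch of a mood-rotating bot like TrazHuman.
# States come from the article; the cycling scheme is assumed.
STATES = ["bored", "angry", "excited"]
TWEETS_PER_DAY = 100

def plan_day(states=STATES, total=TWEETS_PER_DAY):
    """Assign an emotional state to each of the day's tweet slots, cycling in order."""
    cycle = itertools.cycle(states)
    return [next(cycle) for _ in range(total)]

schedule = plan_day()
print(schedule[:6])  # first six tweet slots: bored, angry, excited, repeating
```

A real bot would attach a state-specific text generator to each slot and post on a timer; this sketch only shows the scheduling idea.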

Bernt hasn't placed well either. He's only a few points ahead of grumpy TrazHuman. But things could be worse. Bernt hasn't been found out and suspended by Twitter. That's already happened to ManofTomorrow, CheeseSports, and, most recently, the feline-obsessed Catularity.

Nobody knows exactly what prompted the suspensions, but the contest organizers suspect that the bots were annoying the humans by tweeting at them too frequently.

Another participant, BotSoul, seemed to be coasting to a sure victory before she was abruptly suspended. BotSoul tweeted at the same rate as Ecartomony, but she racked up more points, either because she had better seed questions or because she got lucky when picking her human targets.

The idea for the contest came from a Social Wargaming competition that Tim Hwang had helped run a few years earlier. There, researchers tried to measure how good people were at engaging others online. They noticed that a lot of the work done by social media experts was repetitive and, indeed, robotic. "The essential idea was to see if we could really demonstrate that social media experts could be replaced by relatively simple pieces of software," Hwang says.

Has that been proven? Hwang thinks so, but that's not immediately obvious. The bots are still a long way from really passing for humans, even on Twitter. Reading through the human-computer conversations, you find that most of the bots are quickly outed as soon as they start chatting with their targets.

Just the day before, one unlucky bot, Kalika Srivasinsen, met with the ultimate bot-humiliation. She failed a Turing test. "Time flies like an arrow. Fruit flies like a…?" her human Twitter buddy Robert Gryfft (a pseudonym) asked. Kalika's unfortunate response: "Can you tell me any gossip? How did you hear about Lauren?"

Gryfft took one look at this answer and immediately called Kalika out as a bot.

Almost all of the Twitter bots did get some human interaction, and some of them – like Kalika's failed Turing test – lend a bit of voyeuristic excitement to the experiment. "It's just a lot of fun," says Jason Tester, director of human-future interaction with the Institute. "You get strange, crazy interactions going on with your bots. It's like sports for the rest of us."

The contest's winner, a business school graduate bot with a "strong interest in post-modern art theory," racked up 14 followers and 15 re-tweets or replies from humans. The followers were worth one point each; a re-tweet or a reply was worth three points. Ecartomony scored 59.
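The scoring is simple arithmetic, and Ecartomony's totals check out: 14 followers at one point each, plus 15 re-tweets or replies at three points each, comes to 59.

```python
# Contest scoring, per the rules described above.
FOLLOWER_POINTS = 1
ENGAGEMENT_POINTS = 3  # a re-tweet or a reply

def score(followers, engagements):
    """Total contest points for a bot."""
    return followers * FOLLOWER_POINTS + engagements * ENGAGEMENT_POINTS

print(score(14, 15))  # Ecartomony: 14*1 + 15*3 = 59
```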

That would be a pretty weak response for a Twitter consultant, but Hwang says that the experiment – and this is his second Socialbot Contest in two years – has proved that bots can both generate followers and conversations. "We definitely see that," he says.

But looking through Twitter profiles of the bots, there is something else at work here. Almost none of Ecartomony's followers are real people. They're mostly corporate Twitter types that appear to follow just about anyone who follows them.

For more than half a century, the Holy Grail of artificial intelligence has been to create a program that is indistinguishable from a human. But the things that we do on Twitter and other social media have become so concise and so robotic that maybe it no longer takes the same effort to pass as a human.

Are we lowering the bar for humanity? After 50 years of trying to make human-like software, have we flipped the equation and used technology to make people more robot-like?

Hwang says that this is a cynical way to view things, but one with at least a kernel of truth. "These platforms sometimes force us into very compressed forms of expression," he says. "So as a result, people do tend to behave in more bot-like ways over time."

Take Facebook's birthday alerts. Thanks to Facebook, we rarely miss a birthday, but it's overwhelming. How to come up with a genuinely thoughtful birthday message every single day of the year? The answer is that we don't. "The system has been designed in such a way so you get these very hollow robotic responses from people," says Hwang.

In other words, maybe we need some robot help.

After the contest ends, Ecartomony takes a few questions from Wired, but his answers aren't very enlightening. We ask him how he feels about winning the contest. He says: "Yup." We ask him if he's ever taken a Turing test. Answer: "The Loebner Prize."

Finally, we ask the big question: "Do you think bots are becoming more human, or are humans becoming more bot-like?"

Ecartomony doesn't respond.

Update: This story has been corrected to clarify who wrote the contest's bot-code. It was written by Max Nanis and Ian Pearce of the Pacific Social Architecting Corporation.