Someone who’s thought a great deal about the design of our interactions in social networks is Nicholas Christakis, director of Yale’s Human Nature Lab, located just a few more snowy blocks away. His team studies how our position in a social network influences our behaviour, and even how certain influential individuals can dramatically alter the culture of a whole network.

The team is exploring ways to identify these individuals and enlist them in public health programmes that could benefit the community. In Honduras, they are using this approach to influence vaccination enrolment and maternal care, for example. Online, such people have the potential to turn a bullying culture into a supportive one.

Corporations already use a crude system of identifying so-called Instagram influencers to advertise their brands for them. But Christakis is looking not just at how popular an individual is, but also at their position in the network and the shape of that network. In some networks, like a small isolated village, everyone is closely connected and you’re likely to know everyone at a party; in a city, by contrast, people may live more densely packed overall, but you are less likely to know everyone at a party there. How thoroughly interconnected a network is affects how behaviours and information spread around it, he explains.

“If you take carbon atoms and you assemble them one way, they become graphite, which is soft and dark. Take the same carbon atoms and assemble them a different way, and it becomes diamond, which is hard and clear. These properties of hardness and clearness aren’t properties of the carbon atoms – they’re properties of the collection of carbon atoms and depend on how you connect the carbon atoms to each other,” he says. “And it’s the same with human groups.”

Christakis has designed software to explore this by creating temporary artificial societies online. “We drop people in and then we let them interact with each other and see how they play a public goods game, for example, to assess how kind they are to other people.”

Then he manipulates the network. “By engineering their interactions one way, I can make them really sweet to each other, work well together, and they are healthy and happy and they cooperate. Or you take the same people and connect them a different way and they’re mean jerks to each other and they don’t cooperate and they don’t share information and they are not kind to each other.”

In one experiment, he randomly assigned strangers to play the public goods game with each other. In the beginning, he says, about two-thirds of people were cooperative. “But some of the people they interact with will take advantage of them and, because their only option is either to be kind and cooperative or to be a defector, they choose to defect because they’re stuck with these people taking advantage of them. And by the end of the experiment everyone is a jerk to everyone else.”

Christakis turned this around simply by giving each person a little bit of control over who they were connected to after each round. “They had to make two decisions: am I kind to my neighbours or am I not; and do I stick with this neighbour or do I not.” The only thing each player knew about their neighbours was whether each had cooperated or defected in the round before. “What we were able to show is that people cut ties to defectors and form ties to cooperators, and the network rewired itself and converted itself into a diamond-like structure instead of a graphite-like structure.” In other words, a cooperative prosocial structure instead of an uncooperative structure.
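The rewiring dynamic described above can be caricatured in a few lines of code. This is only an illustrative sketch, not the lab’s actual software: the update rules, the rewiring probability and every parameter name here are assumptions made up for the example.

```python
import random

def simulate(n=20, rounds=10, rewire_prob=0.3, seed=1):
    """Toy model of the rewiring experiment: each round players
    cooperate or defect based on their neighbours, then may cut a tie
    to a defector and form one to a cooperator. All parameters are
    illustrative, not taken from the study."""
    rng = random.Random(seed)
    # start with roughly two-thirds cooperators, as in the article,
    # on a random network
    coop = {i: rng.random() < 2 / 3 for i in range(n)}
    edges = {frozenset(rng.sample(range(n), 2)) for _ in range(3 * n)}
    for _ in range(rounds):
        neigh = {i: [j for e in edges if i in e for j in e if j != i]
                 for i in range(n)}
        # cooperate if at least half your neighbours cooperated last round
        coop = {i: (sum(coop[j] for j in neigh[i]) >= len(neigh[i]) / 2
                    if neigh[i] else coop[i]) for i in range(n)}
        # rewiring step: occasionally drop a tie to a defector
        # and add a tie to a cooperator
        for i in range(n):
            if rng.random() > rewire_prob:
                continue
            defectors = [j for j in neigh[i] if not coop[j]]
            cooperators = [j for j in range(n)
                           if j != i and coop[j] and j not in neigh[i]]
            if defectors and cooperators:
                edges.discard(frozenset((i, rng.choice(defectors))))
                edges.add(frozenset((i, rng.choice(cooperators))))
    # fraction of cooperators at the end
    return sum(coop.values()) / n
```

Even in a crude model like this, letting players choose their neighbours changes who ends up connected to whom, which is the point of the experiment.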

In an attempt to generate more cooperative online communities, Christakis’s team have started adding bots to their temporary societies. He takes me over to a laptop and sets me up on a different game. In this game, anonymous players have to work together as a team to solve a dilemma that tilers will be familiar with: each of us has to pick one of three colours, but players directly connected to each other must have different colours. If we solve the puzzle within a time limit, we all get a share of the prize money; if we fail, no one gets anything. I’m playing with at least 30 other people. None of us can see the whole network of connections, only the people we are directly connected to – nevertheless, we have to cooperate to win.

I’m connected to two neighbours, whose colours are green and blue, so I pick red. My left neighbour then changes to red so I quickly change to blue. The game continues and I become increasingly tense, cursing my slow reaction times. I frequently have to switch my colour, responding to unseen changes elsewhere in the network, which send a cascade of changes along the connections. Time’s up before we solve the puzzle, prompting irate responses in the game’s comments box from remote players condemning everyone else’s stupidity. Personally, I’m relieved it’s over and there’s no longer anyone depending on my cack-handed gaming skills to earn money.

Christakis tells me that some of the networks are so complex that the puzzle is impossible to solve in the timeframe. My relief is short-lived, however: the one I played was solvable. He rewinds the game, revealing the whole network to me for the first time. I see now that I was on a lower branch off the main hub of the network. Some of the players were connected to just one other person, but most were connected to three or more. Thousands of people from around the world play these games on Amazon Mechanical Turk, drawn by the small fee they earn per round. But as I’m watching the game I just played unfold, Christakis reveals that three of these players are actually planted bots. “We call them ‘dumb AI’,” he says.

His team is not interested in inventing super-smart AI to replace human cognition. Instead, the plan is to infiltrate a population of smart humans with dumb-bots to help the humans help themselves.

“We wanted to see if we could use the dumb-bots to get the people unstuck so they can cooperate and coordinate a little bit more – so that their native capacity to perform well can be revealed by a little assistance,” Christakis says. He found that if the bots played perfectly, that didn’t help the humans. But if the bots made some mistakes, they unlocked the potential of the group to find a solution.

“Some of these bots made counter-intuitive choices. Even though their neighbours all had green and they should have picked orange, instead they also picked green.” When they did that, it allowed one of the green neighbours to pick orange, “which unlocks the next guy over, he can pick a different colour and, wow, now we solve the problem”. Without the bot, those human players would probably all have stuck with green, not realising that was the problem. “Increasing the conflicts temporarily allows their neighbours to make better choices.”
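The bot behaviour Christakis describes amounts to adding a controlled error rate to an otherwise sensible strategy. A minimal sketch of that idea, with a noise level and palette chosen purely for illustration, might be:

```python
import random

def bot_choice(neighbour_colours, palette=("green", "orange", "purple"),
               noise=0.1, rng=random):
    """The 'dumb AI' idea in miniature: usually play a locally correct
    colour, but with probability `noise` make a deliberate mistake by
    copying a neighbour, creating a clash that jolts neighbours into
    re-evaluating their own choices. Noise level and palette are
    illustrative assumptions, not the study's values."""
    free = [c for c in palette if c not in neighbour_colours]
    if free and rng.random() > noise:
        # the locally 'correct' move: a colour no neighbour holds
        return rng.choice(free)
    # the noisy move: clash with a neighbour on purpose
    return rng.choice(list(neighbour_colours) or list(palette))
```

The finding was that a perfect bot (noise of zero) left the group stuck, while a modest error rate freed it, which is the counter-intuitive part.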

By adding a little noise into the system, the bots helped the network to function more efficiently. Perhaps a version of this model could involve infiltrating the newsfeeds of partisan people with occasional items offering a different perspective, helping to shift people out of their social media comfort-bubbles and allow society as a whole to cooperate more.