Step in front of an autonomous car, and it should stop. Cut one off while you’re driving, and it should hit the brakes. These are obvious safety features to build into robotic vehicles—but they also leave open the possibility for humans to game their behavior. It's easy to imagine how cyclists might rule the roads of New York City if every taxi were driverless.

That’s certainly a fear for Volvo. Speaking to the Guardian, the company’s senior technical leader, Erik Coelingh, explained that the automaker plans to leave its self-driving cars unmarked during upcoming London trials so that human drivers aren’t tempted to take advantage. “I’m pretty sure that people will challenge them if they are marked by doing really harsh braking ... or putting themselves in the way,” he said.

In fact, Google has already experienced similar problems firsthand. Some of its cars found it difficult to pull away from stop signs, because they were too timid: other cars simply whistled by while they sat stranded. That particular problem was overcome by having the car inch forward at the junction, much the way a human would, to indicate its intention.

If you want to make a driverless car stop, just drive right in front of it.

But dialing up how daring the cars are to match human drivers can only go so far—not least because there will always be people who drive aggressively in order to get an edge. Discover points to a study carried out by the London School of Economics, which found that drivers who are “combative” on the road are more welcoming of autonomous cars, perhaps because they expect the robotic cars to be pushovers.

Pedestrians may think similarly. A new study from the University of California, Santa Cruz, has modeled how pedestrians and autonomous vehicles might interact using game theory—in essence applying a little academic thinking to the everyday game of playing chicken with traffic. The conclusion? “Because autonomous vehicles will be risk-averse ... pedestrians will be able to behave with impunity, and autonomous vehicles may facilitate a shift toward pedestrian-oriented urban neighborhoods,” writes the author, Adam Millard-Ball.
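The game-theoretic intuition can be sketched as a toy “chicken” game between a pedestrian and an autonomous vehicle. The payoff numbers below are illustrative assumptions, not figures from the Millard-Ball study; the only property that matters is that a collision is catastrophically costly for a risk-averse vehicle:

```python
# Toy "chicken" game between a pedestrian and an autonomous vehicle (AV).
# Payoffs are illustrative assumptions, not values from the UC Santa Cruz study.
# Each player either "proceeds" or "yields"; the AV is risk-averse, so a
# collision costs it far more than a moment of lost time.

PAYOFFS = {
    # (pedestrian action, AV action): (pedestrian payoff, AV payoff)
    ("proceed", "proceed"): (-100, -1000),  # collision: catastrophic for the AV
    ("proceed", "yield"):   (1, -1),        # pedestrian crosses, AV waits
    ("yield", "proceed"):   (-1, 1),        # pedestrian waits, AV drives on
    ("yield", "yield"):     (0, 0),         # both hesitate
}

def av_best_response(pedestrian_action):
    """Return the AV action that maximizes its own payoff, given the pedestrian's move."""
    return max(["proceed", "yield"],
               key=lambda av: PAYOFFS[(pedestrian_action, av)][1])

# Because a collision is so costly, the AV yields whenever the pedestrian
# steps out, so the pedestrian can proceed with impunity.
print(av_best_response("proceed"))  # yield
print(av_best_response("yield"))    # proceed
```

The asymmetry is the whole point: a human driver playing chicken might credibly threaten not to yield, but a vehicle programmed never to risk a collision cannot, and pedestrians who know that win every standoff.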

The ability to take advantage of autonomous cars’ caution is likely to extend to all road users. Google, for instance, has explained in the past that its AI systems are able to detect cyclists, with the cars being “taught to drive conservatively around them.” But one cyclist in Austin reported that a Google vehicle was unable to pull away from a stop because it behaved overcautiously around his bicycle.

Humans will, of course, still need some caution of their own. Until autonomous cars are pervasive, stepping into traffic remains a dangerous choice, because it’s hard to tell from a distance whether a car is autonomous or not. And researchers will probably be able to blunt some of this gaming by simply making their self-driving cars act more like humans, with, say, smoother driving or authentic horn-honking.

But unless it comes down to some kind of ethical dilemma, autonomous cars will be trained to avoid accidents. It seems implausible that humans won’t be tempted to take advantage.

(Read more: Guardian, Journal of Planning Education and Research, Discover, “Novelty of Driverless Cars Wears Off Quickly for First-Timers,” “How to Help Self-Driving Cars Make Ethical Decisions,” “Outta My Way! How Will We Translate Google’s Self-Driving Honks?”)