Imagine this future scenario: Self-driving cars form an orderly procession down a highway, traveling at precisely the right following distance and speed. All the on-board computers cooperate, and all the vehicles reach their destinations safely.

But what if one person jailbreaks her car and tells her AI driver to go just a little faster than the other cars? As the aggressive car closes in on the other vehicles, their safety mechanisms kick in and they change lanes to get out of the way. It might lower the overall efficiency of the transportation system, but this one person would get ahead.

This is just one of many scenarios that Ryan Gerdes of Utah State University is exploring with a $1.2 million grant from the National Science Foundation to study the security of an autonomous vehicle future.

"The designers of these systems essentially believe that all of the nodes or vehicles in the system want to cooperate, that they have the same goals," Gerdes said. "What happens if you don't follow the rules? In the academic theory that’s built up to prove things about this system, this hasn’t been considered."

While Google is out to create a fully autonomous vehicle some years into the future, the major carmakers are taking more incremental steps toward autonomy. Nissan, Volkswagen, Daimler and others all have programs. Just this week, Cadillac announced that it would include a "super cruise" feature allowing "hands-free" driving on highways in a 2017 model.