
Self-driving cars have been making headlines recently for everything from navigating busy streets to parking themselves.

But researchers at Stanford University in California are moving faster, with a self-driving racing car capable of speeds up to 120mph (190km/h). Engineers at the university have developed an autonomous Audi TT-S, named Shelly, which can drive at the limits of vehicle performance.

The car recently powered around the Thunderhill Raceway in California, clocking a time only a few seconds slower than a human. Although an impressive feat, the project has no ambitions to replace Formula 1 or Indy car drivers any time soon. Instead, the researchers want to learn from professional drivers to make future cars – autonomous or not – more capable and ultimately safer.

Professor Chris Gerdes, the main researcher behind the project, recently joined me on stage at the Atlantic Big Science Summit in Silicon Valley. He told me why he wants to develop a robotic race car, and how monitoring the brains of human racing drivers can help. What follows is an edited transcript of our conversation:

Jon Stewart: Is this car completely autonomous - there’s no remote control or human interference?

Chris Gerdes: That’s absolutely correct. All we do is hit a button and let it go. We do have people stationed around the track with kill switches - and quickly beating hearts - but the car is completely unpiloted and is making all of its own decisions about steering and braking while it’s out there.

JS: Tell us about some of the technology it uses to do that. What information does the car have coming in to figure out where it is, and what the best course to take is?

CG: We’ve tried to model this on what we’ve learned from some of the best racecar drivers. We give the car a sense of where the road boundaries are, and the car tries to figure out a way through those boundaries as quickly as possible.

It’s using precision GPS to localise itself, and a variety of inertial sensors to pay attention to: what is its attitude? Is it sliding? How much is it using the grip between the tyres and road? Does it have the capability to go faster? And it’s always adapting to that as it goes around the track.
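The question "how much is it using the grip between the tyres and road?" can be made concrete with the friction-circle idea from vehicle dynamics: combined longitudinal and lateral acceleration cannot exceed what tyre friction supplies. The sketch below is illustrative only (the function name, gains and the assumed friction coefficient are not from the Stanford system), but it shows how inertial measurements could be turned into a "headroom to go faster" number:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def grip_utilisation(a_long, a_lat, mu):
    """Fraction of the available tyre friction currently in use.

    a_long, a_lat: longitudinal and lateral acceleration (m/s^2),
    as an inertial sensor would report them.
    mu: assumed tyre-road friction coefficient (hypothetical value).
    """
    used = math.hypot(a_long, a_lat)  # combined demand (friction circle)
    available = mu * G                # maximum the tyres can supply
    return used / available

# e.g. braking at 4 m/s^2 while cornering at 6 m/s^2 on dry tarmac (mu ~ 0.9):
# grip_utilisation(4.0, 6.0, 0.9) is roughly 0.82, i.e. ~18% of grip unused
```

A value near 1.0 means the car is at the limit; anything well below it suggests, in the car's terms, "the capability to go faster".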

JS: How does it compare to a human driver?

CG: At the moment we’re a little slower, but our path around the track is very similar. What the human drivers do amazingly well is consistently feel out the limits of the car, and push it just a little bit further. That is where they have an advantage, and we are trying to learn from them now.

JS: Modern technology can give us an insight into what is going on in people’s brains – is that something you can apply here?

CG: That’s one of the things we are trying to do now. We are measuring [electroencephalography] signals from race-car drivers, and trying to figure out what is going on in the brain. It’s a difficult challenge, because obviously there’s a lot going on in the brain which may or may not be associated with racing. We’re getting the signal from just a few electrodes, so it’s an ongoing challenge to make sense of it.

JS: Are you putting electrodes on drivers’ scalps and sending them out on a racetrack, or is it something you are doing in a simulator?

CG: We’re putting the electrodes on their scalps on a track – it’s a little bit of a challenge trying to get them under a helmet and making sure they stay in. We’ve been working with great drivers who are very tolerant of being wired up by students and having all of their actions measured.

JS: And you are seeing that some of what they do is instinctive?

CG: That’s what the brainwaves suggest. There are certain patterns that are really suggestive of cognitive workload, and there are other times we can see that signal is absent, and that would suggest to us that it really is a reflexive motion. ‘Car going sideways? Sure – I’ll correct that – just one more day at the office.’

JS: How does what you’re learning from human drivers feed back into how you program a robotic car to do the same task?

CG: We are developing algorithms based on what the humans do. For example, by looking at what they are doing with the steering and the brakes and the throttle. A normal thought is ‘I want to turn the car – I’ll use the steering wheel’. But once you run out of friction at the front tyres, anyone who has done racing knows that doesn’t work anymore. In fact they’re actually steering the car with the brakes and the throttle. So there are some counterintuitive behaviours that racecar drivers are very good at, and that we are learning to encode.
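The counterintuitive behaviour Gerdes describes amounts to a control-allocation decision: while the front tyres still have grip, steering produces yaw; once they saturate, the yaw correction has to come from the brakes and throttle instead. The toy sketch below illustrates that switch. The function, gains and interface are invented for illustration and are not the Stanford team's algorithm:

```python
def allocate_control(yaw_error, front_saturated):
    """Toy allocation of a yaw-rate correction between steering and braking.

    yaw_error: desired minus actual yaw rate (rad/s); positive means the
    car needs to rotate more into the corner.
    front_saturated: True once the front tyres have run out of grip, at
    which point extra steering angle no longer produces extra yaw moment.
    Gains are illustrative, not tuned values.
    """
    K_STEER, K_BRAKE = 0.5, 0.8
    if not front_saturated:
        # Normal regime: correct yaw with the steering wheel.
        return {"steer": K_STEER * yaw_error, "brake_bias": 0.0}
    # Front grip exhausted: hold the steering and rotate the car with
    # asymmetric braking instead, as racing drivers do instinctively.
    return {"steer": 0.0, "brake_bias": K_BRAKE * yaw_error}
```

The point is not the numbers but the structure: the same yaw demand is routed to a different actuator depending on whether the front tyres have anything left to give.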

JS: How does this research feed back into the development of autonomous vehicles that we might one day drive – or have drive us – on the roads?

CG: One of the nice things about vehicle dynamics is that it is a scalable problem. At the end of the day, you are limited by how much friction you have between the tyres and the road. In our case we are running out of friction because we are going really fast on a racetrack. But you could run out of friction simply because you are driving on a road that has become icy. So what we are learning on the racetrack translates exactly into the sorts of situations you might get simply on a wet road. Mathematically it’s really the same problem. So we can take something that is visually very exciting, that gets students charged up, we can develop the mathematics behind that, and the control systems, and then port it over into situations that you would encounter in an everyday drive.
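Gerdes' point that it is "mathematically the same problem" can be seen in the standard steady-state cornering limit: friction must supply the centripetal force, so mu*m*g >= m*v^2/r, giving v_max = sqrt(mu*g*r). The sketch below, with hypothetical friction coefficients and corner radii, shows how a dry racetrack hairpin and an icy motorway curve hit the same equation with different numbers:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def max_corner_speed(mu, radius):
    """Highest steady-state cornering speed before friction runs out:
    mu*m*g >= m*v^2/r, hence v_max = sqrt(mu*g*r) (m/s)."""
    return math.sqrt(mu * G * radius)

# Same mathematics, different regimes (illustrative values):
# dry racetrack hairpin, mu ~ 0.9, r = 40 m  -> about 18.8 m/s (~68 km/h)
# icy motorway curve,    mu ~ 0.15, r = 400 m -> about 24.3 m/s (~87 km/h)
```

In both cases the car runs out of friction at a perfectly ordinary-looking speed; only the parameters change, which is why racetrack control laws can be ported to icy or wet roads.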
