Urmson: You would. And this is in no way a knock against Dmitri. That is an amazing team and they are doing great work. And they are clearly way out ahead of everybody. But you would. When you have a certain set of architectural precepts that are in there that everything is structured around. You are smart and you iterate and you change, but there are bones that are there. We get to say, actually, let’s now, knowing everything we do about where machine learning is, and the availability of cloud computation and understanding how hard the problem really is, let’s set up to be able to tackle those problems from day one.

It doesn’t mean that we’re going to get there before Waymo does, but what it does mean is that we’ll be able to cover the ground more quickly. And that we’ll be able to help our partners bring something safe and ultimately much more robust to market.

Madrigal: As you’ve worked anew through this set of problems, has there been anything that was really hard back in the day that now you’re just like: Shit, we did that in a month?!

Urmson: Some of the things we’ve been applying machine learning to, object tracking, for example. Very quickly, we’ve been able to get versions of that up and running. That’s exciting. And that’s a function of the ecosystem and the world we live in. TensorFlow wasn’t a thing when we started at Google.

Madrigal: Another “Waymo-way” precept was deciding not to treat self-driving technology as a form of driver assistance. I personally have heard you make the case against the handoff from car to driver.

Urmson: Still believe it! That’s not to say you can’t have a steering wheel in a vehicle, or that the vehicle can’t drive when it wants to. But the distinction I would make is this: the car should never require the person in the driver’s seat to drive. That handback is the hard part.

If you want to drive and enjoy driving, God bless you, go have fun, do it. But if you don’t want to drive, it’s not okay for the car to say, “I really need you in this moment to do that.”

People talk about what are called level-three systems: the idea that the car will drive and then give you notice that you should take back control. It turns out that if you don’t respond to the notice, the car still has to do the right thing, so at that point it’s effectively a very limited level-four system. And the complexity of implementing that is high enough that the sensor suite is gonna get pretty expensive.

Madrigal: How are you benchmarking your progress here?

Urmson: Right now, we’re really about building it right. Our partners would like to see a 2020 or 2021 kind of time frame. So, we’re moving as quickly as we can to support that. In that time frame, we’re talking tens of thousands of vehicles, which is huge compared to the thousandish-maybe vehicles that are around today. But that will just be the beginning of the deployment, when we think about impact in the world. That was part of the thinking with Aurora: it’s gonna take so many years to get the technology to work, and it takes a similar number of years to build the cars that the technology is going to go into. So, if we can find partners that will develop the two in parallel, then we can go out there and have the scale impact that we want more quickly than others will be able to.