In an interview released on Friday, Jesse Levinson, the Chief Technology Officer of the self-driving car startup Zoox, made a remarkable comment about the capabilities of Zoox’s autonomous vehicles:

...we also measure human driving. So, we’ve had humans drive a lot of those same really challenging routes and we measure when humans make mistakes. And what’s pretty exciting is a few months ago we got to the point where our AI system is making fewer mistakes than people do on those routes.

Levinson also said that Zoox’s goal is to achieve a rate of at-fault crashes “about an order of magnitude lower than it is for humans.” The company aspires to deploy a driverless vehicle without a steering wheel or pedals by the end of 2021. This implies Zoox hopes to reach significantly superhuman safety by the end of next year.

It’s difficult for me to take Levinson’s claim about human vs. AI error rates on faith. As a techno-optimist and robotaxi investor, I’m tempted to believe that Zoox is perhaps the second company to pass this major milestone. But I would be far more convinced if I knew which metrics Zoox is using to measure safety and the sample size of miles driven.
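Sample size matters here because crashes are rare events. A rough, hypothetical sketch of why: using the standard "rule of three" for zero observed events, we can estimate how many crash-free miles a fleet would need before it could statistically claim a crash rate below the human baseline. The human crash rate below is an assumed illustrative figure, not data from Zoox or this interview.

```python
import math

# Assumed, illustrative figure: one at-fault crash per 500,000 miles.
# Not a number from Zoox or the interview.
HUMAN_CRASH_RATE = 1 / 500_000

def miles_needed(target_rate, confidence=0.95):
    """Crash-free miles required so that the one-sided upper confidence
    bound on the crash rate falls below target_rate.

    With zero events in n miles, the exact Poisson upper bound on the
    rate is -ln(1 - confidence) / n; solve for n. At 95% confidence
    this is the familiar "rule of three" (~3/n).
    """
    return -math.log(1 - confidence) / target_rate

# Miles needed just to match the assumed human rate:
print(f"{miles_needed(HUMAN_CRASH_RATE):,.0f}")        # ~1.5 million
# Miles needed to demonstrate an order of magnitude better,
# the target Levinson describes:
print(f"{miles_needed(HUMAN_CRASH_RATE / 10):,.0f}")   # ~15 million
```

The point of the sketch is not the specific numbers but the scaling: each additional factor of ten in claimed safety multiplies the crash-free mileage needed to support the claim by ten, which is why the raw sample size behind Zoox’s internal comparison matters so much.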

The only public data we have on self-driving cars is the rate at which safety drivers disengage the autonomous system. Cruise President and CTO Kyle Vogt published a blog post in January that convincingly argued that disengagements are not a good metric for safety or for apples-to-apples comparisons with human drivers. For better insight into how close self-driving cars really are to human capability, we need companies to open up about their testing methodologies, the metrics they use internally, and, of course, the numbers they’re actually getting.

Disclosure: I am long TSLA.