Everything and nothing has changed in 12 years. When I invented the spinning LiDAR sensor leading up to the 2005 DARPA Grand Challenge, I was writing code as we drove across the desert and right up to the starting line. The previous month, I’d spent three days aligning all 64 laser channels by hand—for what became our first commercial sensor, the HDL-64. My thumb and forefinger went numb for a month afterward. Felt pretty happy when I got the feeling back.

Since that time, Velodyne has released five additional sensors, each making key improvements in range and accuracy. LiDAR revolutionized the mapping and remote sensing industries and became critical to the autonomous revolution. Meanwhile, my company has grown to over 700 employees. Our San Jose Megafactory uses automation to build up to a million sensors in 2018. At our experimental labs facility in Alameda, we’ve attracted the best minds and top talent in the business, working on game-changing new products to guide this revolution.

Today, I’m releasing Velodyne’s latest and most advanced sensor yet, the VLS-128. It’s over 10 times more powerful but a third the size and weight of the sensor it’s replacing, the HDL-64. To design and build the 128 over the past two years, I probably spent 1000 hours—and my team probably spent ten times that. Luckily, I didn’t have to align the channels by hand this time. Instead, the 128 has our new auto-alignment technology, which will incrementally cascade down to our existing line-up.

In developing the 128—and really from all the models over the past 12 years—I’ve learned a lot about how to build LiDAR sensors. As a company, we’ve learned from our successes and failures, from creating technology, talking to customers, and many hours in the lab. I’ve learned there’s a bit of a trick to it. To create ground-breaking technology that has never existed before—yet needs to integrate seamlessly into our partners’ platforms—I need to take calculated risks while trusting my instincts.

By that, I mean a few things. One reason I always liked the HDL-64—in addition to it being a real labor of love—was that you could see it spinning. The HDL-32 was the same. But, eventually, we took a calculated risk. By moving the mechanics inside the housing, we advanced the technology to create smaller, less obtrusive sensors. And we continue these innovations with the 128. Yes, I did hold out for a while, because I like to see it spin. But, as my children would say, I eventually gave in for the good of society.

Back when Velodyne was just a handful of us in the LiDAR division, working on the shop floor of the audio business, we went through what I call the “dark days.” This was in the years right after the DARPA challenges when autonomy hadn’t taken off yet. While we were building our business, some map makers came to us and said they wanted to use our sensors to map the world in 3D. We started working with all these map companies, including Mandli Communications. They used to go out with laser range finders and survey crews and spend days mapping the height of an overpass and mundane things like that. With our sensors, they could just drive underneath and create a more detailed map with substantially less time and effort. Our instincts said follow that trend and wait for the autonomous revolution to catch up.

At Velodyne, we believe in a method I call “spaghetti against the wall.” We want our LiDAR products to stick, not unlike a spaghetti noodle that’s done cooking. That means investing engineering know-how and time to solve the problems the autonomous driving industry faces and ensure our products gain traction. We think the biggest unsolved problem for autonomous driving at highway speeds is avoiding road debris. That’s tough, because you have to see way out ahead. The self-driving car needs to change lanes, if possible, and do so safely. On top of that, most road debris is shredded truck tire—all black material on a dark surface. Especially at night, that type of object recognition is challenging, even for the LiDAR sensors we’ve previously built. The autonomous car needs to see farther out, with denser point clouds and higher laser repetition rates.
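To put “see way out ahead” in rough numbers, here’s a back-of-the-envelope sketch of the detection range a highway-speed vehicle needs in order to react and stop. The speed, reaction time, and deceleration figures are illustrative assumptions for this sketch, not Velodyne specifications:

```python
def required_detection_range(speed_mps, reaction_s, decel_mps2):
    """Distance needed to perceive an obstacle and brake to a stop:
    reaction distance (v * t) plus braking distance (v^2 / 2a).
    All input figures below are illustrative assumptions."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

# Assumed example: 31 m/s (~70 mph), 0.5 s perception/planning latency,
# and a moderate 3 m/s^2 deceleration.
needed = required_detection_range(31.0, 0.5, 3.0)
print(round(needed, 1))  # → 175.7
```

Under these assumed numbers the car needs well over 150 meters just to stop in its own lane, which is why a sensor’s maximum range matters so much at highway speeds.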

So, with the engineering of the 128, we’ve doubled the channels, tripled the channel density, and doubled the zoom resolution. The result is a 300-meter range. I believe this will solve the high-speed roadway recognition problem. The VLS-128, with these breakthrough specs, is how Velodyne hopes to lead the autonomous revolution to highway speeds. I don’t get to watch it spin like the 64, but seeing what the 128 can do for autonomy will be pretty interesting.
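As a rough illustration of why denser point clouds matter at long range: the gap between neighboring laser returns grows linearly with distance, so halving the angular step halves that gap. The angular-resolution values below are hypothetical figures chosen for illustration, not official VLS-128 specs:

```python
import math

def point_spacing(range_m, angular_res_deg):
    """Approximate linear spacing between adjacent laser returns at a
    given range. For small angles, spacing ≈ range * angle (in radians).
    The angular resolutions used below are assumed, illustrative values."""
    return range_m * math.radians(angular_res_deg)

# At 300 m, a 0.4° step (assumed) leaves ~2.09 m between points;
# tightening it to 0.2° (assumed) closes that gap to ~1.05 m,
# making a low-profile object like a tire tread easier to resolve.
coarse = point_spacing(300.0, 0.4)
fine = point_spacing(300.0, 0.2)
print(round(coarse, 2), round(fine, 2))  # → 2.09 1.05
```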

I’m excited (and a bit relieved) to put the 128 in our customers’ hands and see what they do with it. In the meantime, I’ll be in the lab with my engineers, following the ongoing revolution—working on further developments in LiDAR to make sure every Velodyne product is a shot in the dark that’s directly on target.