This week, Apple debuted a new iPad Pro. It has a little more power than the previous model and a keyboard with a trackpad. Neat. But its most consequential upgrade is the one that will likely get the least use, at least on a tablet: a lidar scanner.

If you’ve heard of lidar, it’s likely because of self-driving cars. It’s a useful technology in that context because it can build a 3D map of the sensor’s surroundings; it uses pulses of light to gauge distances and locations, much as radar uses radio signals. In an iPad Pro, that depth sensing will be put in the service of augmented reality. But this isn’t really about the iPad Pro. Apple put a lidar scanner in a tablet, almost certainly, to prepare for the day it puts one in a pair of AR glasses.
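The arithmetic behind those distance measurements is simple enough to sketch: the sensor emits a pulse of light, clocks how long the reflection takes to come back, and multiplies by the speed of light, halving the result because the pulse travels out and back. A minimal illustration of that time-of-flight principle, in Swift (this is not Apple's implementation, and the timing value is invented for the example):

```swift
import Foundation

/// Speed of light in meters per second.
let speedOfLight = 299_792_458.0

/// Convert a measured round-trip time (in seconds) into a distance (in meters).
/// The pulse travels to the object and back, so we halve the total path length.
func distanceFromRoundTrip(seconds: Double) -> Double {
    return speedOfLight * seconds / 2.0
}

// A reflection arriving roughly 33 nanoseconds after the pulse left
// corresponds to an object about 5 meters away.
let distance = distanceFromRoundTrip(seconds: 33e-9)
print(String(format: "%.2f meters", distance)) // ≈ 4.95 meters
```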

Speculation is cheap. But at this point Apple’s AR ambitions are hardly speculative; a brisk stroll through the innards of the company’s iOS 13.1 and Xcode 11 last fall turned up references to a set of smart glasses. A patent the company was granted last summer for a headset-based “mixed reality system” specifically extols lidar as essential to the experience. In November, tech news site The Information reported that Apple executives had targeted a 2022 release for an AR headset, with AR glasses to follow a year later.

“I fully expect each of Apple's emerging AR technologies to be dry runs for whatever future headset they're building,” says developer Steven Troughton-Smith, one of the first to spot the headset bread crumbs in iOS 13.1. “While they have interesting uses on iPhone and iPad, they'll really start to make sense when paired with stereo head-mounted AR.”

Even Apple seems pressed to come up with a compelling argument for lidar in the iPad Pro. While the company’s press release ticks off all the perks (speed, accuracy, motion capture), it mainly highlights the Measure app as a beneficiary. It has a Ruler View now. Other apps like Ikea Place and Hot Lava will use lidar to transform your rooms with Swedish commodity design and pulsing hot magma, respectively.

Yes, lidar is well suited for that kind of work. “There are a lot of benefits for AR applications. You need to know where the physical objects are in order to place virtual objects on top,” says Gordon Wetzstein, who heads up Stanford University’s Computational Imaging Lab. “It helps you to understand the environment around you, and understand where’s the floor, where’s the table, where’s the chair, where do I put my Pokémon on these visual objects. That’s something that really requires precise depth definition from the mobile device, ideally with low power and low latency.”
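For developers, that depth data surfaces through ARKit’s scene-reconstruction and plane-detection features rather than as raw lidar pulses. Here is a rough sketch of how an app might drop a virtual object onto a lidar-mapped surface, assuming a RealityKit ARView and ARKit 3.5 on iPadOS 13.4; the class name, tap point, and box size are illustrative, not anything Apple ships:

```swift
import ARKit
import RealityKit
import UIKit

final class PlacementViewController: UIViewController {
    private let arView = ARView(frame: .zero)

    override func loadView() {
        view = arView
    }

    override func viewDidLoad() {
        super.viewDidLoad()

        // Track the world and detect horizontal surfaces; enable the
        // lidar-backed mesh reconstruction only on hardware that supports it.
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal]
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
            config.sceneReconstruction = .mesh
        }
        arView.session.run(config)
    }

    /// Place a small box wherever a screen tap hits a detected horizontal surface.
    func placeObject(at screenPoint: CGPoint) {
        // Raycast from the tap point against estimated planes in the scanned scene.
        guard let result = arView.raycast(from: screenPoint,
                                          allowing: .estimatedPlane,
                                          alignment: .horizontal).first else { return }

        // Anchor a 10 cm box at the hit location in world space.
        let anchor = AnchorEntity(world: result.worldTransform)
        let box = ModelEntity(mesh: .generateBox(size: 0.1))
        anchor.addChild(box)
        arView.scene.addAnchor(anchor)
    }
}
```

The point of the depth map is in that raycast: because the device already knows where the floor and furniture are, the hit test lands on real geometry quickly and with low latency, which is exactly what Wetzstein describes.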