This is just an educated guess. I readily admit that I could be very wrong about this. This is just dot-connecting from publicly available information.

This is what I suspect and why.

"...we would be able to sell the results to anyone."

Smart homes

And with the 10-meter LiDAR, you can see how the number of things that would have to be recognized increases for the device to be able to send a message that says it's your child walking towards an open door versus your dog running through a dog door. Or maybe it's you walking down the hall past the bookcase, so don't turn the lights on there, versus your wife walking towards the bookcase to get a book, so do turn the lights on there. Those examples illustrate how the language, or perhaps the vocabulary, of the device would increase, and then within the automotive space it would increase again.
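To make that "vocabulary" idea concrete, here is a minimal sketch of the kind of classified message such a sensor could emit instead of raw point data. Everything in it (the SceneEvent schema, the field names, the route function) is a hypothetical illustration of the concept, not any actual product API:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Subject(Enum):
    CHILD = auto()
    ADULT = auto()
    PET = auto()

@dataclass
class SceneEvent:
    """Compact, already-classified message emitted at the sensor (hypothetical schema)."""
    subject: Subject
    action: str    # e.g. "walking", "running"
    location: str  # e.g. "hallway", "open_door"

def route(event: SceneEvent) -> str:
    """Decide what the home should do, given a classified event rather than raw points."""
    if event.subject is Subject.CHILD and event.location == "open_door":
        return "alert: child approaching an open door"
    if event.subject is Subject.PET:
        return "ignore: pet movement"
    return "no action"

print(route(SceneEvent(Subject.CHILD, "walking", "open_door")))
# -> alert: child approaching an open door
```

The point of the sketch is that the downstream system only ever sees a few labeled fields, so "a bigger vocabulary" just means more subjects, actions, and locations the sensor can name.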

So that as you shave in the morning, not only do you listen to the news, you see it displayed on the washroom wall, and that becomes a little bit more of a meaningful experience. As you walk down the hall towards the kitchen, our sensing device knows it's you that's walking down the hall. It adjusts the coffee and turns the lights on appropriately. And then there's an interactive display that's invisible, but when you call it up, it comes out as an Alexa-type device or something of that nature that lets you interact with it through its sensing capabilities and gesture recognition, and then disappears when it's not required. So we really see this as a suite of solutions that helps AI platforms with their user interface.
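As a toy illustration of that "it knows it's you" flow, a per-person, per-zone rule table might look like the sketch below. All names and actions are hypothetical, invented for illustration, not any real smart-home API:

```python
# Hypothetical per-person, per-zone rules keyed on who the sensor
# recognized and where they are headed; names are illustrative only.
RULES = {
    ("you", "hallway"):     ["lights_on:hallway", "coffee:start"],
    ("you", "bookcase"):    [],                      # just walking past
    ("spouse", "bookcase"): ["lights_on:bookcase"],  # going for a book
}

def on_person_detected(person: str, zone: str) -> list[str]:
    """Return the actions the home should take for this sighting."""
    return RULES.get((person, zone), [])

print(on_person_detected("you", "hallway"))
# -> ['lights_on:hallway', 'coffee:start']
```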

So, look at what is happening with these companies:

It is. So the capabilities I'm describing in this perceptive element exist within the context of the 1-meter to 1.5-meter interactive display, the 10-meter LiDAR and the 30-meter LiDAR. Perhaps the best way to think of these, Henry, is to think of them as having different levels of vocabulary. The things the 1-meter LiDAR will have to recognize will be a relatively small number of things: gestures, point, touch, compression, squeezing the picture, flipping the page. Think of our display engine embedded in your voice-only device.

There are solutions out there today that do 3D scanning, for facial recognition as an example. They require high compute energy and use approximately 30,000 points to do that calculation. Our range of solutions will provide between 5 million and 20 million points per second of resolution in the 10-meter space. So the density of the information we have at the sensor allows us to produce simple messaging analytics, or messaging content, that enables users to do so much more with the device than simply plugging them with this plethora of data.

It is almost diametrically opposed to the way most entities are solving sensing applications today. Almost everybody is trying diligently to get more information from the sensor and pass it down the pipe to a centralized processor that does the calculation and figures out what's going on. We have so much information at the sensor that we have the luxury of sending messaging, which makes it much easier for the entire system to be responsive. And it would be a shame not to capture that.
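To see why that on-sensor messaging approach matters, here is a back-of-the-envelope comparison sketched in Python. The byte sizes and message rate are my own assumptions for illustration; only the 20-million-points-per-second figure comes from the quote above:

```python
# Raw point-cloud streaming vs. on-sensor messaging, back of the envelope.
# Byte sizes below are illustrative assumptions, not figures from the call.
POINTS_PER_SEC = 20_000_000   # upper end of the range cited in the quote
BYTES_PER_POINT = 16          # assume x, y, z, intensity as 4-byte floats

raw_bps = POINTS_PER_SEC * BYTES_PER_POINT
print(f"raw stream: {raw_bps / 1e6:.0f} MB/s")   # ~320 MB/s

MSGS_PER_SEC = 10             # assumed: a few classified events per second
BYTES_PER_MSG = 100           # assumed: one short structured message

msg_bps = MSGS_PER_SEC * BYTES_PER_MSG
print(f"messages:   {msg_bps} B/s")              # 1000 B/s
print(f"reduction:  ~{raw_bps // msg_bps:,}x")   # ~320,000x
```

Under these assumptions, classifying at the sensor and sending short messages cuts the data the rest of the system must move and process by several orders of magnitude, which is the "diametrically opposed" architecture the quote describes.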