Magic Leap (ML) finally revealed their first device and people are ecstatic. But what have we really learned from ML’s announcement this week? There is a lot of buzz going on right now, so I thought it was time for a more educated analysis.

Most obviously (and as known from leaks) ML decided to go with a split design for their first product, such that many components - most importantly battery and compute - sit in a belt unit, where weight, size and heat are less problematic than on your head. That is a good approach, since the technology to make a comfortable, lightweight and powerful head-only device simply isn't there yet. Microsoft did an impressive job squeezing everything into the head unit of Hololens, but the result is neither light nor particularly comfortable.

So most of what generates heat sits in the belt unit, but with so many cameras and two high-resolution, high-frame-rate displays there is a lot of data that needs to be sent around. Even with a split design, that uses a lot of power on the head unit, and hence I wouldn't be surprised if the head unit still warmed up noticeably. On the other hand, the head unit seems to be quite large and should therefore dissipate heat well. With most of the heavy components in the belt unit, the head unit can be quite lightweight. E.g. the head unit of our DAQRI SmartGlasses (DSG) weighs only about 300g - roughly half the weight of a Hololens. It is reasonable to expect the ML One to be in a similar range.
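To get a feel for how much data would cross such a cable, here is a back-of-envelope estimate of raw video bandwidth. All concrete numbers (resolutions, frame rates, bit depths) are my own assumptions for illustration, not known ML One specs:

```python
def data_rate_gbps(width: int, height: int, fps: int, bits_per_pixel: int) -> float:
    """Raw (uncompressed) video data rate in gigabits per second."""
    return width * height * fps * bits_per_pixel / 1e9

# Hypothetical numbers, purely for illustration:
displays = 2 * data_rate_gbps(1280, 960, 120, 24)  # two high-frame-rate color displays
cameras = 4 * data_rate_gbps(640, 480, 60, 8)      # four grayscale tracking cameras
print(f"Displays: {displays:.1f} Gbit/s, cameras: {cameras:.1f} Gbit/s")
```

Even with these modest assumptions the display stream alone lands in the multi-Gbit/s range, which is why driving a split design is non-trivial in terms of both cabling and power.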

On the head piece ML went with a headband design, which again I agree is a good approach. A glasses design (like ODG’s R7) usually puts a lot of weight on the nose, which quickly becomes quite uncomfortable. A headband on the other hand distributes the weight well around the head making it pretty comfortable to wear over a long period of time.

There has been and still is a lot of speculation about the field of view (FOV) of ML’s device, but in fact this is one of the few areas where we now know at least some details. In his Rolling Stone article Brian Crecente states that he tried to estimate the FOV and came up with the size of a “VHS cassette with your arms half extended”. That points towards a horizontal FOV of ~30°, which is similar to Hololens, but much less than Meta’s and a bit less than ours. People rightfully complain about low FOV, as it is today one of the main limiting factors of optical see-through AR. Unfortunately, it is also one of the hardest areas in which to make significant progress.
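As a sanity check, an apparent-size description like Crecente's can be converted into a FOV estimate with basic trigonometry. Assuming a VHS cassette is about 18.7 cm wide and taking "arms half extended" as roughly 35 cm viewing distance (both numbers are my assumptions):

```python
import math

def angular_size_deg(width_m: float, distance_m: float) -> float:
    """Apparent angular size (in degrees) of an object of the given
    width seen face-on at the given distance."""
    return math.degrees(2 * math.atan(width_m / (2 * distance_m)))

# Assumed: VHS cassette width ~0.187 m, "arms half extended" ~0.35 m.
fov = angular_size_deg(0.187, 0.35)
print(f"Estimated horizontal FOV: {fov:.0f} deg")  # roughly 30 deg
```

The result is only as good as the assumed distance, but it shows how a "VHS cassette at half arm's length" lands near the ~30° figure.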

Display technology seems to have always been one of ML’s core strengths. ML likes to quote very broad concepts such as “light fields”, but it is hard to take anything away from that without personally trying it. At the very least it seems their displays are able to render multiple focus planes simultaneously, which would of course be a welcome step up over existing AR devices.

Looking at the published pictures, the headpiece appears to have at least 7 imaging sensors. The centerpiece could be a ToF-based depth camera (illuminator + imager openings). Of the 4 forward-facing cameras I assume 2 are tracking cameras, while the other 2 might be RGB cameras. Finally, there are those 2 awkwardly placed wide-FOV cameras on the side, which could either be additional tracking cameras (Hololens also has 4 tracking cameras, but with much lower FOV) or be used for Computer Vision (CV) purposes such as tracking the handheld controller or estimating the wearer’s body posture, which can be helpful for HCI. Processing that much sensor data is expensive, so I assume ML uses a semi-custom compute unit for CV, such as a DSP. It is also rumored that they built their own ASICs for that purpose. Microsoft successfully went this route with their CV-optimized DSP (which they call the Holographic Processing Unit).

There has also been quite some debate about whether the posted images are renderings or real photos, since one cannot see the user’s eyes in the pictures. Indeed, that is a bit awkward, since human eyes are a source of key visual cues - especially on a consumer-oriented device. It would have been easy to fix the eye visibility in renderings, and I am certain ML wouldn't make such an obvious mistake. Hence, I tend to believe that the optical stack indeed prevents seeing the wearer’s eyes - which is of course problematic. One reason for having such dark lenses would be to compensate for limited display brightness (by attenuating the real world), but we will not know until a wider audience can report on these glasses.

In their reveal this week ML highlighted only consumer-oriented use cases, similar to what Microsoft did in their Hololens reveal at their Build conference in 2016. Consumers are an inherently difficult target audience: they expect very low cost and perfect usability, both of which are extremely difficult to achieve in AR today. Not even in the VR space - where the technology is comparatively simpler - do people seem satisfied today. It will be interesting to see if ML’s device will actually take off with consumers.

Then there is aesthetics. Naturally, opinions vary widely on this topic and I will not add mine here. However, I think it is safe to assume that by the end of 2018 a design such as the ML One’s will be perceived as bulky.

This brings us to the final topic of speculation: sales price and release date. People expect a sales price of $1,500, but I find that highly unlikely. Building a device such as the one presented by ML is very expensive today: cameras, displays, compute, battery, manufacturing and more simply don’t allow for that. Unlike mobile phones, there is not yet a large enough market to produce the required components in large volumes and hence at low cost. ML seems to aim at a shipping date in 2018, but that is a very large window. Given that they are not yet comfortable revealing any specific details on their device, my guess is late 2018.

Overall, it’s good that AR is getting so much interest and progress is happening on multiple fronts. 2018 is going to be an interesting year.

Disclaimer: I have no inside knowledge of Magic Leap, so all of the above is based on my expertise and publicly available data.