“Can I make a full-field-of-view AR or VR display by directly shining lasers into my eyes?”

No.

Well, technically, you can, but not in the way you probably imagine if you asked that question. What you can’t do is mount some tiny laser emitter somewhere out of view, have it shine a laser directly into your pupil, and expect to get a virtual image covering your entire field of view (see Figure 1). Light, and your eyes, don’t work that way.

Our eyes are meant to capture images of the real world, and to do that, they have to obey one fundamental rule:

Rule 1: All light that originates from a single point in 3D space and enters the eye has to end up at the same point on the retina.

If this rule isn’t followed, there is no image on the retina at all. If it is even slightly violated, say by light from a point in space forming a tiny disk on the retina instead of a point, you get a blurry image. That’s exactly what happens when your eyes are not focused properly, or when you are not wearing your prescription glasses.
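To put a rough number on that blur, here is a back-of-the-envelope sketch (my own illustration, not from the post) using the thin-lens equation with textbook eye parameters; the function name and the specific values are assumptions for illustration only.

```python
# Sketch: geometric size of the blur disk on the retina when Rule 1 is
# violated, i.e. when the eye is focused at one distance but the point
# light source sits at another. Rough textbook eye values are assumed.

def blur_spot_mm(object_dist_mm, focus_dist_mm,
                 eye_length_mm=24.0, pupil_mm=4.0):
    """Diameter (mm) of the blur disk a point at object_dist_mm casts
    on the retina when the eye is focused at focus_dist_mm."""
    # Effective focal length that puts focus_dist_mm in focus on the retina:
    f = 1.0 / (1.0 / focus_dist_mm + 1.0 / eye_length_mm)
    # Where a point at object_dist_mm actually comes to focus:
    image_dist = 1.0 / (1.0 / f - 1.0 / object_dist_mm)
    # Similar triangles: blur circle scales with the focus error and pupil:
    return pupil_mm * abs(image_dist - eye_length_mm) / image_dist

print(blur_spot_mm(500.0, 500.0))    # in focus: blur disk is a point (0.0)
print(blur_spot_mm(250.0, 2000.0))   # badly defocused: a visibly large disk
```

The in-focus case collapses to a point, which is Rule 1 satisfied exactly; any mismatch between the two distances spreads the point into a disk, which is the blur described above.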

This means that if you have a tiny laser emitter somewhere in front of your face and let it shoot directly into your eye, light from the laser will end up in only a single spot on your retina, and nowhere else. If you put a pinprick of light in front of your eyes, you see a pinprick of light, no matter whether that light comes from a laser or not (see Figure 2).

This fundamental rule of optics and vision leads to a corollary:

Rule 2: To create an image covering H°×V° of your field of view, you need to have a direct area light source, or at least one intermediate optical element (a screen, a mirror, a prism, a lens, a waveguide, etc.), covering at least those H°×V°.

As an aside, that’s the same reason why real holographic images are not free-standing in the sense many people imagine. You can only see them inside the field of view covered by the holographic plate that creates them (I drew a nifty diagram of that a couple of years ago: compare and contrast Figures 1 and 4 in this old post about the Holovision Kickstarter project).

Given these two rules, how then can you use lasers (or other point light sources) to create virtual images? By following Rule 2 and placing an optical element between the light source and your eye. For a simplified but to-scale diagram of such a setup, see Figure 3. That entire system (including the mirror or prism or similar element) is called a virtual retinal display.
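The key trick in such a setup is that the optical element converts the position of the scanned point source into a viewing direction. Assuming an idealized thin collimating lens with the scanning source in its focal plane (my simplification, not a description of any specific product), the mapping looks like this:

```python
import math

def field_angle_deg(scan_offset_mm, focal_length_mm):
    """Field angle (degrees) of the collimated beam produced when a
    point source sits scan_offset_mm off-axis in the focal plane of an
    ideal thin lens with the given focal length."""
    # A source in the focal plane emerges as a parallel bundle whose
    # direction is set by the offset-to-focal-length ratio.
    return math.degrees(math.atan2(scan_offset_mm, focal_length_mm))

# A +/-10 mm scan across the focal plane of an assumed 20 mm lens:
total_fov = 2.0 * field_angle_deg(10.0, 20.0)
print(total_fov)  # roughly 53 degrees of horizontal field of view
```

Note that the collimating element itself still has to be at least as wide as the scan range plus the pupil, so Rule 2 is not circumvented, merely satisfied by the intermediate optics.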

There are many examples of real-world displays based on this principle of small image source and field-of-view-expanding optical element. All see-through AR headsets and some opaque (VR) headsets use it. Microsoft’s HoloLens, for example, is assumed to use an LCoS microdisplay and a holographic waveguide to inject virtual objects into the real world. CastAR uses retro-reflective mats. Microdisplay-based VR HMDs use complex lens systems.

No confirmed details are known about Magic Leap’s upcoming AR headset, but Magic Leap’s patents describe an oscillating optical fiber that can emit light from a very small spot (< 1mm) over a wide angle (claimed as 120°), and mention a large number of different waveguide technologies or free-form prisms to then bend the emitted light into the viewer’s eyes over some yet-to-be-determined field of view. I’ll say it again: the oscillating fiber projector by itself is not sufficient to create an image; you also need some intermediate optical element. (And, assuming that they in fact do have a fiber that can emit light throughout a 120° cone, that in no way means their display has a field of view of 120°. Those two aspects are entirely unrelated.)

“But wait,” you say, “what about those free-air volumetric displays, like the one in the video below? They are completely free-standing, and don’t need an optical element between the laser and the eye!”

I’m glad you asked. The loophole is that in the case of free-air displays, the laser is not the light source. The air itself is the light source, specifically an area light source as required by Rule 2. The downside is that in order to turn air into a light source, you have to super-heat air molecules and convert them to plasma, which in turn emits light that viewers can observe directly.

If super-heating tiny pockets of air to serve as pixels sounds slightly too dangerous (or too loud) to employ in your living room or in a near-eye context, then that’s because it is.