Magic Leap Review Part 1 – The Terrible View Through Diffraction Gratings

I have been studying the Magic Leap One (ML1) for a few weeks and have taken over 1,000 photos and made many measurements. I’m bursting with things I want to talk about, so I am planning to release a series of articles. For this first article, I am primarily going to cover the view of the real world through the ML1.

The ML1 uses Diffraction Waveguides, which in turn mean you must look at the real world through a diffraction grating. No amount of money and effort (nor “Moore’s Law”) can change the laws of physics. Looking through a diffraction grating is going to distort the view of the real world, as this article will demonstrate. One must ask, “What good is Augmented Reality if it is ruining the view of the real world?”

As I started to write up my findings, I learned that Magic Leap was making a serious effort to market diffractive waveguides to the U.S. military for use by soldiers in the field (see disclaimer at the end). Until I heard about this effort, I had no idea that Magic Leap would have the audacity to push technology so ill-suited for outdoor military use.

A Little on Receiving and Setting Up My ML1

Before I go into the issues with the ML1, let me say a few good things about the ML1. First, I had a problem with my American Express Card denying the transaction due to suspected fraud (maybe someone at Amex had been reading my blog, just kidding). Magic Leap’s customer service via email helped me fix this issue.

Magic Leap is using the delivery and set-up service Enjoy. The service technician was helpful and spent about an hour getting me set up. But as others have pointed out, the use of a delivery and setup service is also an indication that the ML1 is not nearly ready to be a consumer product. Also, because it requires custom fitting and you can’t wear regular glasses with it (unlike Hololens), sharing the headset with others is highly problematic.

Unlike other reports, I was not given the shoulder strap carrier (what some are calling “The Magic Leap purse”). I don’t know if this was an availability issue or if the Enjoy representative forgot to give it to me.

The color balance in the center of the view looks good, but the colors shift as you move away from the center of the image (more on this in an upcoming article). But even compared to a cheap LCD monitor, the image quality regarding uniformity, sharpness/resolution, and contrast is not very good.

The demos, particularly the Construct applications, were fun to play with and captivating. Seeing objects moving around in a 3-D world has a visceral impact, as it did with Hololens, but there is more to do out of the box with the ML1. With the FOV being wider than the Hololens’, it makes for a better first experience.

The hand controller with the ML1 is much better than the use of gestures with Hololens, which quickly tires one’s arm. Still, it is a pain to do even simple tasks with both the ML1 controller and Hololens’ gesture inputs. There is a lack of consistency in the way you control the ML1 from application to application, particularly when it comes to closing an application or window (it seems like every application has a different closing method).

Some of the Major Problems

I don’t have time right now to go into all the details, but I would like to briefly list some of the issues I have found to date:

  • It blocks 85% of the real world light (as measured and as predicted in my analysis of the Shaq video back in February 2018)
    • Blocking so much light helps to hide or reduce other optical problems
    • Hololens blocks about 60% and thus lets through 2.67X (40%/15%) more light than the ML1 (see the arithmetic sketch after this list)
  • The ML1 emits only ~210 cd/m² (commonly referred to as “nits”) compared to Hololens, which outputs ~320 nits.
    • Even with all the computer hardware moved to the “Lightpack,” the headset gets very hot, so much so that the Enjoy agent advised me to be careful about having the hottest parts touch my head
  • Looking through a diffraction grating blurs and distorts the real world (the subject of this article)
  • The diffractive waveguides soften/blur the virtual image (the image does not make it through the waveguide intact)
    • The effective resolution of the ML1 is about half that of the 1280×960 pixels Magic Leap has claimed
    • The ML1 displayed image is noticeably blurrier than the Hololens’ (a subject for the next post)
  • Diffraction waveguide inherent color issues
    • Color ripples, described by Ryan Smith, CEO of Invrse, as “like looking through soap bubbles.”
    • Colors shifting across the FOV, particularly in about the outer 15% of the left and right sides of the FOV
  • The dual Focus Planes for vergence-accommodation conflict (VAC) are ridiculous (this will have to be a whole article)
    • There are only two, they show only one at a time, and the user sees it jump when it changes
    • The “near focus plane” kicks in from the clipping point of 14.5 inches to about 30 inches and is focused at ~20 inches (50.8 cm)
    • The “far focus plane” covers anything beyond 30 inches (there is some hysteresis as you move in and out) and appears to focus at (only) ~60 inches (1.52 m)
    • Magic Leap appears to have doubled down on the psychobabble in their blog article “Spatial Computing: An Overview for Our Techie Friends” — nauseating, in more ways than one, IMO.
    • Magic Leap did add one other optical trick that they call “sub-pupils,” but it is not clear how well it works (for reference, sub-pupil gratings are described in Magic Leap applications US20160327789 and US20180052277 and seen in Step 10 of the iFixit Teardown)
  • Problem with Binocular Overlap – With an image that fills the FOV, there are dark bands at the left and right side (the size of the bands depends on the eye’s vergence)
    • Eliminating this problem would mean reducing the FOV
  • It uses field sequential color (FSC) but with at least twice the sequence rate of Hololens, so the FSC breakup is much less noticeable but still an issue for some applications.
  • Very little eye relief, and it requires the separate purchase of special corrective lenses
  • The Simultaneous Localization and Mapping (SLAM) technology is just so-so. It requires just the right amount of lighting and is blind to anything dark in color. The mapping is not particularly accurate and seems to drift. I’m told by others who have much more experience in this area that it is inferior to the Hololens’ SLAM.
  • The cabling is a definite snag hazard and in your way. One of the biggest dangers is that you will take off the headset, place it on the table, and forget that the computer unit is still attached to your pocket or shoulder strap; then, as you turn around or walk away, you will drag the headset off the table.
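
To make the light-blocking numbers above concrete, here is a minimal arithmetic sketch in Python. The 15% and 40% transmission figures are from my measurements; the rest is simple math.

```python
import math

ml1_through = 0.15       # ML1 passes ~15% of real-world light (blocks ~85%)
hololens_through = 0.40  # Hololens passes ~40% (blocks ~60%)

# How much more real-world light the Hololens lets through:
print(f"Hololens/ML1 ratio: {hololens_through / ml1_through:.2f}x")  # ~2.67x

# The same numbers in photographic stops (each stop halves the light):
print(f"ML1 tint:      {math.log2(1 / ml1_through):.2f} stops")       # ~2.74
print(f"Hololens tint: {math.log2(1 / hololens_through):.2f} stops")  # ~1.32
```

The roughly 1.4-stop difference between the two is also the exposure adjustment I used for the comparison photos later in this article.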

Dim and Fuzzy View of the Real World

At the beginning of my evaluation, I took a picture looking through the ML1 with the unit turned off. The first thing that is obvious is that the ML1 blocks most of the real-world light and blocks much of a person’s peripheral vision. Back in February 2018, based on the video with Shaq, I was able to estimate that the ML1 was blocking about 85% of the light. This estimate agrees with my instrument measurements, which show it blocks about 83% of the light at the top and 86% at the bottom of the waveguide. You will also notice in the picture on the left the effects of looking through a diffraction grating.

By way of comparison, Microsoft’s Hololens blocks ~60% of the light, which is itself significant. Still, the Hololens lets through ~2.67 times more real-world light than the ML1.

Quick Background on Diffraction Waveguides

Diffraction Waveguides have been discussed on this blog many times, including articles on Hololens and Magic Leap. A series of lines spaced at close to the wavelength of light, a diffraction grating, will bend light like a prism. But unlike a prism, a grating bends the light into a series of “orders.” With a diffractive waveguide, only the light from one of these orders is used; the rest of the light is not only wasted, but it can also reduce the contrast of the overall system as it bounces around in the optics. The angles and spacing of the orders into which a grating bends the light are a function of the wavelength (color) of the light, the grating spacing, and the angle at which the light hits the grating.

From: Magic Leap application US20180052277
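
For readers who want to play with the numbers, below is a minimal sketch of the grating equation just described. The 1,000 nm pitch is purely illustrative, not a measured ML1 value.

```python
import math

def order_angles(pitch_nm, wavelength_nm, incident_deg=0.0):
    """Grating equation: sin(theta_m) = sin(theta_i) + m * wavelength / pitch.
    Returns {order m: exit angle in degrees} for the orders that propagate."""
    angles = {}
    for m in range(-2, 3):
        s = math.sin(math.radians(incident_deg)) + m * wavelength_nm / pitch_nm
        if abs(s) <= 1.0:  # |sin| > 1 means that order is cut off (evanescent)
            angles[m] = round(math.degrees(math.asin(s)), 1)
    return angles

# The same grating bends each color by a different amount, producing the
# prism-like rainbow effect around bright lights:
for wl in (460, 530, 630):  # blue, green, red wavelengths in nm
    print(f"{wl} nm: {order_angles(pitch_nm=1000, wavelength_nm=wl)}")
```

Running this puts the first orders at about ±27 degrees for blue, ±32 for green, and ±39 for red: the longer the wavelength, the more it bends, which is why white light sources smear into rainbows.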

Key to today’s topic is that with a diffractive waveguide, you are looking at the real world through a diffraction grating, known as the “exit grating and pupil expander.” In a diffractive waveguide, the light travels through the glass by total internal reflection (TIR) at an angle of about 45 degrees (the exact angle depends on many factors). The exit grating is designed to bend the light from ~45 degrees to perpendicular so it will exit the waveguide’s glass toward your eye.
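
As a rough back-of-envelope estimate (my assumptions, not Magic Leap’s numbers: glass with index n ≈ 1.5, the ~45-degree TIR angle mentioned above, green light at λ ≈ 530 nm, and first-order diffraction), the required exit-grating pitch d works out to roughly:

$$
d \;=\; \frac{\lambda}{n\,\sin\theta_{\mathrm{TIR}}} \;\approx\; \frac{530\,\mathrm{nm}}{1.5 \times \sin 45^\circ} \;\approx\; 500\,\mathrm{nm}
$$

In other words, the grating lines end up spaced comparably to the wavelength of visible light itself, which is exactly the regime in which real-world light passing through the grating is also strongly diffracted.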

But light from the real world passes through the diffraction grating from the opposite side with all manner of wavelengths and directions, causing this light to be bent at various angles. The exit grating will both diffract/defocus real-world light that would otherwise have gone straight into the eye and cause prism-like color artifacts. In the case of the ML1 and Hololens, the exit gratings run horizontally and thus are susceptible to “capturing” light from above, such as overhead lights.
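
A short sketch shows why some of that overhead light gets “captured.” It uses the hypothetical ~500 nm pitch from the estimate above and simplifies the overhead light to normal incidence (again, illustrative numbers, not measurements):

```python
import math

n_glass, pitch_nm = 1.5, 500.0                       # assumed, not measured
tir_critical = math.degrees(math.asin(1 / n_glass))  # ~41.8 degrees

for wl in (460, 530, 630):  # blue, green, red wavelengths in nm
    # First-order angle inside the glass for light arriving from the world
    # side at normal incidence: n * sin(theta) = wavelength / pitch
    theta = math.degrees(math.asin(wl / (n_glass * pitch_nm)))
    trapped = theta > tir_critical  # beyond the critical angle -> TIR-trapped
    print(f"{wl} nm -> {theta:.1f} deg in glass; TIR-trapped: {trapped}")
```

With these illustrative numbers, the green and red first orders land beyond the critical angle, get trapped in the waveguide by total internal reflection, and eventually escape elsewhere in the view, consistent with the colored “flares” shown in the pictures below.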

Worse yet for the ML1, you are looking through six (6) diffraction gratings. While each grating layer is designed to bend light of a specific color, the wavelengths of visible light are close enough together that the gratings affect all visible light.

Real World Viewed Through ML1 and Hololens Exit Gratings

The pictures below were taken through an ML1 (left) and Hololens (right) “exit gratings” with the units off. I adjusted the exposure to compensate for the ML1 letting through ~2.7x less light. In the two pictures below, you can see how the diffractive waveguides break the light from the lamp above into a rainbow of colors in the middle of the pictures. Additionally, in the ML1 picture on the left, you should notice the fuzzy, flame-like double image just above the light fixture’s bulbs. The flare is caused by light from the lamp going through the six layers of diffraction gratings roughly perpendicular to the waveguide. While this effect is barely noticeable with the Hololens (it is still there), it is very pronounced with the ML1.
The light-capture effect with the ML1 is less noticeable than with the Hololens when the light source is within the FOV (see above), but the ML1 has a much bigger problem than the Hololens when the light source is above the FOV, as demonstrated in the two pictures below. With the ML1, I regularly see “flares” of colored light in the bottom of the view when there is an overhead light source. When viewed through both eyes (as opposed to one lens in the pictures), you will see double the “flares.” BTW, in the pictures below, the Hololens headset is visible in the picture taken through the ML1 and vice versa.

Remember also that Magic Leap is blocking 85% and Hololens 60% of the real world’s light. These diffraction effects would be much brighter if they didn’t have the dark lenses.

As another point of reference, I took a picture of part of a test pattern on an iPhone 6s Plus (on the left). Once again, I adjusted the exposure to compensate for the ML1 blocking 85% of the light. You should notice the blurry scattering of light that softens the image.

Mixing Real and Virtual Images

On the right is a picture of the Dr. G’s Invaders teaser included with the ML1, showing a light in the background. Not only does the real-world light have a fuzzy double image, but so too does the virtual image. The text is decidedly “soft”/blurry, which I will cover in more detail in a future article. Having the two focus planes, with double the diffraction gratings, contributes to the problems with both the real-world and virtual-image views. The image quality is degraded by passing through the many diffraction gratings.

Additionally, you might notice the colors in what should be solid white text. The colors change as you move your head and eyes, which Ryan Smith, CEO of Invrse, described as “like looking through soap bubbles.” There are other color issues, including chromatic aberrations (colors not aligning with each other).

Conclusion – You Don’t Want to See the Real World Through a Diffraction Grating

Simply put, you don’t want to look at the real world through a diffraction grating, but that is what you are required to do with diffractive waveguides. No amount of money or effort is going to change the physics of diffraction gratings.

The ML1 is noticeably worse than Hololens regarding blurring one’s view and causing color artifacts from light sources in the real world, but it is a contest between bad and worse. The adage “there is no right way to do the wrong thing” applies. At least part of what makes the ML1 worse is the support of two focus planes, which requires you to look through double the number of waveguides.

The diffraction grating “rainbow flare” effects and the darkening of the real world are the most obvious problems. But one should also consider that diffractive waveguides are very optically inefficient, with only a small percentage of the light making it to the eye. Thus, the ML1 headset gets very hot while producing only about 200 nits.

What good is it to augment the world when you are ruining the view of the real world?

Next Time – Images Captured from the ML1

In the next article in this series, I plan on delving deeper into image quality (or lack thereof) with the ML1.

Disclaimer

It was recently reported in Bloomberg News that Magic Leap is going after a $500M military contract for use with troops in the field (as opposed to training). I recently joined RAVN, which is developing technology that supports field-use military applications utilizing head-worn displays. Knowing that Magic Leap was using diffractive waveguides, with the many inherent physics issues of looking through a diffraction grating among other severe problems for military use (only some of which are included in this article), I never considered Magic Leap a direct competitor to RAVN; I welcome anyone to present evidence to the contrary.

I find it incredible (I’m struggling for a polite way to say this) that Magic Leap has a significant marketing effort pushing Magic Leap for military field (outdoor) use. I would never want to subject our troops to looking through diffraction gratings in the course of their duty. So even though I still consider it silly on a technical basis to consider them a competitor, they are certainly finding the same market attractive as a business.

Please also note that this blog reflects my personal opinions and not those of RAVN.

Acknowledgment

I would like to thank Ron Padzensky for reviewing and making corrections to this article.

Karl Guttag

43 Comments

  1. I don’t want to start the funeral just yet, as the amount of money they have raised will probably keep them kicking for years to come, but Magic Leap seems to be at a dead end as far as any sort of mass-market product goes.
    I’m struggling to find the lessons in this whole endeavor. Is it that you can turn a good idea into a lucrative venture with just the right amount of spin? Certainly they have been successful in that respect. It is almost admirable.
    Is it that all the good ideas and money can only take you to the current state of the art?
    Is it that every breathless endorsement of a particular tech should be taken with an avalanche of salt? Well, obviously that’s the case but if we don’t let a little of the magic in, it seems a bit of a grind.
    I appreciate your take, Karl. Correct me if I’m wrong, but your position seems to be to focus on what is possible and maybe just a little bit beyond. Is that a typical mindset for an engineer? Is there a place for hype?

    • Thanks,

      While Magic Leap has raised a lot of money, they are also burning through it rather quickly. They have a lot of employees spread out all over the world and are spending massive amounts on marketing as well as on their own optics manufacturing line.

      Every technology has its own potential “learning curve.” Some technologies will benefit more from money and talent being applied; with others, you are just pushing on a large rock. A big part of the equation is how close you are getting to the physical limits. Diffractive waveguides are bumping up against physics both on the pass-through side (the subject of this article) and in the ability to get an image through them.

      A big problem with optical see-through AR is that you have to consider both the reflection of the virtual image toward the eye and the effect on the real-world light coming from the other side. You just don’t want to be looking through a diffraction grating.

      There is a lot of fantastic science and engineering that went into Magic Leap, but they chose to go in an unsolvable direction with their optics.

  2. Another thorough and understandable write-up, Karl. Many thanks for putting the time in to write it. If I might ask a question: if diffraction gratings are not the way forward for AR, what are the alternatives?

    • The most obvious answer is some kind of semi-mirror combiner, either broad-spectrum, such as an aluminum-alloy-coated surface, or color-specific with dichroic coatings. Additionally, Lumus waveguides (which have multiple semi-mirrors) have much better resolution and color uniformity than diffractive waveguides, but they are not without their issues.

  3. Hi Karl,
    According to your analysis, is the diffractive approach (surface-relief gratings, holographic gratings) a dead end for AR solutions?
    As far as I know, there are many physical/optical limitations with gratings in AR (chromatic dispersion, low efficiency, etc.), but there seems to be no better way than grating-waveguide AR right now.
    What’s your opinion on that?
    Thanks a lot!

    • I don’t know how they will ever get around having the user look through a diffraction grating at the real world. With diffractive waveguides, you look through an exit grating, and if you have to look through a grating, it is going to do what gratings do. Lumus has much better color uniformity and resolution than diffractive gratings, but it seems to have other issues which may or may not be solvable; at least they are not fundamentally flawed as far as I know. The other techniques all involve some form of “semi-mirror” combiner, either with aluminum alloy (or other metal) coatings or dichroic (color-specific) mirrors. I don’t like using polarization in the combiner as it affects the user’s view of LCD-type displays in the real world.

  4. With the use of gratings being such a fundamental barrier to clarity/quality/enjoyment, why do you think Microsoft is sticking with them for the upcoming Hololens v2? Clearly, the ML1 has failed miserably in delivering anything new here, and the hype train is meeting a wall of reality, but I find it odd that Microsoft would persevere if they didn’t feel some engineering and iteration on the technique could overcome most of the shortcomings. Roll on the next instalment of this review!

    • Microsoft (and others) are desperately searching for the “next big thing” to get the market position that Apple has with the iPhone. This leads to massive amounts of money being spent just to buy a lottery ticket. Most scientists and engineers will end up following the money if the research is interesting. The engineers and scientists are not making a value judgment on whether the product will succeed; they just figure it is someone else’s job higher up to figure out if there is a market for what results.

      I have always looked at Hololens as a project that “escaped the lab” before it was ready. Magic Leap, IMO, is even more of a lab project with a fancy case around it (and a large cord sticking out).

      Consider: we have no idea whether the ML1 costs more or less than the Hololens. The price was set by marketing rather than by making a profit, as both efforts are losing hundreds of millions of dollars per year to ship about 25K units/year (Hololens’ average for the last two years). They could both be wrapping money around each unit. Whether the price is $2,000 or $4,000 is a rounding error in the losses.

      Diffractive waveguides get a lot of attention because they are relatively thin, but being thin is only one aspect of a multidimensional problem. I’m fond of saying that companies start out talking about having the image quality of an OLED TV in the size and shape of Oakley (or Ray-Ban) sunglasses and end up with Hololens.

      If someone had solved the problems of looking through a diffraction grating, they would have published a paper about it.

      • Hi Karl,
        I think for diffractive waveguides another important and irreplaceable merit besides thinness is 2D exit pupil expansion (EPE). Diffractive waveguides are the only mass-production-ready technology for 2D EPE: you can achieve very reliable 2D EPE using nanoimprint lithography (NIL). Lumus spent a long time improving the coating/gluing/polishing process for their 1D-EPE half-mirror glass waveguide, and the yield is still terrible. They have also recently been trying to make a 2D-EPE version of their half-mirror waveguide technology, but I would guess the process will be hellishly difficult if the 1D-EPE version is already this hard.

        2D EPE is vital for making glasses that people can wear casually (or that slide down the nose after being worn for some time) while still seeing the whole image without any loss of FOV. I’m not sure, but I have a feeling that the eyebox of both the Hololens and the ML1 is not big enough.

        I’m not aware of any NED architecture other than waveguides that can do 1D/2D EPE or that natively has a big enough eyebox. Please correct me if I’m wrong.

        P.S.: One other feature needed to make a consumer-level AR glasses product would be incorporating vision correction into the waveguide combiner itself. I’m from East Asia, so I know how important it is. There are fundamental issues with doing it, but it is really, really important…

      • I’m far from an expert on exit pupil expansion (EPE), but the need for it in waveguide-based near-eye displays (NEDs) is a direct result of the waveguide requiring collimated light, with a near-zero eyebox, to be injected.

        I fail to see, both in theory and with diffractive waveguides in practice, how they are ever going to support good angular resolution. Fundamentally, the grating spacings are too close to the pixel sizes. Normally, in a good optical system, you want to stay away from the diffraction limit, not deliberately operate inside it. Thus you are always going to have problems with resolution and diffraction artifacts when looking at the real world.

        Vision correction inside the waveguide would seem to be tremendously complex. There is always the issue of how you correct for both the virtual and real world images.

  5. Hi Karl, could you kindly share your technique for avoiding color artifacts when taking photos of a field-sequential-color (FSC) LCOS-generated image?

    • In my experience, there are two main factors in reducing field-sequential artifacts when taking still pictures.

      1. The way the camera sensor takes the picture. Years ago, I went to the store and tried out different cameras, and there was significant variation, particularly between “rolling shutter” and full-frame (global shutter) capture.

      2. Shoot at a slow shutter speed of 1/30th of a second or slower. This allows you to average over many frames (see the sketch below). Typically, I shoot at a very low ISO. I will turn the brightness down on the device and/or use a neutral density filter so I can shoot at a slow shutter speed if necessary. I prefer to shoot at a moderate f-number of f/4 to f/8, which is in the range of the human eye (typically about f/2 to f/8).
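
      A minimal sketch of why the slow shutter helps. The 360 Hz color-field rate below is a hypothetical placeholder, not a measured ML1 or Hololens value:

      ```python
      def fields_per_exposure(shutter_s, color_field_rate_hz=360.0):
          """How many sequential color fields a single exposure averages over
          (the field rate is an assumed value for illustration)."""
          return shutter_s * color_field_rate_hz

      for denom in (500, 60, 30, 8):
          n = fields_per_exposure(1 / denom)
          print(f"1/{denom} s -> ~{n:.1f} color fields averaged")
      ```

      At 1/500th of a second you capture less than one full color field, so a still photo shows color banding the eye would never see; at 1/30th or slower, you average a dozen or more fields and the colors blend back together.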

      For the pictures in this article, I used an Olympus OM-D E-M10 III (https://www.amazon.com/Olympus-camera-14-42mm-Camera-enabled/dp/B0751B835L/), which is a mirrorless 4/3rds camera. I bought this camera specifically because it gave me full control of the shutter speed, ISO, and aperture (and sometimes manual focus), and it was small enough that, when turned sideways, it is no wider than the distance from my eye to the side of my head, so it can fit into headsets with rigid headbands (such as the ML1) without disassembly; this is key to getting the camera lens centered on the display where the person’s eye would be. The 4/3rds camera is a bit smaller than the mirrorless Canon and Sony cameras, which led me to get it (it had to fit). The kit lens on the E-M10 III is 14 to 42mm, which turns out to be a very workable range with the unit turned in portrait mode. A cell phone camera generally lacks the control you want, although it will sometimes work. I noticed that one of the reviews used a GoPro camera (not sure which version), but I am going to try that out at some point.

  6. Thanks for another insightful post.

    As someone who doesn’t think highly of marketing speak and spin, I’m glad someone like you can tell the real story about Magic Leap’s product.

    However as someone who looks forward to an AR future, it’s disappointing to hear that large companies with almost unlimited funds are unable to wield advanced technologies to bring it about.

    There has to be a way, but how long will it be before it’s discovered?

    • Unfortunately, it is a harder problem than most people think. AR (see-through) makes the problem much harder, as you need some optical element, the combining element, that will direct the image to the eye without significantly damaging the view of the real world.

      • Indeed; so, do you think the next big step will be some kind of breakthrough in this area, or should we focus on something simpler like cameras paired with microdisplays?

  7. Dear Karl,
    Thank you for the excellent article. How do diffraction grating waveguides behave under sunlight on a street? Is it worse than in a room?

    • Thanks,

      To a first approximation, everything scales. If the sun is anywhere in the direction you are looking, you will see a bright, mostly blue streak across the FOV. If they didn’t block so much light, the diffraction effects would be about 6.7 times (1/0.15) brighter/worse.

      The diffraction gratings affect reflected light as well, softening the view. It is just that light sources make the problems more obvious.

  8. Thanks for the breakdown Karl. If you don’t think diffraction waveguides are going to cut it for AR, what in your mind is currently the most promising display technology for wide FOV AR in a wearable form-factor?

  9. Magic Leap has revealed itself to be ‘almost always wrong’ in its application of two fixed focal depths. Secondly, eye tracking is becoming more prominent.

    Given these facts, is it time to reconsider addressing VAC by means of a variable focus system, whose depth is dictated by the virtual object being gazed upon?

    Yes, it won’t be perfect as one may be gazing *at* a window versus *through* a window, but is this not a generally much more accurate optical experience over current offerings?

    • Magic Leap started with two concepts, “focus planes” and the “scanning fiber display,” both of which are dead ends.

      I think Magic Leap’s eye tracking may be crude, just looking at the pupils from one angle. Some are looking at patterning on the retina (some with IR) to see where the image is landing, but this is much harder; it is very hard for a camera to see into the eye through a transparent display.

      Oculus appears to be experimenting with something like what you are talking about, and I am sure many other researchers are as well. I have heard different opinions about whether you only need to focus where the eye is looking and then software-blur the rest. Apparently, the eye, with all its saccadic movement, can tell the difference between out-of-focus blurring and software blurring.

      I do think eye tracking is an important subject. You really want to cull/reduce the processing and image generation. The eye is only sharp in a very small part of the FOV, and you want to leverage that. Everyone in the field knows this, but doing something about it that works well is difficult. At least for head-worn displays, I think something like focus planes, and worse, true light fields, are a waste, as you are computing and generating huge amounts of image information that nobody will see. You might generate some “local light fields” based on eye tracking (sort of a hybrid approach), but I can’t see generating full light fields as being practical for decades.

  10. Dear Karl,
    I am learning the principles of the Magic Leap One and have read your blog about it. I cannot figure out two questions and would appreciate your comments. (Maybe not suited for this blog.)
    First, for the collimated light propagating in the waveguide by TIR, either the grating itself or some optical element after the exit grating slightly de-collimates the light so it acts like light rays coming from closer. Is there more specific information about that process, or some patents about it?
    Second, the transparency of the ML1 is terrible; about 85% of the light from the world is kept out. I wonder whether this is caused by the diffraction of the output grating (only the 0-order ray can transmit to the eye, and more waveguides lead to lower efficiency) or whether it is set artificially (maybe some coatings)?
    Thank you for your time!

    • The Magic Leap patents discuss the light being de-collimated to adjust the focus. This has to happen with the exit grating, which causes pupil expansion with any waveguide: you end up reducing collimation while expanding the pupil/eyebox. I don’t know exactly the mechanism by which they change the collimation, but anything that does not act as a pure mirror will de-collimate.

      The darkening is a combination of the light loss through the waveguides and extra darkening. Magic Leap has 6 layers of waveguides (RGB times 2). Each layer is likely (I have not measured it) losing over 5% of the light. When you multiply that out, the diffraction gratings are losing between 30% and 50% of the light (see the sketch below). The rest of the darkening is simply neutral-density darkening filters. They use the darkening filters so the virtual image doesn’t have to compete with the real world as much and doesn’t look so translucent. Part of the reason they used so much darkening is that the display is not very bright.
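
      To make the arithmetic concrete, here is a minimal sketch; the per-layer loss figures are my assumptions, not measurements:

      ```python
      layers = 6  # RGB waveguides times 2 focus planes

      for per_layer_loss in (0.05, 0.08, 0.11):
          through = (1 - per_layer_loss) ** layers
          print(f"{per_layer_loss:.0%}/layer -> {1 - through:.0%} total grating loss")

      # If ~85% of real-world light is blocked overall, whatever the gratings
      # don't lose, the neutral-density tinting must:
      grating_through = 0.95 ** layers        # ~73.5% at 5% loss per layer
      nd_transmission = 0.15 / grating_through
      print(f"Implied ND-filter transmission: {nd_transmission:.0%}")  # ~20%
      ```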
