Magic Leap Video – Optical Issues and a Resolution Estimate

As per my previous post on Magic Leap's display technology, what Magic Leap is using in their YouTube through-the-lens demos may or may not be what they will use in the final product. I'm making an assessment of their publicly available videos and patents. There is also the possibility that Magic Leap is putting out deliberately misleading videos to throw off competitors and whoever else is watching.

Optical Issues: Blurry, Chroma Aberrations, and Double Images

I have been looking at a lot of still frames from Magic Leap's “A New Morning” video which, according to ML, was “Shot directly through Magic Leap technology on April 8, 2016 without use of special effects or compositing.” I chose this video because it has features like text and lines (known shapes) that can better reveal issues with the optics. The overall impression from the images is that they are all somewhat blurry, with a number of other optical issues.

Blurry

The crop of a frame at 0:58 on the left shows details that include the real-world stitching of a desk organizer, with 3 red 1080p-pixel dots added on top of two of the stitches. The two insets show 4X pixel-replicated blow-ups so you can see the details.

Looking at the “real world” stitches, the camera clearly has enough resolution to capture details like the cross of the “t” in “Summit” and the center of the “a” in “Miura,” had they not been blurred out by the optics.
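
To make the pixel-level comparison concrete, here is a minimal sketch (Python with Pillow, which is just my tooling choice, not anything from the post's workflow) of how a 4X pixel-replicated inset can be produced; the filename and crop coordinates are hypothetical:

```python
# A minimal sketch of the 4X pixel-replicated blow-ups shown in the insets.
# The filename and crop box below are hypothetical stand-ins.
from PIL import Image

frame = Image.open("new_morning_0058.png")   # hypothetical frame capture
crop = frame.crop((820, 410, 900, 450))      # region of interest (assumed)

# Nearest-neighbor scaling replicates each pixel into a 4x4 block, so the
# individual 1080p pixels stay visible instead of being smoothed away.
blowup = crop.resize((crop.width * 4, crop.height * 4), Image.NEAREST)
blowup.save("inset_4x.png")
```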

Chroma Aberrations

If you look at the letter “a” in the top box, you should notice the blue blur on the right side that extends out a number of 1080p pixels. These chroma aberrations are noticeable throughout the frame, particularly at the edges of white objects. These aberrations indicate that the R, G, and B colors are not all focused together, and they add to the blurring.

The next question is whether the chroma aberration is caused by the camera or the ML optics. With common camera optics, chroma aberrations get worse the farther you get from the center of the image.

In the picture on the left, taken from the same frame, the name “Hillary” (no relation to the former presidential candidate) is near the top of the screen and “Wielicki” is near the middle. Clearly the name “Wielicki” has significantly worse chroma aberration even though it is near the center of the image. Camera aberrations would get worse toward the edges, not from the top (outside) toward the center, which tends to rule out the camera as the source. Based on this, it appears that the chroma aberrations are caused by the ML optics.
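
For readers who want to check this themselves, here is one rough way to quantify the aberration from a frame capture; the approach, filename, and scanline position are my assumptions, not anything from the video's production:

```python
# A rough, hedged way to quantify chroma aberration: find the brightness edge
# in each color channel along a scanline crossing a white letter, and compare
# where the channels land. Filename and row are hypothetical.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("frame_crop.png"), dtype=float)
row = 120                      # scanline crossing a white letter edge (assumed)
scan = img[row]                # shape: (width, 3)

def edge_pos(channel):
    # Position of the steepest rise approximates the edge location.
    return int(np.argmax(np.diff(channel)))

r, g, b = (edge_pos(scan[:, c]) for c in range(3))
print(f"edge at R={r}, G={g}, B={b}; R-B misalignment = {abs(r - b)} px")
# Repeating this near the top of the frame vs. the center shows whether the
# misalignment grows toward the center (pointing at the ML optics, not the camera).
```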

For those who want to see the whole frame, click on the image at the right.

Double Images

Consistently through the entire video there are double images that get worse the farther down and the farther left you look in the image. These are different from the frame-update double images from last time, as they appear when there is no movement and they depend on location.

Below I have gone through a sequence of different frames to capture similar content in the upper left, center, and right (UL, UC, UR), as well as the middle (M) and lower (L) left, center, and right, and put them side by side. I did the best I could to find the best image in each region (using different content for the lower left). I have done this over a number of frames, checking for focus issues and motion blur, and the results are the same: the double image is always worse at the bottom and far left.

The issues seen are not focus or movement problems. In particular, notice in the lower left (LL) image how the “D” has a double image displaced slightly up and to the right. A focus problem would blur it concentrically, not in a single direction.

Usually double images of the same size are the result of reflections off flat plates. Reflections off a curved surface, such as a camera lens or curved mirror, would magnify or reduce the reflection. So this suggests that the problem has something to do with flat or nearly flat plates, which could be a flat waveguide or a flat tilted-plate combiner.

The fact that the image gets worse the farther down and left you look would suggest (this is somewhat speculative) that the image is coming from near the top right corner. Generally an image will degrade more the farther it has to travel through a waveguide or other optics.
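
One way to put a number on the double-image displacement is to autocorrelate a crop containing the ghosted text and look for a secondary peak away from zero shift. This is a sketch under assumed filenames and crop choices, not a claim about how the observation above was made:

```python
# Estimate the ghost offset in a crop by FFT-based autocorrelation: a ghost
# displaced by (dy, dx) produces a secondary correlation peak at that offset.
import numpy as np
from PIL import Image

patch = np.asarray(Image.open("lower_left_crop.png").convert("L"), dtype=float)
patch = patch - patch.mean()

f = np.fft.rfft2(patch)
acorr = np.fft.irfft2(f * np.conj(f), s=patch.shape)

# Zero out a small neighborhood around zero shift so the trivial
# self-match peak does not win.
for ddy in range(-2, 3):
    for ddx in range(-2, 3):
        acorr[ddy % patch.shape[0], ddx % patch.shape[1]] = 0

dy, dx = np.unravel_index(np.argmax(acorr), acorr.shape)
# Convert wrapped indices to signed offsets.
if dy > patch.shape[0] // 2: dy -= patch.shape[0]
if dx > patch.shape[1] // 2: dx -= patch.shape[1]
print(f"strongest ghost offset: {dy:+d} px vertical, {dx:+d} px horizontal")
```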

One more thing to notice, particularly in the three images on the right side, is the “jaggies” in the horizontal line below the text.

What, There Are Jaggies? A Clue to the Resolution, Which Appears to Be About 720p

Something I was not expecting to see was the stair-step effect of a diagonally drawn line, particularly through the blurry optics. Almost all modern graphics rendering does “antialiasing”/smooth-edge rendering with gray-scale values that smooth out these steps, and after the losses due to the optics and camera I was not expecting to see any jaggies. There are no visible jaggies in any of the lines and text in the image, with the notable exception of the lines under the text of “TODAY” and “YESTERDAY” associated with the notification icons.

In watching the video play, it is hard to miss these lines, as the jaggies move about, drawing your eye to them. The jaggies' movement is also a clue that the drawn image is being moved as the camera moves slightly.

Below I have taken one of those lines with jaggies, and below it I have simulated the effect in Photoshop with 4 lines. The results have been magnified by 2X, and you may want to click on the image below to see the detail. One thing you may notice in the ML Video line is that, in addition to the jaggies, it appears to have thick spots in it. These thick spots between jaggies are caused by the line being both at an angle and under slight perspective distortion, which makes the top and bottom edges of a more-than-one-pixel-thick line render at slightly different angles; the jaggies then occur at different places on the top and bottom, producing the thick sections. In the ML Video line there are 3 steps on the top (pointed to by the green tick marks) and 4 on the bottom (indicated by red tick marks).
[Image: the ML Video line with jaggies above the four simulated lines]

Below the red line, I simulated the effect using Photoshop on the 1080p image, copying the color of the background to serve as the background for the simulation. I started with a thin rectangle 4 pixels high, scaled it to be very slightly trapezoidal (about a 1-degree difference between the top and bottom edges), and then rotated it to the same angle as the line in the video, using “nearest neighbor” (no smoothing/antialiasing) scaling; this produced the 3rd line, “Rendered w/ jaggies.” I then applied a Gaussian blur with a 2.0-pixel radius to simulate the blur from the optics, producing the “2.0 Gaussian of Jaggies” line, which matches the effect seen in the ML video. I did not bother simulating the chroma aberrations (the color separation above and below the white line) that would further soften/blur the image.
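
The same simulation can be reproduced outside of Photoshop. Below is a rough Python/Pillow equivalent of the steps just described (thin rectangle, slight trapezoidal warp, nearest-neighbor rotation, 2.0-pixel Gaussian blur); the canvas size, background color, and rotation angle are assumptions chosen to roughly match the figure:

```python
# A rough Python/Pillow equivalent of the Photoshop simulation described above.
# The ~1-degree trapezoid and 2.0-pixel Gaussian radius come from the text;
# canvas size, background color, and rotation angle are assumed.
from PIL import Image, ImageFilter
import numpy as np

W, H = 400, 60        # canvas size (hypothetical)
ANGLE_DEG = 4.0       # rotation to roughly match the video's line (assumed)

# Thin white rectangle, 4 pixels high, on a stand-in for the sampled background.
canvas = np.full((H, W, 3), (40, 40, 48), dtype=np.uint8)
canvas[28:32, 20:380] = (255, 255, 255)
img = Image.fromarray(canvas)

# Quad-to-rectangle warp that skews the top edge by about 1 degree relative to
# the bottom, then a nearest-neighbor rotation so no antialiasing is added --
# this produces jaggies at different places on the top and bottom edges.
skew = W * np.tan(np.radians(1.0))
img = img.transform(
    (W, H), Image.QUAD,
    data=(0, 0, 0, H, W, H, W - skew, 0),  # UL, LL, LR, UR of the source quad
    resample=Image.NEAREST,
)
img = img.rotate(ANGLE_DEG, resample=Image.NEAREST)

# 2.0-pixel Gaussian blur to stand in for the blur of the ML optics.
img.filter(ImageFilter.GaussianBlur(radius=2.0)).save("simulated_jaggies.png")
```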

Looking at the result, you will see the thick and thin spots just like the ML video. But note there are about 7 steps (at different places) on the top and bottom. Since the angle of my simulated line and the angle of the line in the ML Video are the same, and making the reasonable assumption that the jaggies in the video are 1 pixel high, the resolutions should differ by the ratio of the jaggies, or about 4/7 (the ratio of the ML steps versus the 1080p steps).

Taking 1080 lines times 4/7 gives about 617 lines, which is about what you would expect if they slightly cropped a 720p image. This method is very rough and assumes they have not severely cropped the image with the camera (which would only make them look worse).
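
The arithmetic behind the estimate is simple enough to write down explicitly:

```python
# Back-of-the-envelope check of the resolution estimate above. The step
# counts (4 vs. 7) are read off the images; everything else follows.
video_lines = 1080          # capture resolution of the YouTube video
ml_steps, sim_steps = 4, 7  # jaggy steps: ML line vs. the 1080p simulation
est_lines = video_lines * ml_steps / sim_steps
print(f"Estimated native lines: {est_lines:.0f}")
# ~617, consistent with a slightly cropped 720p source
```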

For completeness, to show what would happen if the line were rendered with antialiasing, I produced the “AA rendered” version and then did the same Gaussian blur on it. The result is similar to all the other lines in the video: there are no detectable jaggies nor any change in the apparent thickness of the line.

OK, I can hear people saying, “But the Magazine Writers Said It Looked Good/Great”

I have often said that for a video demo, “If I can control the product or control the demo content, I choose controlling the content.” This translates to “choose demo content that looks good on your product and eliminate content that will expose its weaknesses.”

If you show videos with a lot of flashy graphics and action, with smooth rendering and no need to look for detail, only imaging experts might notice that the resolution is low and/or that there are issues with the optics. If you put up text in a large font so that it is easily readable, most people will think you have resolution sufficient for reading documents; in the demo, you simply don't give them a page of high-resolution text to read if you don't have high resolution.

I have been working with graphics and display devices for about 38 years and have seen a LOT of demos. Take it from me, the vast majority of people can't tell anything about resolution, but almost everyone thinks they can. For this reason, I heavily discount reports from non-display-experts who have not had a chance to seriously evaluate a display. Even an imaging expert can be fooled by a quick, well-done demo, or by a direct or indirect financial motive.

Now, I have not seen what the article writers and the people that invested money (and their experts) have seen. But what I hope I have proven to you is that what Magic Leap has shown in their YouTube videos is of pretty poor image quality by today's standards.

Magic Leap Focus Effects
[Frame captures: 0:41 Out of Focus · 0:47 Becoming In Focus · 1:00 Sharpest Focus · 1:05 Back Out of Focus]

Magic Leap makes a big point of the importance of “vergence,” meaning that the apparent focus (accommodation) agrees with the apparent distance in 3-D space. This is the key difference between Magic Leap and, say, Microsoft's Hololens.

With only one lens/eye you can't tell the 3-D stereo depth, so we have to rely on how the camera focuses. You will need to click on the thumbnails above to see the focus effects in the various still captures.

They demonstrate the focus effects with the “Climbing Everest” sequence in the video. ML was nice enough to put some Post-It™-type tabs curled up in the foreground (in particular, watch the yellow smiley face in the lower left) and a water bottle and desk organizer (with small stitches) in the background.

Toward the end of the sequence (click on the 1:05 still), you can see that the Mount Everest information, which is at an angle relative to the camera, is highly out of focus on the left-hand side and gets better toward the right-hand side, while the “Notices” information, which appears to be farther away, is comparatively in focus. Also notice how the stitches of the desk organizer in the real world, which appears to be at roughly the same angle as the Everest information, go from out of focus on the left to more in focus on the right, agreeing with what is seen in the projected image.

This focus rake appears to be conclusive proof that there is focus depth in the optical system in this video. Just to be complete, it would be possible to fake the effect for the video by having the computer blur the image synchronously with the focus rake. But I doubt they “cheated” in this way, as outsiders have reported seeing the focusing effect in live demos.

In the 1:05 frame capture, the “15,000 ft” in the lower left is both out of focus and has a double image, which makes it hard to tell which effects are deliberate/controllable focusing and which are just double images due to poor optics. Due to the staging/setup, the worst part of the optics matches what should be the most out-of-focus part of the image. This could be a coincidence, or they may have staged it that way.

Seeing the Real World Through the Display

Overall, the view of the real world through the display looks very good and without significant distortion. I didn't get any hints as to the waveguide/combiner structure. It would be interesting to see what, say, a computer monitor or another light source would look like shining through the display.

The lighting in the video is very dark; the white walls are dark gray due to a lack of light, except where some lamps act as spotlights on them. The furniture and most of the other things on the desk are black or dark (I guess the future is going to be dark and have a lot of black furniture and other things in it). This setup helps the generated graphics stand out. In a normally lit room with white walls, the graphics will have to be a lot brighter to stand out, and there are limits to how much you can crank up the brightness without hurting people's eyes; otherwise there will have to be darkening shades, as seen with Hololens.
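
To see why the dark room matters: in see-through AR, the display adds light on top of the real world rather than replacing it, so the achievable contrast collapses as the background gets brighter. A rough illustration with assumed luminance and transmission numbers:

```python
# Rough illustration (all numbers assumed) of see-through AR contrast.
# The virtual image adds to, rather than replaces, the world light behind it.
def see_through_contrast(display_nits, world_nits, combiner_transmission=0.8):
    background = world_nits * combiner_transmission  # world light reaching the eye
    return (display_nits + background) / background

print(see_through_contrast(200, 5))    # dark demo-room wall: ~51:1
print(see_through_contrast(200, 150))  # normally lit white wall: ~2.7:1
```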

Conclusion

The resolution appears to be about 720p, and the optics are not up to showing even that resolution. I have been quite critical of the display quality because it really is not good. There are image problems that are many pixels wide.

On the plus side, they are able to demonstrate an instantaneous depth of field with their optical solution, and the view of the real world looks good, so far as they have shown. There may be issues with the see-through viewing that are not visible in these videos shot in a fairly dark environment.

I also wonder how the resolution translates into FOV versus angular resolution, and how they will ever support multiple simultaneous focus planes. If you discount a total miracle from their fiber scanned display happening anytime soon (to be covered next time), 720p to at most 1080p is about all that is affordable in a microdisplay today, particularly when you need one per eye, in any production technology (LCOS, DLP, or Micro-OLED) appropriate for a light guide. And this is before you consider that to support multiple simultaneous focus planes, they will need multiple displays or a higher-resolution display that they cut down. To me, as a technical person who has studied displays for about 18 years, this is a huge ask.
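
As a quick illustration of the FOV-versus-angular-resolution trade-off (the FOV values here are hypothetical; the 1280 horizontal pixels follow from the 720p estimate above):

```python
# Sketch of the FOV vs. angular-resolution trade-off. FOV values are
# hypothetical; 1280 horizontal pixels come from the 720p estimate.
h_pixels = 1280
for fov_deg in (30, 40, 50, 90):
    ppd = h_pixels / fov_deg  # pixels per degree
    print(f"{fov_deg:3d}-degree FOV -> {ppd:.1f} pixels/degree")
# 20/20 vision resolves roughly 60 pixels/degree, so even a modest
# 40-degree FOV at 720p gives only ~32 pixels/degree.
```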

Certainly Magic Leap must have shown something that impressed some very big-name investors to get them to invest $1.4B. Hopefully it is something Magic Leap has not shown publicly yet.

Next Time: Magic Leap’s Fiber Scanned Display

I have been studying the much-promoted Magic Leap Fiber Scanned Display (FSD). It turns out their patents suggest two ways of using this technology:

  1. A more conventional display that can be used in combination with a waveguide with multiple focus layers.
  2. To directly generate a light field from an array of FSDs

I plan to discuss the issues with both approaches next time. To say the least, I'm highly doubtful that either method is going to be in volume production any time soon, and I will try to outline my reasons why.

Asides: Cracking the Code
[Image: Enigma]

I was wondering whether the jaggies were left in as an “image generation joke” for insiders or were just sloppy rendering. Either way, they are a big clue as to the native resolution of the display device that came through the optical blur and the camera's resolving power.

It is a little like when the British were breaking the Enigma code in WWII. A big help in breaking Enigma was sloppy transmitting operators giving them what they called “cribs,” or predictable words and phrases. On a further aside, Bletchley Park, where they cracked the Enigma code, is near Bedford, England, where I worked and occasionally lived over a 16-year period. Bletchley Park is a great place to visit if you are interested in computer history (there is also a computer museum at the same location). BTW, the movie “The Imitation Game” is enjoyable but lousy history.

Solving the Display Puzzles

Also, I am not claiming to be infallible in trying to puzzle out what is going on with the various technologies. I have changed my mind/interpretation of what I am seeing in the videos a number of times, and some of my current conclusions may have alternative explanations. I definitely appreciate readers offering their alternative explanations, and I will try to see if I think they fit the facts better.

Magic Leap's work is particularly interesting because they have made such big claims, raised so much money, are doing something different, and have released tantalizingly little solid information. It also seems that a good number of people are expecting Magic Leap to do a lot more with their product than may be feasible at a volume price point, or even possible at any cost, at least for a number of years.


6 Comments

  1. Hi Karl, I am always impressed by your analyses. Do you still think after this new post that it could be a microdisplay made by eMagin? On the resolution: they just announced a 2k x 2k microdisplay, with prototypes in December this year and mass production in the first half of 2017. This would fit exactly with the likely release plan of Magic Leap: present something at CES 2017 and sell later in 2017.

    • I would be highly doubtful that Magic Leap would use a 2K by 2K microdisplay for anything but a demo. “Mass production” of a 2K by 2K display might be measured in a few thousand units per year, hardly what would attract big-name investors to Magic Leap. eMagin needs to be more price competitive on 720p and 1080p before I will believe that a 2K x 2K display will go to mass production.

      OLED microdisplays may be turning a corner toward volume pricing, but this will have to happen for 720p and 1080p devices first, and LCOS will likely have a big cost advantage for some time.

  2. Hi Karl, so do you think now that eMagin is no longer a candidate for Magic Leap? I understood you to mean that LCOS could not be used in the device according to your analysis. Could the price drop if eMagin gets a huge order from Magic Leap?

    • I never wrote that I thought it was eMagin, but I did think it was an OLED microdisplay. There are about a dozen companies that make OLED microdisplays (see http://www.oled-info.com), but eMagin is the only “pure play” public company where OLED microdisplays are their only business.

      Since it is just a “demo,” it is possible that it is a large LCD or OLED. I don't see how that video could have been generated with a field-sequential-color LCOS or DLP UNLESS they did a lot of post processing; the double image is why I don't think they did much post processing (or they would have synced everything up and gotten rid of those too).

      There is evidence in the patents that they are well aware of DLP and LCOS and show ways to use them (along with just about everything else, including OLED microdisplays), but I don't see how either of these could be used without seeing some artifact from the field-sequential process.
