Magic Leap: Focus Planes (Too) Are a Dead End

What Magic Leap Appears to be Doing

For this article I would like to dig into the most likely display and optics Magic Leap (ML) is developing for their Product Equivalent (PEQ). The PEQ was discussed in “The Information” story “The Reality Behind Magic Leap.” As I explained in my November 20, 2016 article Separating Magic and Reality (before the Dec. 8th “The Information” story), the ML patent application US 2016/0327789 best fits the available evidence, and if anything the “The Information” article reinforces that conclusion. Recapping the evidence:

  1. ML uses a “spatial light modulator” as stated in “The Information”
  2. Most likely an LCOS spatial light modulator; an Oct. 27th, 2016 Business Insider article, citing “KGI Securities analyst Ming-Chi Kuo, who has a reputation for being tapped into the Asian consumer electronics supply chain,” claims ML is using a Himax LCOS device.
  3. Focus planes to support vergence/accommodation per many ML presentations and their patent applications
  4. Uses waveguides which fit the description and pictures of what ML calls a “Photonics Chip”
  5. Does not have a separate focus mechanism as reported in the “The Information” article.
  6. Could fit the form factor as suggested in “The Information”
  7. It’s the only patent application that shows serious optical design and that also uses what could be considered a “Photonics Chip.”

I can’t say with certainty that the optical path is that of application 2016/0327789. It is just the only optical path in the ML patent applications that fits all the available evidence and has a chance of working.

Field of View (FOV)

Rony Abovitz, ML CEO, is claiming a larger FOV. I would think ML would not want to have lower angular resolution than Hololens. Keeping the same 1.7 arc minutes per pixel angular resolution as Hololens and ODG’s Horizon, a 1080p device would give a horizontal FOV of about 54.4 degrees.
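The arithmetic above can be checked directly. A minimal sketch: the 1.7 arc-minutes-per-pixel figure is the Hololens-class number cited above, and the 1920-pixel width is my assumption of a 1080p-wide microdisplay.

```python
# Horizontal FOV implied by a given angular resolution and panel width.
# 1.7 arcmin/pixel is the Hololens-class figure from the article; the
# 1920-pixel (1080p) width is an assumption for illustration.
ARCMIN_PER_DEGREE = 60.0

def horizontal_fov_deg(pixels_wide, arcmin_per_pixel):
    return pixels_wide * arcmin_per_pixel / ARCMIN_PER_DEGREE

print(horizontal_fov_deg(1920, 1.7))   # ~54.4 degrees with a 1080p panel
print(horizontal_fov_deg(1280, 2.55))  # a 720p panel needs ~2.55 arcmin/pixel for the same FOV
```

The second line shows why a 720p device forces the coarser (2.5+ arc minute) angular resolution discussed below if ML wants the same 54+ degree range.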

Note, there are rumors that Hololens is going to be moving to a 1080p device next year, so ML may still not have an advantage by the time they actually have a product. There is a chance that ML will just use a 720p device, at least at first, and accept a lower angular resolution of say 2.5 arc minutes or greater to get into the 54+ degree FOV range. Supporting a larger FOV is no small trick with waveguides and is one thing that ML might have over Hololens; but then again, Hololens is not standing still.

Sequential Focus Planes Domino Effect

The support of vergence/accommodation appears to be a paramount issue for ML. Light fields are woefully impractical at any reasonable resolution, so ML’s patent applications and some of their demo videos show the concept of “focus planes.” But for every focus plane, an image has to be generated and displayed.

Having more than one display per eye, including the optics to combine the multiple displays, would be both very costly and physically large. So the only rational way ML could support focus planes is to use a single display device and sequentially display the focus planes. But as I will outline below, using sequential focus planes to address vergence/accommodation comes at the cost of hurting other aspects of visual comfort.

Expect Field Sequential Color Breakup If Magic Leap Supports “Focus Planes”

Both high resolution LCOS and DLP displays use “field sequential color,” where a single set of mirrors displays one color plane at a time. To get the colors to fuse together in the eye, they repeat the same colors multiple times per frame of an image. Where I have serious problems with ML using Himax LCOS is that instead of repeating colors to reduce color breakup, they will instead be showing different images to support sequential focus planes. Even with just two focus planes, as suggested in “The Information,” the rate at which colors are repeated to help them fuse in the eye is cut in half.
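The rate cut can be made concrete with some illustrative numbers; the field-rate budget below is an assumption for the sketch, not a measured Himax specification.

```python
# How splitting a fixed field budget across focus planes cuts the
# color-repeat rate. The 1440 fields/second budget is illustrative.
FIELDS_PER_RGB_CYCLE = 3  # one red, one green, one blue sub-field

def rgb_cycles_per_second(field_rate_hz, focus_planes):
    # Each focus plane needs its own R, G, B sub-fields, so the
    # per-plane color repeat rate drops in proportion.
    return field_rate_hz / FIELDS_PER_RGB_CYCLE / focus_planes

print(rgb_cycles_per_second(1440, 1))  # 480.0 RGB cycles/s with one image plane
print(rgb_cycles_per_second(1440, 2))  # 240.0 per plane with two focus planes
```

Whatever the real field rate is, the halving with two planes (and worse with more) is the point: fields that would have gone to color repeats go to focus planes instead.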

With the Hololens, which also uses a field sequential color LCOS device, one can already detect breakup. Cutting the color update rate by 2 or more will make this problem significantly worse.

Another interesting factor is that field sequential color breakup tends to be more noticeable in people’s peripheral vision, which is more motion/change sensitive. This means the problem will tend to get worse as the FOV increases.

I have worked many years with field sequential display devices, specifically LCOS. Based on this experience, I expect that the human vision system will do a poor job of “fusing” the colors at such slow color field update rates, and I would expect people to see a lot of field sequential color breakup, particularly when objects move.

In short, I expect a lot of noticeable color breakup if ML supports focus planes with a field sequential color device (LCOS or DLP).

Focus Planes Hurt Latency/Lag and Will Cause Double Images

An important factor in human comfort is the latency/lag between any head movement and the display reacting; too much lag causes user discomfort. A web search will turn up thousands of references about this problem.

To support focus planes, ML must use a display fast enough to support at least 120 frames per second. But to support just two focus planes, it will take them 1/60th of a second to sequentially display both. Thus they increase the total latency/lag, from the time they sense movement until the display is fully updated, by ~8.33 milliseconds, and this is on top of any other processing latency. So really, focus planes trade off one discomfort issue, vergence/accommodation, for another, latency/lag.

Another issue that concerns me is how well sequential focus planes will fuse in the eye. With fast movement, the eye/brain visual system takes its own asynchronous “snapshots” and tries to assemble the information and line it up. But as with field sequential color, it can put time-sequential information together wrong, particularly if some objects in the image move and others don’t. The result will be double images, and double images with sequential focus planes would be unavoidable with fast movement, either in the virtual world or when a person moves their eyes. These problems will be compounded by field sequential color breakup.

Focus Planes Are a Dead End – Might Magic Leap Have Given Up On Them?

I don’t know all the behind-the-scenes issues with what ML told investors, and maybe ML has been hemmed in by their own words and demos to investors. But as an engineer with most of my 37 years in the industry spent working on image generation and display, it looks to me like focus planes cause bigger problems than they solve.

What gets me is that they should have figured out that focus planes were hopeless in the first few months (much sooner if someone who knew what they were doing was there). Maybe they were ego driven, and/or they built too much around the impression they made with their “Beast” demo system (a big system using DLPs). Then maybe they hand-waved away the problems sequential focus planes cause, thinking they could fix them somehow, or hoped that people won’t notice the problems. It would certainly not be the first time that a company committed to a direction and then felt it had gone too far to change course. Then there is always the hope that “dumb consumers” won’t see the problems (in this case I think they will).

It is clear to me that, like Fiber Scan Displays (FSD), focus planes are a dead end, period, full stop. Vergence/accommodation is a real issue, but only for objects that get reasonably close to the user. I think a much more rational way to address the issue is to use sensors to track the eyes/pupils and adjust the image accordingly; since the eye’s focus changes relatively slowly, it should be possible to keep up. In short, move the problem from the physical display and optics domain (which will remain costly and problematical) to the sensor and processing domain (which will more rapidly come down in cost).
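To sketch the sensor-side math behind that suggestion: with eye tracking, the fixation distance can be estimated from the vergence angle between the two eyes’ gaze rays and used to drive whatever focus correction the optics allow. The IPD and angles below are illustrative numbers, not from any product.

```python
import math

# Estimate fixation depth from tracked gaze vergence -- the kind of
# sensor-domain input that could drive a focus adjustment. The 63 mm
# IPD and the vergence angles are illustrative assumptions.
def fixation_depth_m(ipd_m, vergence_deg):
    # Symmetric triangle: each eye rotates inward by half the vergence angle.
    half_angle_rad = math.radians(vergence_deg) / 2.0
    return (ipd_m / 2.0) / math.tan(half_angle_rad)

print(fixation_depth_m(0.063, 3.6))   # roughly 1 m fixation distance
print(fixation_depth_m(0.063, 14.4))  # roughly 0.25 m: closer gaze, wider angle
```

Because accommodation only matters for nearby objects, even a coarse depth estimate like this is enough to decide when and how much correction is needed.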

If I’m at Hololens, ODG, or any other company working on an AR/MR system and accept that vergence/accommodation is a problem that needs to be solved, I’m going to solve it with eye/pupil sensing and processing, not by screwing up everything else by doing it with optics and displays. ML’s competitors have had enough warning to already be well into developing solutions, if they weren’t prior to ML making such a big deal about this already well known issue.

The question I’m left with is if and when Magic Leap figured this out, and whether they were too committed to focus planes, by ego or by what they told investors, to change course at that point. I have not found evidence so far in their patent applications that they tried to change course, but patent applications run about 18 months or more behind what a company decides to do. Then again, if they don’t use focus planes, they would have to admit that they are much closer to Hololens and other competitors than they would like the market to think.

Evergaze: Helping People See the Real World

Real World AR

Today I would like to forget about all the hyped and glamorous near eye products for having fun in a virtual world. Instead, I’m going to talk about a near eye device aimed at helping people see and live in the real world. The product is called the “seeBoost®” and it is made by the startup Evergaze in Richardson, Texas. I happen to know the founder and CEO, Pat Antaki, from working together on a near eye display back in 1998, long before it was fashionable. I’ve watched Pat bootstrap this company from its earliest days and asked him if I could be the first to write about seeBoost on my blog.

The Problem

Imagine you get Age Related Macular Degeneration (AMD) or Diabetic Retinopathy. All your high-resolution vision and best color vision from the macula (where the high resolution fovea resides) is gone, and you see something like the picture on the right. All you can use is your peripheral vision, which is low in resolution, contrast, and color sensitivity. There are over 2 million people in the U.S. who can still see but have worse than 20/60 vision in their better eye.

What would you pay to be able to read a book again and do the other normal activities that require “functional vision”? So not only is Evergaze aiming to help a large number of people, they are going after a sizable and growing market.

seeBoost Overview

seeBoost has 3 key parts: a lightweight near-to-eye display, a camera with high speed autofocus, and proprietary processing in an ASIC that remaps what the camera sees onto the functioning part of the user’s vision. They put the proprietary algorithms in hardware so the image remapping and contrast enhancement are performed with extremely low latency, so that there is no perceptible delay when a person moves their head. As anyone that has used VR headsets will know, this is important for wearing the device for long periods without headaches and nausea.

A perhaps subtle but important point is that the camera and display are perfectly coaxial, so there is no parallax error as you move an object closer to your eye. The importance of centering the camera with the user’s eye for long term comfort was a major point made by AR headset user and advocate Steve Mann in his March 2013 IEEE Spectrum article, “What I’ve learned from 35 years of wearing computerized eyewear.” Quoting from the article, “The slight misalignment seemed unimportant at the time, but it produced some strange and unpleasant result.” And in commenting on Google Glass, Mr. Mann said, “The current prototypes of Google Glass position the camera well to the right side of the wearer’s right eye. Were that system to overlay live video imagery from the camera on top of the user’s view, the very same problems would surely crop up.”

Unlike traditional magnifying optics such as a magnifying glass, in addition to being able to remap the camera image to the parts of the eye that can see, the depth of field and magnification amount are decoupled: you can get any magnification (from 1x to 8x) at any distance (2 inches to infinity). It also has digital image color reversal (black-to-white reversal, useful for reading pages with a lot of white). The device is very lightweight at 0.9 oz. including the cable, and the battery pack supports 6 hours of continual use on a single charge.

Use Case

Imagine this use scenario: playing bridge with your friends. To look at the cards in your hand you may need 2x magnification at 12 inches’ distance. The autofocus lets you simply move the cards as close to your face as you like, the way a person would naturally make something look bigger. Having the camera coaxial with the display makes this all feel natural, versus say having a camera above the eye. Looking at the table to see what cards are placed there, maybe you need 6x magnification at 2 feet. To see other people’s eyes and facial expressions around the table, you need 1-2x at 3-4 feet.

seeBoost is designed to help people see so they can better take part in the simple joys of normal life. The lightweight design mounts on top of a user’s prescription glasses and can help while walking, reading signs and literature, shopping, watching television, recognizing faces, cooking, and even playing sports like golf.

Another major design consideration was keeping the device narrow so that it does not cover up the lateral and downward peripheral vision of the eye. This turns out to be important for people who don’t want to further lose peripheral vision. For this application, a monocular (single eye) design gives better situational awareness and peripheral vision.

seeBoost is a vision enhancement device rather than essentially a computer (or cell phone) monitor that you must plug into something. The user simply looks through seeBoost, and it improves their vision for whatever they’re looking at, be it an electronic display or their grandchildren’s faces.

Assembled in the USA and Starting to Ship

This is not just some Kickstarter concept, either. Evergaze has been testing prototypes with vision impaired patients for over a year and has already finished a number of studies. What’s more, they recently started shipping product. To the left is an image that was taken through the seeBoost camera via its display and optics.

What’s more, this product is manufactured in the USA on a production line Evergaze set up in Richardson, TX. If you want to find out more about the company, you can go to their YouTube Channel, or if you know someone that needs a seeBoost, you can contact Pat Antaki via email:

Magic Leap CSI: Display Device Fingerprints


I have gotten a lot of questions as to how I could be so sure that Magic Leap (ML) was using Micro-OLEDs in all their “Through Magic Leap Technology” videos and not, say, a scanning fiber display as so many had thought. I was in a hurry then to get people to the conclusion; for this post, I am going to step back and show how I knew. When video and still pictures of display devices are taken with a camera, every display type has its own identifiable “fingerprint,” but you have to know where to look.

Sometimes in video it might only be a few frames that give the clue as to the device being used. In this article I am going to use cropped images from videos for most of the technologies to capture their distinctive artifacts as seen by the camera, but for laser scanning the distinctive artifacts are best seen in the whole image, so for it I am going to use thumbnail-size images.

This article should not be new information to this blog’s readers; rather, it details how I knew what technology was in the ML “through the technology” videos. For the plot twist at the end, you have to know how to parse ML’s words: the ML “through the technology” videos use totally different technology than what they plan to use in their actual product.

Most Small Cameras Today Use a Rolling Shutter

First it is important to understand that cameras capture images much differently than the human eye. Most small cameras today, particularly those in cell phones, have a “rolling shutter,” and there are good articles on the web describing a rolling shutter and some of its effects. A rolling shutter captures a horizontal band of pixels (the width of this band varies from camera to camera) as it scans down vertically. With “real world analog” movement, this causes moving objects to be distorted, most famously with airplane propellers (above right). Each of the display technologies will reveal different effects.

OLEDs (And color filter LCDs)

When an object moves on a display device, the same object in the digital image will jump in its location between two displayed frames. If the rolling shutter is open when the image is changing, the camera will capture a double image. This is shown classically with the Micro-OLED device from an ODG Horizon prototype. The icons and text in the image were moving vertically and the camera captured content from two frames. Larger flat panel OLEDs work pretty much the same way, as can be seen in this cropped image from a Meta 2 headset at right.
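A toy model of a rolling shutter photographing a display mid-frame-change shows where this double-image artifact comes from. All the numbers here are illustrative, not from any real camera or panel.

```python
# Toy rolling-shutter model: rows are read out top to bottom, and the
# display switches from frame A to frame B partway through the readout,
# so the capture mixes rows from both frames.
def rolling_shutter_capture(frame_a, frame_b, switch_row):
    return [frame_a[r] if r < switch_row else frame_b[r]
            for r in range(len(frame_a))]

# A bright bar sits at row 2 in frame A and has moved to row 7 in frame B.
frame_a = [[0] * 10 for _ in range(10)]
frame_b = [[0] * 10 for _ in range(10)]
frame_a[2] = [255] * 10
frame_b[7] = [255] * 10

captured = rolling_shutter_capture(frame_a, frame_b, switch_row=5)
bright_rows = [r for r, row in enumerate(captured) if max(row) == 255]
print(bright_rows)  # [2, 7]: the one moving object shows up twice
```

The same mechanism with a field-sequential display would instead mix single-color sub-fields from adjacent frames, which is why the artifacts differ by display type.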

From a video image artifact point of view, it is hard to distinguish with a rolling shutter camera between OLEDs and color filter (the most common type of) LCDs. Unlike old CRTs and scanning systems, OLEDs and LCDs don’t have any “blanking” where there is no image. They simply change the RGB (and sometimes white) sub-pixels of the image row by row from one frame to the next (this video, taken with a high speed camera, demonstrates how it works).

Color Field Sequential DLP and LCOS

DLP and LCOS devices used in near eye displays use what is known as “field sequential color” (FSC). They have one set of “mirrors” and in rapid sequence display only the red sub-image while flashing a red light source (LED or laser), and then repeat this for green and blue. Generally they sequence very rapidly and usually repeat the red, green, and blue sub-images multiple times so the eye will fuse the colors together even if there is motion. If the colors are not sequenced fast enough (and for many other reasons that would take too long to explain), a person’s eye will not fuse the image and they will see fringing of colors in what is known as “field sequential color breakup,” also known pejoratively as “the rainbow effect.” Due to the way DLP and LCOS work, LCOS does not have to sequence quite as rapidly to get the images to fuse in the human eye, which is a good thing because it can’t sequence as fast as DLP.

In the case of field sequential color, when there is motion the camera can capture the various sub-images individually, as seen above-left from the Hololens, which uses FSC LCOS. As seen, it looks sort of like print where the various color plates are shifted. If you study the image you can even tell the color sequence.

Vuzix uses FSC DLP and has similar artifacts, but they are harder to spot. Generally DLPs sequence their colors faster than LCOS (by about 2x), so it can be significantly harder to capture them (that is a clue as to whether it is DLP or LCOS). On the right, I have captured two icons when still and when they are moving, and you can see how the colors separate. You will notice that you don’t see all the colors because the DLP is sequencing more rapidly than the Hololens LCOS.

DLP and LCOS also have “blanking” between colors, where the LEDs (and maybe lasers in the future) are turned off while the color sub-images are changing. The blanking is extremely fast and will only be seen using high speed cameras and/or a very fast shutter time on a DSLR.

DLP and LCOS for Use with ML “Focus Planes”

If you have a high speed camera or other sensing equipment, you can tell even more about the differences in the way DLP and LCOS generate field sequential color. But a very important aspect for Magic Leap’s time sequential focus planes is that DLP can sequence fields much faster than LCOS and thus support more focus planes.

I will be getting more into this in a future article, but to do focus planes with DLP or LCOS, Magic Leap will have to trade repeating the same color sub-images for different images corresponding to different focus planes. The obvious problem, for those that understand FSC, is that the color field rates will become so low that color breakup (the rainbow effect) would seem inevitable.

Laser Beam Scanning

Laser scanning systems are a bit like old CRTs: they scan from top to bottom and then have a blanking time while the scanning mirror retraces quickly to the top corner. The top image on the left was taken with a DSLR at a 1/60th of a second shutter speed, which reveals the blanking “roll bar” (called a roll bar because it will be in a different place if the camera and video source are not running at exactly the same speed).

The next two images were taken with a rolling shutter camera of the exact same projector. The middle image shows a wide dark roll bar (it moves) and the bottom image shows a thin white roll bar. These variations from the same projector and camera are due to the frame rate of the image and/or the camera’s shutter rate.

Fiber Scanning Display (FSD) Expected Artifacts

FSD displays/projectors are so rare that nobody has published a video of them. Their scan rates are generally low and they have “zero persistence” (similar to laser scanning), so they would look horrible in a video, which I suspect is why none has been published.

If they were videoed, I would expect a blanking effect similar to that of laser beam scanning, but circular: rather than rolling vertically, it would “roll” from the center to the outside or vice versa. I have put a couple of very crudely simulated whole frame images at left.

So What Did the Magic Leap “Through The Technology” Videos Use?

There is an obvious artifact match between all the Magic Leap “Through the Technology” videos and OLEDs (or color filter LCDs, which are much less common in near eye displays). You see the distinctive double images with no color breakup.

Nowhere in any frame can field sequential color artifacts be found, which rules out FSC DLP and LCOS.

In looking at the whole frame videos, you don’t see roll-bar effects of any kind. So this totally rules out both laser beam scanning and fiber scanning displays.

We have a winner. The ML through the technology videos could only be done with OLEDs (or color filter LCDs).

But OLEDs Don’t Work With Thin Waveguides!!!

Like most compelling detective mysteries, there is a plot twist. OLEDs, unlike LCOS, DLP, and laser scanning, output wide spectrum colors, and these don’t work with thin waveguides like the “Photonics Chip” that Rony Abovitz, ML CEO, likes to show.

This is how it became obvious that the “Through The Magic Leap Technology” videos were NOT using the same “Magic Leap Technology” that Magic Leap is planning to use in their production product. And this agrees with the much publicized ML article from “The Information.”

Appendix – Micro HTPS LCD (Highly Unlikely)

I need to add, just to be complete, that theoretically they could use color filter HTPS LCDs illuminated by either LEDs or lasers to get narrow spectrum and fairly collimated light that might work with the waveguide. They would have artifacts similar to those seen in the ML videos. EPSON has made such a device, illuminated by LEDs, that was used in their earlier headsets, but even EPSON is moving to Micro-OLEDs for their next generation. I’m also not sure that HTPS could support frame rates high enough for focus planes. I think, therefore, that using color filter HTPS panels, while theoretically possible, is highly unlikely.

Magic Leap: Are Parts of Their NDAs Now Unenforceable?

Rony Abovitz’s tweet about “mousy tech bloggers” and one of its responses made me realize something I was taught way back about NDAs and intellectual property. It is summarized well (with my bold emphasis) in the article, “What You Probably Don’t Know About Non Disclosure Agreements”:

Remember that if you have 99 people sign an NDA and 1 person doesn’t, that person can publish your idea in the Wall Street Journal – and to add insult to injury, when they do, the other NDAs all become invalid since they only apply to confidential information.

Reed Albergotti of “The Information” was shown demos that previously were not open to the public and, as best I am aware, did not sign an NDA or other confidentiality agreement. Also, David M. Ewalt of Forbes Magazine wrote on Reddit:

“I didn’t sign an NDA, but I agreed not to reveal certain proprietary details.”

Then Arghya Sur (copied above), in his response, asked Rony Abovitz to “publicly reveal and demos (sic).” So I’m left wondering what is and what is not confidential now at Magic Leap. Has Magic Leap inadvertently already done what Arghya Sur requested? Has Magic Leap at least caused some people to be released from some parts of their NDAs?

Disclaimer: I am not a lawyer, and my understanding is that this is a contract issue based on the laws of the state(s) that govern the NDAs in question. Also, I have not seen ML’s NDAs, nor do I know what they cover. There are likely severability clauses meant to limit the damage if some parts are breached.

And it might be even worse. As I remember it, if a company is generally sloppy in handling and protecting what they tell people is confidential material, then they can’t enforce their confidentiality/NDA agreements. The principle is: how can you expect people to know what is “really confidential” from what is “only marked confidential”?

And Rony Abovitz is not just anybody at Magic Leap; he is the CEO, and he met with the reporters and presumably had some idea as to what they were being shown. This also goes to why you should not tweet about “mouse tech bloggers” if you are a CEO: it makes them ask questions.

I would appreciate it if those with expertise in this area would weigh in with comments. Please don’t give any legal advice to anyone, but rather let people know what you were taught about handling NDA material.


I am always amused and a little shocked when I see slides at open conferences with “Confidential” marked on them. I was taught NEVER to do this. If the material is no longer confidential, then remove the marking from the slides. You probably will not get the “confidential death sentence” for doing it once, but if it becomes routine, a company might find all its confidentiality agreements unenforceable.

Magic Leap Shout Out?: Grumpy Mouse Tech Blogger Here

I have been very busy the last few days and just realized that it looks like I got a shout-out tweet from Rony Abovitz, the CEO of Magic Leap. On the evening of Dec. 8th, 2016 he wrote, “To a few of the grumpy mouse tech blogger writers: you too will get to play the real thing when we ship.” As far as I am aware, I’m the only “tech blogger” that has been critical of what Magic Leap is doing, and on the off-chance that Mr. Abovitz did not know of my blog before, it was the only tech blog critical of Magic Leap cited in “The Information” article by Reed Albergotti that appeared on the 8th.

Mr. Albergotti is a writer for a legitimate news source, not a blogger. Maybe Mr. Abovitz was trying to put him down as a “mere blogger,” or maybe it was his petulant way of trying to put down both of us.

In any event, is this the right way for a CEO who has raised $1.4B to strike back at writers he disagrees with? Why can’t he be specific about with whom and what he disagrees? The best he could muster was an ad hominem attack and a bunch of unverifiable whistling-in-the-dark tweets.

I’ve been laying out my proof in this blog. I was only trying to answer the question, “What is Magic Leap doing?” because I knew that almost all the existing writing about what Magic Leap was doing was wrong, and I thought it would be fun to be the first to solve the puzzle. If I had figured out they were doing something great, I would have reported it. But as I studied the patents, technical material, and the released Magic Leap videos, combined with my technical experience in the field, their whole technical story related to the display started to unravel.

Magic Leap: “The Information” Article

The Information: The Reality Behind Magic Leap

The online news magazine “The Information” released the article “The Reality Behind Magic Leap” on Dec. 8th, 2016, by Reed Albergotti, and the story gave a link to this blog, so you may be a new reader. The article appears to be well researched, and I understand that “The Information” has a reputation as a reliable news source. The article also dovetails nicely on the business side with what I have been discussing on this blog on the technical side. The magazine is a paid publication, but there is a summary on The Verge along with their added commentary, and a lot of the text from the article has shown up in discussion forums about Magic Leap (ML).

For this blog post, I am going to try to put 2+2 together between what I have figured out on the technical side and what Mr. Albergotti reported on the business side. Note, I have not seen what he has seen, so I am reading between the lines somewhat, but hopefully it will give a more complete picture.

The Magic Leap Prototypes

The article states “Magic Leap CEO Rony Abovitz acknowledged that the prototypes used different technology.” This blog has identified the early prototypes as:

  1. A DLP based prototype that uses a variable focus lens to produce “focus planes” by generating different images for different distances and changing the focus between images, supporting maybe 3 to 6 focus planes. This is probably their earliest one and is what the article calls “The Beast,” described as the “size of a refrigerator.”
  2. One or more OLED based variations, once again using an electrically controlled focus element, in which ML made a smaller helmet version. The article discussed only one version, dubbed “WD3,” but I suspect they had variations of this one with different capabilities (as in maybe a WD1, WD2, WD3, and maybe more). I believe, based on the video evidence, that a version that could only change focus was used for their Oct. 14, 2015 “through the technology” video. Their later “A New Morning” and “Lost Droids” videos appear to use Micro-OLED based optics that supported at least two sequential focus planes by running the OLED at 120Hz to generate two 60Hz sequential “focus plane” images and changing the focus between each one.
  3. The LCOS version that uses their “Photonics Chip” and supports about 2 focus planes with no moving focusing optics (according to the article); this is what the article dubbed the “PEQ” prototype.

If you want to get more into the gory technical details of how the above work, I would suggest one of my earlier articles titled “Magic Leap – Separating Magic and Reality.” And if you really want to get dirty, read the ML patent applications they reference, but be prepared for a long read as they cover a lot of totally different concepts.

As this blog has been reporting (and for which I have gotten criticism on some of the on-line discussion forums), the much discussed “fiber scanning display” (FSD) has not been perfected, and with it goes any chance of making the “light field display” ML has talked so much about. Quoting the article, “Magic Leap relegated the fiber scanning display to a long-term research project in hopes that it might one day work, and significantly pared back on its light field display idea.”

Possible Scenario – A Little Story

Based on my startup and big company experiences, I think I understand roughly how it went down. Please take the rest of this section as reasonable speculation, reading between the lines of the known information. I am going to play Columbo (old TV series reference) below and give my theory of the case.

Startups have sometimes been described as “jumping out of a plane and sewing a parachute on the way down.” This appears to be the case with Magic Leap. They had a rough idea of what they wanted to do, were able to build an impressive demo system, and with some good hand waving convinced investors they could reduce it to a consumer headset.

They got Brian Schowengerdt, co-founder and Chief Scientist, who worked on the fiber scanning display (FSD) technology and the issue of vergence and accommodation at the University of Washington, to join. Mr. Schowengerdt is clearly a smart person who added a lot of credibility to Rony Abovitz’s dreams. The problem with “university types” is that they often don’t appreciate what it takes to go from R&D to a real high-volume product.

The “new optical people” built “The Beast” prototype using DLPs and electrically controlled focusing lenses to support multiple focus planes, to address the vergence and accommodation issue. They then used the “Jedi Hand Wave Mind Trick” (ordinary hand waving may not be enough) to show the DLP engine, the crude low-resolution FSD display from the U of W, some non-functional waveguides, and a mock-up of how wonderful it would be someday with a simple application of money and people (if you can dream it you can build it, right?).

This got them their “big fish,” Google, who was attuned to the future of near-eye displays with their investment in Google Glass and all the big noise around Oculus Rift. There is phenomenal FOMO (Fear of Missing Out) going on with AR/VR/MR. The fact that they got a lot of money from a big-name company became its own publicity and fund-raising engine. ML then got showered with money that they hoped could cover the bet. Having Google invest publicly also became its own shield against any question of whether it would work.

All the money gave them a lot of altitude to try to build the proverbial parachute on the way down. But sometimes the problem is harder than all the smart people and money can solve. As I have pointed out on this blog, making the fiber scanning display work at high resolution is no small task, if not impossible. They came to realize, probably early on, that FSDs were not going to happen in a meaningful time frame.

So “plan B” became to use an existing, working display technology to give a similar visual effect, even if much reduced in resolution. The Beast was way too big and expensive to cost-reduce, and they needed more demo systems that were easier to make.

So then they made the WDx based on OLEDs. But there is a fatal flaw with using OLEDs (and it tripped me up at first when looking at the videos). While OLEDs make the design much easier and smaller, the broad-spectrum light they put out won’t work with the wonderfully flat waveguides (what ML calls their “Photonics Chip”) that ML has convinced investors are part of their secret sauce.

So if they couldn’t use the Photonics Chip with OLEDs and the FSD is a no-go, what do you tell investors, that both of your secret sauces are a bust? So in parallel they worked on plan “C,” which is to use LCOS panels with LED light sources that will work with some type of waveguide, which they dubbed the “Photonics Chip.”

But then there is a fly in the ointment. Microsoft starts going public with their Hololens system, making Magic Leap look way behind the giant Microsoft, which can spend even more money than ML can raise. They need to show something to stay relevant. They start with totally faked videos and get called on the carpet for being obviously fake. So they need a “Magic Leap Technology” demo (but not with the optics they are actually planning on using).

The “Beast System” with its DLPs and field-sequential color will not video well. The camera would reveal to any knowledgeable expert what they are using. So for the video they press into service the WDx OLED systems, which video better. By clever editing and only showing short clips, they can demonstrate some focus effects while not showing the limitations of the WDx prototypes. These videos then make ML seem more “real” and keep people from asking too many embarrassing questions.

A problem here is that LCOS is much slower than DLP, and thus they may only be able to support about 2 focus planes. I also believe, from 16 years of working with LCOS, that this is likely to look like crap to the eye due to color-field breakup; but reapplying the Jedi Mind Trick, maybe two focus planes will work and people won’t notice the color-field breakup. And thus you have the PEQ, which still does not work well or they would be demoing with it rather than the helmet-sized WD3.

I suspect that Reed Albergotti from “The Information” had gotten the drop on ML by doing some good investigative journalism. He told them he was going to run with the story, and ML decided to see if they could do damage control and invited him in. But he was prepared and still saw the holes in their story.

Epilogue: It sounds like Mr. Schowengerdt has been put off to the side, having served his usefulness in raising money. They used the money to hire other optical experts who knew how to design the optics they would actually be using. He may still be playing around with the FSD to keep alive the dream of a super high resolution display someday, and maybe the next-to-impossible high resolution light fields (I would suggest reading “The Horse Will Talk Fable” to gain insight into why they would keep this going as an “R&D” program).

I’m probably a little off in the details, but it probably went down something like the above. If not, hopefully you found it an amusing story. BTW, if you want to make a book and/or movie out of this original story, please consider it my copyrighted work (c) 2016 (my father was, and two brothers are, Patent Lawyers, and I learned about copyright as a small child at my father’s knee).

Lessons Learned

In my experience, startups that succeed in building their product have more than a vague idea of what they want to do and HOW they are going to do it. They realize that money and smart people can’t cure all ills. Most importantly, they understand where they have risk and have at most A SINGLE serious risk. They then focus on making sure they cover that risk. In the case of Magic Leap, there were multiple major risks in many different areas. You can’t focus on the key risk when there are so many, and that is a prescription for product failure no matter how much money is applied.

It’s even possible the “smart money” that invested realized that ML was unlikely to totally succeed, but thought that with money and smart people they might spin out some valuable technology and/or patents. The “equation works” if you multiply a hoped-for $100B/year market by even a small chance of success. If a big name places what is for them a small bet, it is surprising how much money will follow along, assuming the big-name investor has done all the hard work of due diligence.

Even if they get past the basic technology risk and get the PEQ running, they will then have the problem of building a high-volume product; worse yet, they are building their own factory. And then we have the 90/90 rule, which states, “it takes 90% of the effort to get 90% of the way there and then another 90% to solve the last 10%.” When you have a fully working prototype that behaves well (which by the reports ML has NOT achieved yet), you have just made it to the starting line; then you have to make it manufacturable at a reasonable cost and yield. Others have said it is really 90/90/90, where there is a third 90%. This is where many a Kickstarter company has spun its wheels.

Magic Leap & Hololens: Waveguide Ego Trip?

The Dark Side of Waveguides

Flat and thin waveguides are certainly impressive optical devices. It is almost magical: you put light into what looks a lot like a thin plate of glass, a small image goes in on one side, and via total internal reflection (TIR) inside the glass, the image comes out in a different place. They are coveted by R&D people for their scientific sophistication and loved by industrial designers because they look so much like ordinary glass.

But there is a “dark side” to waveguides, at least every one that I have seen. To make them work, the light follows a tortuous path: it often has to be bent by about 45 degrees to couple into the waveguide and then by roughly 45 degrees to couple out, in addition to rattling off the two surfaces while it TIRs. The image is just never the same quality after going through all this torture. Some of the light does not make all the turns and bends correctly and comes out in the wrong places, which degrades the image quality. A major effect I have seen in every diffractive/holographic waveguide is what I have come to call “waveguide glow.”

Part of the problem is that when you bend light, whether by refraction, diffraction, or holograms, the various colors bend slightly differently based on wavelength. The diffraction gratings/holograms are tuned for each color, but invariably they have some effect on the other colors; this is particularly a problem if the colors don’t have a narrow spectrum that is exactly matched by the waveguide. Even microscopic defects cause some light to follow the wrong path, and invariably a grating/hologram meant to bend, say, green will also affect the direction of, say, blue. Worse yet, some of the light gets scattered and causes the waveguide glow.
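To put rough numbers on that wavelength dependence, the first-order grating equation d·sin(θ) = λ shows how much the exit angle shifts between colors. The grating period below is a made-up number chosen so green exits at 30 degrees; it is not an actual Hololens or Magic Leap parameter:

```python
import math

# First-order diffraction at normal incidence: sin(theta) = wavelength / d.
# Illustrative numbers only -- a period chosen so 525 nm green exits at 30 deg.
d_nm = 1050.0  # assumed grating period in nanometers

def diffraction_angle_deg(wavelength_nm, period_nm=d_nm):
    """Angle a first-order grating of the given period bends this wavelength."""
    return math.degrees(math.asin(wavelength_nm / period_nm))

for name, wl in [("blue", 460.0), ("green", 525.0), ("red", 620.0)]:
    print(f"{name:5s} {wl:5.0f} nm -> {diffraction_angle_deg(wl):5.2f} deg")
```

A grating tuned for one color bends the others by several degrees more or less, and a broad-spectrum source smears each color over a range of angles, which is the root of the “glow.”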

To the right is a still frame from a “through the lens” video taken through a Hololens headset. Note, this is actually through the optics and NOT the video feed that Microsoft and most other people show. What you should notice is a violet-colored “glow” beneath the white circle. There is usually also a tendency to have a glow or halo around any high-contrast object/text, but it is most noticeable when there is a large bright area.

For these waveguides to work at all, they require very high quality manufacturing which tends to make them expensive. I have heard several reports that Hololens has very low yields of their waveguide.

I haven’t, nor have most people that have visited Magic Leap (ML), seen through ML’s waveguide. What ML shows most if not all of their visitors are prototype systems that use non-waveguide optics, as I discussed last time. Maybe ML has solved all the problems with waveguides; if they have, they will be the first.

I have nothing personally against waveguides. They are marvels of optical science that require very intelligent people to design and very high precision manufacturing to produce. It is just that they always seem to hurt image quality and they tend to be expensive.

Hololens – How Did Waveguides Reduce the Size?

Microsoft acquired their waveguide technology from Nokia. It looks almost like they found this great bit of technology that Nokia had developed and decided to build a product around it. But then when you look at Hololens (left), there is the shield to protect the lenses (often tinted, but I picked a clear shield so you could see the waveguides). On top of this there is all the other electronics and the frame to mount it on the user’s head.

The space savings from using waveguides over a much simpler flat combiner is a drop in the bucket.

ODG Same Basic Design for LCOS and OLED

I’m picking Osterhout Design Group (ODG) for comparison below because they demonstrate a simpler, more flexible, and better image quality alternative to using a waveguide. I think it makes a point. Most people probably have not heard of them, but I have known of them for about 8 or 9 years (I have no relationship with them at this time). They have done mostly military headsets in the past and burst onto the public scene when Microsoft paid them about $150 million for a license to their I.P. Beyond this, they just raised another $58 million from V.C.s. Still, this is chump change compared to what Hololens and Magic Leap are spending.

Below is the ODG R7 LCOS-based glasses (with one of the protective covers removed). Note the very simple flat combiner. It is extremely low tech and much lower cost compared to the Hololens waveguide. To be fair, the R7 does not have as much in the way of sensors and processing as Hololens.


The point here is that by the time you put the shield on the Hololens what difference does having a flat waveguide make to the overall size? Worse yet, the image quality from the simple combiner is much better.

Next, below is ODG’s next generation Horizon glasses, which use a 1080p Micro-OLED display. It appears to have a somewhat larger combiner (I can’t tell if it is flat or slightly curved from the available pictures) to support the wider FOV, and a larger outer cover, but pretty much the same design. The remarkable thing is that they can use a similar optical design with the OLEDs and the whole thing is about the same size, whereas the Hololens waveguide won’t work at all with OLEDs due to the broad-bandwidth colors OLEDs generate.


ODG put up a short video clip through the optics of the Micro-OLED based Horizon (they don’t come out and say that it is, but the frame is from the Horizon and the image motion artifacts are from an OLED). The image quality appears to be much better than anything I have seen from waveguide optics (you can’t be too quantitative from a YouTube video). There is none of the “waveguide glow.”

They were even willing to show a text image with both clear and white backgrounds that looks reasonably good (see below). It looks more like a monitor image except for the fact that it is translucent. This is hard content to display because you know what it is supposed to look like, so you know when something is wrong. Also, that large white area would glow like mad on any waveguide optics I have seen.

The clear text on a white background is a little hard to read at small size because it is translucent, but that is a fundamental issue with all see-through displays. The “black” is whatever is in the background, and the “white” is the combination of the light from the image and the real-world background. See-through displays are never going to be as good as opaque displays in this regard.
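The effect can be put in numbers: the perceived contrast of a see-through display is the image light plus the leaked-through background, divided by the background alone. A minimal sketch with illustrative luminance values, not measured from any product:

```python
# "Black" on a see-through display is just the real world seen through the
# combiner, so contrast collapses as ambient light rises.
# All luminance values (nits) and the 70% transmission are assumptions.

def see_through_contrast(display_nits, ambient_nits, transmission=0.7):
    background = ambient_nits * transmission   # leaks through as the "black" level
    white = display_nits + background          # image light adds on top of it
    return white / background

for ambient in (10, 100, 1000):  # dim room -> bright office -> outdoors in shade
    ratio = see_through_contrast(300, ambient)
    print(f"ambient {ambient:4d} nits -> contrast {ratio:6.1f}:1")
```

The same 300-nit image goes from looking solid in a dim room to nearly washed out in bright surroundings, which is why see-through “white” never looks like monitor white.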

Hololens and Magic Leap – Cart Before the Horse

It looks to me like Hololens and Magic Leap both started with a waveguide display as a given and then built everything else around it. They overlooked that they were building a system. Additionally, they needed to get it into many developers’ hands as soon as possible to work out the myriad of other sensor, software, and human-factors issues. The waveguide became a bottleneck and, from what I can see of Hololens, an unnecessary burden. As my fellow TI Fellow Gene Frantz and I used to say when we were on TI’s patent committee, “it is often the great new invention that causes the product to fail.”

Few if any people outside of Magic Leap have seen an image through ML’s production combiner; maybe they will be the first to make one that looks as good as a simpler combiner solution (I tend to doubt it, but it is not impossible). But what has leaked out is that they have had problems getting systems to their own internal developers. According to Business Insider’s Oct. 24th article (with my added highlighting):

“Court filings reveal new secrets about the company, including a west coast software team in disarray, insufficient hardware for testing, and a secret skunkworks team devoted to getting patents and designing new prototypes — before its first product has even hit the market.”

From what I can tell of what Magic Leap is trying to do, namely focus planes to support vergence/accommodation, they could have achieved this faster with more conventional optics. It might not have been as sleek or “magical” as the final product, but it would have done the job, shown the advantage (assuming it is compelling) and got their internal developers up and running sooner.

It is even more obvious for Hololens. Using a simple combiner would have added trivially to the design size while reducing the cost and getting the SDKs into more developers’ hands sooner.


It looks to me that both Hololens and likely Magic Leap put too much emphasis on using waveguides, which had a domino effect on other decisions, rather than making a holistic system decision. The way I see it:

  1. The waveguide did not dramatically make Hololens smaller (the jury is still out for Magic Leap; maybe they will pull a rabbit out of the hat). Look at ODG’s designs; they are every bit as small.
  2. The image quality is worse with waveguides than with simpler combiner designs.
  3. Using waveguides boxed them into using only display devices that were compatible with their waveguides. Most notably, they can’t use OLEDs or other display technologies that emit broader-spectrum light.
  4. Even if it were smaller, it is more important to get more SDKs into developers’ hands (internal and/or external) sooner rather than later.

Hololens and Magic Leap appear to be banking on volume production to solve waveguides’ image quality and cost problems. But that will depend on a lot of factors, some of which are not in their control, namely how hard it is to make them well and at a price people can afford. Even if they solve all the issues with waveguides, it is only a small piece of their puzzle.

Right now ODG seems to be taking more of the original Apple/Wozniak approach; they are finding elegance in a simpler design. I still have issues with what they are doing, but in the area of combining the light and image quality, they seem to be way ahead.

Magic Leap: When Reality Hits the Fan

Largely A Summary With Some New Information

I have covered a lot of material and even then have only skimmed the surface of what I have learned about Magic Leap (ML). By combining the information available (patent applications, articles, and my sources), I have a fairly accurate picture of what Magic Leap is actually doing, based on feedback I have received from multiple sources.

This blog has covered a lot of different topics and some conclusions have changed slightly as I discovered more information and with feedback from some of my sources. Additionally, many people just want “the answer.” So I thought it would be helpful to summarize some of the key results including some more up to date information.

What Magic Leap Is Not Doing In The Product

Between what I have learned and feedback from sources I can say conclusively that ML is not doing the following:

  1. Light Fields – These would require a ridiculously large and expensive display system for even moderate resolution.
  2. Fiber Scan Displays – They have demonstrated low-resolution versions of these and may have used them to convince investors that they had a way to break through the pixel-size limitations of Spatial Light Modulators (SLMs) like LCOS, DLP, and OLEDs. It’s not clear how much they improved the technology over what the University of Washington had done, but they have given up on these being competitive in resolution and cost with SLMs anytime soon. It appears to have been channeled into a long-term R&D effort to keep the dream alive with investors.
  3. Laser Beam Scanning (LBS) by Microvision or anyone else – I only put this on the list because of an incredibly ill-informed news release by Technavio stating “Magic Leap is yet to release its product, and the product is likely to adopt MicroVision’s VRD technology.” Based on this, I would give the entire report they are marketing zero credibility; I think they are basing their reports on reading fan-person blogs about Microvision.
  4. OLED Microdisplays – They were using these in their demos and likely in the videos they made, but OLEDs are incompatible optically with their use of a diffractive waveguide (= ML’s Photonic Chip).

Prototypes that Magic Leap Has Shown

  1. FSD – Very low resolution/crude green-only fiber scanned display. This is what Rachel Metz described (with my emphasis added) in her MIT Technology Review March/April 2015 article: “It includes a projector, built into a black wire, that’s smaller than a grain of rice and channels light toward a single see-through lens. Peering through the lens, I spy a crude green version of the same four-armed monster that earlier seemed to stomp around on my palm.”
  2. TI DLP with a conventional combiner and a “variable focus element” (VFE). They use the DLP to generate a series of focus planes time-sequentially and change the VFE between the sequential focus planes. Based on what I have heard, this is their most impressive demo visually and they have been using it for over a year, but the system is huge.
  3. OLED with a conventional combiner (not a waveguide/”Photonics Chip”). This is likely the version they used to shoot their “Through Magic Leap Technology” videos that I analyzed in my Nov. 9th, 2016 blog post. In that article I thought that a Micro-OLED might be used in the final product, but I have revised this opinion. OLEDs output very wide-bandwidth light that is incompatible with waveguides, so it would not work with the Photonics Chip ML makes such a big deal about.

What is curious is that none of these prototypes, with the possible exception of #1, the single-color low-resolution FSD, use a “waveguide.” Waveguides are largely incompatible with OLEDs, and having a variable focus element is also problematical. Also, none of these use LCOS, the most likely technology in the final product.

What Magic Leap Is Trying to Do In Their First “Product”

I’m going to piece together below what I believe based on the information available from both public information and some private conversations (but none of it is based on NDA’ed information as far as I am aware).

  1. LCOS Microdisplay – All the evidence, including Business Insider’s October 27, 2016 article, points to ML using LCOS. They need a technology that works well with waveguides, using narrow-band (likely LED) light sources that they can make as bright as necessary while controlling the angle of the illumination. LCOS is less expensive, more optically compact, and requires less power than DLP for near-eye systems. All these reasons are the same as why Hololens is using LCOS. Note, I’m not 100% sure they are using LCOS, but it is by far the most likely technology. They could also be using DLP, but I would put that at less than a 10% chance. I’m now ruling out Micro-OLED because it would not work in a waveguide.
  2. Two (2) sequential focus planes are supported – The LCOS microdisplay is likely only able to support about 120 full-color frames per second, which is only enough to support 2 sequential focus planes per 1/60th of a second of a moving image. Supporting more planes at a slower rate would result in serious image breakup when things move. The other big issue is the amount of processing required. Having even two focus planes greatly increases the computation that has to be done. To make it work correctly, they will need to track the person’s pupils and factor that into their processing, and deal with things like occlusion. Also, with the limited number of focus planes, they will have to figure out how to “fake” or deal with a wider range of focus.
  3. Variable Focus – What I don’t know is how they are supporting the change in focus between the sequential focus planes. They could be using some form of electrically alterable lens but it is problematical to have non-collimated light entering a waveguide. It would therefore seem more consistent for them to be using the technique shown in their patent application US 2016/0327789 that I discussed before.
  4. Photonics Chip (= Diffractive Waveguide) – ML has made a big deal about their Photonics Chip, what everyone else would call a “waveguide.” The Photonics Chip likely works similarly to the one Hololens uses (for more information on waveguides, see my Oct 27th, 2016 post). The reports are that Hololens has suffered low yields with their waveguides, and Magic Leap’s will have more to do optically to support focus planes.
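The frame-rate arithmetic behind the two-focus-plane limit in item 2 can be sketched as follows; the 360 color-fields-per-second rate and 3 fields per frame are assumed figures for illustration, not confirmed Magic Leap numbers:

```python
# Budget for time-sequential focus planes on a field-sequential-color LCOS.
# Assumed figures for illustration, not confirmed Magic Leap specs.

COLOR_FIELDS_PER_SEC = 360      # assumed LCOS color-field rate
FIELDS_PER_FRAME = 3            # R, G, B shown one after another
EYE_RATE_HZ = 60                # rate needed for smooth motion

full_color_fps = COLOR_FIELDS_PER_SEC / FIELDS_PER_FRAME   # full-color frames/s
focus_planes = int(full_color_fps // EYE_RATE_HZ)          # planes per 1/60 s

print(f"{full_color_fps:.0f} full-color frames/s -> "
      f"{focus_planes} focus planes at {EYE_RATE_HZ} Hz")
```

Doubling the number of planes under this budget would halve the per-plane update rate to 30Hz, which is where the motion breakup comes from.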

Overall, I think it is very clear that what they will actually make is only a fraction of the vision they have portrayed to the press. They may have wanted to do 50-megapixel-equivalent foveated displays, use FSD as their display device, have 6 focus planes, or even (from Fortune, July 12, 2016) “‘light-field’ technology [that] essentially mimics the brain’s visual-perception mechanisms to create objects and even people who look and behave just the way they would in the real world, and interact with that world seamlessly.” But then they have to build something that actually works and that people can afford to buy. Reality then hits the fan.


Magic Leap – Fiber Scanning Display Follow UP

Some Newer Information On Fiber Scanning

Through some discussions and further searching I found some more information about Fiber Scanning Displays (FSD) that I wanted to share. If anything, this material further supports the contention that Magic Leap (ML) is not going to have a high resolution FSD anytime soon.

Most of the images available are about fiber scanning for use in an endoscope camera and not as a display device. The images are of things like body parts that really don’t show resolution or the amount of distortion in the image. Furthermore, most of the images are from 2008 or older, which leaves quite a bit of time for improvement. I have found some information from the 2014 to 2015 time frame that I would like to share.

Ivan Yeoh’s 2015 PhD dissertation


In terms of more recent fiber scanning technology, Ivan Yeoh’s name seems to be a common link. Shown at left is a laser-projected image and the source test pattern from Ivan Yeoh’s 2015 PhD dissertation “Online Self-Calibrating Precision Scanning Fiber Technology with Piezoelectric Self-Sensing” at the University of Washington. It is the best quality image of a test pattern or known image that I have found of an FSD anywhere. The dissertation is about how to use feedback to control the piezoelectric drive of the fiber. While his paper is about endoscope calibration, he nicely included this laser-projected image.

The drive resulted in 180 spirals, which would nominally be 360 pixels across at the equator of the image with a 50Hz frame rate. But based on the resolution chart, the effective resolution is about 1/8th of that, or only ~40 pixels; about half of this “loss” is due to resampling a rectilinear image onto the spiral. You should also note that there is considerably more distortion in the center of the image, where the fiber is moving more slowly.
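The arithmetic is simple enough to sketch; the effective figure is my estimate from the published test-chart photos, not a number Yeoh reports:

```python
# Nominal vs. effective resolution of the spiral-scan image from Yeoh's 2015
# dissertation.  The 1/8 "effective" factor is my estimate from his published
# test-chart photos, not a number he reports.

spirals = 180                      # spirals per frame, at a 50 Hz frame rate
nominal_px = 2 * spirals           # each ring crosses the equator twice -> 360
effective_px = nominal_px // 8     # ~1/8 of nominal resolves, i.e. roughly 40-45

print(f"nominal {nominal_px} px across, effective ~{effective_px} px")
```

So even the best published FSD test pattern resolves on the order of tens of pixels across, far from the hundreds an SLM delivers routinely.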

Yeoh also included some good images at right showing how he had previously used a calibration setup to manually calibrate the endoscope before use, as it would go out of calibration due to various factors including temperature. These are camera images, and based on the test charts they are able to resolve about 130 pixels across, which is pretty close to the Nyquist sampling rate for 360 samples across a spiral. As expected, the center of the image, where the fiber is moving the slowest, is the most distorted.

While a 360-pixel camera is still very low resolution by today’s standards, it is still 4 to 8 times better than the resolution of the laser-projected image. Unfortunately, Yeoh was concerned with distortion and does not really address resolution issues in his dissertation. My resolution comments are based on measurements I could make from the images he published, which are copied above.

University of Washington Patent Application Filed in 2014

Yeoh is also the lead inventor on University of Washington patent application US 2016/0324403, filed in 2014 and published in June 2016. At left is Fig. 26 from that application. It is supposed to be of a checkerboard pattern, which you may be able to make out. The figure is described as using a “spiral in and spiral out” process where, rather than having a retrace time, they just reverse the process. This application appears to be related to Yeoh’s dissertation work. Yeoh is shown as living in Fort Lauderdale, FL, on the application, near Magic Leap headquarters. Yeoh is also listed as an inventor on Magic Leap application US 2016/0328884 “VIRTUAL/AUGMENTED REALITY SYSTEM HAVING DYNAMIC REGION RESOLUTION” that I discussed in my last article. It would appear that Yeoh is or has been working for Magic Leap.

2008 YouTube Video


Additionally, I would like to include some images from a 2008 YouTube video that kmanmx from the Reddit Magic Leap subreddit alerted me to. While this is old, it has a nice picture of the fiber scanning process, both as a whole and with a close-up image near the start of the spiral process.

For reference, on the close-up image I have added the size of a “pixel” for a 250-spiral / 500-pixel image (red square) and what a 1080p pixel would be if you cropped the circle to a 16:9 aspect ratio (green square). As you hopefully can see, the spacing and jitter variations/errors in the scan process are several 1080p pixels in size. While this information is from 2008, the more recent evidence above does not show a tremendous improvement in resolution.
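The relative sizes of those two squares follow from simple geometry; a sketch, taking the scan circle’s diameter as 1:

```python
import math

# Comparing a "pixel" of a 250-spiral (500 pixels across) circular scan with a
# 1080p pixel, if the circle were cropped to the widest 16:9 rectangle that
# fits inside it.  Sizes are relative to the scan circle's diameter D = 1.

D = 1.0
spiral_px = D / 500                       # red square: one scan "pixel"
crop_width = D * 16 / math.hypot(16, 9)   # width of the inscribed 16:9 rect
hd_px = crop_width / 1920                 # green square: one 1080p pixel

print(f"spiral pixel is ~{spiral_px / hd_px:.1f}x a 1080p pixel across")
```

So each scan “pixel” spans roughly four 1080p pixels, before even counting the jitter errors visible in the close-up.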

Other Issues

So far I have mostly concentrated on the issue of resolution, but there are other serious issues to overcome. What is interesting in the Magic Leap and University of Washington patent literature is the lack of activity addressing the other issues associated with generating a fiber-scanned image. If Magic Leap were serious and had solved these issues with FSD, one would expect to see patent activity on making FSDs work at high resolution.

One major issue that may not be apparent to the casual observer is controlling/driving the lasers over an extremely large dynamic range. In addition to supporting the typical 256 levels (8 bits) per color and overall brightness adjustment based on the ambient light, the speed of the scan varies by a large amount, and they must compensate for this or end up with a very bright center where the scan is moving more slowly. When you combine it all together, they would seem to need to control the lasers over a greater than 2000:1 dynamic range, from a dim pixel at the center to the brightest pixel at the periphery.


Looking at all the evidence, there is just nothing there to convince me that Magic Leap is anywhere close to having perfected an FSD to the point that it could be competitive with a conventional display device like LCOS, DLP, or Micro-OLED, much less the 50-megapixel resolutions they talk about. Overall, there is reason to doubt that an electromechanical scanning process is going to compete in the long run with an all-electronic method.

It very well could be that Magic Leap had hoped that FSD would work, and/or that it was just a good way to convince investors that they had a technology that would lead to super high resolution in the future. But there is zero evidence that they have seriously improved on what the University of Washington has done. They may still be pursuing it as an R&D effort, but there is no reason to believe they will have it in a product anytime soon.

All roads point to ML using either LCOS (per Business Insider, October 2016) or DLP, which I have heard is in some prototypes. This would mean they will likely have either a 720p or 1080p resolution display, the same as others such as Hololens (which will likely have a 1080p version soon).
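For reference, the angular resolution implied by a given panel behind a given FOV is a one-line calculation; the ~54.4-degree FOV used here is the figure discussed earlier in this article, not a confirmed ML spec:

```python
# Angular resolution (arc-minutes per pixel) for a panel spread over a
# horizontal field of view.  Assumes pixels are spread evenly over the FOV,
# which is a simplification of real optics.

def arcmin_per_pixel(h_fov_deg, h_pixels):
    return h_fov_deg * 60.0 / h_pixels

print(f"1080p over 54.4 deg: {arcmin_per_pixel(54.4, 1920):.2f} arcmin/px")
print(f" 720p over 54.4 deg: {arcmin_per_pixel(54.4, 1280):.2f} arcmin/px")
```

A 1080p panel holds the Hololens-like 1.7 arc-minutes per pixel over that FOV, while a 720p panel degrades to about 2.5, which matches the trade-off discussed above.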

The whole FSD effort is about trying to break through the physical pixel-size barrier of conventional technologies. There are various physics issues (diffraction is becoming serious) and material issues that will likely make it tough to make physical pixels much smaller than 3 microns.

Even if there were a display resolution breakthrough (which I doubt based on the evidence), there are issues as to whether this resolution could make it through the optics. As the display resolution improves, the optics have to improve as well or they will limit the resolution. This factor particularly concerns me with the waveguide technologies I have seen to date, which appear to be at the heart of Magic Leap's optics.
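The way optics limit resolution can be sketched with modulation transfer functions (MTF): at a given spatial frequency, the system's contrast is roughly the product of each component's MTF, so every stage erodes the final image. The component values below are purely illustrative assumptions:

```python
# Illustrative only: system contrast at some spatial frequency is
# approximately the product of each component's MTF at that frequency.
# All three values below are assumed, not measured.
display_mtf = 0.7     # microdisplay contrast near its Nyquist frequency
optics_mtf = 0.6      # projection/relay optics
waveguide_mtf = 0.5   # waveguide, often the weakest link

system_mtf = display_mtf * optics_mtf * waveguide_mtf
print(f"system MTF ~ {system_mtf:.2f}")
```

Even with three individually "decent" stages, the cascade ends up well below any one of them, which is why a higher resolution display gains little behind mediocre waveguide optics.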

Magic Leap – No Fiber Scan Display (FSD)

Sorry, No Fiber Scan Displays

For those that only want my conclusion, I will cut to the chase. Anyone who believes Magic Leap (ML) is going to have a laser Fiber Scanned Display (FSD) anytime soon (as in the next decade) is going to be sorely disappointed. The FSD is one of those concepts that sounds like it would work until you look at it carefully. It was developed at the University of Washington in the mid-to-late 2000s; they were able to generate some very poor quality images in 2009 and, as best I can find, nothing better since.

The fundamental problem with this technology is that a wiggling fiber is very hard to control accurately enough to make a quality display. This is particularly true when the scanning fiber has to come to near rest at the center of the image. It is next to impossible (and certainly impossible at a rational cost) to have a wiggling fiber tip, with finite mass and its own resonant frequency, follow a highly accurate and totally repeatable path.

Magic Leap has patent applications related to FSDs showing two different ways to try to increase the resolution, provided they could ever make a decent low resolution display in the first place. Effectively, they have patents that double down on the FSD. One is the "array of FSDs," which I discussed in the Appendix of my last article, that would be insanely expensive and would not work optically in a near eye system. The other doubles down on a single FSD with what ML calls "Dynamic Region Resolution" (DRR), which I will discuss below after covering the FSD basics.

The ML patent applications on the subject of FSDs read more like technical fairy tales of what they wish they could do, with a bit of technical detail and a few drawings scattered in to make it sound plausible. But the really tough problems of making it work are never even discussed, much less solutions proposed.

Fiber Scanning Display (FSD) Basics

The concept of the Fiber Scanning Display (FSD) is simple enough: two piezoelectric vibrators connected to one end of an optical fiber cause the fiber tip to follow a spiral path, starting from the center and working its way out. The amplitude of the vibration starts at zero in the center and then gradually increases, causing the fiber tip to both speed up and spiral outward. As the fiber tip accelerates, it moves outward radially; the spacing of each orbit is a function of the increase in speed.
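The spiral geometry just described can be sketched in a few lines. This is a minimal model, assuming a linearly ramping amplitude and a constant angular rate (the orbit count and sampling are arbitrary numbers of my own):

```python
import math

# Minimal sketch of the spiral scan: the drive amplitude ramps linearly
# from zero while the fiber orbits at a fixed angular rate, tracing an
# Archimedean-like spiral outward. All parameters are assumed.
orbits = 250            # spiral turns per frame (assumed)
points_per_orbit = 64   # samples along each turn
max_radius = 1.0        # normalized scan radius

n = orbits * points_per_orbit
path = []
for i in range(n):
    t = i / n                          # 0..1 over the frame
    r = max_radius * t                 # amplitude ramps linearly outward
    theta = 2 * math.pi * orbits * t   # constant angular rate
    path.append((r * math.cos(theta), r * math.sin(theta)))

# The tip's linear speed (roughly r times the angular rate) grows with
# radius, which is why equal-duration laser pulses would pack "pixels"
# far more tightly near the slow-moving center than at the edge.
```

Note how the model makes the laser-timing problem visible: the path samples are evenly spaced in time but not in distance, so pixel placement and brightness both depend on where in the spiral the tip happens to be.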


Red, green, and blue (RGB) lasers are combined and coupled into the fiber at the stationary end. As the fiber moves, the lasers turn on and off to create "pixels" that come out the spiraling end of the fiber. At the end of a scan, the lasers are turned off and the drive is gradually reduced to bring the fiber tip back to the starting point under control (if they just stopped the vibration, it would wiggle uncontrollably). This retrace period, while faster than the scan, takes a significant amount of time since it is a mechanical process.

An obvious issue is how well they can control a wiggling optical fiber. As the documents point out, the fiber will want to oscillate at its resonant frequency, which can be stimulated by the piezoelectric vibrators. One would expect the motion to be less than perfectly stable, particularly at the beginning when the tip is moving slowly and has no momentum. Then there is the issue of how well it will follow exactly the same path from frame to frame when the image is supposed to be still.

One major complication I did not see covered in any of the ML or University of Washington (which originated the concept) documents or applications is what it takes to control the lasers accurately enough. The fiber tip speeds up from near zero speed at the center of the spiral to its maximum speed at the end of the scan. If you turned a laser on for the same amount of time and at the same brightness everywhere, pixels would be many times closer together and brighter at the center than at the periphery. The ML applications even recognize that increasing the resolution of a single electromechanical FSD is impossible for all practical purposes.

Remember that they are electromechanically vibrating one end of the fiber to cause the tip to move in a spiral covering the area of a circle. There is a limit to how fast they can move the fiber and how well they can control it, and since they want to fill a wide rectangular image, a lot of the circular scan area will be cut off.
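A quick calculation shows how much of the circular scan is actually usable. Assuming a 16:9 rectangle inscribed in the scan circle (the aspect ratio is my assumption for a typical wide display):

```python
import math

# How much of the spiral's circular scan area a rectangular image can
# use: for a rectangle inscribed in the circle, the rectangle's diagonal
# equals the circle's diameter. Aspect ratio is an assumed 16:9.
aspect_w, aspect_h = 16, 9
radius = 1.0

diag = 2 * radius
w = diag * aspect_w / math.hypot(aspect_w, aspect_h)
h = diag * aspect_h / math.hypot(aspect_w, aspect_h)

fraction_used = (w * h) / (math.pi * radius**2)
print(f"fraction of scan circle used: {fraction_used:.1%}")
```

Only a bit over half the scanned circle ends up inside a 16:9 image; the rest of the mechanical scan time and laser control effort is spent on area that gets thrown away.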

Looking through everything I could find that was published on the FSD, including Schowengerdt (ML co-founder and Chief Scientist) et al.'s SID 2009 paper "1-mm Diameter, Full-color Scanning Fiber Pico Projector" and SID 2010 paper "Near-to-Eye Display using Scanning Fiber Display Engine," only low resolution still images are available, and no videos. Below are two images from the SID 2009 paper along with the "Lenna" standard image reproduced in one of them; perhaps sadly, these are the best FSD images I could find anywhere. What's more, there has never been a public demonstration of it producing video, which I believe would reveal additional temporal and motion problems.

What you can see in both of the actual FSD images is that the center is much brighter than the periphery. From the Lenna FSD image you can see how distorted the image is, particularly in the center (look at Lenna's eye in the center and the brim of the hat, for example). Even the outer parts of the image are pretty distorted. They don't even have decent brightness control of the pixels, and they didn't attempt to show color reproduction (which requires extremely precise laser control). Yes, the images are old, but there is a series of extremely hard problems outlined above that are likely not solvable, which is probably why we have not seen any better pictures of an FSD from ANYONE (ML or others) in the last 7 years.

While ML may have improved upon the earlier University of Washington work, there is obviously nothing they are proud enough to publish, much less a video of it working. It is obvious that none of the released ML videos use an FSD.

Maybe ML has improved it enough to show some promise and get investors to believe it was possible (just speculating). But even if they could perfect the basic FSD, by their own admission in the patent applications the resolution would be too low to support a high resolution near eye display. They would need a plausible way to further increase the effective resolution to meet the Magic Leap hype of "50 megapixels."

Dynamic Region Resolution (DRR) – 50 Megapixels???

Magic Leap has on more than one occasion talked about needing 50 megapixels to support the field of view (FOV) they want at the angular resolution of 1 arcminute per pixel that they say is desirable. Suspending disbelief that they could even make a good low resolution FSD, they doubled down with what they call "Dynamic Region Resolution" (DRR).

US 2016/0328884 ('884), "VIRTUAL/AUGMENTED REALITY SYSTEM HAVING DYNAMIC REGION RESOLUTION," shows the concept. This would appear to answer the question of how ML convinced investors that a 50-megapixel-equivalent display could be plausible (but not possible).

The application shows what could be considered a "foveated display," where the pixel density varies across the display based on where the image will be projected onto the retina. The idea is to have high pixel density where the image lands on the highest resolution part of the eye, the fovea, and lower density where resolution would be "wasted" on parts of the eye that can't resolve it.

The concept is simple enough, as shown in '884's figures 17a and 17b (left). The idea is to track the pupil to see where the eye is looking (indicated by the red "X" in the figures) and then adjust the scan speed, line density, and sequential pixel density accordingly. Fig. 17a shows the pattern for when the eye is looking at the center of the image, where they would accelerate more slowly in the center of the scan. In Fig. 17b they show the scanning density being higher where the eye is looking at a point in the middle of the image; they increase the line density in a ring that covers where the eye is looking.

Starting at the center, the fiber tip is always accelerating. For denser lines they just accelerate less; for less dense areas they accelerate at a higher rate, so this sounds plausible. The devil is in the details of how the fiber tip behaves as its acceleration rate changes.
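The DRR line-density idea can be sketched as a variable orbit-to-orbit spacing along the spiral. This is only a toy model of the concept in '884; the spacing values, band width, and gaze position are all assumed numbers:

```python
# Toy sketch of the '884 DRR concept (all numbers assumed): the radial
# spacing between spiral orbits is made finer in a ring around the gaze
# point and coarser elsewhere, trading resolution for scan time.
def line_spacing(r, gaze_r, band=0.1, fine=0.002, coarse=0.008):
    """Orbit-to-orbit radial spacing at radius r (normalized units)."""
    return fine if abs(r - gaze_r) <= band else coarse

gaze_r = 0.5   # hypothetical: eye looking halfway out from center
r, orbits = 0.0, 0
while r < 1.0:             # walk the spiral outward, orbit by orbit
    r += line_spacing(r, gaze_r)
    orbits += 1

print(f"total orbits: {orbits}")
```

The arithmetic is the easy part; the model says nothing about whether a resonating fiber can actually switch between those acceleration rates cleanly, which is exactly the detail the application glosses over.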

Tracking the pupil accurately enough seems very possible with today's technology. The patent application discusses how wide the band of high resolution needs to be to cover a reasonable range of eye movement from frame to frame, which makes it sound plausible. Some of the obvious fallacies with this approach include:

  1. Controlling a wiggling fiber with enough precision to meet the high resolution, and doing it repeatably from scan to scan. They can't even do it at low resolution with constant acceleration.
  2. Keeping the fiber stable and on track as it increases and decreases its acceleration.
  3. Controlling the laser brightness accurately in both the highest and lowest resolution regions. This will be particularly tricky as the fiber increases or decreases its acceleration rate.
  4. The rest of the optics, including any lenses and waveguides, must support the highest resolution for the user to be able to see it. This means the other optics need to be extremely high precision (and expensive).
What about Focus Planes?

Beyond the above is the need to support ML's whole focus plane ("poor person's light field") concept. To support focus planes they need 2 to 6 or more images per eye per frame time (say 1/60th of a second). The fiber scanning process is so slow that producing even a single low resolution and highly distorted image in 1/60th of a second is barely possible, much less the multiple images per 1/60th of a second needed to support the focus plane concept. So to support focus planes they would need an FSD per focus plane, with all its associated lasers and control circuitry; the size and cost would become astronomical.
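The timing argument above reduces to simple arithmetic (frame rate and plane count here follow the article's own figures):

```python
# Back-of-envelope timing for focus planes: at 60 Hz with 6 focus planes
# per eye, a single scanner would have to draw each complete image in a
# small fraction of the frame time.
frame_rate_hz = 60
focus_planes = 6

frame_time_ms = 1000.0 / frame_rate_hz
time_per_plane_ms = frame_time_ms / focus_planes
print(f"{frame_time_ms:.2f} ms per frame, {time_per_plane_ms:.2f} ms per plane")
```

A mechanical scan that barely fits in the full ~16.7 ms frame would need to run six times faster to time-multiplex the planes, which is why one FSD per plane (with all the attendant cost) is the only way the math closes.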

Conclusion – A Way to Convince the Gullible

The whole FSD appears to me to be a dead end, other than to convince the gullible that it is plausible. Even getting an FSD to produce a single decent low resolution image would take more than one miracle. The DRR idea just doubles down on a concept that cannot produce a decent low resolution image in the first place.

The overall impression I get from the ML patent applications is that they were written to impress people (investors?) who didn't look at the details too carefully. I can see how one could get sucked into the whole DRR concept, as the applications give numbers and graphs that try to show it is plausible; but they ignore the huge issues they have not figured out.