Tag Archive for DLP

Kopin Entering OLED Microdisplay Market

Kopin Making OLED Microdisplays

Kopin announced today that they are getting into the OLED microdisplay business. This is particularly notable because Kopin has been a long-time (since 1999) manufacturer of transmissive LCD microdisplays used in camera viewfinders and near-eye display devices. They also bought Forth Dimension Displays back in 2011, a maker of high-resolution ferroelectric reflective LCOS used in higher-end near-eye products.

OLED Microdisplays Trending in AR/VR Market

With the rare exception of the large and bulky Meta 2, microdisplays (LCOS, DLP, OLED, and transmissive LCD) dominate the AR/MR see-through market. They are also a significant factor in VR and other non-see-through near-eye displays.

Kopin’s entry seems to be part of what may be a trend toward OLED microdisplays in near-eye products. ODG’s next-generation “Horizon” AR glasses are switching from LCOS (used in the current R7) to OLED microdisplays. Epson, which was a direct competitor to Kopin in transmissive LCD, switched to OLED microdisplays in their new Moverio BT-300 AR glasses announced back in February.

OLED Microdisplays Could Make VR and Non-See-Through Headsets Smaller/Lighter

Today most VR headsets follow Oculus’s use of large flat panels with simple optics. This leads to large, bulky headsets, but the cost of OLED and LCD flat panels is so low compared to microdisplays with their associated optics that they win out. OLED microdisplays have been far too expensive to compete on price with the larger flat panels, but this could change as there are more entrants into the OLED microdisplay market.

OLEDs Don’t Work With Waveguides As Used By Hololens and Magic Leap

It should be noted that the broad-spectrum and diffuse light emitted by OLEDs is generally incompatible with flat waveguide optics such as those used by Hololens and expected from Magic Leap (ML). So don’t expect to see OLEDs being used by Hololens and ML anytime soon unless they radically redesign their optics. Reflective microdisplays like DLP and LCOS can be illuminated by narrower-spectrum light sources such as LEDs and even lasers, and the light can be highly collimated by the illumination optics.

Transmissive LCD Microdisplays Can’t Compete As Resolution Increases

If anything, this announcement from Kopin is the last nail in the coffin of the transmissive LCD microdisplay. OLED microdisplays can go to higher resolutions with smaller pixels, keeping the overall display size down for a given resolution compared to transmissive LCD. OLEDs consume less power for the same brightness than transmissive LCDs, and they also have much better contrast. As resolution increases, transmissive LCDs cannot compete.

OLED Microdisplays: A More Mixed Set of Pros and Cons Compared to LCOS and DLP

There is a mix of pros and cons when comparing OLED microdisplays with LCOS and DLP. The pros for OLED over LCOS and DLP include:

  1. Significantly simpler optical path (illumination path not in the way). Enables optical solutions not possible with reflective microdisplays
  2. Lower power for a given brightness
  3. Separate RGB subpixels so there is no field sequential color breakup
  4. Higher contrast.

The advantages for LCOS and DLP reflective technologies over OLED microdisplays include:

  1. Smaller pixel equals a smaller display for a given resolution; DLP and LCOS pixels are typically 2 to 10 times smaller in area per pixel (see the rough sizing sketch after this list).
  2. Ability to use narrow band light sources which enable the use of waveguides (flat optical combiners).
  3. Higher brightness
  4. Longer lifetime
  5. Lower cost even including the extra optics and illumination
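
To put the pixel-area advantage in perspective, here is a rough sizing sketch (the pixel pitches are illustrative assumptions, not any specific product’s specifications):

```python
# Rough active-area comparison for a 1920x1080 microdisplay at two
# illustrative pixel pitches (assumed values, not product specifications).
def active_area_mm(h_pixels, v_pixels, pitch_um):
    """Return (width, height, diagonal) of the active area in millimeters."""
    w = h_pixels * pitch_um / 1000.0
    h = v_pixels * pitch_um / 1000.0
    return w, h, (w**2 + h**2) ** 0.5

for tech, pitch_um in [("LCOS/DLP at ~4 um", 4.0), ("OLED at ~9 um", 9.0)]:
    w, h, d = active_area_mm(1920, 1080, pitch_um)
    print(f"{tech}: {w:.1f} x {h:.1f} mm, {d:.1f} mm diagonal")
# The ~9 um pixel has about 5x the area of the ~4 um pixel, and the
# display (and the optics that magnify it) scale up accordingly.
```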

Up until recently, the cost of OLED microdisplays was so high that only defense contractors and other applications that could afford the high cost could consider them. But that seems to be changing. Historically, the brightness and lifetimes of OLED microdisplays were also limited, but companies are making progress.

OLED Microdisplay Competition

Kopin is far from being the first and is certainly not the biggest entrant in the OLED microdisplay market. But Kopin does have a history of selling volume into the microdisplay market. The list of known competitors includes:

  1. Sony appears to be the biggest player. They have been building OLED microdisplays for many years for use in camera viewfinders. They are starting to bring higher resolution products to the market and bring the costs down.
  2. eMagin is a 23-year-old “startup.” They have a lot of base technology and are a “pure play” stock-wise. But they have failed to break through and are in danger of being outrun by big companies.
  3. MicroOLED – a small French startup; not sure where they really stand.
  4. Samsung – nothing announced, but they have all the technology necessary to make them. Update: Ron Mertens of OLED-Info.com informed me that it was rumored that the second generation of Google Glass was considering a Samsung OLED microdisplay and that Samsung had presented a paper going back to 2011.
  5. LG – nothing announced, but they have all the technology necessary to make them.

I included Samsung and LG above not because I have seen or heard of them working on OLED microdisplays, but because I would be amazed if they didn’t at least have a significant R&D effort, given their sets of expertise and their extreme interest in this market.

For More Information:

For more complete information on the OLED microdisplay market, you might want to go to OLED-Info, which has been following both large flat panel and small OLED microdisplay devices for many years. They also have two reports available, the OLED Microdisplays Market Report and the OLED for VR and AR Market Report.

For those who want to know more about Kopin’s manufacturing plan, Chris Chinnock of Insight Media has an interesting article outlining Kopin’s fabless development strategy.

Magic Leap: “The Information” Article

The Information: The Reality Behind Magic Leap

The online news magazine “The Information” released the article “The Reality Behind Magic Leap” on Dec. 8th, 2016, by Reed Albergotti, and the story gave a link to this blog, so you may be a new reader. The article appears to be well researched, and I understand that “The Information” has a reputation as a reliable news source. The article also dovetails nicely on the business side with what I have been discussing on this blog on the technical side. The magazine is a paid publication, but there is a summary on The Verge along with their added commentary, and a lot of the text from the article has shown up in discussion forums about Magic Leap (ML).

For this blog post, I am going to try to put 2+2 together between what I have figured out on the technical side and what Mr. Albergotti reported on the business side. Note, I have not seen what he has seen, so I am reading between the lines somewhat, but hopefully this will give a more complete picture.

The Magic Leap Prototypes

The article states “Magic Leap CEO Rony Abovitz acknowledged that the prototypes used different technology.” This blog has identified the early prototypes as:

  1. A DLP-based prototype that uses a variable focus lens to produce “focus planes” by generating different images for different distances and changing the focus between images, supporting maybe 3 to 6 focus planes. This is probably their earliest one and is what the article calls “The Beast,” described as the “size of a refrigerator.”
  2. One or more OLED-based variations, once again using an electrically controlled focus element, where ML made a smaller helmet version. The article discussed only one version, dubbed “WD3,” but I suspect that they had variations of this one with different capabilities (as in maybe a WD1, WD2, WD3 and maybe more). Based on the video evidence, I believe a version that could only change focus was used for their Oct. 14, 2015 “through the technology” video. Their later “A New Morning” and “Lost Droids” videos appear to use Micro-OLED-based optics that supported at least two simultaneous focus planes by running the OLED at 120Hz to generate two 60Hz sequential “focus plane” images and changing the focus between them.
  3. The LCOS version that is using their “Photonic Chip” and supports about 2 focus planes with no moving focusing optics (according to the article); what the article dubbed the “PEQ” prototype.

If you want to get more into the gory technical details on how the above work, I would suggest one of my earlier articles titled “Magic Leap – Separating Magic and Reality.” And if you really want to get dirty, read the ML patent applications they reference, but be prepared for a long read as they cover a lot of totally different concepts.

As this blog has been reporting (and for which I have gotten criticism on some of the online discussion forums), the much-discussed “fiber scanning display” (FSD) has not been perfected, and with it goes any chance of making the “light field display” ML has talked so much about. Quoting the article, “Magic Leap relegated the fiber scanning display to a long-term research project in hopes that it might one day work, and significantly pared back on its light field display idea.”

Possible Scenario – A Little Story

Based on my startup and big company experiences, I think I understand roughly how it went down. Please take the rest of this section as reasonable speculation and reading between the lines of known information. I am going to play Columbo (old TV series reference) below and give my theory of the case.

Startups have sometimes been described as “jumping out of a plane and sewing a parachute on the way down.” This appears to be the case with Magic Leap. They had a rough idea of what they wanted to do, were able to build an impressive demo system, and with some good hand waving convinced investors they could reduce it to a consumer headset.

They found Brian Schowengerdt, co-founder and Chief Scientist, who had worked on the fiber scanning display (FSD) technology and the issue of vergence and accommodation at the University of Washington, to join. Mr. Schowengerdt is clearly a smart person who added a lot of credibility to Rony Abovitz’s dreams. The problem with “university types” is that they often don’t appreciate what it takes to go from R&D to a real high-volume product.

The “new optical people” built “The Beast” prototype using DLPs and electrically controlled focusing lenses to support multiple focus planes, to address the vergence and accommodation issue. They then used the “Jedi Hand Wave Mind Trick” (ordinary hand waving may not be enough) to show the DLP engine, the crude low-resolution FSD display from the U of W, some non-functional waveguides, and a mock-up of how wonderful it would be someday with a simple application of money and people (if you can dream it you can build it, right?).

This got them their “big fish,” Google, which was attuned to the future of near-eye displays with their investment in Google Glass and all the big noise around Oculus Rift. There is phenomenal FOMO (Fear of Missing Out) going on with AR/VR/MR. The fact that they got a lot of money from a big-name company became its own publicity and fund-raising engine. ML then got showered with money that they hoped could cover the bet. Having Google invest publicly also became its own shield against any question of whether it would work.

All the money gave them a lot of altitude to try and build the proverbial parachute on the way down. But sometimes the problem is harder than all the smart people and money can solve. As I have pointed out on this blog, making the fiber scanning display work at high resolution is no small task, if not impossible. They came to realize at some point, probably early on, that FSDs were not going to happen in a meaningful time frame.

So “plan B” became to use an existing working display technology to give a similar visual effect, even if much reduced in resolution. The Beast was way too big and expensive to cost-reduce, and they needed more demo systems that were easier to make.

So then they made the WDx based on OLEDs. But there is a fatal flaw with using OLEDs (and it tripped me up at first when looking at the videos). While OLEDs make the design much easier and smaller, the broad-spectrum, diffuse light they put out doesn’t work with the wonderfully flat waveguides (what ML calls their “Photonics Chip”) that ML has convinced investors are part of their secret sauce.

So if they couldn’t use the Photonics Chip with OLEDs and the FSD is a no-go, what do you tell investors, that both of your secret sauces are a bust? So in parallel they worked on plan “C,” which is to use LCOS panels with LED light sources that will work with some type of waveguide, which they will dub the “Photonics Chip.”

But then there is a fly in the ointment. Microsoft starts going public with their Hololens system, making Magic Leap look way behind the giant Microsoft, which can spend even more money than ML can raise. They need to show something to stay relevant. They start with totally fake videos and get called on the carpet for being obviously fake. So they need a “Magic Leap Technology” demo (but not with the optics they are actually planning on using).

The “Beast System” with its DLPs and field sequential color will not video well. The camera would reveal to any knowledgeable expert what they are using. So for the videos they press into service the WDx OLED systems, which will video better. By clever editing and only showing short clips, they can demonstrate some focus effects while not showing the limitations of the WDx prototypes. These videos then make ML seem more “real” and keep people from asking too many embarrassing questions.

A problem here is that LCOS is much slower than DLPs and thus may only be able to support about 2 focus planes. I also believe from 16 years of working with LCOS that this is likely to look like crap to the eye due to color field breakup; but reapplying the Jedi Mind Trick, maybe two focus planes will work and people won’t notice the color field breakup. And thus you have the PEQ, which still does not work well, or they would be demoing with it rather than the helmet-sized WD3.

I suspect that Reed Albergotti from “The Information” got the drop on ML by doing some good investigative journalism. He told them he was going to run with the story, and ML decided to see if they could do damage control and invited him in. But apparently he was prepared and still saw the holes in their story.

Epilogue: It sounds like Mr. Schowengerdt has been put off to the side, having served his usefulness in raising money. They used the money to hire other optical experts who knew how to design the optics they would actually be using. He may still be playing around with the FSD to keep alive the dream of a super-high-resolution display someday, and maybe the next-to-impossible high-resolution light fields (I would suggest reading “The Horse Will Talk Fable” to gain insight into why they would keep doing this as an “R&D” program).

I’m probably a little off in the details, but it probably went down something like the above. If not, hopefully you found it an amusing story. BTW, if you want to make a book and/or movie out of this original story, please consider it my copyrighted work (c) 2016 (my father was, and two brothers are, patent lawyers, and I learned about copyright as a small child at my father’s knee).

Lessons Learned

In my experience, startups that succeed in building their product have more than a vague idea of what they want to do and HOW they are going to do it. They realize that money and smart people can’t cure all ills. Most importantly, they understand where they have risk and then have at most A SINGLE serious risk. They then focus on making sure they cover that risk. In the case of Magic Leap, they had multiple major risks in many different areas. You can’t focus on the key risk when there are so many, and that is a prescription for product failure no matter how much money is applied.

It’s even possible the “smart money” that invested realized that ML was unlikely to totally succeed but thought that with money and smart people they might spin out some valuable technology and/or patents. The “equation works” if they multiply a hoped-for $100B/year market by even a small chance of success. If a big name places what is for them a small bet, it is surprising how much money will follow along, assuming the big-name investor has done all the hard work of due diligence.

Even if they get past the basic technology risk and get the PEQ running, they will then have the problem of building a high-volume product; worse yet, they are building their own factory. And then we have the 90/90 rule, which states, “it takes 90% of the effort to get 90% of the way there and then another 90% to solve the last 10%.” When you have a fully working prototype that behaves well (which by the reports ML has NOT achieved yet), you have just made it to the starting line; then you have to make it manufacturable at a reasonable cost and yield. Others have said it is really 90/90/90, where there is a third 90%. This is where many a Kickstarter company has spun its wheels.

Magic Leap & Hololens: Waveguide Ego Trip?

The Dark Side of Waveguides

Flat and thin waveguides are certainly impressive optical devices. It is almost magical how you can put light into what looks a lot like a thin plate of glass: a small image goes in on one side and then, with total internal reflection (TIR) inside the glass, the image comes out in a different place. They are coveted by R&D people for their scientific sophistication and loved by industrial designers because they look so much like ordinary glass.

But there is a “dark side” to waveguides, at least every one that I have seen. To make them work, the light follows a tortuous path and often has to be bent by about 45 degrees to couple into the waveguide and then by roughly 45 degrees to couple out, in addition to rattling off the two surfaces while it TIRs. The image is just never the same quality when it goes through all this torture. Some of the light does not make all the turns and bends correctly and comes out in the wrong places, which degrades the image quality. A major effect I have seen in every diffractive/holographic waveguide is what I have come to call “waveguide glow.”

Part of the problem is that when you bend light, whether by refraction or by using diffraction or holograms, the various colors of light bend slightly differently based on wavelength. The diffraction gratings/holograms are tuned for each color, but invariably they have some effect on the other colors; this is particularly a problem if the colors don’t have a narrow spectrum that is exactly matched to the waveguide. Even microscopic defects cause some light to follow the wrong path, and invariably a grating/hologram meant to bend, say, green will also affect the direction of, say, blue. Worse yet, some of the light gets scattered and causes the waveguide glow.
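
To see why spectral bandwidth matters, here is a minimal sketch of the first-order grating equation (the grating period, glass index, and LED bandwidth are illustrative assumptions, not any particular waveguide’s numbers):

```python
import math

def diffraction_angle_deg(wavelength_nm, period_nm=450.0, n_glass=1.8):
    """First-order in-coupling angle inside the glass: n*sin(theta) = lambda/d."""
    return math.degrees(math.asin(wavelength_nm / (n_glass * period_nm)))

center = diffraction_angle_deg(520.0)   # center of a green LED
edge = diffraction_angle_deg(535.0)     # +15 nm toward the band edge
print(f"{center:.1f} vs {edge:.1f} degrees")   # ~39.9 vs ~41.3 degrees
# A typical LED is ~30 nm wide, so rays at the band edges leave the grating
# more than a degree off from the center wavelength; a laser's <1 nm bandwidth
# would smear far less. This angular smear contributes to color fringing
# and the "glow."
```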

To the right is a still frame from a “through the lens” video taken through a Hololens headset. Note, this is actually through the optics and NOT the video feed that Microsoft and most other people show. What you should notice is a violet-colored “glow” beneath the white circle. There is usually also a tendency to have a glow or halo around any high-contrast object/text, but it is most noticeable when there is a large bright area.

For these waveguides to work at all, they require very high-quality manufacturing, which tends to make them expensive. I have heard several reports that Hololens has very low yields on their waveguides.

I haven’t, nor have most people that have visited Magic Leap (ML), seen through ML’s waveguide. What ML shows most if not all their visitors are prototype systems that use non-waveguide optics, as I discussed last time. Maybe ML has solved all the problems with waveguides; if they have, they will be the first.

I have nothing personally against waveguides. They are marvels of optical science that require very intelligent people to design and very high-precision manufacturing to make. It is just that they always seem to hurt image quality and they tend to be expensive.

Hololens – How Did Waveguides Reduce the Size?

Microsoft acquired their waveguide technology from Nokia. It looks almost like they found this great bit of technology that Nokia had developed and decided to build a product around it. But then when you look at Hololens (left) there is the shield to protect the lenses (often tinted, but I picked a clear shield so you could see the waveguides). On top of this there is all the other electronics and the frame to mount it on the user’s head.

The space savings from using waveguides over a much simpler flat combiner is a drop in the bucket.

ODG Same Basic Design for LCOS and OLED

I’m picking Osterhout Design Group (ODG) for comparison below because they demonstrate a simpler, more flexible, and better-image-quality alternative to using a waveguide; I think it makes a point. Most people probably have not heard of them, but I have known of them for about 8 or 9 years (I have no relationship with them at this time). They have done mostly military headsets in the past and burst onto the public scene when Microsoft paid them about $150 million for a license to their I.P. Beyond this, they just raised another $58 million from VCs. Still, this is chump change compared to what Hololens and Magic Leap are spending.

Below is the ODG R7 LCOS-based glasses (with one of the protective covers removed). Note the very simple flat combiner. It is extremely low-tech and much lower cost compared to the Hololens waveguide. To be fair, the R7 does not have as much in the way of sensors and processing as Hololens.

[Image: ODG R7 with a cover removed]

The point here is that by the time you put the shield on the Hololens what difference does having a flat waveguide make to the overall size? Worse yet, the image quality from the simple combiner is much better.

Next, below is ODG’s next-generation Horizon glasses that use a 1080p Micro-OLED display. They appear to have a somewhat larger combiner (I can’t tell if it is flat or slightly curved from the available pictures) to support the wider FOV, and a larger outer cover, but pretty much the same design. The remarkable thing is that they can use a similar optical design with the OLEDs and the whole thing is about the same size, whereas the Hololens waveguide won’t work at all with OLEDs due to the broad-bandwidth colors OLEDs generate.

[Image: ODG Horizon with ~50-degree FOV]

ODG put up a short video clip through the optics of the Micro-OLED-based Horizon (they don’t come out and say that it is, but the frame is from the Horizon and the image motion artifacts are from an OLED). The image quality appears to be (you can’t be too quantitative from a YouTube video) much better than anything I have seen from waveguide optics. There is none of the “waveguide glow.”

They were even willing to show text images with both clear and white backgrounds that look reasonably good (see below). It looks more like a monitor image except for the fact that it is translucent. This is hard content to display because you know what it is supposed to look like, so you know when something is wrong. Also, that large white area would glow like mad on any waveguide optics I have seen.

The text on the clear background is a little hard to read at small size because it is translucent, but that is a fundamental issue with all see-through displays. The “black” is whatever is in the background, and the “white” is the combination of the light from the image and the real-world background. See-through displays are never going to be as good as opaque displays in this regard.
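
The washed-out look follows directly from additive mixing: a see-through display can only add light to the scene. A minimal sketch (the luminance and transmission numbers are illustrative assumptions):

```python
# Perceived luminance = display light + (combiner transmission * background).
# "Black" is just the background, so contrast collapses as the room gets brighter.
def perceived_contrast(display_white_nits, background_nits, transmission=0.8):
    white = display_white_nits + transmission * background_nits
    black = transmission * background_nits   # the display adds nothing for black
    return white / black

print(perceived_contrast(200, 50))    # dim room: ~6:1, text quite readable
print(perceived_contrast(200, 500))   # bright room: ~1.5:1, badly washed out
```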

Hololens and Magic Leap – Cart Before the Horse

It looks to me like Hololens and Magic Leap both started with a waveguide display as a given and then built everything else around it. They overlooked that they were building a system. Additionally, they needed to get it into many developers’ hands as soon as possible to work out the myriad other sensor, software, and human factors issues. The waveguide became a bottleneck and, from what I can see of Hololens, an unnecessary burden. As my fellow TI Fellow Gene Frantz and I used to say when we were on TI’s patent committee, “it is often the great new invention that causes the product to fail.”

I have not (and few if anybody outside of Magic Leap has) seen an image through ML’s production combiner; maybe they will be the first to make one that looks as good as a simpler combiner solution (I tend to doubt it, but it is not impossible). But what has leaked out is that they have had problems getting systems to their own internal developers. According to Business Insider’s Oct. 24th article (with my added highlighting):

“Court filings reveal new secrets about the company, including a west coast software team in disarray, insufficient hardware for testing, and a secret skunkworks team devoted to getting patents and designing new prototypes — before its first product has even hit the market.”

From what I can tell of what Magic Leap is trying to do, namely focus planes to support vergence/accommodation, they could have achieved this faster with more conventional optics. It might not have been as sleek or “magical” as the final product, but it would have done the job, shown the advantage (assuming it is compelling), and gotten their internal developers up and running sooner.

It is even more obvious for Hololens. Using a simple combiner would have added trivially to the design size while reducing the cost and getting SDKs into more developers’ hands sooner.

Summary

It looks to me like both Hololens and likely Magic Leap put too much emphasis on using waveguides, which had a domino effect on other decisions, rather than making a holistic system decision. The way I see it:

  1. The waveguide did not dramatically make Hololens smaller (the jury is still out for Magic Leap – maybe they will pull a rabbit out of the hat). Look at ODG’s designs; they are every bit as small.
  2. The image quality is worse with waveguides than simpler combiner designs.
  3. Using waveguides boxed them in to using only display devices that were compatible with their waveguides. Most notably they can’t use OLED or other display technology that emit broader spectrum light.
  4. Even if it were smaller, it is more important to get more SDKs into developers’ hands (internal and/or external) sooner rather than later.

Hololens and Magic Leap appear to be banking on getting waveguides into volume production in order to solve all the image quality and cost problems with them. But it will depend on a lot of factors, some of which are not in their control, namely, how hard it is to make them well and at a price that people can afford. Even if they solve all the issues with waveguides, it is only a small piece of their puzzle.

Right now ODG seems to be taking more of the original Apple/Wozniak approach; they are finding elegance in a simpler design. I still have issues with what they are doing, but in the area of combining the light and image quality, they seem to be way ahead.

Magic Leap: When Reality Hits the Fan

Largely A Summary With Some New Information

I have covered a lot of material and even then have only scratched the surface of what I have learned about Magic Leap (ML). By combining the information available (patent applications, articles, and my sources), I have a fairly accurate picture of what Magic Leap is actually doing, based on feedback I have received from multiple sources.

This blog has covered a lot of different topics, and some conclusions have changed slightly as I discovered more information and got feedback from some of my sources. Additionally, many people just want “the answer.” So I thought it would be helpful to summarize some of the key results, including some more up-to-date information.

What Magic Leap Is Not Doing In The Product

Between what I have learned and feedback from sources I can say conclusively that ML is not doing the following:

  1. Light Fields – These would require a ridiculously large and expensive display system for even moderate resolution.
  2. Fiber Scan Displays – They have demonstrated low-resolution versions of these and may have used them to convince investors that they had a way to break through the pixel-size limitations of spatial light modulators (SLMs) like LCOS, DLP, and OLEDs. It’s not clear how much they improved the technology over what the University of Washington had done, but they have given up on these being competitive in resolution and cost with SLMs anytime soon. It appears to have been channeled into a long-term R&D effort and a way to keep the dream alive with investors.
  3. Laser Beam Scanning (LBS) by Microvision or anyone else – I only put this on the list because of an incredibly ill-informed news release by Technavio stating “Magic Leap is yet to release its product, and the product is likely to adopt MicroVision’s VRD technology.” Based on this, I would give the entire report they are marketing zero credibility; I think they are basing their reports on reading fan-person blogs about Microvision.
  4. OLED Microdisplays – They were using these in their demos and likely in the videos they made, but OLEDs are optically incompatible with their use of a diffractive waveguide (= ML’s Photonic Chip).

Prototypes that Magic Leap Has Shown

  1. FSD – A very low-resolution, crude, green-only fiber scanned display. This is what Rachel Metz described (with my emphasis added) in her MIT Technology Review March/April 2015 article: “It includes a projector, built into a black wire, that’s smaller than a grain of rice and channels light toward a single see-through lens. Peering through the lens, I spy a crude green version of the same four-armed monster that earlier seemed to stomp around on my palm.”
  2. TI DLP with a conventional combiner and a “variable focus element” (VFE). They use the DLP to generate a series of focus planes time-sequentially and change the VFE between the sequential focus planes. Based on what I have heard, this is their most impressive demo visually and they have been using it for over a year, but the system is huge.
  3. OLED with a conventional combiner (not a waveguide/“Photonics Chip”). This is likely the version they used to shoot the “Through Magic Leap Technology” videos that I analyzed in my Nov. 9th, 2016 blog post. In that article I thought that Micro-OLED might be used in the final product, but I have revised this opinion. OLEDs output very wide-bandwidth light that is incompatible with waveguides, so they would not work with the Photonics Chip ML makes such a big deal about.

What is curious is that none of these prototypes, with the possible exception of #1 (the single-color, low-resolution FSD), are using a “waveguide.” Waveguides are largely incompatible with OLEDs, and having a variable focus element is also problematical. Also, none of these are using LCOS, the most likely technology in the final product.

What Magic Leap Is Trying to Do In Their First “Product”

I’m going to piece together below what I believe, based on both public information and some private conversations (none of it based on NDA’ed information as far as I am aware).

  1. LCOS Microdisplay – All the evidence, including Business Insider’s October 27, 2016 report, points to ML using LCOS. They need a technology that will work well with waveguides using narrow-band (likely LED) light sources that they can make as bright as necessary and whose illumination angle they can control. LCOS is less expensive, more optically compact, and requires less power than DLP for near-eye systems. All these reasons are the same as why Hololens is using LCOS. Note, I’m not 100% sure on them using LCOS, but it is by far the most likely technology. They could also be using DLP, but I would put that at less than a 10% chance. I’m now ruling out Micro-OLED because it would not work in a waveguide.
  2. Two (2) sequential focus planes are supported – The LCOS microdisplay is likely only able to support about 120 full-color frames per second, which is only enough to support 2 sequential focus planes per 1/60th of a second of a moving image. Supporting more planes at a slower rate would result in serious image breakup when things move. The other big issue is the amount of processing required: having even two focus planes greatly increases the computation that has to be done. To make it work correctly, they will need to track the person’s pupils and factor that into their processing, and deal with things like occlusion. Also, with the limited number of focus planes, they will have to figure out how to “fake” or otherwise deal with a wider range of focus.
  3. Variable Focus – What I don’t know is how they are supporting the change in focus between the sequential focus planes. They could be using some form of electrically alterable lens but it is problematical to have non-collimated light entering a waveguide. It would therefore seem more consistent for them to be using the technique shown in their patent application US 2016/0327789 that I discussed before.
  4. Photonics Chip (= Diffractive Waveguide) – ML has made a big deal about their Photonics Chip, what everyone else would call a “waveguide.” The Photonics Chip likely works similarly to the one Hololens uses (for more information on waveguides, see my Oct 27th, 2016 post). The reports are that Hololens has suffered low yields with their waveguides, and Magic Leap’s will have more to do optically to support focus planes.

Comments

Overall, I think it is very clear that what they will actually make is only a fraction of the vision they have portrayed to the press. They may have wanted to do 50-megapixel-equivalent foveated displays, use FSD as their display device, have 6 focus planes, or even (from Fortune July 12, 2016) have ““light-field” technology [that] essentially mimics the brain’s visual-perception mechanisms to create objects and even people who look and behave just the way they would in the real world, and interact with that world seamlessly.” But then they have to build something that actually works and that people can afford to buy. Reality then hits the fan.


Magic Leap – Fiber Scanning Display Follow UP

Some Newer Information On Fiber Scanning

Through some discussions and further searching I found some more information about Fiber Scanning Displays (FSD) that I wanted to share. If anything, this material further supports the contention that Magic Leap (ML) is not going to have a high resolution FSD anytime soon.

Most of the images available are about fiber scanning for use in an endoscope camera and not as a display device. The images are of things like body parts, so they really don’t show resolution or the amount of distortion in the image. Furthermore, most of the images are from 2008 or older, which leaves quite a bit of time for improvement. I have found some information generated in the 2014 to 2015 time frame that I would like to share.

Ivan Yeoh’s 2015 PhD dissertation

[Image: laser-projected test pattern from Yeoh’s 2015 dissertation]

In terms of more recent fiber scanning technology, Ivan Yeoh’s name seems to be a common link. Shown at left is a laser-projected image and the source test pattern from Ivan Yeoh’s 2015 PhD dissertation “Online Self-Calibrating Precision Scanning Fiber Technology with Piezoelectric Self-Sensing” at the University of Washington. It is the best-quality image of a test pattern or known image that I have found of an FSD anywhere. The dissertation is about how to use feedback to control the piezoelectric drive of the fiber. While his paper is about endoscope calibration, he nicely included this laser-projected image.

The drive resulted in 180 spirals, which would nominally be 360 pixels across at the equator of the image, with a 50Hz frame rate. But based on the resolution chart, the effective resolution is about 1/8th of that, or only ~40 pixels; about half of this “loss” is due to resampling a rectilinear image onto the spiral. You should also note that there is considerably more distortion in the center of the image, where the fiber is moving more slowly.
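
For reference, here is the arithmetic behind those numbers (a minimal sketch; the 1/8 effective-resolution factor is my estimate from the published test chart):

```python
import math

spirals_per_frame = 180      # spiral turns per frame, from the dissertation
frame_rate_hz = 50

# A line across the equator crosses every spiral turn twice (once per side).
nominal_px_across = 2 * spirals_per_frame          # 360 nominal "pixels"
effective_px_across = nominal_px_across / 8        # ~45, per the test chart
print(nominal_px_across, effective_px_across)

# Rough pixel rate the laser drive must sustain over the circular frame:
pixels_per_frame = math.pi / 4 * nominal_px_across ** 2
print(f"{pixels_per_frame * frame_rate_hz:,.0f} pixels/sec")   # ~5 million
```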

Yeoh also included some good images at right showing how he had previously used a calibration setup to manually calibrate the endoscope before use, as it would go out of calibration with various factors including temperature. These are camera images, and based on the test charts they are able to resolve about 130 pixels across, which is pretty close to the Nyquist sampling limit for a spiral with 360 samples across. As expected, the center of the image, where the fiber is moving the slowest, is the most distorted.

While a 360 pixel camera is still very low resolution by today’s standards, it is still 4 to 8 times better than the resolution of the laser projected image. Unfortunately Yeoh was concerned with distortion and does not really address resolution issues in his dissertation. My resolution comments are based on measurements I could make from the images he published and copied above.

Washington Patent Application Filed in 2014

Yeoh is also the lead inventor on the University of Washington patent application US 2016/0324403, filed in 2014 and published in June 2016. At left is Fig. 26 from that application. It is supposed to be of a checkerboard pattern, which you may be able to make out. The figure is described as using a “spiral in and spiral out” process where, rather than having a retrace time, they just reverse the process. This application appears to be related to Yeoh’s dissertation work. Yeoh is shown as living in Fort Lauderdale, FL on the application, near Magic Leap headquarters. Yeoh is also listed as an inventor on the Magic Leap application US 2016/0328884 “VIRTUAL/AUGMENTED REALITY SYSTEM HAVING DYNAMIC REGION RESOLUTION” that I discussed in my last article. It would appear that Yeoh works or has worked for Magic Leap.

2008 YouTube Video

[Image: ideal versus actual spiral scan]

Additionally, I would like to include some images from a 2008 YouTube video that kmanmx from the Reddit Magic Leap subreddit alerted me to. While this is old, it has a nice picture of the fiber scanning process, both as a whole and with a close-up image near the start of the spiral process.

For reference, on the close-up image I have added the size of a “pixel” for a 250-spiral / 500-pixel image (red square) and what a 1080p pixel (green square) would be if you cropped the circle to a 16:9 aspect ratio. As you can hopefully see, the spacing and jitter variation/error in the scan process is several 1080p pixels in size. While this information is from 2008, the more recent evidence above does not show a tremendous improvement in resolution.

Other Issues

So far I have mostly concentrated on the issue of resolution, but there are other serious issues that have to be overcome. What is interesting in the Magic Leap and University of Washington patent literature is the lack of patent activity to address the other issues associated with generating a fiber scanned image. If Magic Leap were serious and had solved these issues with FSD, one would expect to see patent activity in making FSD work at high resolution.

One major issue that may not be apparent to the casual observer is controlling/driving the lasers over an extremely large dynamic range. In addition to supporting the typical 256 levels (8 bits) per color and supporting overall brightness adjustment based on the ambient light, the speed of the scan varies by a large amount, and they must compensate for this or end up with a very bright center where the scan is moving more slowly. When you combine it all together, they would seem to need to control the lasers over a greater than 2000:1 dynamic range, from a dim pixel at the center to the brightest pixel at the periphery.
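
Here is a minimal sketch of where a greater-than-2000:1 figure can come from (the innermost-radius fraction is my assumption for illustration):

```python
# A spiral sweeping at a constant rotation rate moves slower near the center
# (sweep velocity ~ radius), so center pixels get longer dwell times and the
# laser must be driven proportionally dimmer to keep brightness uniform.
gray_levels = 256            # 8 bits per color
r_inner_fraction = 0.1       # assumed innermost usable radius (10% of full)

velocity_compensation = 1.0 / r_inner_fraction     # ~10:1 center-to-edge
total_range = gray_levels * velocity_compensation
print(f"{total_range:.0f}:1")   # ~2560:1, before ambient-brightness dimming
```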

Conclusion

Looking at all the evidence, there is just nothing there to convince me that Magic Leap is anywhere close to having perfected an FSD to the point that it could be competitive with a conventional display device like LCOS, DLP, or Micro-OLED, much less reach the 50-megapixel resolutions they talk about. Overall, there is reason to doubt that an electromechanical scanning process is going to compete in the long run with an all-electronic method.

It very well could be that Magic Leap had hoped that FSD would work and/or that it was just a good way to convince investors that they had a technology that would lead to super-high resolution in the future. But there is zero evidence that they have seriously improved on what the University of Washington has done. They may still be pursuing it as an R&D effort, but there is no reason to believe that they will have it in a product anytime soon.

All roads point to ML using either LCOS (per Business Insider of October 2016) or a DLP, which I have heard is in some prototypes. This means they will likely have either a 720p or 1080p resolution display, the same as others such as Hololens (which will likely have a 1080p version soon).

The whole FSD effort is about trying to break through the physical pixel barrier of conventional technologies. There are various physics issues (diffraction is becoming serious) and material issues that will likely make it tough to make physical pixels much smaller than 3 microns.

Even if there was a display resolution breakthrough (which I doubt based on the evidence), there are issues as to whether this resolution could make it through the optics. As the resolution improves the optics have to also improve or else they will limit the resolution. This is a factor that particularly concerns me with the waveguide technologies I have seen to date that appear to be at the heart of Magic Leap optics.

Magic Leap – Separating Magic and Reality

The Goal – Explain What is Magic Leap Doing

Magic Leap has a way of talking about what they hope to do someday and not necessarily what they can do anytime soon.  Their patent applications are full of things that are totally impossible or impractical to implement.  I’ve been reading well over a thousand pages of Magic Leap (ML) patents/applications, various articles about the company, watching ML’s “through the optics” videos frame by frame, and then applying my own knowledge of display devices and the technology business to develop a picture of what Magic Leap might produce.

Some warnings in advance

If you want all happiness and butterflies, as well as elephants in your hand and whales jumping in auditoriums, or some tall tale of 50 megapixel displays and of how great it will be someday, you have come to the wrong place.  I’m putting the puzzle together based on the evidence and filling in with what is likely to be possible in both the next few years and for the next decade.

Separating Fact From Fiction

There have been other well-meaning evaluations such as “Demystifying Magic Leap: What Is It and How Does It Work?”, “GPU of the Brain”, and the videos by “Vance Vids”, but these tend to start from the point of believing the promotion/marketing surrounding ML and finding support in the patent applications rather than critically evaluating them. Wired Magazine has a series of articles, and Forbes and others have covered ML as well, but these have been personality and business pieces that make no attempt to seriously understand or evaluate the technology.

Among the biggest fantasies surrounding Magic Leap is the arrayed Fiber Scanning Display (FSD); many people think this is real. ML co-founder and Chief Scientist Brian Schowengerdt developed this display concept at the University of Washington based on an innovative endoscope technology, and it features prominently in a number of ML-assigned patent applications. There are giant issues in scaling up FSD technology to high resolution and in what it would require.

In order to get on with what ML is most likely doing, I have moved to the Appendix the discussion of why FSDs, light fields, and very complex waveguides are not what Magic Leap is doing. Once you get rid of all the “noise” of the impossible things in the ML patents, you are left with a much better picture of what they actually could be doing.

What’s left is enough to make impressive demos, and it may be possible to produce at a price that at least some people could afford in the next two years. But ML still has to live by what is possible to manufacture.

Magic Leap’s Optical “Magic” – Focus Planes

From: Journal of Vision 2009

At the heart of all of ML’s optics-related patents is the concept of eye vergence-accommodation, where the focus of the various parts of a 3-D image should agree with their apparent distances or it will cause eye/brain discomfort. For more details on this subject, see this information about Stanford’s work in this area and their approach of using quantized (only 2-level) time-sequential light fields.

There are some key similarities between the Stanford and Magic Leap approaches. They both quantize to a few levels to make the system possible to implement, they both present their images time-sequentially, and they rely on the eye/brain both to fill in between the quantized levels and to integrate a series of time-sequential images. Stanford’s approach is decidedly not see-through, with an Oculus-like setup with two LCD flat panel displays in series, whereas Magic Leap’s goal is to merge 3-D images with the real world in Mixed Reality (MR).

Magic Leap uses the concept of “focus planes,” where they conceptually break up a 3-D image into quantized focus planes based on the distance of the virtual image. While they show 6 virtual planes in Fig. 4 from the ML application above, that is probably what they would like to do; they are likely doing fewer planes (2 to 4) due to practical concerns.

Magic Leap then renders the parts of an image into the various planes based on virtual distance. The ML optics make the planes appear to the eye to be focused at their corresponding virtual distances. These planes are optically stacked on top of each other to give the final image, relying on the person’s eye/brain to fill in for the quantization.
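
Conceptually, the rendering step is just bucketing pixels by depth. A minimal sketch (the plane values are my assumption; spacing planes in diopters rather than meters reflects how the eye’s accommodation error works):

```python
# Assign each rendered pixel depth to the nearest focus plane.
# Planes are spaced in diopters (1/meters) since accommodation error
# is roughly uniform in diopters.
plane_diopters = [0.0, 0.5, 1.5, 3.0]   # infinity, 2 m, 0.67 m, 0.33 m (assumed)

def plane_for_depth(depth_m):
    d = 1.0 / depth_m                    # virtual distance in diopters
    return min(range(len(plane_diopters)),
               key=lambda i: abs(plane_diopters[i] - d))

for depth_m in (100.0, 2.5, 0.8, 0.4):
    print(f"{depth_m} m -> plane {plane_for_depth(depth_m)}")
```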

Frame Sequential Focus Planes With SLMs

Magic Leap’s patents/applications show various ways to generate these focus planes; the most fully formed concepts use a single display per eye and present the focus planes time-sequentially in rapid succession, what ML refers to as “frame-sequential,” where there is one focus plane per “frame.”

Due to both the cost and size of multiple displays per eye and their associated optics, including those to align and overlay them, the only possible way ML could build a product for even a modest-volume market is by using frame-sequential methods with a high-speed spatial light modulator (SLM) such as a DLP, LCOS, or OLED microdisplay.

Waveguides and Focus Planes

Light rays coming from a faraway point that make it into the eye are essentially parallel (collimated), while light rays from a near point arrive over a wider set of angles. This difference in angles is what makes them focus differently, but at the same time it creates problems for existing waveguide optics, such as what Hololens is using.

The very flat and thin optical structures called “waveguides” will only work with collimated light entering them, because of how the light totally internally reflects to stay in the light guide and the way the diffraction works to make the light exit. So a simple waveguide would not work for ML.

Some of ML’s concepts use one or more beam-splitting mirror-type optics rather than waveguides for this reason. Various ML patent applications show using a single large beam splitter or multiple smaller ones (such as at left), but these will be substantially thicker than a typical waveguide.

What Magic Leap calls a “Photonics Chip” looks to be at least one layer of diffractive waveguide. There is no evidence of mirror structures, and because it bends the wood in the background (if it were just a simple plate of glass, the wood in the background would not be bent), it appears to be a diffractive optical structure.

Because ML is doing focus planes, they need not one but a stack of waveguides, one per focus plane. The waveguides in ML’s patent applications show collimated light entering each waveguide in the stack like a normal waveguide, but the exit diffraction gratings both cause the light to exit and impart the appropriate focus-plane angle to the light.

To be complete, Magic Leap has shown in several patent applications some very thick “freeform optics” concepts, but none of these would look anything like the “Photonics Chip” that ML shows. ML’s patent applications show many different optical configurations, and they have demoed a variety of different designs. What we don’t know is whether the Photonics Chip they are showing is what they hope to use in the future or what will be in their first products.

Magic Leap’s Fully Formed Designs in Their Recent Patent Applications

Most of Magic Leap’s patent applications showing optics contain more like fragments of ideas. There are lots of loose ends and incomplete concepts.

More recently (one published just last week), there are patent applications assigned to Magic Leap with more “fully formed designs” that look much more like they actually tried to design and/or build them. Interestingly, these applications don’t include as inventors the founders Rony Abovitz, the CEO, or Brian T. Schowengerdt, Chief Scientist, though they may use ideas from the prior “founders’ patent applications.”

While the earlier ML applications mention spatial light modulators (SLMs) using DLP, LCOS, and OLED microdisplays and talk about variable focus elements (VFEs) for time-sequentially generating focus planes, they don’t really show how to put them together to make anything (a lot is left to the reader).

Patent applications 2016/0011419 (left) and 2015/0346495 (below) show straightforward ways to achieve field-sequential focus planes using a spatial light modulator (SLM) such as a DLP, LCOS, or OLED microdisplay.

A focus plane is created by setting the variable focus element (VFE) to one focus point and then generating the image on the SLM. The VFE focus is then changed and a second focus plane is displayed by the SLM. This process can be repeated to generate more focus planes, limited by how fast the SLM can generate images and by the level of motion artifacts that can be tolerated.
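
In control terms, it is a simple loop. A minimal sketch (the VFE and SLM classes are hypothetical placeholders, not any real driver API):

```python
# Frame-sequential focus planes: one VFE setting + one SLM image per plane.
class VFE:
    def set_focus(self, diopters):
        pass   # a real VFE needs settling time, which eats into each time slot

class SLM:
    def show(self, image):
        pass   # displays one full-color field (DLP/LCOS/OLED microdisplay)

def display_frame(vfe, slm, plane_images, frame_period_s=1.0 / 60):
    """plane_images: list of (focus_diopters, image), e.g. far then near."""
    slot_s = frame_period_s / len(plane_images)   # time budget per focus plane
    for diopters, image in plane_images:
        vfe.set_focus(diopters)    # must settle well within slot_s
        slm.show(image)            # then display this plane's image
        # a real system would wait out the remainder of slot_s, synced to vsync
```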

These are clearly among the simplest ways to generate focus planes. All that is added over a “conventional” design is the VFE. When I first heard about Magic Leap many months ago, I heard they were using DLPs with multiple focus depths, but a more recent Business Insider article reports ML is using Himax LCOS. Both of these designs could easily be adapted to support OLED microdisplays.

The big issues I have with the straightforward optical approaches are the optical artifacts I have seen in the videos and the big deal ML makes out of their Photonics Chip (waveguide). Certainly their first generation might use a more straightforward optical design and save the Photonics Chip for the next generation.

Magic Leap’s Videos Show Evidence of Waveguide Optics

As I wrote last time, there is a lot of evidence from the videos ML has put out that they are using a waveguide, at least for the video demos. The problem when you bend light in a short distance using diffraction gratings or holograms is that some of the light does not get bent correctly, and this shows up as colors not lining up (chroma aberrations) as well as what I have come to call “waveguide glow.” If you look at R2D2 below (you may have to click on the image to see it clearly), you should see a blue/white glow around him. I have seen this kind of glow in every diffractive and holographic waveguide I have seen. I have heard that the glow might be eliminated someday with laser/very-narrow-bandwidth colors and holographic optics.

The point here is that there is a lot of artifact evidence that ML was at least using some kind of waveguide in their videos. This makes it more likely that their final product will also use waveguides, and at the same time it may have some or all of the same artifacts.

Best Fit Magic Leap Application with Waveguides

If you drew a Venn diagram of all the existing information, the one patent application that best fits it all is the very recent US 2016/0327789. This is no guarantee that it is what they are doing, but it fits the current evidence best. It combines a focus-plane-sequential LCOS SLM (it shows the concept could also support DLP, but not OLED) with waveguide optics.

The way this works is that for every focus plane there are 3 waveguides (red, green, and blue) and a spatially separate set of LEDs. Because the LEDs are spatially separate, they illuminate the LCOS device at different angles, and after going through the beam splitter, the waveguide “injection optics” aim the light from the different spatially separated LEDs at different waveguides of the same color. Not shown in the figure below is an exit grating that both causes the light to exit the waveguide and imparts an angle to the light based on the focus associated with that focus plane. I have colored in the “a” and “b” spatially separated red paths below (there are similar pairs for blue and green).

With this optical configuration, the LCOS SLM is driven with the image data for a given color for a given focus plane, and then the associated color LED for that plane is illuminated. This process then continues with a different color and/or focus plane until all 6 waveguides for the 3 colors by 2 planes have been illuminated.
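
As I read the ’789 application, the sequencing amounts to a loop like the sketch below (the devices and the color-by-plane ordering are my hypothetical illustration; the application does not spell out an order):

```python
# Field sequence for 3 colors x 2 focus planes = 6 waveguide layers.
COLORS = ("red", "green", "blue")
PLANES = (0, 1)   # e.g. far and near focus planes

def show_frame(slm, leds, fields):
    """fields[(color, plane)] is the image field for that color and plane."""
    for plane in PLANES:
        for color in COLORS:
            slm.load(fields[(color, plane)])
            # The spatially separated LED picks which same-color waveguide
            # (and therefore which focus-plane exit grating) gets the light.
            leds.flash(color=color, position=plane)

# 6 fields per 1/60 s frame means the LCOS must sustain 360 fields/sec,
# consistent with "about 120 full-color frames per second" cited earlier.
```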

The obvious drawbacks with this approach:

  1. There are a lot of layers of waveguide with exit diffraction gratings that the user will be looking through, and the number of layers grows by 3 with each added focus plane. That is a lot of stuff to be looking through, and it is bound to degrade the forward view.
  2. There are a lot of optical devices that all the light passes through, and even small errors and leaked light build up. This can’t be good for the overall optical quality. These errors show up as resolution loss/blurring, chroma aberrations, and glowing/halo effects.
  3. They must switch through all the colors and focus planes fast enough to avoid motion artifacts where the colors and/or the focus planes break up. Note this issue exists with any approach that is both field-sequential and focus-plane-sequential. Obviously this issue becomes worse with more focus planes.

The ’789 application shows an alternative implementation using a DLP SLM. Interestingly, this arrangement would not work for OLED microdisplays, as they generate their own illumination, so you would not be able to get the spatially separated illumination.

So what are they doing?  

Magic Leap is almost certainly using some form of spatial light modulator with field-sequential focus planes (I know I will get push-back from the ML fans who want to believe in the FSD — see the Appendix below); this is the only way I could see them going to production in the next few years. Based on the Business Insider information, it could very well be an LCOS device in the production unit.

The 2015/0346495 design with the simple beam splitter is what I would have chosen for a first design, provided there is an appropriate variable focus element (VFE) available. It is by far the simplest design and would seem to have the lowest risk. The downside is that the large angled beam splitter will make it thicker, but I doubt by that much. Not only is it lower risk (if the VFE works), but the image quality will likely be better using a simple beam splitter and spherical mirror/combiner than many layers of diffractive waveguide.

The 2016/0327789 application touches all the bases based on the available information. The downside is that they need 3 waveguides per focus plane. So if they are going to support, say, just 3 focus planes (infinity, medium, and short focus), they are going to have 9 (3×3) layers of waveguides to manufacture and pay for, and 9 layers to look through to see the real world. Even if each layer is of extremely good quality, the errors will build up in so many layers of optics. I have heard that the waveguide in Hololens has been a major yield/cost item, and what ML would have to build would seem to be much more complex.

Magic Leap certainly could have something totally different, but they can’t be pushing on all fronts at once.  They pretty much have to go with a working SLM technology and generate their focus planes time-sequentially to build an affordable product.

I’m fond of repeating the 90/90 rule, that “it takes 90% of the effort to get 90% of the way there, and then it takes the other 90% to do the last 10%”; someone once quipped back that it can also be 90/90/90. The point is that you can have something that looks pretty good and impresses people, but solving the niggling problems and making it manufacturable and cost-effective almost always takes more time, effort, and money than people want to think. These problems tend to become multiplicative if you take on too many challenges at the same time.

Comments on Display Technologies

As far as display technologies go, each of the spatial light modulator technologies has its pros and cons.

  1. LCOS seems to be finding the widest acceptance due to cost.  It is generally lower power in near-eye displays than DLP.   The downside is that it has a more modest field rate, which could limit the number of focus planes.  It could be used in any of the 3 prime-candidate optical systems.  Because the LEDs are separate from the display, they can support essentially any level of brightness.
  2. DLP has the fastest potential field rate, which would support more focus planes, and with DLP you could trade color depth for focus planes (see the sketch after this list).  DLPs also tend to have higher contrast.  As with LCOS, brightness will not be an issue, as the LEDs can provide more than enough light.  DLP tends to be higher in cost and power, and due to the off-axis illumination, tends to have a somewhat bigger optical system than LCOS in near-eye applications.
  3. OLED – It has the major advantage of not having to sequentially change color fields, but current devices still have a slower frame rate than DLP and LCOS can support.  What I don’t know is how much the field rate is limited by the OLED designs to date versus what they could support if pushed.  There is also no control over the angle of illumination, such as is used in the ‘789 application: OLEDs emit rather diffuse light with little angle control, and this could limit their usefulness for focus planes, where you need to control the angles of light.
  4. FSD – Per my other comments and the Appendix below, don’t hold your breath waiting for FSDs.
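As a sketch of the DLP color-depth-for-focus-planes trade in item 2: the budget of 10,000 binary fields per second below is a made-up round number for illustration, not the specification of any real DLP controller.

```python
# How DLP might trade color depth for focus planes (illustrative only; the
# 10,000 binary-fields/sec budget is a made-up round number, not a real
# DLP controller spec).
binary_fields_per_second = 10_000  # assumed budget of 1-bit DMD fields
frame_rate_hz = 60
colors = 3

for planes in (1, 2, 3, 6):
    bits_per_color = binary_fields_per_second // (frame_rate_hz * colors * planes)
    print(f"{planes} focus planes -> ~{bits_per_color} bits per color")
# 1 plane  -> ~55 bits per color (far more than needed)
# 6 planes -> ~9 bits per color
```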
Image Quality Concerns

I would be very concerned about Magic Leap’s image quality and resolution beyond gaming applications. Forget all those magazine writers and bloggers getting geeked out over a demo of a new toy; at some point reality must set in.

Looking at what Magic Leap is doing, and at what I have seen in the videos, the effective resolution and image quality are going to be low compared to what you get even on a larger cell phone.  They are taking a display device that could produce a good image (either 720p or maybe 1080p) under normal/simple optics and putting it through a torture test of optical waveguides and whatever optics generate their focus planes at a rational cost; something has to give.

I fully expect to see a significant resolution loss no matter what they do, plus chromatic aberrations and waveguide halos, provided they use waveguides.  Another big issue for me will be the “real world view” through whatever it takes to create the focus planes, and how it will affect, say, seeing your TV or computer monitor through the combiner/waveguide optics.

I would also be concerned about field-sequential artifacts and focus-plane-sequential artifacts.  Perhaps these are why there are so many double images in the videos.

Not to be all doom and gloom.  Based on casual comments from people who have seen it, and the fact that some really smart people invested in Magic Leap, it must provide an interesting experience, and image quality is not everything for many applications. It certainly could be fun to play with, at least for a while. After all, Oculus Rift has a big following, and its angular resolution is so bad that they cover it up with blurring, and it has optical problems like “god rays.”

I’m more trying to level out the expectations.   I expect it to be a long way from replacing your computer monitor, as one reporter suggested, or even your cell phone, at least for a very long time. Remember that this has so much stuff in it that, in addition to the head-worn optics and display, you are going to have a cable down to the processor and battery pack (a subject I have only barely touched on above).

Yes, yes, I know Magic Leap has a lot of smart people and a lot of money (and you could say the same for Hololens), but sometimes the problem is bigger than all the smart people and money can solve.

Appendix: 

The Big Things Magic Leap is NOT Going To Make in Production Anytime Soon

The first step in understanding Magic Leap is to remove all the clutter/noise that ML has generated.  As my father often used to say, “there are two ways to hide information: you can remove it from view or you can bury it.” Below is a list of the big things discussed by ML themselves and/or in their patents that are either infeasible or impossible anytime soon.

It would take a long article on each of these to give all the reasons why they are not happening, but hopefully the comments below will at least outline why:

ml-array-pic

A) Laser Fiber Scanning Display (FSD) 

A number of people have picked up on this, particularly because the co-founder and Chief Scientist, Brian Schowengerdt, developed it at the University of Washington.  The FSD comes in two “flavors”: the low-resolution single FSD and the arrayed FSD.

1) First, you are pretty limited in the resolution of a single mechanically scanning fiber (even more so than with mirror scanners). You can only make them spiral so fast, and they have their own inherent resonance. They trace an imperfectly spaced circular spiral onto which you then have to map a rectangular grid of pixels. You can only move the fiber so fast; you can trade frame rate for resolution a bit, but you can’t just make the fiber move faster with good control and scale up the resolution. So maybe you get 600 spirals, but that only yields maybe 300×300 effective pixels in a square.

2) When you array them, you have to overlap the spirals quite a bit. According to ML patent US 9,389,424, it will take about 72 fiber scanners to make a 2560×2048 array (about 284×284 effective pixels per fiber scanner) at 72 Hz.

3) Let’s say we only want 1920×1080, which is where the better microdisplays are today: that is about 1/2.5 of the 72 fiber scanners, or about 28 of them (a quick arithmetic check follows below). This means we need 28 × 3 (red, green, blue) = 84 lasers. A near-eye display typically outputs between 0.2 and 1 lumen of light, and you then divide that by 28. So you need a very large number of really tiny lasers that nobody I know of makes (or may even know how to make). They have to be individual, very fast switching lasers so you can control them totally independently and at very high speed (on-off in the time of a “spiral pixel”).
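A back-of-the-envelope check on the numbers above (my arithmetic; the patent’s 72-scanner figure includes the spiral overlap):

```python
# Sanity check on the fiber-scanner and laser counts quoted above (my own
# arithmetic; the patent's 72-scanner figure includes spiral overlap).
pixels_per_scanner = 284 * 284           # ~80,656 effective pixels per fiber
full_array = 2560 * 2048                 # ~5.24 Mpixels per US 9,389,424
print(full_array / pixels_per_scanner)   # ~65 scanners before overlap; ~72 with it

target = 1920 * 1080                     # the 1080p case in point 3
scanners = round(72 * target / full_array)  # scale the patent's count down
lasers = scanners * 3                    # R, G, B per fiber
print(scanners, lasers)                  # 28 scanners, 84 lasers
print(f"{1.0 / scanners:.3f} lumens/scanner at a 1-lumen total output")
```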

4) So now you need to convince somebody to spend hundreds of millions of dollars in R&D to develop very small and very inexpensive direct green (particularly) lasers (those cheap green lasers you find in laser pointers won’t work because they switch WAY too slowly and are very unstable). Then, after they spend all that R&D money, they have to sell them to you very cheaply.

5) Laser combining into each fiber. You then have the other nasty problem of getting the light from 3 lasers into a single fiber; it can be done with dichroic mirrors and the like, but it has to be VERY precise or you miss the fiber. To give you some idea of the “combining” process, you might want to look at my article on how Sony combined 5 lasers (2 red, 2 green, and 1 blue for brightness) for a laser mirror-scanning projector: http://www.kguttag.com/2015/07/13/celluonsonymicrovision-optical-path/. Only now you don’t do this just once but 28 times. This problem is not impossible, but it requires precision, and precision costs money. Maybe if you put enough R&D money into it you could do it on a single substrate.  BTW, in the photo of a Magic Leap prototype (https://www.wired.com/wp-content/uploads/2016/04/ff_magic_leap-eric_browy-929×697.jpg), it looks to me like they didn’t bother combining the lasers into single fibers.

6) Next, to get the light injected into a waveguide, you need to collimate the array of cone-shaped bundles of light rays. I don’t know of any way, even with holographic optics, to collimate this light, because you have overlapping rays of light going in different directions.  You can’t collimate the individual cones of light, or there is no way to get them to overlap into a single image without gaps in it. I have been looking through the ML patent applications, and they never seem to say how they would get this array of FSDs injected into a waveguide. You might be able to build one in a lab by diffusing the light first, but it would be horribly inefficient.

7) Now you have the issue of how you are going to support multiple focus planes. 72 Hz is not fast enough to do it field-sequentially, so you have to put in parallel arrays, multiplying everything by the number of focus planes. The question at this point is how much more than a Tesla Model S (starting at $66K) it will cost in production.

I think this is a big ask when you can buy a 720p LCOS engine (and probably soon 1080p) for about $35 per eye. The theoretical FSD advantage is that it might be scaled up to higher resolutions, but you are several miracles away from that today.

ml-wavefront

B) Light Fields, Light Waves, etc.

There is no way to support any decent resolution with light fields that is going to fit on anyone’s head.  It takes about 50 to 100 times the simultaneous image information to support the same resolution with a light field.  Not only can’t you afford to display all the information needed to support good resolution, it would take an insane level of computer processing.  What ML is doing is a “shortcut” of multiple focus planes, which is at least possible.  The “light wave display” is insane squared; it requires the array of fibers to be in perfect sync, among other issues.
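For a sense of scale, here is the 50-to-100× multiplier applied to an ordinary display stream; 1080p at 60 Hz is my assumed baseline.

```python
# Rough scale of the light-field bandwidth problem (1080p60 is an assumed
# baseline; the 50-100x multiplier is the range stated above).
baseline = 1920 * 1080 * 60            # ~124 Mpixels/sec for ordinary 1080p60
for multiplier in (50, 100):
    print(f"{multiplier}x -> {baseline * multiplier / 1e9:.1f} Gpixels/sec")
# 50x  ->  6.2 Gpixels/sec
# 100x -> 12.4 Gpixels/sec
```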

ml-multi-display

C) Multiple Displays Driving the Waveguides

ML patents show passive waveguides driven by multiple displays (fiber scanning or conventional). It quickly becomes cost-prohibitive to support multiple displays (2 to 6, as the patents show), all at the required resolution.

ml-vfe-compensation

D) Variable Focus Optics on Either Side of the Waveguides

Several of their figures show electrically controlled variable focus element (VFE) optics on either side of the waveguides, with one set changing the focus of a frame-sequential image plane while a second set of VFEs compensates so that the “real world” view remains in focus. There is zero probability of this working without horribly distorting the real-world view.
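In thin-lens terms, the compensation idea works out as below; this is a simplified sketch of the concept only, ignoring element spacing, aberrations, and distortion, which are exactly where it falls apart.

```python
# Thin-lens sketch of the VFE-pair idea: the display-side VFE shifts the
# virtual image plane, and the world-side VFE applies equal-and-opposite
# optical power so the real world would (ideally) stay in focus. This
# ignores element spacing, aberrations, and distortion -- the hard parts.
def compensator_power(display_vfe_diopters: float) -> float:
    """Power (diopters) the world-side VFE must apply to cancel the first."""
    return -display_vfe_diopters

for d in (0.0, 1.0, 3.0):  # focus planes at infinity, 1 m, and 0.33 m
    print(f"display VFE {d:+.1f} D -> compensator {compensator_power(d):+.1f} D")
```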

What Magic Leap Is Highly Unlikely to Produce

multiplane-waveguide

Active Switching Waveguides – ML patent applications show many variations that have drawn attention in other articles. The complexity of making them, and the resultant cost, is one big issue.  There would likely be serious degradation of the real-world view through all the layers and optical structures.  Then there is the cost, in both displays and optics, of routing images to the various planes of the waveguide.  ML’s patent applications don’t really say how the switching would work, other than that they might use liquid crystal or lithium niobate, with nothing to show they have really thought it through.   I put this in the “unlikely” category because companies such as DigiLens have built switchable Bragg gratings.

Magic Leap – The Display Technology Used in their Videos

So, what display technology is Magic Leap (ML) using, at least in their posted videos?   I believe the videos rule out a number of the possible display devices, and a process of elimination leaves only one likely technology. Hint: it is NOT the laser fiber scanning prominently shown in a number of ML patents and articles about ML.

Qualifiers

Magic Leap could be posting deliberately misleading videos and/or deliberately bad videos to throw off the people analyzing them, but I doubt it. It is certainly possible that the display technology shown in the videos is a prototype that uses different technology from what they will use in their products.   I am hearing that ML has a number of different levels of systems, so what is being shown in the videos may or may not be what they go to production with.

A “Smoking Gun Frame” 

So, with all the qualifiers out of the way, below is a frame capture from Magic Leap’s “A New Morning,” taken while they were panning the headset and camera. The panning causes temporal (time-based) frame/shutter artifacts in the form of partial ghost images, the result of the camera and the display running asynchronously and/or at different frame rates. This one frame, along with other artifacts you don’t see when playing the video, tells a lot about the display technology used to generate the image.

ml-new-morning-text-images-pan-left

If you look at the left red oval, you will see at the green arrow a double/ghost image starting and continuing below that point.  This is where the camera caught the display in its update process. Also, if you look at the right side of the image, you will notice that the lower 3 circular icons (in the red oval) have double images while the top one does not (the 2nd from the top has a faint ghost, as it is at the top of the field transition). By comparison, there is no double image of the real world’s lamp arm (see center red oval), verifying that the roll bar is from the ML image generation.

ml-new-morning-text-whole-frame

Update 2016-11-10: I have uploaded the whole 1920×1080 frame capture for those who want to look at it.   Click on the thumbnail at left to see it (I left in the highlighting ovals that I overlaid).

Update 2016-11-14: I found a better “smoking gun” frame, below, at 1:23 in the video.  In this frame you can see the transition from one frame to the next.  In playing the video, the frame transition slowly moves up from frame to frame, indicating that the display and camera are asynchronous but running at almost the same frame rate (or an integer multiple thereof, like 1/60th or 1/30th).

ml-smoking-gun-002
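As an aside, the slow upward crawl of the transition is exactly what you would expect from two nearly equal rates beating against each other. The sketch below uses 60 vs. 59.94 Hz purely as illustrative numbers, not measurements from the video.

```python
# Why an asynchronous display/camera pair yields a slowly moving seam: the
# transition drifts by the relative rate difference per captured frame.
# The 60 vs 59.94 Hz figures are illustrative assumptions, not measurements.
display_hz = 60.0
camera_hz = 59.94
drift = (display_hz - camera_hz) / camera_hz   # fraction of frame height/frame
print(f"{drift:.4%} of the frame height per captured frame")
print(f"~{1 / drift:.0f} frames (~{1 / drift / camera_hz:.0f} s) for a full wrap")
```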

In addition to the “smoking gun” frame above, I have looked at the “A New Morning” video as well as the “ILMxLAB and ‘Lost Droids’ Mixed Reality Test” and the early “Magic Leap Demo,” all of which are stated to be “Shot directly through Magic Leap technology . . . without use of special effects or compositing.”  I was looking for any other artifacts that would be indicative of the various possible technologies.

Display Technologies it Can’t Be

Based on the image above and other video evidence, I think it is safe to rule out the following display technologies:

  1. Laser Fiber Scanning Display – either a single fiber or an arrayed fiber scanning display, as shown in Magic Leap’s patents and articles (and for which their co-founder and Chief Scientist is famous from his work prior to joining ML).  A fiber-scan display scans in a spiral (or, if arrayed, an array of spirals) with a “retrace/blanking” time to get back to the starting point.  This blanking would show up as diagonal black line(s) and/or flicker in the video (much as an old CRT would show a horizontal black retrace line).  Also, if it were laser fiber scanning, I would expect to see evidence of laser speckle, which is not there; laser speckle comes through even if the image is out of focus.  Through my study of laser beam scanning (and I am old enough to have photographed CRTs), there is nothing in the still frame or the videos indicative of a scanning process with a retrace, or of lasers being used at all.
  2. Field Sequential DLP or LCOS – There is absolutely no field-sequential color rolling, flashing, or flickering in the video or in any still captures I have made. Field-sequential displays show only one color at a time, very rapidly; when these rapid color-field changes beat against the camera’s scanning/shutter process, they show up as color variances and/or flicker, not as a simple double image.  This is particularly important because it has been reported that Himax, which makes field-sequential LCOS devices, is making projector engines for Magic Leap. So either they are not using Himax, or they are changing technology for the actual product.  I have seen many years of DLP and LCOS displays, both live and through many types of video and still cameras, and I see nothing to suggest field-sequential color is being used.
  3. Laser Beam Scanning with a mirror – As with CRTs and fiber scanning, there has to be a blanking/retrace period between frames, which would show up in the videos as a roll bar (dark and/or light) that rolls/moves over time.  I’m including this just to be complete, as it was never suggested anywhere with respect to ML.
UPDATE Nov 17, 2016

Based on other evidence that has recently come in, even though I have not found video evidence of field-sequential color artifacts in any of the Magic Leap videos, I’m more open to thinking it could be LCOS or (less likely) DLP, with the camera sensor perhaps doing more to average out the color fields than other cameras I have used in the past.

Display Technologies That it Could Be 

Below is a list of possible technologies that could generate video images consistent with what Magic Leap has shown to date, including the still frame above:

  1. Micro-OLED (about 10 known companies) – Very small OLEDs on silicon or similar substrates. A list of some of the known makers is given at OLED-info (Epson has recently joined this list, and I would bet that Samsung and others are working on them internally). Micro-OLEDs both A) are small enough to inject an image into a waveguide for a small headset and B) have display characteristics that behave the way the image in the video is behaving.
  2. Transmissive Color Filter HTPS (Epson) – While Epson was making transmissive color-filter HTPS devices, their most recent headset has switched to a Micro-OLED panel, suggesting they themselves are moving away from HTPS.  Additionally, while Meta’s first generation used Epson’s HTPS, they moved to a large OLED (with a very large spherical reflective combiner).  This technology is challenged in going to higher resolution and smaller size.
  3. Transmissive Color Filter LCOS (Kopin) – Kopin is the only company making color-filter transmissive LCOS, but they have not been very active of late as a component supplier, and they have serious issues with a roadmap to higher resolution and smaller size.
  4. Color Filter Reflective LCOS – I’m putting this in more for completeness, as it is less likely.  While in theory it could produce the images, it generally has lower contrast (which would translate into a lack of transparency and a milkiness to the image) and lower color saturation.   This would fit with Himax as a supplier, as they have color-filter LCOS devices.
  5. Large Panel LCD or OLED – This would suggest a large headset doing something similar to the Meta 2.   I would tend to rule this out because it would go against everything else Magic Leap shows in their patents and has said publicly.   It’s just that it could have generated the image in the video.
And the “Winner” is, I believe . . . Micro-OLED (see update above)

By a process of elimination, including removing the “possible but unlikely” candidates above, everything strongly points to a Micro-OLED display device. Let me say, I have no personal reason to favor it being Micro-OLED; one could argue, based on my background, that it might be to my advantage for it to be LCOS, if anything.

Before I started any serious analysis, I didn’t have an opinion. I started out doubtful that it was a field-sequential or scanning (fiber/beam) device due to the lack of any indicative artifacts in the video, but it was the “smoking gun” frame that convinced me: if the camera was catching one kind of temporal artifact, it should have been catching the others as well.

I’m basing this conclusion on the facts as I see them.  Period, full stop.   I would be happy to discuss this conclusion (if asked rationally) in the comments section.

Disclosure . . . I Just Bought Some Stock Based on My Conclusion and My Reasoning for Doing So

The last time I played this game of “what’s inside,” I was the first to identify that a Himax LCOS panel was inside Google Glass, which sent their market cap up almost $100M in a couple of hours.  I had zero shares of Himax when that happened; my technical conclusion then, as now, was based on what I saw.

Unlike my call on Himax in Google Glass, I have no idea which company makes the device Magic Leap appears to be using, nor whether Magic Leap will change technologies for their production device.  I have zero inside information and am basing this entirely on the information I have given above (you have been warned).   Not only is the information public, it is based on videos that are many months old.

I looked at the companies on the OLED Microdisplay List by www.oled-info.com (who have followed OLED for a long time).  It turned out all the companies were either part of a very large company or privately held, except for one: eMagin.

I have known of eMagin since 1998, and they have been around since 1993.  They essentially mirror Microvision, which does laser beam scanning and was also founded in 1993, a time when you could go public without revenue.  eMagin has spent/lost a lot of shareholder money and is worth about 1/100th of its peak in March 2000.

I have NOT done any serious technical, due-diligence, or other stock analysis of eMagin, and I am not a stock expert.

I’m NOT saying that eMagin is in Magic Leap. I’m NOT saying that Micro-OLED is necessarily better than any other technology.  All I am saying is that I think someone’s Micro-OLED technology is being used in the Magic Leap prototype, and that Magic Leap is such a hotly followed company that this might (or might not) affect the stock price of companies making Micro-OLEDs.

So, unlike the Google Glass and Himax case above, I decided to place a small (for me) “stock bet” on my ability to identify the technology (but not the company) by buying some eMagin stock on the open market at $2.40 this morning, 2016-11-09 (symbol EMAN). I’m just putting my money where my mouth is, so to speak (and NOT, once again, giving stock advice), and playing a hunch.  I’m making a full disclosure by letting you know what I have done.

My Plans for Next Time

I have some other significant conclusions, drawn from looking at Magic Leap’s videos, about the waveguide/display technology that I plan to show and discuss next time.