Magic Leap and Hololens and LCOS

LCOS Used In Hololens and Likely Magic Leap

It is well known that Microsoft's Hololens uses two Himax manufactured Field Sequential Color (FSC) LCOS microdisplays. Additionally there are reports, particularly from KGI Securities analyst Ming-Chi Kuo as reported in Business Insider, that Magic Leap (ML) is also using Himax's LCOS. Further supporting this, of all ML's patent applications, US 2016/0327789, which uses LCOS, best fits the available evidence.

I now have some additional evidence that ML is likely using LCOS. After discussing this new ML evidence, I will relay some Microsoft Hololens 2nd generation (or lack thereof) rumors.

Patent Application Tends To Confirm Magic Leap's Use of Field Sequential LCOS

I came across a bit of a strange patent application that seems to confirm that ML is using field sequential color (FSC) LCOS. The patent application US 2016/0241827 was filed in January 2015, just 3 months before the lead inventor, Michael Kass, then an ML Fellow, left ML. From what I can tell from their public LinkedIn profiles, Mr. Kass and his fellow inventor both worked on software at ML and not hardware, and neither one has any background in hardware.

The patent application is directed towards reducing "color breakup" for color sequential displays and shows an LCOS implementation. The concept they are proposing is at least 15 years old that I know of, and it is well known to people in the projection industry: DLP projectors had "white segment" color wheels and later did the equivalent with LED illumination. Additionally, the way they arranged the LEDs in their diagram above, with 3 separate LEDs going to a dichroic mirror, is how it is done for front projectors and not for a near eye display. The question I had on finding this application was:

Why are two ML people, working on software with only a rudimentary knowledge of display design and located in California, filing for a patent on an "improvement" for field sequential color?

The only logical answer I could come up with is that they had looked through ML prototypes that used an LCOS system and were bothered by seeing color breakup. I'm guessing they were told it was LCOS but did not know how it was designed, so they grabbed an LCOS design off the internet (only it was one for a front projector and not for near eye). They didn't know the history of FSC projectors using white segments, so they re-invented the 15+ year old concept of adding a "white" period, where all the RGB colors are on, in order to help reduce color breakup.

For bonus speculation: why did the lead inventor, Mr. Kass, who only a month before filing this patent had been promoted to "Distinguished Fellow," leave just 3 months after filing the provisional application? Perhaps, just perhaps, he did not like the color breakup he was seeing (just a guess)?

It should be noted that it has been nearly two years since the provisional application was filed which would give ML time to change. But I doubt they could totally change directions as they would be too far down the road with the rest of the design. At least if, as they claim, they will have a product out “soon.” They might change the type of LCOS device either in resolution or manufacturer but it would seem unlikely that they could totally change the technology.

Hololens Rumored 2nd Generation Delayed?

There was a lot of talk that Microsoft, after announcing that Hololens would be focusing first on business applications, would be coming out with a 2nd generation Hololens next year. This sometimes gets conflated with the 2nd generation being a "consumer version." But apparently the costs to make Hololens are high, particularly with the custom waveguides having very low yield.

The recent scuttlebutt is that the expected 2nd generation is on hold while Microsoft management figures out what they want to do with Hololens. For those that were hoping for a consumer edition, the idea of focusing on "enterprise/business" sounds scarily similar to what Google did with Google Glass when it realized it did not have a high volume market. While Microsoft is continuing to expand sales of Hololens for businesses worldwide, one gets the feeling that Microsoft is trying to figure out whether Hololens will have, anytime soon, a market of a size worthy of a company Microsoft's size.

Update Dec 20, 2016 – I posed a question on the Hololens subreddit about finding a public source for issues with Himax and Hololens, and they pointed to "A component maker suffers as Microsoft develops next-gen HoloLens" by Kevin Parrish on Dec. 14, 2016 in Digital Trends. The article cited Himax CEO Jordan Wu stating there were "near-term headwinds" due to a "major AR customer's shift in focus to the development of future-generation devices." This would seem to imply that the "AR customer," of which Hololens is the most notable/likely, is switching from the 720p device to Himax's new 1080p device for a Gen. 2 Hololens.

Mixed Bag for Himax’s LCOS

So there is mounting evidence that ML is using LCOS and the most likely manufacturer is Himax. I have had some people write me that ML switched from Himax, but I don't know how credible their sources may be, so I would categorize this as rumor right now.

Either way, Himax can’t be shipping a lot of LCOS to ML right now. The lack of volume coming out of Hololens also means that there are not big new orders from Microsoft for Himax panels.

Meeting At CES 2017 January 5th to 8th

I have had a number of people ask if I was going to CES 2017 in Las Vegas and whether we could meet. I'm going to be at the show from January 5th through the 8th.

If you would like to meet, please email me at info@kguttag.com.

If possible, please include your contact information, the reason you or your company wants to meet, the best dates and times, and, if you have a preference, where you would like to meet.

 

Magic Leap: Focus Planes (Too) Are a Dead End

What Magic Leap Appears to be Doing

For this article I would like to dive into the most likely display and optics Magic Leap (ML) is developing for their Product Equivalent (PEQ). The PEQ was discussed in "The Information" story "The Reality Behind Magic Leap." As I explained in my November 20, 2016 article Separating Magic and Reality (before the Dec 8th "The Information" story), the ML patent application US 2016/0327789 best fits the available evidence, and if anything the "The Information" article reinforces that conclusion. Recapping the evidence:

  1. ML uses a “spatial light modulator” as stated in “The Information”
  2. Most likely an LCOS spatial light modulator; the Oct. 27th, 2016 Business Insider article, citing "KGI Securities analyst Ming-Chi Kuo, who has a reputation for being tapped into the Asian consumer electronics supply chain," claims ML is using a Himax LCOS device.
  3. Focus planes to support vergence/accommodation per many ML presentations and their patent applications
  4. Uses waveguides which fit the description and pictures of what ML calls a “Photonics Chip”
  5. Does not have a separate focus mechanism as reported in the “The Information” article.
  6. Could fit the form factor as suggested in “The Information”
  7. It's the only patent application that shows serious optical design and that also uses what could be considered a "Photonics Chip."

I can’t say with certainty that the optical path is that of application 2016/0327789. It is just the only optical path in the ML patent applications that fits all the available evidence and and has a chance of working.

Field of View (FOV)

Rony Abovitz, ML CEO, is claiming a larger FOV. I would think ML would not want to have lower angular resolution than Hololens. Keeping the same 1.7 arc minutes per pixel angular resolution as Hololens and ODG's Horizon, a 1080p (1920 pixel wide) device would give a horizontal FOV of about 54.4 degrees.

Note, there are rumors that Hololens is going to be moving to a 1080p device next year, so ML may still not have an advantage by the time they actually have a product. There is a chance that ML will just use a 720p device, at least at first, and accept a lower angular resolution of, say, 2.5 arc minutes or greater to get into the 54+ degree FOV range. Supporting a larger FOV is no small trick with waveguides and is one thing that ML might have over Hololens; but then again Hololens is not standing still.
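
As a sanity check on the arithmetic above (the 1920 and 1280 pixel horizontal widths for 1080p and 720p devices are my assumptions; the arc-minute figures are from the paragraphs above):

```python
# Horizontal FOV (degrees) = horizontal pixels x arc-minutes per pixel / 60
def horizontal_fov_degrees(pixels_wide, arcmin_per_pixel):
    return pixels_wide * arcmin_per_pixel / 60.0

print(horizontal_fov_degrees(1920, 1.7))   # 1080p device at 1.7 arcmin/pixel -> ~54.4 degrees
print(horizontal_fov_degrees(1280, 2.55))  # 720p device needs ~2.55 arcmin/pixel for the same ~54.4 degrees
```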

Sequential Focus Planes Domino Effect

The support of vergence/accommodation appears to be a paramount issue with ML. Light fields are woefully impractical for any reasonable resolution, so ML in their patent application and some of their demo videos show the concept of “focus planes.” But for every focus plane an image has to be generated and displayed.

Having more than one display per eye, including the optics to combine the multiple displays, would be both very costly and physically large. So the only rational way ML could support focus planes is to use a single display device and sequentially display the focus planes. But as I will outline below, using sequential focus planes to address vergence/accommodation comes at the cost of hurting other aspects of visual comfort.

Expect Field Sequential Color Breakup If Magic Leap Supports “Focus Planes”

Both high resolution LCOS and DLP displays use "field sequential color," where they have a single set of mirrors that display a single color plane at a time. To get the colors to fuse together in the eye they repeat the same colors multiple times per frame of an image. Where I have serious problems with ML using Himax LCOS is that instead of repeating colors to reduce the color breakup, they will instead be showing different images to support sequential focus planes. Even if they have just two focus planes as suggested in "The Information," it means the rate at which colors are repeated to help them fuse in the eye is cut in half.

On Hololens, which also uses a field sequential color LCOS device, one can already detect color breakup. Cutting the color update rate by 2 or more will make this problem significantly worse.
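
To make the "cut in half" point concrete, here is a back-of-the-envelope sketch. The panel's maximum color field rate is a made-up illustrative number (the real Himax/Hololens timing is not public here); the point is only the relationship between the number of focus planes and the number of color repeats:

```python
# Illustrative only: assume the panel can display a fixed number of single-color
# fields per second, regardless of whether those fields repeat colors or carry
# different focus-plane images.
MAX_COLOR_FIELDS_PER_SEC = 360.0   # assumed panel limit, e.g. 60 Hz x RGB x 2 repeats

def color_repeats_per_frame(frame_rate_hz=60.0, focus_planes=1, colors=3):
    fields_per_frame = MAX_COLOR_FIELDS_PER_SEC / frame_rate_hz
    return fields_per_frame / (colors * focus_planes)

print(color_repeats_per_frame(focus_planes=1))  # 2.0 repeats of each color per frame
print(color_repeats_per_frame(focus_planes=2))  # 1.0 -- the color repeat rate is cut in half
```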

Another interesting factor is that field sequential color breakup tends to be more noticeable in people's peripheral vision, which is more motion/change sensitive. This means the problem will tend to get worse as the FOV increases.

I have worked many years with field sequential display devices, specifically LCOS. Based on this experience, I expect that the human vision system will do a poor job of "fusing" the colors at such slow color field update rates, and that people will see a lot of field sequential color breakup, particularly when objects move.

In short, I expect a lot of color breakup to be noticeable if ML supports focus planes with a field sequential color device (LCOS or DLP).

Focus Planes Hurt Latency/Lag and Will Cause Double Images

An important factor in human comfort is the latency/lag between any head movement and the display reacting; too much lag causes user discomfort. A web search will turn up thousands of references about this problem.

To support focus planes ML must use a display fast enough to support at least 120 frames per second. But to support just two focus planes, it will take them 1/60th of a second to sequentially display both focus planes. Thus they have increased the total latency/lag from the time they sense movement until the display is updated by ~8.333 milliseconds, and this is on top of any other processing latency. So really focus planes trade off one discomfort issue, vergence/accommodation, for another, latency/lag.
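
The ~8.333 millisecond figure follows directly from the numbers in the paragraph above:

```python
SUBFRAME_RATE_HZ = 120.0   # display shows 120 single-focus-plane images per second
FOCUS_PLANES = 2

subframe_ms = 1000.0 / SUBFRAME_RATE_HZ         # ~8.33 ms per focus plane
full_image_ms = FOCUS_PLANES * subframe_ms      # ~16.67 ms (1/60th sec) to show both planes

added_latency_ms = full_image_ms - subframe_ms  # extra wait versus completing a single image in one 120 Hz sub-frame
print(round(added_latency_ms, 3))               # 8.333 ms, on top of any processing latency
```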

Another issue which concerns me is how well sequential focus planes are going to fuse in the eye. With fast movement the eye/brain visual system takes its own asynchronous "snapshots" and tries to assemble the information and line it up. But as with field sequential color, it can put together time sequential information wrong, particularly if some objects in the image move and others don't. The result will be double images; getting double images with sequential focus planes would be unavoidable with fast movement, either in the virtual world or when a person moves their eyes. These problems will be compounded by field sequential color breakup.

Focus Planes Are a Dead End – Might Magic Leap Have Given Up On Them?

I don’t know all the behind the scenes issues with what ML told investors and maybe ML has been hemmed in by their own words and demos to investors. But as an engineer with most of my 37 years in the industry working with image generation and display, it looks to me that focus planes causes bigger problems than it solves.

What gets me is that they should have figured out that focus planes were hopeless in the first few months (much less time if someone that knew what they were doing was there). Maybe they were ego driven and/or they built too much around the impression they made with their "Beast" demo system (a big system using DLPs). Then maybe they hand waved away the problems sequential focus planes cause, thinking they could fix them somehow or hoping that people won't notice the problems. It would certainly not be the first time that a company committed to a direction and then felt that it had gone too far to change course. Then there is always the hope that "dumb consumers" won't see the problems (in this case I think they will).

It is clear to me that like Fiber Scan Displays (FSD), focus planes are a dead end, period, full-stop. Vergence/accommodation is a real issue, but only for objects that get reasonably close to the user. I think a much more rational way to address the issue is to use sensors to track the eyes/pupils and adjust the image accordingly; since the eye's focus changes relatively slowly, it should be possible to keep up. In short, move the problem from the physical display and optics domain (that will remain costly and problematical) to the sensor and processing domain (that will more rapidly come down in cost).
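
To illustrate what I mean by moving the problem into the sensor and processing domain, below is a very rough sketch of an eye-tracked (varifocal) approach: estimate where the eyes are converging and drive a single focus mechanism and the rendering from that. Everything here (the function names, the vergence-to-depth conversion, the smoothing constant) is hypothetical and just one way such a system could work, not any company's actual design.

```python
import math

def vergence_to_depth_m(left_gaze_deg, right_gaze_deg, ipd_m=0.063):
    """Estimate fixation depth from the convergence angle between the two eyes.
    Gaze angles are signed horizontal angles; their difference is the convergence angle."""
    convergence_deg = max(left_gaze_deg - right_gaze_deg, 0.1)  # clamp to avoid divide-by-zero
    return ipd_m / (2.0 * math.tan(math.radians(convergence_deg) / 2.0))

class VarifocalController:
    """Hypothetical controller: eye focus changes relatively slowly, so a heavily
    filtered estimate should be able to keep up with where the user is looking."""
    def __init__(self, smoothing=0.2, start_depth_m=2.0):
        self.smoothing = smoothing
        self.depth_m = start_depth_m

    def update(self, left_gaze_deg, right_gaze_deg):
        target = vergence_to_depth_m(left_gaze_deg, right_gaze_deg)
        self.depth_m += self.smoothing * (target - self.depth_m)  # low-pass filter
        return self.depth_m  # drive the focus mechanism and rendered blur from this
```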

If I’m at Hololens, ODG, or any other company working on an AR/MR systems and accept that vergence/accommodation is a problem needs to be to solve, I’m going to solve it with eye/pupil sensing and processing, not by screwing up everything else by doing it with optics and displays. ML’s competitors have had enough warning to already be well into developing solutions if they weren’t prior to ML making such a big deal about the already well known issue.

The question I’m left is if and when did Magic Leap figured this out and were they too committed by ego or what they told investors to focus planes to change at that point? I have not found evidence so far in their patent applications that they tried to changed course, but these patent applications will be about 18 months or more behind what they decided to do. But if they don’t use focus planes, they would have to admit that they are much closer to Hololens and other competitors than they would like the market to think.

Evergaze: Helping People See the Real World

Real World AR

Today I would like to forget about all the hype and glamor of near eye products for having fun in a virtual world. Instead I'm going to talk about a near eye device aimed at helping people see and live in the real world. The product is called the "seeBoost®" and it is made by the startup Evergaze in Richardson, Texas. I happen to know the founder and CEO Pat Antaki from working together on a near eye display back in 1998, long before it was fashionable. I've watched Pat bootstrap this company from its earliest days and asked him if I could be the first to write about seeBoost on my blog.

The Problem

Imagine you get Age Related Macular Degeneration (AMD) or Diabetic Retinopathy. All your high-resolution vision and best color vision, from the macula (where the high resolution fovea resides), is gone and you see something like the picture on the right. All you can use is your peripheral vision, which is low in resolution, contrast, and color sensitivity. There are over 2 million people in the U.S. that can still see but have worse than 20/60 vision in their better eye.

What would you pay to be able to read a book again and do other normal activities that require "functional vision"? So not only is Evergaze aiming to help a large number of people, they are going after a sizable and growing market.

seeBoost Overview

seeBoost has 3 key parts: a lightweight near-to-eye display, a camera with high speed autofocus, and proprietary processing in an ASIC that remaps what the camera sees onto the functioning part of the user's vision. They put the proprietary algorithms in hardware so the image remapping and contrast enhancement are performed with extremely low latency, so that there is no perceptible delay when a person moves their head. As anyone that has used VR headsets will know, this is important for wearing the device for long periods of time without headaches and nausea.
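
Evergaze's actual remapping algorithms are proprietary and implemented in hardware, so the sketch below is purely my own illustration of the general idea of warping an image away from a damaged central field (a central scotoma) onto the still-functioning surrounding vision; none of the parameters or the method come from seeBoost.

```python
import numpy as np

def remap_around_scotoma(image, scotoma_radius_px=60, compression=1.5):
    """Illustrative radial remap: content that would fall on a central scotoma is
    pushed outward onto the surrounding (still functional) part of the visual field."""
    h, w = image.shape[:2]
    cy, cx = h / 2.0, w / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    dx, dy = xs - cx, ys - cy
    r = np.hypot(dx, dy) + 1e-6
    # Each destination radius r samples from a smaller source radius, so the center
    # of the scene lands just outside the scotoma instead of on it.
    src_r = np.maximum(r - scotoma_radius_px, 0.0) / compression
    src_x = np.clip(cx + dx / r * src_r, 0, w - 1).astype(int)
    src_y = np.clip(cy + dy / r * src_r, 0, h - 1).astype(int)
    return image[src_y, src_x]
```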

A perhaps subtle but important point is that the camera and display are perfectly coaxial, so there is no parallax error as you move an object closer to your eye. The importance of centering the camera with the user's eye for long term comfort was a major point made by AR headset user and advocate Steve Mann in his March 2013 IEEE Spectrum article, "What I've learned from 35 years of wearing computerized eyewear." Quoting from the article, "The slight misalignment seemed unimportant at the time, but it produced some strange and unpleasant result." And in commenting on Google Glass, Mr. Mann said, "The current prototypes of Google Glass position the camera well to the right side of the wearer's right eye. Were that system to overlay live video imagery from the camera on top of the user's view, the very same problems would surely crop up."

Unlike traditional magnifying optics like a magnifying glass, in addition to being able to remap the camera image to the parts of the eye that can see, the depth of field and magnification amount are decoupled: you can get any magnification (from 1x to 8x) at any distance (2 inches to infinity). It also has digital image color reversal (black-to-white reversal, useful for reading pages with a lot of white). The device is very lightweight at 0.9 oz. including the cable. The battery pack supports 6 hours of continual use on a single charge.

Use Case

Imagine this use scenario: playing bridge with your friends. To look at the cards in your hand you may need 2x magnification at 12 inches' distance. The autofocus allows you to merely move the cards as close to your face as you like, the way a person would naturally make something look larger. Having the camera coaxial with the display makes this all seem natural versus, say, having a camera above the eye. Looking at the table to see what cards are placed there, maybe you need 6x magnification at 2 feet. To see other people's eyes and facial expressions around the table, you need 1-2x at 3-4 feet.

seeBoost is designed to help people see so they can better take part in the simple joys of normal life. The lightweight design mounts on top of a user’s prescription glasses and can help while walking, reading signs and literature, shopping, watching television, recognizing faces, cooking, and even playing sports like golf.

Another major design consideration was the narrow form, so that it does not cover up the lateral and downwards peripheral vision of the eye. This turns out to be important for people who don't want to further lose peripheral vision. In this application, a monocular (single eye) design gives better situational awareness and peripheral vision.

seeBoost is a vision enhancement device rather than, essentially, a computer (or cell phone) monitor that you must plug into something. The user simply looks through seeBoost, and it improves their vision for whatever they're looking at, be it an electronic display or their grandchildren's faces.

Assembled in the USA and Starting to Ship

This is not just some Kickstarter concept either. Evergaze has been testing prototypes with vision impaired patients for over a year and has already finished a number of studies. What's more, they recently started shipping product. To the left is an image that was taken through the seeBoost camera via its display and optics.

What’s more this product is manufactured in the US at a production line Evergaze set up in Richardson, TX. If you want to find out more about the company you can go their their YouTube Channel or if you know someone that needs a seeBoost, you can contact Pat Antaki via email: pantaki@evergaze.com

Magic Leap CSI: Display Device Fingerprints

Introduction

I have gotten a lot of questions as to how I could be so sure that Magic Leap (ML) was using Micro-OLEDs in all their "Through Magic Leap Technology" videos and not, say, a scanning fiber display as so many had thought. I was in a hurry to get people to the conclusion. For this post, I am going to step back and show how I knew. When video and still pictures of display devices are taken with a camera, every display type has its own identifiable "fingerprint," but you have to know where to look.

Sometimes in video it might only be a few frames that give the clue as to the device being used. In this article I am going to use cropped images from videos for most of the technologies to show their distinctive artifacts as captured by the camera, but for laser scanning the distinctive artifacts are best seen in the whole image, so I am going to use thumbnail size images.

This article should not be new information to this blog's readers, but rather it details how I knew what technology was used in the ML "Through the Technology" videos. For the plot twist at the end, you have to know to parse ML's words, as in "the technology" is not what they are planning on using in their actual product. The ML "through the technology" videos are using totally different technology than what they plan to use in the product.

Most Small Cameras Today Use a Rolling Shutter

First it is important to understand that cameras capture images much differently than the human eye. Most small cameras today, particularly those in cell phones, have a "rolling shutter." Photography.net has a good article describing a rolling shutter and some of its effects. A rolling shutter captures a horizontal band of pixels (the width of this band varies from camera to camera) as it scans down vertically. With "real world analog" movement this causes moving objects to be distorted, most famously with airplane propellers (above right). The various display technologies will reveal different effects.
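
A toy model may help make the mechanism concrete. The row count and readout time below are made-up numbers; the point is simply that each row of the photo is sampled at a slightly later time, so one photo can contain content from more than one displayed frame:

```python
# Toy rolling-shutter model: each sensor row is sampled at a slightly later time.
def rolling_shutter_capture(frame_at_time, rows=480, readout_time_s=1 / 60.0, start_time_s=0.0):
    photo = []
    for row in range(rows):
        t = start_time_s + (row / rows) * readout_time_s  # when this row is sampled
        photo.append(frame_at_time(t)[row])               # that row of whatever is displayed at time t
    return photo

# Example display source: a 60 Hz display, labeled by which frame is up at time t
def display(t, rows=480):
    frame_index = int(t * 60)
    return [f"frame{frame_index}-row{r}" for r in range(rows)]

# Start the exposure halfway through a display frame: the top of the photo captures
# frame 0 while the bottom captures frame 1 -- a double image if the content moved.
photo = rolling_shutter_capture(display, start_time_s=0.5 / 60.0)
print(photo[0], photo[-1])
```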

OLEDs (And color filter LCDs)

When an object moves on a display device, the same object in the digital image will jump in its location between the two frames displayed. If the rolling shutter is open when the image is changing, the camera will capture a double image. This is shown classically with the Micro-OLED device from an ODG Horizon prototype. The icons and text in the image were moving vertically and the camera captured content from two frames. Larger flat panel OLEDs work pretty much the same way, as can be seen in this cropped image from a Meta 2 headset at right.

From a video image artifact point of view, it is hard to distinguish with a rolling shutter camera between OLEDs and color filter (the most common type of) LCDs. Unlike old CRTs and scanning systems, OLEDs and LCDs don't have any "blanking" where there is no image. They simply change, row by row, the RGB (and sometimes white) sub-pixels of the image from one frame to the next (this video taken with a high speed camera demonstrates how it works).

Color Field Sequential DLP and LCOS

DLP and LCOS devices used in near eye displays use what is known as "field sequential color" (FSC). They have one set of "mirrors" and in rapid sequence display only the red sub-image while flashing a red light source (LED or laser), and then repeat this for green and blue. Generally they sequence these very rapidly and usually repeat the red, green, and blue sub-images multiple times so the eye will fuse the colors together even if there is motion. If the colors are not sequenced fast enough (and for many other reasons that would take too long to explain), a person's eye will not fuse the image and they will see fringing of colors in what is known as "field sequential color breakup," also known pejoratively as "the rainbow effect." Due to the way DLP and LCOS work, LCOS does not have to sequence quite as rapidly to get the images to fuse in the human eye, which is a good thing because it can't sequence as fast as DLP.
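
Here is a toy field-schedule generator to make the sequencing concrete. The frame rate and repeat counts are illustrative assumptions (real DLP and LCOS timing is considerably more complicated, with bit-planes and unequal field lengths); the point is that only one color is on screen at any instant, which is what a fast camera shutter can catch:

```python
# Toy field-sequential-color schedule: one single-color sub-image at a time.
def fsc_schedule(frame_rate_hz=60.0, repeats_per_frame=2, colors=("R", "G", "B")):
    fields = [c for _ in range(repeats_per_frame) for c in colors]  # e.g. R G B R G B
    field_time_ms = 1000.0 / frame_rate_hz / len(fields)
    return [(round(i * field_time_ms, 2), c) for i, c in enumerate(fields)]

# Assumed LCOS-like timing: 60 Hz frames, RGB repeated twice -> ~2.8 ms per color field
print(fsc_schedule(repeats_per_frame=2))
# Assumed DLP-like timing sequencing ~2x faster -> ~1.4 ms fields, harder to catch on camera
print(fsc_schedule(repeats_per_frame=4))
```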

In the case of field sequential color, when there is motion the camera can capture the various sub-images individually, as seen above-left of the Hololens that uses FSC LCOS. As seen, it looks sort of like print where the various colors are shifted. If you study the image you can even tell the color sequence.

Vuzix uses FSC DLP and has similar artifacts, but they are harder to spot. Generally DLPs sequence their colors faster than LCOS (by about 2x), so it can be significantly harder to capture them (that is a clue as to whether it is DLP or LCOS). On the right, I have captured two icons when still and when they are moving, and you can see how the colors separate. You will notice that you don't see all the colors because the DLP is sequencing more rapidly than the Hololens LCOS.

DLP and LCOS also have "blanking" between colors, where the LEDs (and maybe lasers in the future) are turned off while the color sub-images are changing. The blanking is extremely fast and will only be seen using high speed cameras and/or setting a very fast shutter time on a DSLR.

DLP and LCOS for Use with ML “Focus Planes”

If you have a high speed camera or other sensing equipment you can tell even more about the differences between the way in which DLP and LCOS generate field sequential color. But a very important aspect for Magic Leap's time sequential focus planes is that DLP can sequence fields much faster than LCOS and thus support more focus planes.

I will be getting more into this in a future article, but to do focus planes with DLP or LCOS, Magic Leap will have to trade repeating the same single color sub-images for different images corresponding to different focus planes. The obvious problem, for those that understand FSC, is that the color field rates will become so low that color breakup (the rainbow effect) would seem inevitable.

Laser Beam Scanning

Laser scanning systems are a bit like old CRTs; they scan from top to bottom and then have a blanking time while the scanning mirror retraces quickly to the top corner. The top image on the left was taken with a DSLR at a 1/60th of a second shutter speed, which reveals the blanking roll bar (called a roll bar because it will be in a different place if the camera and video source are not running at exactly the same speed).

The next two images were taken with a rolling shutter camera of the exact same projector. The middle image shows a dark wide roll bar (it moves) and the bottom image shows a thin white roll-bar. These variations from the same projector and camera are due to the frame rates generated by the image and/or the camera’s shutter rate.

Fiber Scanning Display (FSD) Expected Artifacts

FSD displays/projectors are so rare that nobody has published a video of them. Their scan rates are generally low and they have "zero persistence" (similar to laser scanning), so they would look horrible in a video, which I suspect is why no one has published one.

I they were video’ed I would expect a circular blanking effect similar to the laser beam scanning but circular. Rather than rolling vertically it would “roll” from center to the outside or vice versa. I have put a could of very crudely simulated whole frame images at left.

So What Did the Magic Leap “Through The Technology” Videos Use?

There is an obvious match between the artifacts in all the Magic Leap "Through the Technology" videos and OLEDs (or color filter LCDs, which are much less common in near eye displays). You see the distinctive double image with no color breakup.

Nowhere in any of the frames can field sequential color artifacts be found. So this rules out FSC DLP and LCOS.

In looking at the whole frame videos you don't see any roll-bar effects of any kind. So this totally rules out both laser beam scanning and fiber scanning displays.

We have a winner. The ML through the technology videos could only be done with OLEDs (or color filter LCDs).

But OLEDs Don’t Work With Thin Waveguides!!!

Like most compelling detective mysteries, there is a plot twist. OLEDs, unlike LCOS, DLP, and laser scanning, output wide spectrum colors, and these don't work with thin waveguides like the Photonics Chip that Rony Abovitz, ML CEO, likes to show.

This is how it became obvious that the "Through The Magic Leap Technology" videos were NOT made using the same "Magic Leap Technology" that Magic Leap is planning to use for their production product. And this agrees with the much publicized ML article from "The Information."

Appendix – Micro HTPS LCD (Highly Unlikely)

I need to add, just to be complete, that theoretically they could use color filter HTPS LCDs illuminated by either LEDs or lasers to get narrow spectrum and fairly collimated light that might work with the waveguide. They would have similar artifacts to those seen in the ML videos. EPSON has made such a device illuminated by LEDs that was used in their earlier headsets, but even EPSON is moving to Micro-OLEDs for their next generation. I'm also not sure the HTPS could support frame rates high enough to support focus planes. I think, therefore, that using color filter HTPS panels, while theoretically possible, is highly unlikely.

Magic Leap: Are Parts of Their NDAs Now Unenforceable?

Rony Abovitz's tweet about "mousy tech bloggers" and one of its responses made me realize something I was taught way back about NDAs and intellectual property. It is summarized well (with my bold emphasis) in the article, "What You Probably Don't Know About Non Disclosure Agreements":

Remember that if you have 99 people sign an NDA and 1 person doesn’t, that person can publish your idea in the Wall Street Journal – and to add insult to injury, when they do, the other NDAs all become invalid since they only apply to confidential information.

Reed Albergotti with "The Information" was shown demos that previously were not open to the public and, as best I am aware, did not have an NDA or other confidentiality agreement. Also David M. Ewalt of Forbes Magazine wrote on Reddit:

I didn’t sign an NDA, but I agreed not to reveal certain proprietary details”

Then Arghya Sur (copied above) in his response asked Rony Abovitz to "publicly reveal and demos (sic)." So I'm left wondering what is and what is not confidential now at Magic Leap. Has Magic Leap inadvertently already done what Arghya Sur requested? Has Magic Leap at least caused some people to be released from some parts of their NDAs?

Disclaimer: I am not a lawyer, and my understanding is that this is a contract issue based on the laws of the state(s) that govern the NDAs in question. Also, I have not seen ML's NDAs, nor do I know what they cover. There are likely severability clauses meant to limit the damage if there is a breach of some parts.

And it might be even worse. As I remember it, if a company is generally sloppy in handling and protecting what they tell people is confidential material, then they can't enforce their confidentiality/NDA agreements. The principle is: how can you expect people to know what is "really confidential" from what is "only marked confidential"?

And Rony Abovitz is not just anybody at Magic Leap; he is the CEO, and he met with the reporters and presumably had some idea as to what they were being shown. This also goes to why you should not tweet about "mouse tech bloggers" if you are a CEO; it makes them ask questions.

I would appreciate it if those with expertise in this area would weigh in with comments. Please don't give any legal advice to anyone, but rather let people know what you were taught about handling NDA material.

BTW

I am always amused and a little shocked when I see slides at open conferences with "Confidential" marked on them. I was taught to NEVER do this. If the material is no longer confidential, then remove the marking from the slides. You probably will not get the "confidential death sentence" for doing it once, but it should not become routine or the company might find all their confidentiality agreements unenforceable.

Magic Leap Shout Out?: Grumpy Mouse Tech Blogger Here

I have been very busy the last few days and just realized it looks like I got a shout out tweet from Rony Abovitz, the CEO of Magic Leap. On the evening of Dec. 8th, 2016 he wrote, "To a few of the grumpy mouse tech blogger writers: you too will get to play the real thing when we ship." As far as I am aware, I'm the only "tech blogger" that has been critical of what Magic Leap is doing, and on the off-chance that Mr. Abovitz did not know of my blog before, it was the only tech blog critical of Magic Leap cited in the "The Information" article by Reed Albergotti that appeared on the 8th.

Mr. Albergotti is a writer for a legitimate news source and not a blogger. Maybe Mr. Abovitz was trying to put him down as a "mere blogger," or it was his petulant way to try and put down both of us.

In any event, is this the right way for a CEO who has raised $1.4B to strike back at writers he disagreed with? Why can't he be specific about with whom and what he disagrees? The best he could muster is an ad hominem attack and a bunch of unverifiable whistling-in-the-dark tweets.

I’ve been laying out my proof in this blog. I was only trying to answer the question, “what is Magic Leap doing?” because as I knew that almost all the existing writing about Magic leap was doing was wrong and I thought it would be a fun to be the first to solve the puzzle. If I figured out they were doing something great, I would have reported it. But what I found as I studied the patents, technical material and the released Magic Leap’s videos combined my technical experience in the field, their whole technical story related to the display started to unravel.

Magic Leap: “The Information” Article

The Information: The Reality Behind Magic Leap

The online news magazine "The Information" released the article "The Reality Behind Magic Leap" on Dec. 8th, 2016, by Reed Albergotti, and in the story gave a link to this blog. So you may be a new reader. The article appears to be well researched, and I understand that "The Information" has a reputation as a reliable news source. The article also dovetails nicely on the business side with what I have been discussing on this blog on the technical side. The magazine is a paid publication, but there is a summary on The Verge along with their added commentary, and a lot of the text from the article has shown up in discussion forums about Magic Leap (ML).

For this blog post, I am going to try to put 2+2 together between what I have figured out on the technical side and what Mr. Albergotti reported on the business side. Note, I have not seen what he has seen, so I am reading between the lines somewhat, but hopefully it will give a more complete picture.

The Magic Leap Prototypes

The article states "Magic Leap CEO Rony Abovitz acknowledged that the prototypes used different technology." This blog has identified the early prototypes as:

  1. A DLP based prototype that uses a variable focus lens to produce "focus planes" by generating different images for different distances and changing the focus between images, supporting maybe 3 to 6 focus planes. This is probably their earliest one and is what the article calls "The Beast," described as the "size of a refrigerator."
  2. One or more OLED based variations, once again using an electrically controlled focus element, where ML made a smaller helmet version. The article discussed only one version, dubbed "WD3," but I suspect that they had variations of this one with different capabilities (as in maybe a WD1, WD2, WD3 and maybe more). I believe, based on the video evidence, that a version that could only change focus was used for their Oct. 14, 2015 "through the technology" video. Their later "A New Morning" and "Lost Droids" videos appear to use Micro-OLED based optics that supported at least two simultaneous focus planes by running the OLED at 120Hz to generate two 60Hz sequential "focus plane" images and changing the focus between each one.
  3. The LCOS version that is using their "Photonics Chip" and supports about 2 focus planes with no moving focusing optics (according to the article); this is what the article dubbed the "PEQ" prototype.

If you want to get more into the gory technical details on how the above work, I would suggest one of my earlier articles titled "Magic Leap – Separating Magic and Reality." And if you really want to get dirty, read the ML patent applications they reference, but be prepared for a long read as they cover a lot of totally different concepts.

As this blog has been reporting (and for which I have gotten criticism on some of the on-line discussion forums), the much discussed "fiber scanning display" (FSD) has not been perfected, and with it goes any chance of making the "light field display" ML has talked so much about. Quoting the article, "Magic Leap relegated the fiber scanning display to a long-term research project in hopes that it might one day work, and significantly pared back on its light field display idea."

Possible Scenario – A Little Story

Based on my startup and big company experiences, I think I understand roughly how it went down. Please take the rest of this section as reasonable speculation and reading between the lines of known information. So I am going to play Columbo (old TV series reference) below to give my theory of how it went down.

Startups have sometimes been described as “Jumping out of a plane and sewing a parachute on the way down.” This appears to be the case with Magic Leap. They had a rough idea of what they wanted to do and were able to build an impressive demo system and with some good hand waving convince investors they could reduce it to a consumer headset.

They got Brian Schowengerdt, co-founder and Chief Scientist, who had worked on the fiber scanning display (FSD) technology and the issue of vergence and accommodation at the University of Washington, to join. Mr. Schowengerdt is clearly a smart person that added a lot of credibility to Rony Abovitz's dreams. The problem with "university types" is that they often don't appreciate what it takes to go from R&D to a real high volume product.

The “new optical people” built “The Beast” prototype using DLP’s and electrical controlled focusing lenses to support multiple focus plane, to address the vergence and accommodation issue. They then used the “Jedi Hand Wave Mind Trick” (ordinary hand waving may not be enough) to show the DLP engine, the crude low resolution FSD display from the U of W, some non-functional waveguides, and a mock-up of how wonderful it would be someday with a simple application of money and people (if you can dream it you can build it, right?).

This got them their "big fish," Google, who was attuned to the future of near eye displays with their investment in Google Glass and all the big noise around Oculus Rift. There is phenomenal FOMO (Fear of Missing Out) going on with AR/VR/MR. The fact they got a lot of money from a big name company became its own publicity and fund raising engine. ML then got showered with money that they hoped could cover the bet. Having Google invest publicly also became its own shield against any question of whether it would work.

All the money gave them a lot of altitude to try and build the proverbial parachute on the way down. But sometimes the problem is harder than all the smart people and money can solve. As I have pointed out on this blog, making the fiber scan display work at high resolution is no small task, if not impossible. They came to realize at some point, probably early on, that FSD was not going to happen in a meaningful time frame.

So “plan B” became to use an existing working display technology to give a similar visual effect, even if much reduced in resolution. The beast is way to big and expensive to cost reduce and then need to have more demo systems that are easier to make.

So then they made the WDx based on OLEDs. But there is a fatal flaw with using OLEDs (and it tripped me up at first when looking at the videos). While OLEDs make the design much easier and smaller, due to the nature of the broad spectrum light they put out, they don't work with the wonderfully flat waveguides (what ML calls their "Photonics Chip") that ML has convinced investors are part of their secret sauce.

So if they couldn’t use the Photonics Chip with OLEDs and the FSD is a no-go, what do you tell investors, both of your secret sauces are a bust? So in parallel they are working on plan “C” which is to use LCOS panels with LED light sources that will work with some type of waveguide which they will dub the “Photonics Chip”.

But then there is a fly in the ointment. Microsoft starts going public with their Hololens system making Magic Leap look like they are way behind the giant Microsoft that can spend even more money than ML can raise. They need to show something to stay relevant. They start with totally fake videos and get called on the carpet for being obviously fake. So they need a “Magic Leap Technology” (but not the optics they are actually planning on using) demo.

The “Beast System” with it DLP’s and field sequential color will not video well. The camera will reveal to any knowledgeable expert what they are using. So for the video they press into service the WDx OLED systems that will video better. By cleaver editing and only showing short clips, they can demonstrate some focus effects while not showing the limitations of the WDx prototypes. These videos then make ML seem more “real” and keep people from asking too many embarrassing questions.

A problem here is that LCOS is much slower than DLP, and thus they may only be able to support about 2 focus planes. I also believe, from 16 years working with LCOS, that this is likely to look like crap to the eye due to color field breakup; but reapplying the Jedi Mind Trick, maybe two focus planes will work and people won't notice the color field breakup. And thus you have the PEQ, which still does not work well or they would be demoing with it rather than the helmet sized WD3.

I suspect that Reed Albergotti from "The Information" had gotten the drop on ML by doing some good investigative journalism work. He told them he was going to run with the story, and ML decided to see if they could do damage control and invited him in. But apparently he was prepared and still saw the holes in their story.

Epilogue: It sounds like Mr. Schowengerdt has been put off to the side, having served his usefulness in raising money. They used the money to hire other optical experts who knew how to design the optics they would actually be using. He may still be playing around with the FSD to keep alive the dream of a super high resolution display someday, and maybe the next to impossible high resolution light fields (I would suggest reading "The Horse Will Talk Fable" to gain insight into why they would keep doing this as an "R&D" program).

I’m probably a little off in the details, but it probably went down something like the above. If not, hopefully you found it an amusing story. BTW, if you want to make a book and or movie out of this original story please consider it my copyrighted work (c) 2016 (my father was and two brothers are Patent Lawyers and I learned about copyright as a small child at my fathers knee).

Lessons Learned

In my experience, startups that succeed in building their product have more than a vague idea of what they want to do and HOW they are going to do it. They realize that money and smart people can't cure all ills. Most importantly, they understand where they have risk and have at most A SINGLE serious risk. They then focus on making sure they cover that risk. In the case of Magic Leap, they had multiple major risks in many different areas. You can't focus on the key risk when there are so many, and that is a prescription for product failure no matter how much money is applied.

It's even possible the "smart money" that invested realized that ML was unlikely to totally succeed, but thought that with money and smart people they might spin out some valuable technology and/or patents. The "equation works" if they multiply a hoped-for $100B/year market by even a small chance of success. If a big name places what is for them a small bet, it is surprising how much money will follow along, assuming the big name investor has done all the hard work of due diligence.

Even if they get past the basic technology risk and get the PEQ running, they will then have the problem of building a high volume product; worse yet, they are building their own factory. And then we have the 90/90 rule, which states, "it takes 90% of the effort to get 90% of the way there and then another 90% to solve the last 10%." When you have a fully working prototype that behaves well (which by the reports ML has NOT achieved yet) you have just made it to the starting line; then you have to make it manufacturable at a reasonable cost and yield. Others have said it is really 90/90/90, where there is a third 90%. This is where many a Kickstarter company has spun its wheels.

Magic Leap & Hololens: Waveguide Ego Trip?

The Dark Side of Waveguides

Flat and thin waveguides are certainly impressive optical devices. It is almost magical how you can put light into what looks a lot like a thin plate of glass, with a small image going in on one side, and then, with total internal reflection (TIR) inside the glass, the image comes out in a different place. They are coveted by R&D people for their scientific sophistication and loved by industrial designers because they look so much like ordinary glass.

But there is a “dark side” to waveguides, at least every one that I have seen. To made them work, the light follows a torturous path and often has to be bent at about 45 degrees to couple into the waveguide and then by roughly 45 degrees to couple out in addition to rattling of the two surfaces while it TIRs. The image is just never the same quality when it goes through all this torture. Some of the light does not make all the turns and bends correctly and it come out in the wrong places which degrade the image quality. A major effect I have seen in every diffractive/holographic waveguid  is I have come to call “waveguide glow.”

Part of the problem is that when you bend light, whether by refraction or using diffraction or holograms, the various colors of light bend slightly differently based on wavelength. The diffraction gratings/holograms are tuned for each color, but invariably they have some effect on the other colors; this is a particular problem if the colors don't have a narrow spectrum that is exactly matched by the waveguide. Even microscopic defects cause some light to follow the wrong path, and invariably a grating/hologram meant to bend, say, green will also affect the direction of, say, blue. Worse yet, some of the light gets scattered, which causes the waveguide glow.
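
The wavelength dependence falls straight out of the basic grating equation. The grating pitch and glass index below are made-up but representative numbers, just to show that red, green, and blue end up at noticeably different angles, so a grating tuned for one color (or one narrow band) inevitably mis-directs light at other wavelengths:

```python
import math

# First-order diffraction into the glass: n_glass * sin(theta) = wavelength / pitch
def diffraction_angle_deg(wavelength_nm, pitch_nm=380.0, glass_index=1.8):
    return math.degrees(math.asin(wavelength_nm / (pitch_nm * glass_index)))

for name, wavelength in (("blue", 460), ("green", 525), ("red", 630)):
    print(name, round(diffraction_angle_deg(wavelength), 1), "degrees")
# blue ~42, green ~50, red ~67 degrees -- each color needs its own tuned grating, and any
# light outside the intended narrow band (e.g. broad-spectrum OLED light) spreads in angle.
```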

To the right is a still frame from a "through the lens" video taken through a Hololens headset. Note, this is actually through the optics and NOT the video feed that Microsoft and most other people show. What you should notice is a violet colored "glow" beneath the white circle. There is usually also a tendency to have a glow or halo around any high contrast object/text, but it is most noticeable when there is a large bright area.

For these waveguides to work at all, they require very high quality manufacturing which tends to make them expensive. I have heard several reports that Hololens has very low yields of their waveguide.

I haven’t, nor have most people that have visited Magic Leap (ML), seen though ML’s waveguide. What  ML leap shows most if not all their visitors are prototype systems that use non-waveguide optics has I discussed last time. Maybe ML has solved all the problems with waveguides, if they have, they will be the first.

I have nothing personally against waveguides. They are marvels of optical science; they require very intelligent people to design and very high precision manufacturing to make. It is just that they always seem to hurt image quality and they tend to be expensive.

Hololens – How Did Waveguides Reduce the Size?

Microsoft acquired their waveguide technology from Nokia. It looks almost like they found this great bit of technology that Nokia had developed and decided to build a product around it. But then when you look at Hololens (left) there is the shield to protect the lenses (often tinted, but I picked a clear shield so you could see the waveguides). On top of this there are all the other electronics and the frame to mount it on the user's head.

The space savings from using waveguides over a much simpler flat combiner is a drop in the bucket.

ODG Same Basic Design for LCOS and OLED

I’m picking Osterhout Design Group’s for comparison below because because they demonstrate a simpler, more flexible, and better image quality alternative to using a waveguide. I think it makes a point. Most probably have not heard of them, but I have know of them for about 8 or 9 years (I have no relationship with them at this time). They have done mostly military headsets in the past and burst onto the public scene when Microsoft paid them about $150 million dollars for a license to their I.P. Beyond this they just raised another $58 million from V.C.’s. Still this is chump change compared to what Hololens and Magic Leap are spending.

Below is the ODG R7 LCOS based glasses (with one of the protective covers removed). Note the very simple flat combiner. It is extremely low tech and much lower cost compared to the Hololens waveguide. To be fair, the R7 does not have as much in the way of sensors and processing as the Hololens.

[Image: ODG R7 with a cover removed]

The point here is that by the time you put the shield on the Hololens what difference does having a flat waveguide make to the overall size? Worse yet, the image quality from the simple combiner is much better.

Next, below is ODG’s next generation Horizon glasses that use a 1080p Micro-OLED display. It appears to have somewhat larger combiner (I can’t tell if it is flat or slightly curved from the available pictures of it) to support the wider FOV and a larger outer cover, but pretty much the same design. The remarkable thing is that they can use the a similar optical design with the OLEDs and the whole thing is about the same size where as the Hololens waveguide won’t work at all with OLEDs due broad bandwidth colors OLEDs generate.

[Image: ODG Horizon with ~50 degree FOV]

ODG put up a short video clip through the optics of the Micro-OLED based Horizon (they don't come out and say that it is, but the frame is from the Horizon and the image motion artifacts are from an OLED). The image quality appears to be (you can't be too quantitative from a YouTube video) much better than anything I have seen from waveguide optics. There is none of the "waveguide glow."

They were even willing to show a text image with both clear and white backgrounds that looks reasonably good (see below). It looks more like a monitor image except for the fact that it is translucent. This is the hardest content to display because you know what it is supposed to look like, so you know when something is wrong. Also, that large white area would glow like mad on any waveguide optics I have seen.

The clear text on a white background is a little hard to read at small size because it is translucent, but that is a fundamental issue with all see-through displays. The "black" is whatever is in the background, and the "white" is the combination of the light from the image and the real world background. See-through displays are never going to be as good as opaque displays in this regard.
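
A quick worked example of why see-through "white" and "black" behave this way (the brightness numbers are purely illustrative):

```python
# Illustrative numbers only: a see-through display can only add light to the scene.
display_white_nits = 200.0   # light the display adds for a "white" pixel
background_nits = 100.0      # real-world light coming through the combiner

white = display_white_nits + background_nits   # 300 nits
black = background_nits                        # 100 nits: "black" is just the background
print(white / black)                           # ~3:1 contrast, far below an opaque monitor
```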

Hololens and Magic Leap – Cart Before the Horse

It looks to me like Hololens and Magic Leap both started with a waveguide display as a given and then built everything else around it. They overlooked that they were building a system. Additionally, they needed to get it into many developers' hands as soon as possible to work out the myriad of other sensor, software, and human factors issues. The waveguide became a bottleneck, and from what I can see of Hololens, an unnecessary burden. As my fellow TI Fellow Gene Frantz and I used to say when we were on TI's patent committee, "it is often the great new invention that causes the product to fail."

I have not seen an image through ML's production combiner, and few if any people outside of Magic Leap have; maybe they will be the first to make one that looks as good as a simpler combiner solution (I tend to doubt it, but it is not impossible). But what has leaked out is that they have had problems getting systems to their own internal developers. According to Business Insider's Oct. 24th article (with my added highlighting):

“Court filings reveal new secrets about the company, including a west coast software team in disarray, insufficient hardware for testing, and a secret skunkworks team devoted to getting patents and designing new prototypes — before its first product has even hit the market.”

From what I can tell of what Magic Leap is trying to do, namely focus planes to support vergence/accommodation, they could have achieved this faster with more conventional optics. It might not have been as sleek or “magical” as the final product, but it would have done the job, shown the advantage (assuming it is compelling) and got their internal developers up and running sooner.

It is even more obvious for Hololens. Using a simple combiner would have added trivially to the design size while reducing the cost and getting the SDKs into more developers' hands sooner.

Summary

It looks to me that both Hololens and likely Magic Leap put too much emphasis on using waveguides, which had a domino effect on other decisions, rather than making a holistic system decision. The way I see it:

  1. The waveguide did not dramatically make Hololens smaller (the jury is still out for Magic Leap – maybe they will pull a rabbit out of the hat). Look at ODG's designs; they are every bit as small.
  2. The image quality is worse with waveguides than simpler combiner designs.
  3. Using waveguides boxed them into using only display devices that were compatible with their waveguides. Most notably, they can't use OLEDs or other display technologies that emit broader spectrum light.
  4. Even if it were smaller, it is more important to get more SDKs into developers' hands (internal and/or external) sooner rather than later.

Hololens and Magic Leap appear to be banking on getting waveguides into volume production in order to solve all the image quality and cost problems with them. But it will depend on a lot of factors, some of which are not in their control, namely, how hard it is to make them well and at a price that people can afford. Even if they solve all the issues with waveguides, it is only a small piece of their puzzle.

Right now ODG seems to be taking more of the original Apple/Wozniak approach; they are finding elegance in a simpler design. I still have issues with what they are doing, but in the area of combining the light and image quality, they seem to be way ahead.

Magic Leap: When Reality Hits the Fan

Largely A Summary With Some New Information

I have covered a lot of material and even then have only skimmed the surface of what I have learned about Magic Leap (ML). By combining the information available (patent applications, articles, and my sources), I have a fairly accurate picture of what Magic Leap is actually doing, based on feedback I have received from multiple sources.

This blog has covered a lot of different topics and some conclusions have changed slightly as I discovered more information and with feedback from some of my sources. Additionally, many people just want “the answer.” So I thought it would be helpful to summarize some of the key results including some more up to date information.

What Magic Leap Is Not Doing In The Product

Between what I have learned and feedback from sources I can say conclusively that ML is not doing the following:

  1. Light Fields – These would require a ridiculously large and expensive display system for even moderate resolution.
  2. Fiber Scan Displays – They have demonstrated low resolution versions of these and may have used them to convince investors that they had a way to break through the pixel size limitations of Spatial Light Modulators (SLMs) like LCOS, DLP, and OLEDs. It's not clear how much they improved the technology over what the University of Washington had done, but they have given up on these being competitive in resolution and cost with SLMs anytime soon. It appears to have been channeled into being a long term R&D effort and a way to keep the dream alive with investors.
  3. Laser Beam Scanning (LBS) by Microvision or anyone else – I only put this on the list because of an incredibly ill-informed news release by Technavio stating "Magic Leap is yet to release its product, and the product is likely to adopt MicroVision's VRD technology." Based on this, I would give the entire report they are marketing zero credibility; I think they are basing their reports on reading fan-person blogs about Microvision.
  4. OLED Microdisplays – They were using these in their demos and likely in the videos they made, but OLEDs are incompatible optically with their use of a diffractive waveguide (= ML's Photonics Chip).

Prototypes that Magic Leap Has Shown

  1. FSD – Very low resolution/crude green only fiber scanned display. This is what Rachel Metz described (with my emphasis added) in her MIT Technology Review March/April 2015 article, “It includes a projector, built into a black wire, that’s smaller than a grain of rice and channels light toward a single see-through lens. Peering through the lens, I spy a crude green version of the same four-armed monster that earlier seemed to stomp around on my palm.
  2. TI DLP with a conventional combiner and a "variable focus element" (VFE). They use the DLP to generate a series of focus planes time sequentially and change the VFE between the sequential focus planes. Based on what I have heard, this is their most impressive demo visually and they have been using it for over a year, but the system is huge.
  3. OLED with a conventional combiner (not a waveguide/"Photonics Chip"). This is likely the version they used to shoot the "Through Magic Leap Technology" videos that I analyzed in my Nov. 9th, 2016 blog post. In that article I thought that Micro-OLED might be used in the final product, but I have revised this opinion. OLEDs output very wide bandwidth light that is incompatible with waveguides, so it would be incompatible with the Photonics Chip ML makes such a big deal about.

What is curious is that none of these prototypes, with the possible exception of #1, the single color low resolution FSD, are using a “waveguide.” Waveguides are largely incompatible with OLEDs and having a variable focus element is also problematical.  Also none of these are using LCOS, the most likely technology in the final product.

What Magic Leap Is Trying to Do In Their First “Product”

I’m going to piece together below what I believe based on the information available from both public information and some private conversations (but none of it is based on NDA’ed information as far as I am aware).

  1. LCOS Microdisplay – All the evidence, including Business Insider's October 27, 2016 article, points to ML using LCOS. They need a technology that will work well with waveguides, using narrow band (likely LED) light sources that they can make as bright as necessary and whose illumination angle they can control. LCOS is less expensive, more optically compact, and requires less power than DLP for near eye systems. All these reasons are the same as why Hololens is using LCOS. Note, I'm not 100% sure on them using LCOS, but it is by far the most likely technology they will be using. They could also be using DLP, but I would put that at less than a 10% chance. I'm now ruling out Micro-OLED because it would not work with a waveguide.
  2. Two (2) sequential focus planes are supported – The LCOS microdisplay is likely only able to support about 120 full color frames per second, which is only enough to support 2 sequential focus planes per 1/60th of a second of a moving image. Supporting more planes at a slower rate would result in serious image breakup when things move. The other big issue is the amount of processing required. Having even two focus planes greatly increases the computation that has to be done. To make it work correctly, they will need to track the person's pupils and factor that into their processing, and deal with things like occlusion. Also, with the limited number of focus planes, they will have to figure out how to "fake" or otherwise deal with a wider range of focus.
  3. Variable Focus – What I don’t know is how they are supporting the change in focus between the sequential focus planes. They could be using some form of electrically alterable lens but it is problematical to have non-collimated light entering a waveguide. It would therefore seem more consistent for them to be using the technique shown in their patent application US 2016/0327789 that I discussed before.
  4. Photonics Chip (= Diffractive Waveguide) – ML has made a big deal about their Photonics Chip, what everyone else would call a "waveguide." The Photonics Chip likely works similarly to the one Hololens uses (for more information on waveguides, see my Oct 27th, 2016 post). The reports are that Hololens has suffered low yields with their waveguides, and Magic Leap's waveguides will have more to do optically to support focus planes.

Comments

Overall, I think it is very clear that what they will actually make is only a fraction of the vision they have portrayed to the press. They may have wanted to do 50 megapixel equivalent foveated displays, use FSD as their display device, have 6 focus planes, or even (from Fortune, July 12, 2016) "'light-field' technology [that] essentially mimics the brain's visual-perception mechanisms to create objects and even people who look and behave just the way they would in the real world, and interact with that world seamlessly." But then, they have to build something that actually works and that people can afford to buy. Reality then hits the fan.