Magic Leap – The Display Technology Used in their Videos

So, what display technology is Magic Leap (ML) using, at least in their posted videos? I believe the videos rule out a number of the possible display devices, and by a process of elimination that leaves only one likely technology. Hint: it is NOT the laser fiber scanning prominently shown in a number of ML patents and articles about ML.

Qualifiers

Magic Leap could be posting deliberately misleading videos that show a different technology, and/or deliberately degraded videos, to throw off people analyzing them; but I doubt it. It is certainly possible that the display technology shown in the videos is a prototype that uses different technology from what they are going to use in their products. I am hearing that ML has a number of different levels of systems, so what is being shown in the videos may or may not be what they go to production with.

A “Smoking Gun Frame” 

So with all the qualifiers out of the way, below is a frame capture from Magic Leap’s “A New Morning” taken while they are panning the headset and camera. The panning action causes a temporal (time-based) frame/shutter artifact in the form of partial ghost images, a result of the camera and the display running asynchronously and/or at different frame rates. This one frame, along with other artifacts you don’t see when playing the video, tells a lot about the display technology used to generate the image.

[Frame capture: “A New Morning” text and icons during a left pan]
If you look at the left red oval you will see, at the green arrow, a double/ghost image starting and continuing below that point. This is where the camera caught the display in its display update process. Also, if you look at the right side of the image, you will notice that the lower 3 circular icons (in the red oval) have double images while the top one does not (the 2nd from the top has a faint ghost, as it is at the top of the field transition). By comparison, there is no double image of the real world’s lamp arm (see center red oval), verifying that the roll bar is from the ML image generation.

[Thumbnail: whole 1920×1080 frame capture]
Update 2016-11-10: I have uploaded the whole frame for those who would want to look at it. Click on the thumbnail at left to see the whole 1920×1080 frame capture (I left in the highlighting ovals that I overlaid).

Update 2016-11-14: I found a better “smoking gun” frame, below, at 1:23 in the video. In this frame you can see the transition from one frame to the next. In playing the video, the frame transition slowly moves up from frame to frame, indicating that the camera and display are asynchronous but at almost the same frame rate (or an integer multiple thereof, such as 1/60th or 1/30th of a second).

[Frame capture at 1:23 of “A New Morning” showing the frame-to-frame transition]
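To make the geometry of this artifact concrete, here is a minimal Python sketch (my own illustration, not anything from Magic Leap; the 60 and 59.8 frames-per-second numbers are assumed for illustration) of why the frame-transition boundary drifts slowly when the camera and display run asynchronously at almost the same rate:

    def transition_positions(cam_fps=60.0, disp_fps=59.8, n_frames=8):
        """Fractional position (0 = top, 1 = bottom) of the display's
        frame-update boundary within each captured camera frame."""
        positions = []
        phase = 0.0  # display frame phase at the start of camera frame 0
        for _ in range(n_frames):
            positions.append(phase % 1.0)
            # Each camera frame the display advances disp_fps/cam_fps of a
            # frame, so the boundary drifts by the small rate mismatch.
            phase += disp_fps / cam_fps
        return positions

    for i, p in enumerate(transition_positions()):
        print(f"camera frame {i}: update boundary at {p:.3f} of frame height")

With rates this close, the boundary creeps upward by only a fraction of a percent of the frame height per captured frame, which matches the slow-moving transition described above.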

In addition to the “Smoking Gun Frame” above, I have looked at the “A New Morning” video as well as the “ILMxLAB and ‘Lost Droids’ Mixed Reality Test” and the early “Magic Leap Demo,” which are stated to be “Shot directly through Magic Leap technology . . . without use of special effects or compositing.” I was looking for any other artifacts that would be indicative of the various possible technologies.

Display Technologies it Can’t Be

Based on the image above and other video evidence, I think it is safe to rule out the following display technologies:

  1. Laser Fiber Scanning Display – either a single or multiple fiber scanning display as shown in Magic Leap’s patents and articles (and which their CTO famously worked on prior to joining ML). A fiber scan display scans in a spiral (or, if arrayed, an array of spirals) with a “retrace/blanking” time to get back to the starting point. This blanking would show up as diagonal black line(s) and/or flicker in the video (sort of like the horizontal black retrace line an old CRT would show). Also, if it were laser fiber scanning, I would expect to see evidence of laser speckle, which is not there; laser speckle will come through even if the image is out of focus. There is nothing in this image or its video to suggest that there is a scanning process with blanking or that lasers are being used at all. Through my study of Laser Beam Scanning (and I am old enough to have photographed CRTs), there is nothing in the still frame nor the videos that is indicative of a scanning process that has a retrace.
  2. Field Sequential DLP or LCOS – There is absolutely no field sequential color rolling, flashing, or flickering in the video or in any still captures I have made. Field sequential displays show only one color at a time, very rapidly. When these rapid color field changes beat against the camera’s scanning/shutter process, they show up as color variances and/or flicker, not as a simple double image. This is particularly important because it has been reported that Himax, which makes field sequential LCOS devices, is making projector engines for Magic Leap. So either they are not using Himax or they are changing technology for the actual product. I have seen many years of DLP and LCOS displays, both live and through many types of video and still cameras, and I see nothing that suggests field sequential color is being used.
  3. Laser Beam Scanning with a mirror – As with CRTs and fiber scanning, there has to be a blanking/retrace period between frames that would show up in the videos as a roll bar (dark and/or light), and it would roll/move over time. I’m including this just to be complete, as this was never suggested anywhere with respect to ML.
UPDATE Nov 17, 2016

Based on other evidence that has recently come in, even though I have not found video evidence of Field Sequential Color artifacts in any of the Magic Leap videos, I’m more open to thinking that it could be LCOS or (less likely) DLP, and maybe the camera sensor is doing more to average out the color fields than other cameras I have used in the past.

Display Technologies That it Could Be 

Below is a list of possible technologies that could generate video images consistent with what Magic Leap has shown to date, including the still frame above:

  1. Micro-OLED (about 10 known companies) – Very small OLEDs on silicon or similar substrates. A list of some of the known makers is given at OLED-Info (Epson has recently joined this list, and I would bet that Samsung and others are working on them internally). Micro-OLEDs both A) are small enough to inject an image into a waveguide for a small headset and B) have display characteristics that behave the way the image in the video behaves.
  2. Transmissive Color Filter HTPS (Epson) – While Epson was making transmissive color filter HTPS devices, their most recent headset has switched to a Micro-OLED panel, suggesting they themselves are moving away from it. Additionally, while Meta’s first generation used Epson’s HTPS, they moved to a large OLED (with a very large spherical reflective combiner). This technology is challenged in going to high resolution and small size.
  3. Transmissive Color Filter LCOS (Kopin) – Kopin is the only company making color filter transmissive LCOS, but they have not been that active of late as a component supplier, and they have serious issues with a roadmap to higher resolution and smaller size.
  4. Color Filter Reflective LCOS – I’m putting this in here more for completeness, as it is less likely. While in theory it could produce the images, it generally has lower contrast (which would translate into lack of transparency and a milkiness to the image) and lower color saturation. This would fit with Himax as a supplier, as they have color filter LCOS devices.
  5. Large Panel LCD or OLED – This would suggest a large headset that is doing something similar to the Meta 2. I would tend to rule this out because it would go against everything else Magic Leap shows in their patents and what they have said publicly. It’s just that it could have generated the image in the video.
And the “Winner” is, I believe . . . Micro-OLED (see update above)

By a process of elimination, including getting rid of the “possible but unlikely” ones from above, the evidence strongly points to a Micro-OLED display device. Let me say, I have no personal reason to favor it being Micro-OLED; one could argue it might be to my advantage, based on my experience, for it to be LCOS if anything.

Before I started any serious analysis, I didn’t have an opinion. I started out doubtful that it was a field sequential or scanning (fiber/beam) device due to the lack of any indicative artifacts in the video, but it was the “smoking gun frame” that convinced me: if the camera was catching temporal artifacts, it should also have been catching those other artifacts.

I’m basing this conclusion on the facts as I see them. Period, full stop. I would be happy to discuss this conclusion (if asked rationally) in the comments section.

Disclosure . . . I Just Bought Some Stock Based on My Conclusion and My Reasoning for Doing So

The last time I played this game of “what’s inside,” I was the first to identify that a Himax LCOS panel was inside Google Glass, which resulted in their market cap going up almost $100M in a couple of hours. I had zero shares of Himax when that happened; my technical conclusion now, as it was then, is based on what I saw.

Unlike my call on Himax in Google Glass, I have no idea which company makes the device Magic Leap appears to be using, nor whether Magic Leap will change technologies for their production device. I have zero inside information and am basing this entirely on the information I have given above (you have been warned). Not only is the information public, but it is based on videos that are many months old.

I looked at the companies on the OLED Microdisplay List by http://www.oled-info.com (who has followed OLED for a long time). It turned out all the companies were either part of a very large company or were private, except for one, namely eMagin.

I have known of eMagin since 1998, and they have been around since 1993. They essentially mirror Microvision, which does Laser Beam Scanning and was also founded in 1993, a time when you could go public without revenue. eMagin has spent/lost a lot of shareholder money and is worth about 1/100th of their peak in March 2000.

I have NOT done any serious technical, due-diligence, or other stock analysis of eMagin, and I am not a stock expert.

I’m NOT saying that eMagin is in Magic Leap. I’m NOT saying that Micro-OLED is necessarily better than any other technology. All I am saying is that I think someone’s Micro-OLED technology is being used in the Magic Leap prototype, and that Magic Leap is such a hotly followed company that it might (or might not) affect the stock price of companies making Micro-OLEDs.

So, unlike the Google Glass and Himax case above, I decided to place a small (for me) “stock bet” on my ability to identify the technology (but not the company) by buying some eMagin stock on the open market at $2.40 this morning, 2016-11-09 (symbol EMAN). I’m just putting my money where my mouth is, so to speak (and NOT, once again, giving stock advice), and playing a hunch. I’m just making a full disclosure in letting you know what I have done.

My Plans for Next Time

I have drawn some other significant conclusions from looking at Magic Leap’s videos about the waveguide/display technology, which I plan to show and discuss next time.

Karl Guttag

24 Comments

  1. Interesting read, thank you, I appreciate your blog posts a lot.

    However, I was wondering what your thoughts are on your previous article, where you quoted the Business Insider article in which KGI Securities analyst Ming-Chi Kuo stated the following: “the high cost of some of Magic Leap’s components, such as a micro projector from Himax that costs about $35 to $45 per unit.”
    Ming-Chi Kuo is famous for getting his predictions right, so I would appreciate your thoughts on how OLED and micro projectors could/would be connected.

    • Thanks,

      I tried to cover this point in the article, noting that Business Insider had reported Ming-Chi Kuo’s information. It is certainly possible that both could be true; it could be that Magic Leap is using LCOS for their production version while using OLEDs in their demo (I have no inside information either way on this). I’m as certain as I can be that the demo video is not using a field sequential LCOS panel, and it is extremely unlikely that they would use a low resolution, low contrast, lower color quality Himax color filter panel.

      I also don’t see how a Himax LCOS microdisplay gets Magic Leap to where they claim they are going with a multi-focus-plane display system.

      I think what makes my findings interesting is that there is no evidence in the video of Magic Leap using EITHER LCOS OR their scanning fiber display, which by all other reports are what they are using. I’m trying to take nothing at face value if it does not agree with the facts.

      • Thank you for your reply. I have heard before, from multiple sources, that the device being used for presentations may not be the one that becomes the consumer version; this is of course very wild speculation with no basis whatsoever. I recall you commenting on a reddit post the following: “For all we know, their internal prototypes they’re demoing to journalists could be costing them $50K to produce”.

        Would appreciate it a lot if you could please offer some of your ideas as to why HoloLens chose the LCOS route and why Magic Leap could be moving forward with OLED. There has been some speculation that HoloLens could also possibly switch technology due to FOV; I am not sure how much truth there is to it from a technological-limitation perspective. I remember you mentioning in one of your posts that an FOV increase is technically no problem, once they are willing to sacrifice angular resolution. How do you see angular resolution and FOV comparing between LCOS and OLED? I think you have mentioned multiple times that a large FOV with current technologies is very expensive and difficult to achieve at a consumer-level device price tag. Then at the same time you hear these stories from Magic Leap claiming to have a very large field of view. Putting the wide FOV stories and your current post about OLED together, how do you see these 2 things working out?

        What would be the most significant benefits you see? One benefit I have heard mentioned is brightness, but there was a discussion a while ago where one person, for example, posted a link about eMagin’s ultra-high-brightness micro-OLED display. I have no idea, I could not even dare to guess the price, for example; here is the link: http://www.businesswire.com/news/home/20160602005730/en/eMagin-Announces-Public-Demonstration-Ultra-High-Brightness-Direct
        Then there are also production cost (not low enough for a consumer version), power consumption, contrast, brightness, ease of manufacturing, scalability, etc. Sorry for the long post, but the topic is very interesting.

      • That is a lot of questions that will take a while to discuss. I will try and hit on them briefly and plan to cover some of this in future installments.

        Microsoft probably chose LCOS for HoloLens for its combination of resolution, cost, brightness, power consumption, and size. Today OLEDs are still pretty expensive and their lifetimes are generally shorter. If you illuminate LCOS with LEDs, I don’t think there is a big difference in FOV/eyebox, but because you don’t have to illuminate OLEDs, their optical path is simpler (they don’t need a “beam splitter”), which may help.

        Fundamentally, everyone is playing trade-off games with angular resolution (arc-minutes/pixel) versus FOV. If you can only afford a display with a given number of pixels, you can either go for a wide FOV and low angular resolution or vice versa. If you want a 120-degree-wide FOV and very good angular resolution (1 arc-minute/pixel), then you need a display that has 120 x 60 = 7,200 pixels horizontally, and nobody can make this display at a cost that very many could afford. HoloLens is using a 1280×720 pixel display, which means they can have either decent angular resolution or a large FOV, but NOT both.
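        To put rough numbers on that trade-off, here is a small Python sketch of the arc-minutes-per-degree arithmetic above (my own illustration; the 30-degree and 100-degree example FOVs are assumed values, not HoloLens specifications):

            def horizontal_pixels_needed(fov_deg, arcmin_per_pixel):
                # 60 arc-minutes per degree; pixels spread evenly across the FOV
                return fov_deg * 60.0 / arcmin_per_pixel

            def angular_resolution_arcmin(fov_deg, pixels):
                # arc-minutes per pixel when a panel is stretched across a given FOV
                return fov_deg * 60.0 / pixels

            print(horizontal_pixels_needed(120, 1.0))    # 7200.0 -> the 120 x 60 figure above
            print(angular_resolution_arcmin(30, 1280))   # ~1.4 arcmin/pixel: decent resolution, narrow FOV
            print(angular_resolution_arcmin(100, 1280))  # ~4.7 arcmin/pixel: wide FOV, visibly coarse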

        Overall it is a cost-versus-benefit analysis with Micro-OLED versus LCOS (and DLP, though it is less used in near-eye). There are many pros and cons of each today. OLEDs are self-emitting (they emit light), versus LCOS, which is illuminated by LEDs or lasers. This makes OLEDs more compact and simpler optically. LCOS is much less expensive (including the illumination). LCOS can be much brighter, as it is only limited by the light source, whereas OLEDs tend to burn themselves up if they are driven too hard. High resolution LCOS has field sequential color (rapidly sequencing R, G, and B), which can cause color breakup (and is why I’m sure the ML video was not using it), whereas Micro-OLED has all the colors at the same time (small R, G, B dots). A big advantage for Micro-OLED is high contrast, which translates in an AR/MR display into being able to see through better. Micro-OLEDs are also usually more power efficient. I’m sure there are some more I have missed.

        Another big issue is the optics to support a see-through wide FOV. These get very difficult to make and very complicated to explain. The “easy” way to go for wide FOV is to start with a big display, like Oculus Rift and Google Cardboard do, and add lenses so that your eyes can focus on it. BUT this is very large AND you can’t see the real world through it (you can add cameras, but this causes lag and other issues).

        Thanks for the link to the eMagin article. Over the years I have suggested to companies that they look at eMagin’s OLEDs, but they came back saying they were “too expensive”; I don’t have details. Still, companies like Sony have been building lower resolution OLEDs for camera viewfinders. BUT remember, a viewfinder is only on for maybe a few minutes a day (or less) on average, so operating/turned-on lifetime (a common issue with OLEDs) is not an issue.

        Let me know if I missed anything.

  2. Karl,

    I have read some speculation that it may prove difficult over time for Magic Leap to remain independent from Google.

    With that in mind, along with your recent investment decision to purchase some eMagin shares based on your belief that ML may be using micro-OLED displays, I would like to point out that Jerry Carollo previously worked for eMagin and is now Optical Architect at Google.

    http://www.businesswire.com/news/home/20161110005439/en/

    One of Jerry Carollo’s responsibilities while at eMagin:

    Special Projects; system architect and inventor of the “Steam Punk” Virtual Reality goggle. The VR goggle has a 100 degree Field of View (FOV) and 2K x 2K resolution in a very compact, lightweight form factor and is the highest performance VR HMD ever produced.

    So no doubt Google is very familiar with eMagin’s tech.

    My question is: would Google be running AR/VR development programs separate from ML, or is ML merely an extension of Google that will eventually get absorbed into the company?

    In any event, it seems likely Google would be closely monitoring ML’s progress given their investment in the company, and a technical individual with optical experience would be an obvious choice – perhaps someone like Jerry Carollo.

      • Thanks for correcting the link.

        Yeah, I saw MVIS and STM getting together. It is going to cause the Microvision stock to pop a bit, but it doesn’t really change the technical facts and use case for LBS projectors. Neither company has serious sales of LBS, so even if you add them together you don’t get much (I don’t know of any STM LBS products, just the Lenovo demo of a while ago).

        It is kind of a classic Microvision stock-pump stunt of making a big announcement out of old news; STM and Microvision have been working together for a long time (STM manufactures the Microvision mirror). Maybe they tweaked the relationship a little, hoping the news might move the stock and scare up some business for either of them.

    • Thanks. People move around for all kinds of reasons. eMagin is a 23-year-old “startup.” Google is collecting lots of smart people working on all kinds of display systems. Google will certainly be keeping an eye on ML, but it is hard to know who will be doing the watching unless you have inside information.

      I think Magic Leap’s ability to stay independent will depend mostly on whether they can live up to their hype, which frankly may be difficult. ML is trying to pioneer in a lot of very different areas at the same time, and this almost never ends well, even for a big company, let alone a startup.

      I would think that Google sees ML as a strategic investment, with at least the thought that they might try to merge it in someday. But this kind of thing rarely happens; you just hear about the ones that do happen because they are big events (like Disney and Pixar).

    • Thanks,

      I have looked at those videos and they are put together very well (much better than my blog), with good narration, but I cringe occasionally at the technical content. I do like how he searches out and finds some sources to back up a lot of the claims, like the eye tracking. The combiner info in one of the videos is frankly pretty lightweight, but I understand that he is trying to simplify for a less technical audience. BTW, an excellent summary of combiner technology (cited in my blog) is at http://www.kessleroptics.com/wp-content/pdfs/Optics-of-Near-to-Eye-Displays.pdf .

      I tend to like the Socratic Method (most famously shown in the movie “The Paper Chase”). I don’t take things at face value, as I know companies will often exaggerate in promoting their product. What gets my juices flowing the most is when the hype appears to be way out of line with the reality. These are the cases where the end product either never gets made or is a big disappointment (à la Google Glass). I think the videos start from the premise that everything Magic Leap says is “true” and try to find sources to back it up, whereas I want to see everything they say proven. Sometimes we can get the proof directly, like the display technology used in the videos, but then we can’t know (without inside information) if they will change it for production.

      There are some things where I am using my 38 years of industry experience and knowledge to try to sort things out by piecing together the known information and filling in the gaps. I’m highly skeptical that Magic Leap is going to have an array of laser fiber scanning displays (FSD) generating high resolution anytime soon (it is not cold fusion, but it is way beyond simple). To the fans of Magic Leap, FSD sounds much easier on paper than it is in reality, and they think it is just around the corner if not already done. There are MANY major technical challenges just to make one (I’m planning a whole article just on this), no less bring it down to a consumer price point. I will believe it when I see it. But the “Magic Leap fans” start from the premise that it exists or can be solved with less than $1.4B. There are many patents for things that were never built (not a requirement to get a patent); I look at the patent for the arrayed FSD and try to figure out how it could be built and what big problems have to be solved. I don’t accept it as true without proof.

      • There’s a lot of reading I’d have to do to understand how this microdisplay and ‘waveguide’ would differ from normal HMDs (aside from form factor), but I wonder if you could link to or simply explain where the silicon microdisplay would be located and what material would be placed between the user’s eyes and the digital images (like the lenses in a normal pair of glasses, for example). Is this like the technology that Lumus were working on all these years?

      • I don’t know how easy it is to follow, but the best summary I know of covering a wide variety of “waveguides” that get light from the microdisplay to the eye is by Kessler Optics at http://www.kessleroptics.com/wp-content/pdfs/Optics-of-Near-to-Eye-Displays.pdf. Where the microdisplay is located depends on the optics. “Waveguide” is a term applied to optics where the light bounces off the internal surfaces of glass or plastic due to total internal reflection (TIR) when the light hits at a shallow angle. There are different optical techniques to get the light into the waveguide and then to cause it to escape. To make the image light escape, you need to change the angle of the light or the angle of the surface.

        Lumus is one kind of waveguide (and is covered in the Kessler presentation). With it, you inject the image light through a surface that is at an angle, and then there are angled surfaces embedded in the waveguide that cause the light to escape. There are a lot of other techniques.
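        For a sense of the TIR condition mentioned above, here is a one-function Python sketch of the critical-angle calculation from Snell’s law (my illustration; the refractive index of 1.5 is an assumed, typical value for optical glass or plastic):

            import math

            def tir_critical_angle_deg(n_inside, n_outside=1.0):
                # Angle from the surface normal beyond which total internal
                # reflection traps light inside the waveguide (Snell's law)
                return math.degrees(math.asin(n_outside / n_inside))

            print(tir_critical_angle_deg(1.5))  # ~41.8 deg; shallower (more grazing) rays stay trapped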

    • Microvision certainly has misled people and made unsubstantiated claims, which I have covered in a number of posts, but I don’t think simply calling them names helps anyone’s understanding.

      • One big reason why I have bought eMagin over the years is the dearth of information and absolutely no fluff nor junk PRs … just nose to the grindstone and keep making progress …
        I have a hard time getting my head around the very low share price, tho’ … the company does have assets (equipment and personnel), they have some cash on hand, and inventory, and patents … they have also stated that manufacturing can easily be moved from the dev lab to the mfg floor with ease …. garce

  3. If the icons in the right-side red oval have progressively worse double images, doesn’t this suggest at least one of:
    1. The camera has a rolling shutter
    2. The display is a scanner

    Also, I can’t think of a physical reason why a scanner’s retrace can’t be used to show the next frame, particularly as diverting the orbit of a cantilever (for a piezo fiber) quickly back to the start position might result in unreliable positioning.

    • Thanks,

      The camera has a rolling shutter, and the display has a rolling update that is out of sync with the camera. Additionally, the pixels in the display simply update (they go from one value to the next without going to black in between). If there were a “retrace” with blanking, as with a CRT, Laser Beam Scanner, or Fiber Scanning Display, there would be some kind of rolling brightness variation. The Fiber Scanning Display would cause some kind of flutter/brightness variation when it beat against the rolling shutter of the camera. Since the fiber scanning goes in a spiral, you won’t get a line but rather some kind of bulls-eye-like effect. With a zero-persistence display, the camera would detect the blanking if it can detect the frame-update double image.

      The scanner retrace is at a whole different amplitude and will not go back through the same places; this is done to get it to settle faster, as otherwise you would have an even longer off-time while you wait for the fiber to settle. The fiber, to a degree, has a mind of its own: it will follow its natural resonance, and you can only force it so much.

      No matter what assumptions I make about how the fiber scans, or whether there are multiple scanning fibers, I don’t see any way a scanning fiber display, or any other display without persistence, could make the “smoking gun” image.

  4. There is another possibility… If they very solidly attached the camera lens to the headset and they didn’t have any color sequential timewarping, then it would be literally impossible to see any color sequential artifacts.

    Color sequential artifacts come from moving the camera relative to the display. But if the camera is fixed to the display, they cannot be seen (unless they were forward-projecting the color sub-frame images based on IMU movement, as is done in HoloLens, but this is a pretty advanced rendering feature that would be added late in product development).

    • The DLP or other field sequential artifacts will show up due to the camera even if you have it locked down. Most cameras have a rolling shutter action, and even if it is synced to the frame rate (which it is not in the Magic Leap videos), the color fields are at a higher rate and will show up as color ripple/lines/flashing from frame to frame. This will be particularly noticeable in white areas.

      See this video as an example. Pause it when it starts and you will see the horizontal color bars; then let it play and they will ripple.
      https://youtu.be/7ixBVmmPkRY?t=50
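      As a rough back-of-the-envelope Python sketch of why those bands appear (my own illustration; the 1080-line sensor, 1/60 s readout, and 360 Hz color field rate are assumed example numbers, not measurements of any particular camera or display):

          def color_band_height_lines(total_lines, readout_time_s, color_field_hz):
              # Lines the rolling shutter reads during one color field; this is
              # roughly the height of each captured color band.
              lines_per_second = total_lines / readout_time_s
              return lines_per_second / color_field_hz

          print(color_band_height_lines(1080, 1 / 60, 360))  # 180.0 lines per band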

      • Thanks for the info. Most film work is 24 or 30 fps. You only have to be at 60Hz or above to get rid of flicker and temporal aliasing (most famously the wagon-wheel and propeller strobe effects) when you have a scanning type image source (the old film projectors projected each 24 fps frame twice to reduce flicker without having to use 2x the film; originally they didn’t do this trick, and thus the “flicks” nickname for movies).

        Assuming it is a 25mm lens, we need to know the sensor size. Blackmagic Cinema cameras have two sensor sizes: the 4K sensor is 22mm x 11.88mm, and the others are 15.81mm x 8.88mm. Note these are both narrower horizontally than a “full frame” 35mm SLR at 36mm x 24mm, which means that a 25mm lens does not give that wide an FOV. With the smaller sensor and a 25mm lens it works out to only about a 36 degree horizontal FOV, and with the bigger sensor I only get 47.4 degrees (I did a search and others have come to the same conclusion). Both of these are narrower than I would expect ML to be doing based on all the talk. They could have the cameras further back from the waveguide than the eye would be (or have totally different optics, as they only say it is ML “technology,” not the technology they will use in the product), or they could be cropping a lot, so it does not tell us everything.
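        For reference, here is the standard pinhole/thin-lens FOV formula behind those numbers as a short Python sketch (my own; the sensor widths and 25mm focal length are the values discussed above):

            import math

            def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
                # Horizontal field of view for a simple pinhole/thin-lens camera model
                return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

            print(horizontal_fov_deg(15.81, 25))  # ~35.1 deg (the "about 36 degree" figure)
            print(horizontal_fov_deg(22.0, 25))   # ~47.5 deg (the ~47.4 degree figure, to rounding)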

        I have shot field sequential videos with a number of different video and DSLR cameras (CMOS and CCD) from different companies, and there is a wide variety in how readily you can see the color field breakup. It can be hard to see with some cameras. It could be that ML, through luck or trial and error, found a camera with a sensor/shutter mechanism that does not show field sequential artifacts, but in the past I have always found them in a frame every now and then, even with the “best” cameras. It is possible that these cameras made for movie use do something better than the more consumer-grade cameras and DSLRs. I have never used a Blackmagic Cinema camera, but certainly the slower the field rate and the more the sensor averages, the better it will hide field sequential artifacts; I would still expect to find some.

      • No, I think the display technology used in their 2016 “through Magic Leap optics” videos was either eMagin or Sony OLEDs. But OLEDs are incompatible with diffractive waveguides. Most likely, Magic Leap is using LCOS, with a chance they could be using DLP.

        I think Magic Leap was playing word games with “through Magic Leap optics.” Yes, they were optics that Magic Leap put together, but no, they are not what they plan to use in the final product.
