Archive for December 29, 2016

Microvision Laser Beam Scanning: Everything Old Is New Again

Reintroducing a 5 Year Old Design?

Microvision, the 23-year-old “startup” in Laser Beam Scanning (LBS), has been a fun topic on this blog since 2011. They are a classic example of a company that tries to make big news out of what other companies would consider not newsworthy.

Microvision has been through a lot of “business models” in their 23 years. They have been through selling “engines,” building whole products (the ShowWX), and a licensing model with Sony selling engines, and now with their latest announcement, “MicroVision Begins Shipping Samples to Customers of Its Small Form Factor Display Engine,” they are back to selling “engines.”

The funny thing is this “new” engine doesn’t look very much different from the “old” engine it was peddling about 5 years ago. Below I have shown 3 laser engines from 2017, 2012, and 2013 at roughly the same scale, and they all look remarkably similar. The 2012 and 2017 engines are from Microvision, and the 2013 engine was inside the 2013 Pioneer aftermarket HUD. The Pioneer HUD appears to use a nearly identical engine that is within 3mm of the length of the “new” engine.


The “new” engine is smaller than the 2014 Sony engine, shown at left, that used 5 lasers (two red, two green, and one blue) to support higher brightness and higher power with lower laser speckle. It appears that the “new” Microvision engine is really at best a slightly modified 2012 model, with maybe some minor modifications and newer laser diodes.

What is missing from Microvision’s announcement is any measurable/quantifiable performance information, such as the brightness (lumens) and power consumption (Watts). In my past studies of Microvision engines, they have proven to have much worse lumens per Watt compared to other (DLP and LCOS) technologies. I have also found their measurable resolution to be considerably less (about half both horizontally and vertically) than their claimed resolution.

While Microvision says, “The sleek form factor and thinness of the engine make it an ideal choice for products such as smartphones,” one needs to understand that the size of the optical engine with its drive electronics is about equal to the entire contents of a typical smartphone. And the projector generally consumes more power than the rest of the phone, which makes it both a battery-size and a heat issue.

Everything VR & AR Podcast Interview with Karl Guttag About Magic Leap

With all the buzz surrounding Magic Leap and this blog’s technical findings about Magic Leap, I was asked to do an interview by the “Everything VR & AR Podcast” hosted by Kevin Harvell. The podcast is available on iTunes and by direct link to the interview here.

The interview starts with about 25 minutes of my background starting with my early days at Texas Instruments. So if you just want to hear about Magic Leap and AR you might want to skip ahead a bit. In the second part of the interview (about 40 minutes) we get into discussing how I went about figuring out what Magic Leap was doing. This includes discussing how the changes in the U.S. patent system signed into law in 2011 with the America Invents Act helped make the information available for me to study.

There should be no great surprises for anyone who has followed this blog. It puts into words and summarizes a lot that I have written about in the last 2 months.

Update: I listened to the podcast and noticed that I misspoke a few times; it happens in live interviews. An unfathomable mistake is that I talked about graduating college in 1972, but that was high school; I graduated from Bradley University with a B.S. in Electrical Engineering in 1976 and then received an MSEE from The University of Michigan in 1977 (and joined TI in 1977).

I also think I greatly oversimplified the contribution of Mark Harward as a co-founder at Syndiant. Mark did much more than just provide designers; he was the CEO, an investor, and ran the company while I “played” with the technology, but I think Mark’s best skill was in hiring great people. Also, Josh Lund, Tupper Patnode, and Craig Waller were co-founders.

 

ODG R-9: A Peek Behind the Video Curtain

Introduction

With all the hype about Hololens and Magic Leap (ML), Osterhout Design Group (ODG) often gets overlooked. ODG has not spent as much (though still tens of millions). ODG has many more years working in the field, albeit primarily in the military/industrial market.

I don’t know about all the tracking, image generation, wireless, and other features, but ODG should have the best image quality of the three (ODG, Hololens, and ML). Their image quality was reasonably well demonstrated in a short “through the optics” video ODG made (above and below are a couple of crops from frames of that video). While you can only tell so much from a YouTube video (which limits the image quality), they are not afraid to show reasonably small text and large white areas (both of which would reveal problems with lesser quality displays).

Update 2016-12-26: A reader, “Paul,” wrote that he has seen the “cars and ball” demo live. He said that while the display was locked down, the cubes were movable in the demo. Paul did not know where the computing was done, and it could have been done on a separate computer. So it is possible that I got the dividing line between what was “real” and what was preplanned a bit off. I certainly don’t think that they detected that there was a clear and a black cube, and much of the demo had to have been pre-planned/staged. Certainly it is not a demonstration of what would happen if you were wearing the headset.

Drawn To Contradictions

As I wrote last time, I’m not a fan of marketing hyperbole, and I think calling their 1080p per eye a “4K experience” is at best deliberately confusing. I also had a problem with what independent reporter Jame Mackie said about the section of the video starting at 2:29 with the cars and balls in it (linked to here). What I was seeing was not what he was describing.

The sequence starts with a title slide saying, “Shot through ODG smart-glasses with an iPhone 6,” which I think is true as far as what is written goes. But the commentary by Jame Mackie was inaccurate and misleading:

So now for a real look at how the Holograms appear, as you can see the spatial and geometric tracking is very good. What really strikes me is the accuracy and positioning.  Look how these real life objects {referring to the blocks} sit so effortlessly with the Holograms

I don’t know what ODG told the reporter or if he just made it up, but at best the description is very misleading. I don’t believe there is any tracking being done and all the image rendering  was generated off-line.

What Real Virtual Reality Looks Like

Before getting into detail on the “fake” part of the video, it is instructive to look at a “real” clip. In another part of the video there is a sequence showing replacing the tape in a label maker (starting at 3:25).

In this case, they hand-held the camera rig with the glasses. In the first picture below you can see on the phone that they are inserting a virtual object, circled in green on the phone, and missing in the “real world.”

As the handheld rig moves around, the virtual elements move and track with the camera movement reasonably well. There is every indication that what you are seeing is what they can actually do with tracking and image generation. The virtual elements in three clips from the video are circled in green below.

The virtual elements in the real demonstration are simple, with no lighting effects or reflections off the table. Jame Mackie in the video talks as if he actually tried this demonstration rather than just describing what he thinks the video shows.

First Clue – Camera Locked Down

The first clue that the cars-and-balls video was a setup/staged video is that the camera/headset never moves. If the tracking and everything was so good, why not prove it by moving the rig with the headset and camera?

Locking the camera down makes it vastly easier to match up pre-recorded/drawn material. As soon as you see the camera locked down with a headset, you should be suspicious of whether some or all of the video has been faked.

Second Clue – Black Cube Highlights Disappeared

Take a look at the black cube below showing the camera rig setup and particularly the two edges of the black cube inside the orange ovals I added. Notice the highlight on the bottom half of each edge and how it looks like the front edge of the clear plastic cube. It looks to me like the black cube was made from a clear cube with the inside colored black. 

Now look at the crop at left from the first frames showing the through-the-iPhone-and-optics view. The highlight on the clear cube is still there, but strangely the highlights on the black cube have disappeared. Either they switched out the cube or the highlights were taken out in post processing. It is hard to tell because the lighting is so dim.

Third Clue – Looks Too Good – Can’t Be Real Time

2016-12-16 Update: After thinking about it some more, the rendering might be in real time. They probably knew there would be a clear and a black box and rendered accordingly, with simpler rendering than ray tracing. What is unknown is whether the headset or another computer did the rendering.

According to comments by “Paul,” he has seen the system running. The headset was locked down, which is a clue that there is some “cheating” going on, but he said the blocks were not in a fixed location.

Looking “too good” is a big giveaway. The cars in the video with all their reflections were clearly using much more complex ray-tracing that was computed off-line. Look at all the reflections of the cars at left. There are cars reflecting off both the table and the clear cube, and the flashing light on the police car also acts like a light source in the way it reflects off the cube.

4th Clue: How Did The Headset Know The Cube Was Clear?

One of the first things that I noticed was the clear cube. How are the cameras and sensors going to know it is clear and how it will reflect/refract light? That would be a lot of expensive sensing and processing to figure this out just to deal with this case.

5th Clue: Black Cube Misaligned

On the right is a crop from a frame where the reflection of the car is wrong. From prior frames, I have outlined the black cube with red lines. But the yellow car is visible when it should be hidden by the black cube. There is also a reflection in the side of the cube around where the rendered image is expecting the black cube to be (the orange line shows the reflection point).

How It Was Done

2016-12-26 Updates (in blue): Based on the available evidence, the video uses some amount of misdirection. The video was pre-rendered using a ray-tracing computer model with a clear cube and a perfectly black shiny cube on a shiny black table being modeled. They knew that a clear and a black cube would be in the scene and locked down the camera. They may have used the sensors to detect where the blocks were to know how to render the image.

They either didn’t have the sensing and tracking ability or the rendering ability to allow the camera to move.

Likely the grids you see in the video are NOT the headset detecting the scene but exactly the opposite; they are guides to the person setting up the “live” shot as to where to place the real cubes to match where they were in the model. They got the black cube in slightly the wrong place.

The final video was shot through the optics, but the cars and balls were running around the clear and black cubes, which were assumed to be there when the video was rendered. No tracking, surface detection, or complex rendering was required, just the ability to play back a pre-recorded video.

Comments

I’m not trying to pick on ODG. Their hype so far is less than what I have seen from Hololens and Magic Leap. I don’t mind companies “simulating” what images will look like provided they indicate they are simulated effects. I certainly understand that through-the-optics videos and pictures will not look as good as simulated images. But when they jump back and forth between real and simulated effects and other tricks, you start to wonder what is “real.”

ODG R-9 (Horizon): 1080p Per Eye, Yes Really

Lazy Reporting – The Marketing Hyperbole’s Friend

While I have not seen ODG’s R-9 in person yet, I fully expect that it will look a lot better than Microsoft’s Hololens. I even think it will look better in terms of image quality than what I think ML is working on. But that is not the key point of this article.

But there is also a layer of marketing hyperbole and misreporting going on that I wanted to clear up. I’m just playing referee here and calling them like I see them.

ODG 4K “Experience” with 2K (1080p) Per Eye


2016-12-28 Update – It appears I was a bit behind on the marketing hype vernacular being used in VR. Most VR displays today, such as Oculus, take a single flat panel and split it between two eyes. So each eye sees less than half (some pixels are cut off) of the pixels. Since bigger is better in marketing, VR makers like to quote the whole flat panel size and not the resolution per eye. 

ODG’s “marketing problem” is that historically a person working with near eye displays would talk in terms of “resolution per eye,” but this number is smaller by 2X than what the flat-panel-based VR companies market. Rather than being at a marketing hype disadvantage, ODG apparently has adopted the VR flat panel vernacular, however misleading it might be.
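
To make the arithmetic concrete, here is a small sketch of how the “whole panel” number inflates the per-eye number. The panel dimensions below are only illustrative of a typical flat-panel VR headset, not a measured spec of any particular product.

```python
# Illustration of "whole panel" vs. "per eye" resolution marketing.
# Panel numbers are illustrative of a typical flat-panel VR headset,
# not a measured spec of any particular product.
panel_width, panel_height = 2160, 1200   # single shared flat panel (assumed)
per_eye_width = panel_width // 2         # the panel is split between the two eyes

print(f"Marketed 'panel' resolution:       {panel_width} x {panel_height}")
print(f"Resolution each eye actually sees: at most {per_eye_width} x {panel_height}")
```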


I have not met Jame Mackie nor have I watched a lot of his videos, but he obviously does not understand display technology well, and I would take anything he says about video quality with a grain of salt. He should have understood that ODG’s R-9 is not “4K,” as in the title of his YouTube video: ODG 4K Augmented Reality Review, better than HoloLens ?. And specifically he should have asked questions when the ODG employee stated at about 2:22, “it’s two 1080p displays to each eye, so it is offering a 4K experience.”

What the ODG marketing person was, I think, trying to say was that somehow having 1080p (also known as 2K) for each eye was like having 2 times 2K or a “4K equivalent”; it is not. In stumbling to try and make the “4K equivalent” statement, the ODG person simply tripped over his own tongue and said that there were two 1080p devices per eye, when he meant to say there were two 1080p devices in the glasses (one per eye). Unfortunately Jame Mackie didn’t know the difference, did not realize that this would have been impossible in the R-9’s form factor, and didn’t follow up with a question. So the false information got copied into the title of the video and was left as if it was true.

VRMA’s Micah Blumberg Asks The Right Questions and Gets The Right Answer – 1080p Per Eye

This can be cleared up in the following video interview with Nima Shams, ODG’s VP of Headworn: “Project Horizon” AR VR Headset by VRMA Virtual Reality Media. When asked by Micah Blumberg starting at about 3:50 into the video, “So this is the 4K headset,” Nima Shams responds, “so it is 1080p to each eye,” to which Blumberg astutely makes sure to clarify, “so we’re seeing 1080p right now and not 4K,” and Shams responds, “okay, yeah, you are seeing 2K to each eye independently.” And they even added an overlay in the video, “confirmed 2K per eye” (see inside the red circle I added).

A Single 1080p OLED Microdisplay Per Eye

Even with “only” a 1080p OLED microdisplay per eye and a simple optical path, the ODG R-9 should have superior image quality compared to Hololens:

  1. OLEDs should give better contrast than Hololens’ Himax LCOS device
  2. There will be no field sequential color breakup with head or image movement as there can be with Hololens
  3. They have about the same pixels per arc-minute as Hololens, but with more pixels they increase the FOV from about 37 degrees to about 50 degrees.
  4. Using a simple plate combiner rather than the torturous path of Hololens’ waveguide, I would expect the pixels to be sharper and with little visible chromatic aberration and no “waveguide glow” (out-of-focus light around bright objects). So even though the angular resolution of the two is roughly the same, I would expect the R-9 to look sharper/higher resolution.

The known downsides compared to Hololens:

  1. The ODG R-9 does not appear to have enough “eye relief” to support wearing glasses.
  2. The device puts a lot of weight on the nose and ears of the user.

I’m not clear about the level of tracking, but ODG’s R-9 does not appear to have the number of cameras and sensors that Hololens has for mapping/locking the real world. We will have to wait for more testing on this issue. I also don’t have information on how the level of image and other processing done by the ODG R-9 compares to Hololens.

Conclusion

Micah Blumberg showed the difference between just repeating what he is told and knowing enough to ask the right followup question. He knew that ODG’s 4K marketing message was confusing and at odds with what he was being told, so he made sure to clarify it. Unfortunately, while Jame Mackie got the “scoop” on the R-9 being the product name for Horizon, he totally misreported the resolution and other things in his report (more on that later).

Lazy and ill-informed reporters are the friend and amplifier of marketing hyperbole. It appears that ODG is trying to equate dual 1080p displays (one per eye) with being something like “4K,” which it really is not. You need 1080p (also known as 2K) per eye to do stereo 1080p, but that is not the same as “4K,” which is defined as 3840×2160 resolution, or 4 times the spatial resolution of 1080p. Beyond this, qualifiers like “4K Experience,” which have no real meaning, are easily dropped, and ill-informed reporters will report it as “4K,” which does have a real meaning.

Also, my point is not meant to pick on ODG; they just happen to be the case at hand. Unfortunately, most of the display market is “liars poker.” Companies are fudging on display specs all the time. I rarely see a projector that meets or exceeds its “spec” lumens. Resolutions are often spec’ed in misleading ways (such as specifying the input rather than the “native” resolution). Contrast is another place where “creative marketing” is heavily used. The problem is that because “everyone is doing it,” people feel they have to just to keep up.

The problem for me comes when I have to deal with people that have read false or misleading information. It gets hard to separate truth from marketing exaggeration.

This also goes back to why I didn’t put much stock in the magazine reports about how Magic Leap looked. These reports were made by people who were easy to impress and likely not knowledgeable about display devices. They probably could not tell if the display resolution was off by 2X in each direction or notice even moderately severe image problems. If they were shown a flashy looking demo they would assume it was high resolution.

One More Thing – Misleading/Fake “True Video”

While it will take a while to explain (maybe next time), I believe the Jame Mackie video also falsely indicates at 2:29 (the part with the cars and the metal balls on the table) that what is being shown is how the ODG R-9 works.

In fact, while the images of the cars and balls are generated by the R-9, the tracking of the real world and the reflections off the surfaces are a well-orchestrated FAKE. Basically they were playing a pre-rendered video through the glasses (so that part is likely real). But the clear and black boxes on the table were props there to “sell the viewer” that this was being rendered on the fly. There also appears to be some post-processing in the video. Most notably, it looks like the black box was modified in post production. There are several clues in the video that will take a while to explain.

To be fair to ODG, the video itself does not claim to be unprocessed, but the way it is presented within Jame Mackie’s video is extremely misleading to say the least. It could be that the video was taken out of context.

For the record, I do believe the video starting at 4:02, which I have analyzed before, is a genuine through-the-optics video and is correctly identified as such on the video. I’m not sure about the “tape replacement” video at 3:23; I think it may be genuine, or it could be some clever orchestrating.

Kopin Entering OLED Microdisplay Market

Kopin Making OLED Microdisplays

Kopin announced today that they are getting into the OLED microdisplay business. This is particularly notable because Kopin has been a long-time (since 1999) manufacturer of transmissive LCD microdisplays used in camera viewfinders and near eye display devices. They also bought Forth Dimension Displays back in 2011, a maker of high resolution ferroelectric reflective LCOS used in higher end near eye products.

OLED Microdisplays Trending in AR/VR Market

With the rare exception of the large and bulky Meta 2, microdisplays (LCOS, DLP, OLED, and transmissive LCD) dominate the AR/MR see-through market. They are also a significant factor in VR and other non-see-through near eye displays.

Kopin’s entry seems to be part of what may be a trend toward OLED microdisplays in near eye products. ODG’s next generation “Horizon” AR glasses are switching from LCOS (used in the current R7) to OLED microdisplays. Epson, which was a direct competitor to Kopin in transmissive LCD, switched to OLED microdisplays in their new Moverio BT-300 AR glasses announced back in February.

OLED Microdisplays Could Make VR and Non-See-Through Headsets Smaller/Lighter

Today most of the VR headsets are following Oculus’s use of large flat panels with simple optics. This leads to large, bulky headsets, but the cost of OLED and LCD flat panels is so low compared to microdisplays with their optics that they win out. OLED microdisplays have been far too expensive to compete on price with the larger flat panels, but this could change as there are more entrants into the OLED microdisplay market.

OLEDs Don’t Work With Waveguides As Used By Hololens and Magic Leap

It should be noted that the broad spectrum and diffuse light emitted by OLEDs is generally incompatible with flat waveguide optics such as those used by Hololens and expected from Magic Leap (ML). So don’t expect to see OLEDs being used by Hololens and ML anytime soon unless they radically redesign their optics. Illuminated microdisplays like DLP and LCOS can be lit by narrower spectrum light sources such as LEDs and even lasers, and the light can be highly collimated by the illumination optics.

Transmissive LCD Microdisplays Can’t Compete As Resolution Increases

If anything, this announcement from Kopin is the last nail in the coffin of the transmissive LCD microdisplay. OLED microdisplays have the advantage over transmissive micro-LCDs of supporting higher resolutions with smaller pixels, keeping the overall display size down for a given resolution. OLEDs consume less power for the same brightness than transmissive LCDs, and OLEDs also have much better contrast. As resolution increases, transmissive LCDs cannot compete.

OLED Microdisplays Are More Of A Mixed Set of Pros and Cons Compared to LCOS and DLP

There is a mix of pros and cons when comparing OLED microdisplays with LCOS and DLP. The pros for OLED over LCOS and DLP include:

  1. Significantly simpler optical path (illumination path not in the way). Enables optical solutions not possible with reflective microdisplays
  2. Lower power for a given brightness
  3. Separate RGB subpixels so there is no field sequential color breakup
  4. Higher contrast.

The advantages for LCOS and DLP reflective technologies over OLED microdisplays include:

  1. Smaller pixels equal a smaller display for a given resolution. DLP and LCOS pixels are typically from 2 to 10 times smaller in area per pixel.
  2. Ability to use narrow band light sources which enable the use of waveguides (flat optical combiners).
  3. Higher brightness
  4. Longer lifetime
  5. Lower cost even including the extra optics and illumination

Up until recently, the cost of OLED microdisplays was so high that only defense contractors and other applications that could afford the high cost could consider them. But that seems to be changing. Also, historically the brightness and lifetimes of OLED microdisplays were limited, but companies are making progress.

OLED Microdisplay Competition

Kopin is far from being the first and certainly is not the biggest entrant in the OLED microdisplay market. But Kopin does have a history of selling volume into the microdisplay market. The list of known competitors includes:

  1. Sony appears to be the biggest player. They have been building OLED microdisplays for many years for use in camera viewfinders. They are starting to bring higher resolution products to the market and bring the costs down.
  2. eMagin is a 23-year-old “startup.” They have a lot of base technology and are a “pure play” stock-wise. But they have failed to break through and are in danger of being outrun by big companies.
  3. MicroOLED – small French startup – not sure where they really stand.
  4. Samsung – nothing announced, but they have all the technology necessary to make them. Update: Ron Mertens of OLED-Info.com informed me that it was rumored that the second generation of Google Glass was considering a Samsung OLED microdisplay and that Samsung had presented a paper going back to 2011.
  5.  LG – nothing announced but they have all the technology necessary to make them.

I included Samsung and LG above not because I have seen or heard of them working on OLED microdisplays, but because I would be amazed if they didn’t at least have a significant R&D effort given their sets of expertise and their extreme interest in this market.

For More Information:

For more complete information on the OLED microdisplay market, you might want to go to OLED-Info, which has been following both large flat panel and small OLED microdisplay devices for many years. They also have two reports available, OLED Microdisplays Market Report and OLED for VR and AR Market Report.

For those who want to know more about Kopin’s manufacturing plan, Chris Chinnock of Insight Media has an interesting article outlining Kopin’s fabless development strategy.

Magic Leap and Hololens and LCOS

LCOS Used In Hololens and Likely Magic Leap

It is well known that Microsoft’s Hololens uses two Himax-manufactured Field Sequential Color (FSC) LCOS microdisplays. Additionally, there are reports, particularly from KGI Securities analyst Ming-Chi Kuo as reported in Business Insider, that Magic Leap (ML) is also using Himax’s LCOS. Further supporting this is that, of all the ML patent applications, US 2016/0327789, which uses LCOS, best fits the available evidence.

I now have some additional evidence that ML is likely using LCOS. After discussing this new ML evidence, I will relay some Microsoft Hololens 2nd generation (or lack thereof) rumors.

Patent Application Tends To Confirm Magic Leap’s Use of Field Sequential LCOS

I came across a bit of a strange patent application that seems to confirm that ML is using field sequential color (FSC) LCOS. The patent application US 2016/0241827 was filed in January 2015, just 3 months before the lead inventor, Michael Kass, then an ML Fellow, left ML. From what I can tell from their public LinkedIn profiles, Mr. Kass and his fellow inventor both worked on software at ML and not hardware, and neither one has any background in hardware.

The patent application is directed towards reducing “color breakup” for color sequential displays and shows an LCOS implementation. The concept they are proposing is at least 15 years old that I know of; it is well known to people in the projection industry from DLP projectors that had “white segment” color wheels and later did the same thing with LED illumination. Additionally, the way they arranged the LEDs in their diagram above, with 3 separate LEDs going to a dichroic mirror, is how it is done for front projectors and not for a near eye display. The question I had on finding this application was:

Why are two ML people who work on software, have only a rudimentary knowledge of display design, and are located in California filing for a patent on an “improvement” for field sequential color?

The only logical answer I could come up with is that they had looked through ML prototypes that used an LCOS system and were bothered by seeing color breakup. I’m guessing they were told it was LCOS but did not know how it was designed, so they grabbed an LCOS design off the internet (only one for a front projector and not for near eye). They didn’t know the history of FSC projectors using white segments, so they re-invented the 15+ year old concept of adding a “white” period where all the RGB colors are on in order to help reduce color breakup.

For bonus speculation: why did the lead inventor, Mr. Kass, who only a month before filing this patent had been promoted to “Distinguished Fellow,” then leave only 3 months after filing the provisional patent? Perhaps, just perhaps, he did not like the color breakup he was seeing (just a guess).

It should be noted that it has been nearly two years since the provisional application was filed which would give ML time to change. But I doubt they could totally change directions as they would be too far down the road with the rest of the design. At least if, as they claim, they will have a product out “soon.” They might change the type of LCOS device either in resolution or manufacturer but it would seem unlikely that they could totally change the technology.

Hololens Rumored 2nd Generation Delayed?

There was a lot of talk that Microsoft, after announcing that Hololens would be focusing first on business applications, would be coming out with a 2nd generation Hololens next year. This sometimes gets conflated with the 2nd generation being a “consumer version.” But apparently the costs to make Hololens are high, particularly with the custom waveguides having very low yield.

The recent scuttlebutt is that the expected 2nd generation is on hold while Microsoft management figures out what they want to do with Hololens. For those who were hoping for a consumer edition, the idea of focusing on “enterprise/business” sounds scarily similar to what Google did with Google Glass when it realized it did not have a high volume market. While Microsoft is continuing to expand sales of Hololens for businesses worldwide, one gets the feeling that Microsoft is trying to figure out if Hololens will anytime soon have a market of a size worthy of a company Microsoft’s size.

Update Dec 20, 2017 – I posed a question on the Reddit Hololens subgroup about finding a public source for issues with Himax and Hololens, and they pointed to “A component maker suffers as Microsoft develops next-gen HoloLens” by Kevin Parrish on Dec. 14, 2017 in Digital Trends. In the article, Himax CEO Jordan Wu is cited as attributing “near-term headwinds” to a “major AR customer’s shift in focus to the development of future-generation devices.” This would seem to imply that the “AR customer,” of which Hololens is the most notable/likely, is switching from using a 720p device to Himax’s new 1080p device on a Gen. 2 Hololens.

Mixed Bag for Himax’s LCOS

So there is mounting evidence that ML is using LCOS, and the most likely manufacturer is Himax. I have had some people write me that ML switched from Himax, but I don’t know how credible their sources may be, so I would categorize this as rumor right now.

Either way, Himax can’t be shipping a lot of LCOS to ML right now. The lack of volume coming out of Hololens also means that there are not big new orders from Microsoft for Himax panels.

Meeting At CES 2017 January 5th to 8th

I have had a number of people ask if I was going to CES 2017 in Las Vegas and whether we could meet. I’m going to be at the show from January 5th through the 8th.

If you would like to meet, please email me at info@kguttag.com.

If possible, please include your contact information, the reason you or your company wants to meet, the best dates and times, and a place where you would like to meet if you have a preference.

 

Magic Leap: Focus Planes (Too) Are a Dead End

What Magic Leap Appears to be Doing

For this article I would like to dive into the most likely display and optics Magic Leap (ML) is developing for their Product Equivalent (PEQ). The PEQ was discussed in “The Information” story “The Reality Behind Magic Leap.” As I explained in my November 20, 2016 article Separating Magic and Reality (before the Dec 8th “The Information” story), the ML patent application US 2016/0327789 best fits the available evidence, and if anything the “The Information” article reinforces that conclusion. Recapping the evidence:

  1. ML uses a “spatial light modulator” as stated in “The Information”
  2. Most likely an LCOS spatial light modulator; the Oct. 27th, 2016 Business Insider article citing “KGI Securities analyst Ming-Chi Kuo, who has a reputation for being tapped into the Asian consumer electronics supply chain” claims ML is using a Himax LCOS device.
  3. Focus planes to support vergence/accommodation per many ML presentations and their patent applications
  4. Uses waveguides which fit the description and pictures of what ML calls a “Photonics Chip”
  5. Does not have a separate focus mechanism as reported in the “The Information” article.
  6. Could fit the form factor as suggested in “The Information”
  7. It’s the only patent that shows serious optical design that also uses what could be considered a “Photonics Chip.”

I can’t say with certainty that the optical path is that of application 2016/0327789. It is just the only optical path in the ML patent applications that fits all the available evidence and has a chance of working.

Field of View (FOV)

Rony Abovitz, ML CEO, is claiming a larger FOV. I would think ML would not want to have lower angular resolution than Hololens. Keeping the same 1.7 arc-minutes per pixel angular resolution as Hololens and ODG’s Horizon, a 1080p (1920-pixel-wide) display would give a horizontal FOV of about 54.4 degrees.
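
As a quick back-of-the-envelope check of that number (using the 1920-pixel-wide display and ~1.7 arc-minutes per pixel assumed above):

```python
# Back-of-the-envelope horizontal FOV check (assumes a 1920-pixel-wide
# display and ~1.7 arc-minutes per pixel, the figures discussed above).
pixels_wide = 1920            # horizontal pixels of a 1080p microdisplay
arcmin_per_pixel = 1.7        # Hololens-like angular resolution

horizontal_fov_deg = pixels_wide * arcmin_per_pixel / 60.0  # 60 arc-minutes per degree
print(f"Horizontal FOV ~ {horizontal_fov_deg:.1f} degrees")  # ~ 54.4 degrees
```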

Note, there are rumors that Hololens is going to be moving to a 1080p device next year, so ML may still not have an advantage by the time they actually have a product. There is a chance that ML will just use a 720p device, at least at first, and accept a lower angular resolution of say 2.5 arc-minutes per pixel or greater to get into the 54+ degree FOV range. Supporting a larger FOV is no small trick with waveguides and is one thing that ML might have over Hololens; but then again Hololens is not standing still.

Sequential Focus Planes Domino Effect

The support of vergence/accommodation appears to be a paramount issue with ML. Light fields are woefully impractical for any reasonable resolution, so ML in their patent application and some of their demo videos show the concept of “focus planes.” But for every focus plane an image has to be generated and displayed.

Having more than one display per eye, including the optics to combine the multiple displays, would be both very costly and physically large. So the only rational way ML could support focus planes is to use a single display device and sequentially display the focus planes. But as I will outline below, using sequential focus planes to address vergence/accommodation comes at the cost of hurting other visual comfort issues.

Expect Field Sequential Color Breakup If Magic Leap Supports “Focus Planes”

Both high resolution LCOS and DLP displays use “field sequential color,” where they have a single set of mirrors that display a single color plane at a time. To get the colors to fuse together in the eye, they repeat the same colors multiple times per frame of an image. Where I have serious problems with ML using Himax LCOS is that instead of repeating colors to reduce color breakup, they will instead be showing different images to support sequential focus planes. Even if they have just two focus planes as suggested in “The Information,” it means the rate of repeating colors to help them fuse in the eye is cut in half.

With the Hololens, which also uses a field sequential color LCOS, one can already detect color breakup. Cutting the color update rate by 2X or more will make this problem significantly worse.
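
To make the arithmetic concrete, below is a minimal sketch of how sequential focus planes dilute the color repeat rate. The frame rate and repeats-per-frame numbers are only illustrative assumptions, not measured Hololens or ML figures.

```python
# Sketch of how sequential focus planes dilute the color repeat rate
# (frame rate and repeats-per-frame are illustrative assumptions).
frame_rate_hz = 60             # frames delivered to the eye per second
color_repeats_per_frame = 3    # times each of R, G, B is flashed per frame (assumed)

def color_repeat_rate_hz(focus_planes):
    # The same display time is split across N sequential focus planes,
    # so each color of each plane repeats 1/N as often per second.
    return frame_rate_hz * color_repeats_per_frame / focus_planes

for planes in (1, 2, 4):
    print(f"{planes} focus plane(s): each color repeats "
          f"{color_repeat_rate_hz(planes):.0f} times per second per plane")
```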

Another interesting factor is that field sequential color breakup tends to be more noticeable in people’s peripheral vision, which is more motion/change sensitive. This means the problem will tend to get worse as the FOV increases.

I have worked many years with field sequential display devices, specifically LCOS. Based on this experience I expect that the human vision system  will do a poor job of “fusing” the colors at such slow color field update rates and I would expect people will see a lot of field sequential color breakup particularly when objects move.

In short, I expect a lot of color breakup to be noticeable if ML supports focus planes with a field sequential color device (LCOS or DLP).

Focus Planes Hurt Latency/Lag and Will Cause Double Images

An important factor in human comfort is the latency/lag between any head movement and the display reacting; too much lag causes user discomfort. A web search will turn up thousands of references about this problem.

To support focus planes, ML must use a display fast enough to support at least 120 frames per second. But to support just two focus planes, it will take them 1/60th of a second to sequentially display both focus planes. Thus they have increased the total latency/lag from the time they sense movement until the display is updated by ~8.333 milliseconds, and this is on top of any other processing latency. So really, focus planes trade off one discomfort issue, vergence/accommodation, for another, latency/lag.
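
Here is a minimal sketch of that added-latency arithmetic, using the 120 frames-per-second device rate assumed in the paragraph above:

```python
# Added display latency from sequential focus planes (sketch of the
# arithmetic above; the 120 Hz device rate is the figure assumed in the text).
device_rate_hz = 120.0        # how fast the display can show one full image
focus_planes = 2              # sequential focus planes per displayed frame

time_per_plane_ms = 1000.0 / device_rate_hz                # ~8.333 ms
time_for_all_planes_ms = time_per_plane_ms * focus_planes  # ~16.667 ms (1/60th second)
added_latency_ms = time_for_all_planes_ms - time_per_plane_ms

print(f"Extra latency vs. a single-plane display: ~{added_latency_ms:.3f} ms")
```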

Another issue which concerns me is how well sequential focus planes are going to fuse in the eye. With fast movement, the eye/brain visual system takes its own asynchronous “snapshots” and tries to assemble the information and line it up. But as with field sequential color, it can put together time sequential information wrong, particularly if some objects in the image move and others don’t. The result will be double images; getting double images with sequential focus planes would be unavoidable with fast movement either in the virtual world or when a person moves their eyes. These problems will be compounded by field sequential color breakup.

Focus Planes Are a Dead End – Might Magic Leap Have Given Up On Them?

I don’t know all the behind-the-scenes issues with what ML told investors, and maybe ML has been hemmed in by their own words and demos to investors. But as an engineer with most of my 37 years in the industry working with image generation and display, it looks to me like focus planes cause bigger problems than they solve.

What gets me is that they should have figured out that focus planes were hopeless in the first few months (much sooner if someone who knew what they were doing was there). Maybe they were ego driven and/or they built too much around the impression they made with their “Beast” demo system (a big system using DLPs). Then maybe they hand-waved away the problems sequential focus planes cause, thinking they could fix them somehow, or hoped that people won’t notice the problems. It would certainly not be the first time that a company committed to a direction and then felt that it had gone too far to change course. Then there is always the hope that “dumb consumers” won’t see the problems (in this case I think they will).

It is clear to me that, like Fiber Scan Displays (FSD), focus planes are a dead end, period, full-stop. Vergence/accommodation is a real issue, but only for objects that get reasonably close to the user. I think a much more rational way to address the issue is to use sensors to track the eyes/pupils and adjust the image accordingly; as the eye’s focus changes relatively slowly, it should be possible to keep up. In short, move the problem from the physical display and optics domain (which will remain costly and problematical) to the sensor and processing domain (which will more rapidly come down in cost).

If I’m at Hololens, ODG, or any other company working on AR/MR systems and accept that vergence/accommodation is a problem that needs to be solved, I’m going to solve it with eye/pupil sensing and processing, not by screwing up everything else by doing it with optics and displays. ML’s competitors have had enough warning to already be well into developing solutions, if they weren’t prior to ML making such a big deal about the already well known issue.

The question I’m left with is if and when Magic Leap figured this out, and whether they were too committed to focus planes, by ego or by what they told investors, to change at that point. I have not found evidence so far in their patent applications that they tried to change course, but these patent applications will be about 18 months or more behind what they decided to do. But if they don’t use focus planes, they would have to admit that they are much closer to Hololens and other competitors than they would like the market to think.

Evergaze: Helping People See the Real World

Real World AR

Today I would like to forget about all the hyped and glamorous near eye products for having fun in a virtual world. Instead, I’m going to talk about a near eye device aimed at helping people to see and live in the real world. The product is called the “seeBoost®” and it is made by the startup Evergaze in Richardson, Texas. I happen to know the founder and CEO Pat Antaki from working together on a near eye display back in 1998, long before it was fashionable. I’ve watched Pat bootstrap this company from its earliest days and asked him if I could be the first to write about seeBoost on my blog.

The Problem

Imagine you get Age-Related Macular Degeneration (AMD) or Diabetic Retinopathy. All the high-resolution vision and best color vision of the macula (where the high-resolution fovea resides) is gone, and you see something like the picture on the right. All you can use is your peripheral vision, which is low in resolution, contrast, and color sensitivity. There are over 2 million people in the U.S. that can still see but have worse than 20/60 vision in their better eye.

What would you pay to be able to read a book again and do other normal activities that require the ability to have “functional vision?” So not only is Evergaze aiming to help a large number of people, they are going after a sizable and growing market.

seeBoost Overview

seeBoost has 3 key parts: a lightweight near-to-eye display, a camera with high speed autofocus, and proprietary processing in an ASIC that remaps what the camera sees onto the functioning part of the user’s vision. They put the proprietary algorithms in hardware so they could have the image remapping and contrast enhancement performed with extremely low latency, so that there is no perceptible delay when a person moves their head. As anyone that has used VR headsets will know, this is important for wearing the device for long periods of time to avoid headaches and nausea.

A perhaps subtle but important point is that the camera and display are perfectly coaxial, so there is no parallax error as you move the object closer to your eye. The importance of centering the camera with the user’s eye for long term comfort was a major point made by AR headset user and advocate Steve Mann in his March 2013 IEEE Spectrum article, “What I’ve learned from 35 years of wearing computerized eyewear.” Quoting from the article, “The slight misalignment seemed unimportant at the time, but it produced some strange and unpleasant result.” And in commenting on Google Glass, Mr. Mann said, “The current prototypes of Google Glass position the camera well to the right side of the wearer’s right eye. Were that system to overlay live video imagery from the camera on top of the user’s view, the very same problems would surely crop up.”

Unlike traditional magnifying optics like a magnifying glass, in addition to being able to remap the camera image to the parts of the eye that can see, the depth of field and magnification amount are decoupled: you can get any magnification (from 1x to 8x) at any distance (2 inches to infinity). It also has digital image color reversal (black-to-white reversal, useful for reading pages with a lot of white). The device is very lightweight at 0.9 oz. including the cable. The battery pack supports 6 hours of continual use on a single charge.

Use Case

Imagine this use scenario: playing bridge with your friends. To look at the cards in your hand you may need 2x magnification at 12 inches’ distance. The autofocus allows you to merely move the cards as close to your face as you like, the way a person would naturally make something look larger. Having the camera coaxial with the display makes this all seem natural versus, say, having a camera above the eye. Looking at the table to see what cards are placed there, maybe you need 6x magnification at 2 feet. To see other people’s eyes and facial expressions around the table, you need 1-2x at 3-4 feet.

seeBoost is designed to help people see so they can better take part in the simple joys of normal life. The lightweight design mounts on top of a user’s prescription glasses and can help while walking, reading signs and literature, shopping, watching television, recognizing faces, cooking, and even playing sports like golf.

Another major design consideration was the narrow design, so that it does not cover up the lateral and downward peripheral vision of the eye. This turns out to be important for people who don’t want to further lose peripheral vision. In this application, a monocular (single eye) design gives better situational awareness and peripheral vision.

seeBoost is a vision enhancement device rather than essentially a computer (or cell phone) monitor that you must plug into something. The user simply looks at the scene (through seeBoost), and seeBoost improves their vision for whatever they’re looking at, be it an electronic display or their grandchildren’s faces.

Assembled in the USA and Starting to Ship

This is not just some Kickstarter concept either. Evergaze has been testing prototypes with vision impaired patients for over a year and has already finished a number of studies. What’s more, they recently started shipping product. To the left is an image that was taken through the seeBoost camera via its display and optics.

What’s more, this product is manufactured in the USA on a production line Evergaze set up in Richardson, TX. If you want to find out more about the company you can go to their YouTube Channel, or if you know someone that needs a seeBoost, you can contact Pat Antaki via email: pantaki@evergaze.com

Magic Leap CSI: Display Device Fingerprints

Introduction

I have gotten a lot of questions as to how I could be so sure that Magic Leap (ML) was using Micro-OLEDs in all their “Through Magic Leap Technology” videos and not, say, a scanning fiber display as so many had thought. I was in a hurry to get people to the conclusion. For this post, I am going to step back and show how I knew. When video and still pictures of display devices are taken with a camera, every display type has its own identifiable “fingerprint,” but you have to know where to look.

Sometimes in video it might only be a few frames that give the clue as to the device being used. In this article I am going to use cropped images from videos for most of the technologies to show their distinctive artifacts as captured by the camera, but for laser scanning the distinctive artifacts are best seen in the whole image, so I am going to use thumbnail-size images.

This article should not be new information to this blog’s readers, but rather it details how I knew what technology was in the ML “through the technology” videos. For the plot twist at the end, you have to know to parse ML’s words: “the technology” in the videos is not what they are planning on using in their actual product. The ML “through the technology” videos are using totally different technology than what they plan to use in the product.

Most Small Cameras Today Use a Rolling Shutter

First it is important to understand that cameras capture images much differently than the human eye. Most small cameras today, particularly those in cell phones, have a “rolling shutter.” Photography.net has a good article describing a rolling shutter and some of its effects. A rolling shutter captures a horizontal band of pixels (the width of this band varies from camera to camera) as it scans down vertically. With “real world analog” movement this causes moving objects to be distorted. This happens very famously with airplane propellers (above right). With the various display technologies they will reveal different effects.

OLEDs (And color filter LCDs)

When an object moves on a display device, the same object in the digital image will jump in its location between the two frames displayed. If the rolling shutter is open when the image is changing, the camera will capture a double image. This is shown classically with the Micro-OLED device from an ODG Horizon prototype. The icons and text in the image were moving vertically, and the camera captured content from two frames. Larger flat panel OLEDs work pretty much the same way, as can be seen in this cropped image from a Meta 2 headset at right.

From a video image artifact point of view, it is hard to distinguish with a rolling shutter camera between the artifacts of OLEDs and color filter (the most common) LCDs. Unlike old CRTs and scanning systems, OLEDs and LCDs don’t have any “blanking” where there is no image. They simply quickly change, row by row, the RGB (and sometimes white) sub-pixels of the image from one frame to the next (this video taken with a high speed camera demonstrates how it works).
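
Below is a toy model of why a rolling-shutter camera captures this kind of double image from an OLED or LCD panel; the readout time, band count, and frame-change moment are made up purely for illustration.

```python
# Toy model of a rolling-shutter camera photographing an OLED/LCD panel
# while the displayed frame changes mid-readout (all numbers illustrative).
camera_readout_ms = 20.0       # time for the shutter to "roll" from top to bottom
sensor_bands = 10              # coarse horizontal bands of the sensor
panel_frame_change_ms = 8.0    # moment the panel swaps frame N -> frame N+1

for band in range(sensor_bands):
    band_capture_time = camera_readout_ms * band / sensor_bands
    frame_seen = "frame N" if band_capture_time < panel_frame_change_ms else "frame N+1"
    print(f"sensor band {band}: captures {frame_seen}")

# The single photo mixes two frames, so a moving object appears twice:
# the "double image" artifact described above.
```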

Color Field Sequential DLP and LCOS

DLP and LCOS devices used in near eye displays use what is known as “field sequential color” (FSC). They have one set of “mirrors” and in rapid sequence display only the red sub-image while flashing a red light source (LED or laser), and then repeat this for green and blue. Generally they sequence these very rapidly and usually repeat the red, green, and blue sub-images multiple times so the eye will fuse the colors together even if there is motion. If the colors are not sequenced fast enough (and for many other reasons that would take too long to explain), a person’s eye will not fuse the image and they will see fringing of colors in what is known as “field sequential color breakup,” also known pejoratively as “the rainbow effect.” Due to the way DLP and LCOS work, LCOS does not have to sequence quite as rapidly to get the images to fuse in the human eye, which is a good thing because it can’t sequence as fast as DLP.

In the case of field sequential color, when there is motion the camera can capture the various sub-images individually, as seen above-left from the Hololens, which uses FSC LCOS. As seen, it looks sort of like a print where the various colors are shifted. If you study the image you can even tell the color sequence.

Vuzix uses FSC DLP and has similar artifacts, but they are harder to spot. Generally DLPs sequence their colors faster than LCOS (by about 2x), so it can be significantly harder to capture them (that is a clue as to whether it is DLP or LCOS). On the right, I have captured two icons when still and when they are moving, and you can see how the colors separate. You will notice that you don’t see all the colors because the DLP is sequencing more rapidly than the Hololens LCOS.

DLP and LCOS also have “blanking” between colors, where the LEDs (and maybe lasers in the future) are turned off while the color sub-images are changing. The blanking is extremely fast and will only be seen using high speed cameras and/or setting a very fast shutter time on a DSLR.

DLP and LCOS for Use with ML “Focus Planes”

If you have a high speed camera or other sensing equipment you can tell even more about the differences between the ways in which DLP and LCOS generate field sequential color. But a very important aspect for Magic Leap’s time sequential focus planes is that DLP can sequence fields much faster than LCOS and thus support more focus planes.

I will be getting more into this in a future article, but to do focus planes with DLP or LCOS, Magic Leap will have to trade repeating the same single color sub-images for different images corresponding to different focus planes. The obvious problem, for those that understand FSC, is that the color field rates will become so low that color breakup (the rainbow effect) would seem inevitable.
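
To illustrate the trade-off in rough numbers (the total field rate below is an assumed round figure, not the spec of any particular DLP or LCOS device):

```python
# Sketch of the focus-plane vs. color-repeat trade-off for a field
# sequential color device (the total field rate is an assumed figure).
color_fields_per_second = 1440   # total single-color fields the device can show per second (assumed)
frames_per_second = 60           # frames delivered to the viewer

def color_repeats_per_frame(focus_planes):
    # Fields available per frame get split across R, G, B and across the focus planes.
    fields_per_frame = color_fields_per_second / frames_per_second
    return fields_per_frame / (3 * focus_planes)

for planes in (1, 2, 3):
    print(f"{planes} focus plane(s): each color is shown "
          f"{color_repeats_per_frame(planes):.1f} times per frame")
```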

Laser Beam Scanning

Laser scanning systems are a bit like old CRTs: they scan from top to bottom and then have a blanking time while the scanning mirror retraces quickly to the top corner. The top image on the left was taken with a DSLR at a 1/60th of a second shutter speed, which reveals the blanking roll bar (called a roll bar because it will be in a different place if the camera and video source are not running at exactly the same speed).

The next two images were taken with a rolling shutter camera of the exact same projector. The middle image shows a dark wide roll bar (it moves) and the bottom image shows a thin white roll-bar. These variations from the same projector and camera are due to the frame rates generated by the image and/or the camera’s shutter rate.

Fiber Scanning Display (FSD) Expected Artifacts

FSD displays/projectors are so rare that nobody has published a video of them. Their scan rates are generally low and they have “zero persistence” (similar to laser scanning), and they would look horrible in a video, which I suspect is why no one has published one.

If they were videoed, I would expect a blanking effect similar to that of laser beam scanning but circular. Rather than rolling vertically, it would “roll” from the center to the outside or vice versa. I have put a couple of very crudely simulated whole-frame images at left.

So What Did the Magic Leap “Through The Technology” Videos Use?

There is an obvious match between the artifacts in all the Magic Leap “Through the Technology” videos and OLEDs (or color filter LCDs, which are much less common in near eye displays). You see the distinctive double image with no color breakup.

Nowhere on any frames can be found field sequential color artifacts. So this rules out FSC DLP and LCOS.

In looking at the whole-frame videos you don’t see any roll-bar effects of any kind. So this totally rules out both laser beam scanning and fiber scanning displays.

We have a winner. The ML through the technology videos could only be done with OLEDs (or color filter LCDs).

But OLEDs Don’t Work With Thin Waveguides!!!

Like most compelling detective mysteries, there is a plot twist. OLEDs, unlike LCOS, DLP, and laser scanning, output wide-spectrum colors, and these don’t work with thin waveguides like the Photonics Chip that Rony Abovitz, ML CEO, likes to show.

This is how it became obvious that the “Through The Magic Leap Technology” videos were NOT using the same “Magic Leap Technology” as Magic Leap is planning to use for their production product. And this agrees with the much publicized ML article from “The Information.”

Appendix – Micro HTPS LCD (Highly Unlikely)

I need to add, just to be complete, that theoretically they could use color filter HTPS LCDs illuminated by either LEDs or lasers to get narrow spectrum and fairly collimated light that might work with the waveguide. They would have similar artifacts to those seen in the ML videos. EPSON has made such a device illuminated by LEDs that was used in their earlier headsets, but even EPSON is moving to Micro-OLEDs for their next generation. I’m not sure the HTPS could support frame rates high enough to support focus planes. I think therefore that using color filter HTPS panels, while theoretically possible, is highly unlikely.