Archive for Laser Projection

Microvision (MVIS) Replaces CEO – A Soothsayer’s Retrospective

Ding-Dong

Microvision's board finally got around to replacing their CEO today (officially, he is going to be "spending more time with his family"). Mind you, they appeared to have been plenty happy with Alexander Tokman after routinely losing over $12 million/year (some years over $40M) for the last decade. Microvision's board was still giving him $792,892/year in compensation in 2016, which was a step down for him after receiving $950,561 in 2015, with a total of $3,754,437 in the 5 years from 2012 to 2016 (and then there will be 2017 and whatever golden parachute they give him). That is pretty good pay for someone who drove the stock from about $48/share when he took over on July 7th, 2005 to about $1.50 today. What's more, executive compensation roughly doubled from 2012 to 2016 in spite of continuing losses.

As I have written before, Microvision for over 24 years appears to have been a company in the business of selling stock rather than product. I guess it does take a certain kind of talent to keep selling stock while the company continually loses money. Laser Beam Scanning has been the ultimate con-technology for the semi-technically literate. On the surface it may look like a good idea, until you really understand it. Do the words attributed to P.T. Barnum come to mind?

Soothsayer

This blog, primarily discussing display technology, started talking about Microvision back in 2011, and Microvision quickly responded with an SEC 8-K filing calling me a "False Soothsayer." This led to my writing a 7-part Soothsayer Series about Microvision. Microvision painted a very misleading (to be generous) picture of the state of the green laser market. I called them out on it and they had the audacity to call me a "False Soothsayer." It was then proven I was telling the truth.

The difference between this blog and tech sites that just repeat company marketing spiels is that I try to analyze technology as an engineer and, where possible, measure things objectively. In the case of Microvision, the more I measured and understood the technology, the worse it looked. Microvision "fibbed" (to put it mildly) about power, resolution, cost, size, eye-safety, and just about everything that could be measured.

I have explained on this blog how fundamentally flawed laser beam scanning is as a display technology. You can search this blog to find out the details (or hire me to help explain it). I tried to point out that even when the green laser cost came down, Laser Beam Scanning (LBS) was still fundamentally flawed in how it works and will NEVER be a good display technology for a large market (there may be a few very small niche uses).

Through the years, I tested and published images and data proving that Microvision was making false claims about resolution and power consumption. But no matter, there is no "marketing police," and Microvision was able to keep selling stock to people who wanted to believe.

Pivoting More Than A Ballerina

In addition to misleading people about the (false) virtues of Laser Beam Scanning, they kept “pivoting” both in terms of market and business model. When Mr. Tokman took over, he pivoted Microvision from Head Mounted Displays (HMDs) to Pico Projectors.

Every time Microvision failed with a product concept, business model, or market, they would announce a new “pivot.” Thus keeping Microvision a 24-year-old “start-up” with a “new” future.

Any rational business person could figure out that building a laser scanning pico projector would lose money, so Microvision funded development and paid companies to make lasers and engines for them. When still nobody would build a projector with a Microvision-subsidized engine, Microvision built and sold the final product, the ShowWX and ShowWX+. This resulted in Microvision losing over $45M in 2011 and $27M in 2012. It was a colossally bad business move, but making money was apparently never the point; Microvision was able to sell more stock based on making a product, and when the losses were found out, the stockholders got an 8 to 1 reverse split.

Microvision pivoted from making ShowWX projectors and selling the engines at a loss to saying they would be just an I.P. company with Sony making engines. But when the Sony deal was not working out, they got back into the engine-making business. All the while, through all these different "business models," they steadily kept losing about $1M/month and sometimes more. But most importantly, with each pivot in business model and market thrust they could sell more stock.

Microvision continues to pivot in the area of markets. First (pre-Tokman) they were focused on head mounted displays, then pico projectors, then when Google Glass was announced, they were back pushing head mounted displays. They claimed to be good for gesture recognition when Microsoft Kinect was a hot product. More recently it has been LIDAR for self-driving cars (funny, there are a lot of LIDAR companies already around that didn't need Microvision). All the while, they keep the plates spinning in pico projectors, HUDs, and HMDs. They have a 24-year record of failing in one market and business strategy after another.

So What Is Microvision Up To Now?

If things were going as well as Microvision wanted you to believe, they wouldn't be letting Mr. Tokman "spend more time with his family." The new CEO, Perry Mulligan, has a background as a VP of Operations for telecom companies and no background in displays other than sitting on Microvision's Board for 10 years.

My best guess is that they are trying to pretty up the company for some type of acquisition or perhaps a new “pivot” with a big money raise. Most likely they will be pushing more into LIDAR as it is newer, less well understood, and a hot topic today.

I could also see them splitting off and selling their patent portfolio to a Non-Practicing Entity (NPE), more commonly known as a "Patent Troll."

Crass Commercial Message

Among other things, I perform Technical Due Diligence in evaluating companies these days. Before your company spends $10M, $50M, $100M, or (in the case of Magic Leap) $500M, you might want to have my experienced eye evaluate the technology.

I also help companies working on new display technologies. I have a very broad perspective, particularly in the areas of microdisplays, HMDs, automotive HUDs, and novel/new display technologies.

You can connect with me on LinkedIn.

 

VAC By Oculus and Microsoft . . . Everywhere and Nowhere

Technically Interesting New Papers At Siggraph 2017

Both Oculus (Facebook) and Microsoft are presenting interesting technical research papers at Siggraph 2017 (July 30th to August 3rd) that deal with Vergence/Accommodation Conflict (VAC). Both have web pages (Oculus link and Microsoft link) with links to relatively easy to follow videos and the papers. But readers should take heed of the words on the Microsoft page (which I think are applicable to both): "Note that this Microsoft Research publication is not necessarily indicative of any Microsoft product roadmap, but relates to basic research around holographic displays." I can't hope to get into all the technical details here, but both papers have a lot of well-explained information with figures, and for those that are interested, you can still learn a lot from them even if you have to skip over some of the heavy duty math. One other interesting thing is that both Oculus and Microsoft used phase-controlled LCOS microdisplays at the heart of their technologies.

Briefly, VAC is the problem with stereoscopic 3-D where the apparent focus of objects does not agree with where they seem to appear with binocular vision. This problem can cause visual discomfort and headaches. This year I have been talking a lot about VAC, thanks first to Magic Leap (ML article) and more recently Avegant (Avegant VAC article) making big deals about it and both raising a lot of money (Magic Leap over $1B) as a result. But lest you think Magic Leap and Avegant are the only ones, there have been dozens of research groups working on VAC over the last decade. Included in that number is Nvidia, with a light field approach presented in a 2013 paper, also at Siggraph (the 2013 Nvidia paper, with links embedded at the bottom of the abstract to more information and a video).

The Oculus paper has a wealth of background/educational information about VAC and figures that help explain the concepts. In many ways it is a great tutorial. They also have a very lengthy set of references that, among other things, confirms how many different groups have worked on VAC, and that is only a partial list. I also recommend papers and videos on VAC by Gordon Wetzstein of Stanford. There is so much activity that I put "Everywhere" in the title.

I particularly liked Oculus's Fig. 2, which is copied at the top of this article (they have several other very good figures as well as their video). It shows the major classes of VAC approaches, from a) do nothing, to b) change focus (perhaps based on eye tracking), to c) multifocal, which is what I think Magic Leap and Avegant are doing, to d) & e) Oculus's "focal surface(s)," to f) light fields (e.g., Nvidia's 2013 paper). But light fields are in a way a shortcut compared to real/true holograms, which is what Microsoft's 2017 paper is addressing (not shown in the table above but discussed in Oculus's paper and video).

I put "real" in front of the word "hologram" because, confusingly, Microsoft, for what appears to be marketing purposes, has chosen to call stereoscopic merged reality objects "holograms," which scientifically they are not. Thanks to Microsoft's marketing clout and others choosing "if you can't beat them, join them" in using the term, we now have the problem of what to call real/true holograms as discussed in Microsoft's 2017 Siggraph paper.

High Level Conceptually:
  • Light Fields are a way to realize many of the effects of holograms, such as VAC and being able to see around objects. But light fields have piece-wise discontinuities. They can only reduce the discontinuities by massively trading off resolution; thus they need massive amounts of processing and native display resolution for a given visual resolution. Most of the processing and display resolution never makes it to the eye: based on where the eye is looking and focused, all but a small part of the generated image information is never seen. The redundancy with light fields tends to grow with a square law (X and Y); see the rough scaling sketch after this list.
  • Focus planes in effect try to cut down the light field square-law redundancy problem by having the image redundancy grow only linearly. They need multiple planes and then rely on your eye to do the blending between planes. Still, the individual planes are "flat," and with a large continuous surface there would be discontinuities at the points where the image has to change planes (imagine a road going off into the distance).
  • Oculus Surfaces are in essence an improvement on focus planes where the surfaces try to conform more to the depth in the image and reduce the discontinuities. One could then argue whether it would be better to have more simple focus planes or fewer focus surfaces.
  • Holograms have at least an "n-cubed" problem as they conceptually capture/display the image in X, Y, and Z. As the resolution increases, the complexity grows extremely fast. Light fields have sometimes been described as "quantized holograms" as they put a finite limit on the computational and image content growth.
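To get a feel for how these approaches scale, here is a rough back-of-the-envelope sketch in Python. The specific numbers (per-eye resolution, number of angular views, number of focus planes, depth samples) are purely my own illustrative assumptions, not figures from the Oculus or Microsoft papers.

```python
# Rough scaling comparison of the image data each approach must generate
# per eye per frame. All parameters below are illustrative assumptions,
# not numbers from the Oculus or Microsoft papers.

def samples_conventional(w, h):
    # A single 2D image: X * Y samples.
    return w * h

def samples_light_field(w, h, views_x, views_y):
    # A light field adds an array of angular views on top of X * Y,
    # most of which the eye never resolves once it picks a focus.
    return w * h * views_x * views_y

def samples_focus_planes(w, h, planes):
    # Focus planes repeat the image once per plane, so the redundancy
    # grows only linearly with the number of planes.
    return w * h * planes

def samples_hologram(w, h, depth):
    # A true hologram conceptually captures X, Y, and Z, so the data
    # grows roughly with the cube of the linear resolution.
    return w * h * depth

w, h = 1920, 1080          # assumed per-eye display resolution
print(f"single image : {samples_conventional(w, h):>15,}")
print(f"light field  : {samples_light_field(w, h, 8, 8):>15,}")   # 8x8 views assumed
print(f"focus planes : {samples_focus_planes(w, h, 6):>15,}")     # 6 planes assumed
print(f"hologram     : {samples_hologram(w, h, 1080):>15,}")      # Z ~ Y assumed
```

The point is only the shape of the growth: linear for focus planes, square-law for the light field views, and roughly cubic for a full hologram.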
Oculus’s Focus Surface Approach

In a nutshell, Oculus is using an eMagin OLED to generate the image and a Jasper Display phase-shift LCOS device to generate a "focus surface." The focus changes continuously/gradually, and not on a per-pixel basis, which is why they call it a "surface." The figure on the right (taken from their video) shows the basic concept of a "focus surface" and how the surface roughly tracks the image depth. The paper (and video) go on to discuss having more than one surface and how the distance approximation "error" would compare with multiple focus planes (such as Magic Leap and Avegant use).

While the hardware diagram above would suggest something that would fit in a headset, it is still at the optical breadboard stage. Even using microdisplays, it is a lot to put on a person’s head. Not to mention the cost of having in effect two displays (the LCOS one controlling the focus surface) plus all the additional optics. Below is a picture of the optical breadboard.

Microsoft (True This Time) Holograms

While Oculus's hardware looks like something that could fit in a headset someday, Microsoft's is much more of a research concept, although they did show a compact AR prototype "glasses" (shown at right) that had a small subset of the capability of the larger optical breadboard.

Microsoft's optical breadboard setup could support either wide FOV or multi-focal (VAC) operation, but not both at the same time (see picture below). Like other real-time hologram approaches (and as used by Oculus in their focal surface approach), Microsoft uses a phase LCOS device. The Microsoft paper goes into some of the interesting things that can be done with holograms, including correcting for aberrations in the optics and/or a person's vision.

In many ways holograms are the ultimate end game in display technology, where comparatively everything else dealing with VAC is a hack/shortcut/simplification to avoid the massive computations and hardware complexities/difficulties of implementing real-time holograms.

Resolution/Image Quality – Not So Much

The image quality in the Oculus Surface paper is by their admission very low both in terms of resolution and contrast. As they freely admit, it is a research prototype and not meant to be a product.

Some of these limitations are the nature of making a one-off experiment, as the article points out, but some of the issues may be more fundamental physics. One thing that concerns me (and is pointed out in the article) in the Oculus design is that they have to pass all three colors through the same LC material, and the LC's behavior varies with wavelength. These problems would become more significant as resolution increases. I will give the Oculus paper props both for its level of information and for its candor about many of the issues; it really is a very well done paper if you are interested in this subject.

It is harder to get at the resolution and image quality aspects of the Microsoft hologram paper, as they show only small images from different configurations. They can sort of move the problems around with holograms; they can tune them, and even the physical configuration, for image quality, pupil size, or depth accommodation, but not all at the same time. Digital/real-time holograms can do some rather amazing things, as the Microsoft paper demonstrates, but they are still inordinately expensive both to compute and display, and the image quality is inferior to more conventional methods. Solving for image quality (resolution/contrast), pupil/eyebox size, and VAC/image depth simultaneously makes the problems/cost tend to take off exponentially.

Don’t Expect to See These In Stores for Decades, If Ever

One has to realize that these are research projects going for some kind of bragging rights in showing technical prowess, which both Oculus and Microsoft do impressively in their own ways. Note that the Nvidia light field paper was presented at Siggraph back in 2013, and supporting decent resolution with light fields is still a very far off dream. If their companies thought these concepts were even remotely practical and only a few years away, they would have kept them deep dark secrets. These are likely seen by their companies as so far out in the future that there is no threat in letting their competition see what they are doing.

The Oculus Surface approach is conceptually better on a "per plane" basis than the "focus planes" VAC approaches, but then you have to ask whether more simple planes are better overall and/or less expensive. At a practical level I think the Oculus Surface would be more expensive and I would expect the image quality to be considerably worse. At best, the Oculus Surface would be a stop-gap improvement.

Real time high resolution holograms that will compete on image quality would seem to be even further out in time. This is why there are so many companies/researchers looking at short cuts to VAC with things like focus planes.

VAC in Context – Nowhere

VAC has been a known issue for a long time with companies and researchers working in head mounted displays. Magic Leap’s $1+B funding and their talk about VAC made it a cause célèbre in AR/VR and appears to have caused a number of projects to come out from behind closed doors (for V.C. funding or just bragging rights).

Yes, VAC is a real issue/problem particularly/only when 3-D stereoscopic objects appear to be closer than about 2 meters (6 feet) away. It causes not only perceptual problems, but can cause headaches and make people sick. Thus you have companies and researchers looking for solutions.

The problem, IMO, is that VAC would be about 20th (to pick a number) on my list of serious problems facing AR/VR. Much higher on the list are basic image quality, ergonomics (weight distribution), power, and computing problems. Every VAC solution comes at some expense in terms of image quality (resolution/contrast/chromatic aberrations/etc.).

Fundamentally, if your eye can pick what it focuses on, then there has to be a lot of redundant information presented to the eye that it will discard (not notice) as it focuses on what it does see. This translates into image information that must be displayed (but not seen), processing computations that are thrown away, and electrical power being consumed for image content that is not used.

I’m Conflicted

So I am conflicted. As a technologist, I find the work in VAC and beyond (Holograms address much more than VAC) fascinating. Both the Oculus and Microsoft articles are interesting and can be largely understood by someone without a PhD in the subject.

But in the end I am much more interested in technology that can reach a sizable market and on that score I don’t understand all the fuss about VAC.  I guess we will have to wait and see if Magic Leap changes the world or is another Segway or worse Theranos; you might be able to tell which way I am leaning based on what I understand.

Today, the image quality of headsets is pretty poor when compared to, say, direct view TVs and monitors, the angular resolution (particularly of VR) is poor, the ergonomics are for the most part abysmal, and if you are going to go wireless, the batteries are both too heavy and have too short a life. Anything that is done to address VAC makes these more basic problems not just a little worse, but much worse.

Microvision Laser Beam Scanning: Everything Old Is New Again

Reintroducing a 5 Year Old Design?

Microvision, the 23-year-old "startup" in Laser Beam Scanning (LBS), has been a fun topic on this blog since 2011. They are a classic example of a company that tries to make big news out of what other companies would not consider newsworthy.

Microvision has been through a lot of "business models" in their 23 years. They have been through selling "engines," building whole products (the ShowWX), and a licensing model with Sony selling engines, and now with their latest announcement, "MicroVision Begins Shipping Samples to Customers of Its Small Form Factor Display Engine," they are back to selling "engines."

The funny thing is this "new" engine doesn't look very much different from the "old" engine they were peddling about 5 years ago. Below I have shown 3 laser scanning engines from 2017, 2012, and 2013 at roughly the same scale, and they all look remarkably similar. The 2012 and 2017 engines are from Microvision and the 2013 engine was inside the 2013 Pioneer aftermarket HUD. The Pioneer HUD appears to use a nearly identical engine that is within 3mm of the length of the "new" engine.


The "new" engine is smaller than the 2014 Sony engine, shown at left, which used 5 lasers (two red, two green, and one blue) to support higher brightness and higher power with lower laser speckle. It appears that the "new" Microvision engine is really at best a slightly modified 2012 model, with maybe some minor changes and newer laser diodes.

What is missing from Microvision's announcement is any measurable/quantifiable performance information, such as the brightness (lumens) and power consumption (Watts). In my past studies of Microvision engines, they have proven to have much worse lumens per Watt compared to other (DLP and LCOS) technologies. I have also found their measurable resolution to be considerably less (about half, both horizontally and vertically) than their claimed resolution.

While Microvision says, "The sleek form factor and thinness of the engine make it an ideal choice for products such as smartphones," one needs to understand that the size of the optical engine with its drive electronics is about equal to the entire contents of a typical smartphone. And the projector generally consumes more power than the rest of the phone, which makes it both a battery size and a heat issue.

Magic Leap – Fiber Scanning Display Follow UP

Some Newer Information On Fiber Scanning

Through some discussions and further searching I found some more information about Fiber Scanning Displays (FSD) that I wanted to share. If anything, this material further supports the contention that Magic Leap (ML) is not going to have a high resolution FSD anytime soon.

Most of the images available are about fiber scanning for use as an endoscope camera and not as a display device. The images are of things like body parts, so they really don't show resolution or the amount of distortion in the image. Furthermore, most of the images are from 2008 or older, which leaves quite a bit of time for improvement. I have found some information that was generated in the 2014 to 2015 time frame that I would like to share.

Ivan Yeoh’s 2015 PhD dissertation


In terms of more recent fiber scanning technology, Ivan Yeoh's name seems to be a common link. Shown at left is a laser projected image and the source test pattern from Ivan Yeoh's 2015 PhD dissertation "Online Self-Calibrating Precision Scanning Fiber Technology with Piezoelectric Self-Sensing" at the University of Washington. It is the best quality image of a test pattern or known image that I have found of an FSD anywhere. The dissertation is about how to use feedback to control the piezoelectric drive of the fiber. While his paper is about endoscope calibration, he nicely included this laser projected image.

The drive resulted in 180 spirals, which would nominally be 360 pixels across at the equator of the image, with a 50Hz frame rate. Based on the resolution chart, however, the effective resolution is about 1/8th of that, or only ~40 pixels, though about half of this "loss" is due to resampling a rectilinear image onto the spiral. You should also note that there is considerably more distortion in the center of the image where the fiber is moving more slowly.

Yeoh also included some good images, at right, showing how he had previously used a calibration setup to manually calibrate the endoscope before use, as it would go out of calibration with various factors including temperature. These are camera images, and based on the test charts they are able to resolve about 130 pixels across, which is pretty close to the Nyquist sampling limit for a 360-samples-across spiral. As expected, the center of the image, where the fiber is moving the slowest, is the most distorted.

While a 360 pixel camera is still very low resolution by today’s standards, it is still 4 to 8 times better than the resolution of the laser projected image. Unfortunately Yeoh was concerned with distortion and does not really address resolution issues in his dissertation. My resolution comments are based on measurements I could make from the images he published and copied above.

Washington Patent Application Filed in 2014

Yeoh is also the lead inventor on the University of Washington patent application US 2016/0324403, filed in 2014 and published in June 2016. At left is Fig. 26 from that application. It is supposed to be of a checkerboard pattern, which you may be able to make out. The figure is described as using a "spiral in and spiral out" process where, rather than having a retrace time, they just reverse the process. This application appears to be related to Yeoh's dissertation work. Yeoh is shown as living in Fort Lauderdale, FL on the application, near Magic Leap headquarters. Yeoh is also listed as an inventor on the Magic Leap application US 2016/0328884 "VIRTUAL/AUGMENTED REALITY SYSTEM HAVING DYNAMIC REGION RESOLUTION" that I discussed in my last article. It would appear that Yeoh is or has been working for Magic Leap.

2008 YouTube Video

(Image: ideal versus actual spiral scan)

Additionally, I would like to include some images from a 2008 YouTube video that kmanmx from the Reddit Magic Leap subreddit alerted me to. While this is old, it has a nice picture of the fiber scanning process, both as a whole and with a close-up image near the start of the spiral process.

For reference, on the closeup image I have added the size of a "pixel" for a 250 spiral / 500 pixel image (red square) and what a 1080p pixel (green square) would be if you cropped the circle to a 16:9 aspect ratio. As you hopefully can see, the spacing and jitter variations/errors in the scan process are several 1080p pixels in size. While this information is from 2008, the more recent evidence above does not show a tremendous improvement in resolution.

Other Issues

So far I have mostly concentrated on the issue of resolution, but there are other serious issues that have to be overcome. What is interesting in the Magic Leap and University of Washington patent literature is the lack of patent activity to address the other issues associated with generating a fiber scanned image. If Magic Leap were serious and had solved these issues with FSD, one would expect to see patent activity in making FSD work at high resolution.

One major issue that may not be apparent to the casual observer is controlling/driving the lasers over an extremely large dynamic range. In addition to supporting the typical 256 levels (8 bits) per color and overall brightness adjustment based on the ambient light, the speed of the scan varies by a large amount and they must compensate for this or end up with a very bright center where the scan is moving more slowly. When you combine it all together, they would seem to need to control the lasers over a greater than 2000:1 dynamic range from a dim pixel at the center to the brightest pixel at the periphery.
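To see roughly where a number like 2000:1 comes from, here is a minimal sketch. It assumes the spiral is driven at a roughly constant rotation rate so the tip speed scales with radius; the specific speed ratio is my own conservative assumption for illustration, not a Magic Leap or University of Washington figure.

```python
# Back-of-the-envelope estimate of the laser dynamic range a spiral scan needs.
# All specific values here are my own illustrative assumptions.

gray_levels = 256        # 8 bits per color
# For a spiral driven at a roughly constant rotation rate, the tip speed
# scales with radius, so the dwell time per pixel shrinks toward the edge.
# Assume (conservatively) a 10:1 tip-speed ratio between the innermost
# pixels actually drawn and the outer edge; near the very center the
# speed approaches zero, so the real ratio is much larger.
speed_ratio = 10

# To keep equal-sized pixels equally bright, the laser power must scale with
# tip speed, on top of the gray-scale modulation.
dynamic_range = gray_levels * speed_ratio
print(f"required laser dynamic range > {dynamic_range}:1")   # > 2560:1

# Any overall brightness adjustment for ambient light multiplies this further.
```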

Conclusion

Looking at all the evidence, there is just nothing there to convince me that Magic Leap is anywhere close to having perfected an FSD to the point that it could be competitive with a conventional display device like LCOS, DLP, or Micro-OLED, much less reach the 50 megapixel resolutions they talk about. Overall, there are reasons to doubt that an electromechanical scanning process is going to compete in the long run with an all-electronic method.

It very well could be that Magic Leap had hoped that FSD would work and/or it was just a good way to convince investors that they had a technology that would lead to super high resolution in the future. But there is zero evidence that they have seriously improved on what the University of Washington has done. They may still be pursuing it as an R&D effort, but there is no reason to believe that they will have it in a product anytime soon.

All roads point to ML using either LCOS (per Business Insider of October 2016) or DLP, which based on what I have heard is in some prototypes. This would mean they will likely have either a 720p or 1080p resolution display, the same as others such as Hololens (which will likely have a 1080p version soon).

The whole FSD effort is about trying to break through the physical pixel size barrier of conventional technologies. There are various physics issues (diffraction is becoming serious) and material issues that will likely make it tough to make physical pixels much smaller than 3 microns.

Even if there was a display resolution breakthrough (which I doubt based on the evidence), there are issues as to whether this resolution could make it through the optics. As the resolution improves the optics have to also improve or else they will limit the resolution. This is a factor that particularly concerns me with the waveguide technologies I have seen to date that appear to be at the heart of Magic Leap optics.

Magic Leap – No Fiber Scan Display (FSD)

Sorry, No Fiber Scan Displays

For those that only want my conclusion, I will cut to the chase. Anyone that believes Magic Leap (ML) is going to have a Laser Fiber Scanned Display (FSD) anytime soon (as in the next decade) is going to be sorely disappointed. FSDs are one of those concepts that sound like they would work until you look at them carefully. Developed at the University of Washington in the mid to late 2000s, they were able to generate some very poor quality images in 2009 and, as best I can find, nothing better since.

The fundamental problem with this technology is that wiggling a fiber is very hard to control accurately enough to make a quality display. This problem is particularly true when the scanning fiber has to come to near rest in the center of the image. It is next to impossible (and impossible at a rational cost) to have the wiggling fiber tip, with its finite mass and its own resonant frequency, follow a highly accurate and totally repeatable path.

Magic Leap has patent applications related to FSDs showing two different ways to try to increase the resolution, provided they could ever make a decent low resolution display in the first place. Effectively, they have applications that double down on FSD: one is the "array of FSDs," which I discussed in the Appendix of my last article, that would be insanely expensive and would not work optically in a near eye system; the other doubles down on a single FSD with what ML calls "Dynamic Region Resolution" (DRR), which I will discuss below after covering the FSD basics.

The ML patent applications on the subject of FSD read more like technical fairy tales of what they wished they could do, with a bit of technical detail and a few drawings scattered in to make it sound plausible. But the really tough problems of making it work are never even discussed, much less solutions proposed.

Fiber Scanning Display (FSD) Basics

The concept of the Fiber Scanning Display (FSD) is simple enough: two piezoelectric vibrators connected to one end of an optical fiber cause the fiber tip to follow a spiral path, starting from the center and working its way out. The amplitude of the vibration starts at zero in the center and then gradually increases, causing the fiber to both speed up and follow a spiral path. As the fiber tip accelerates, it moves outward radially. The spacing of each orbit is a function of the increase in speed.


Red, Green, and Blue (RGB) lasers are combined and coupled into the fiber at the stationary end. As the fiber moves, the lasers turn on and off to create "pixels" that come out the spiraling end of the fiber. At the end of a scan, the lasers are turned off and the drive is gradually reduced to bring the fiber tip back to the starting point under control (if they just stopped the vibration, it would wiggle uncontrollably). This retrace period, while faster than the scan, takes a significant amount of time since it is a mechanical process.

An obvious issue is how well they can control a wiggling optical fiber. As the documents point out, the fiber will want to oscillate based on its resonant frequency, which can be stimulated by the piezoelectric vibrators. Still, one would expect that the motion will not be perfectly stable, particularly at the beginning when it is moving slowly and has no momentum. Then there is the issue of how well it will follow exactly the same path from frame to frame when the image is supposed to be still.

One major complication I did not see covered in any of the ML or University of Washington (which originated the concept) documents or applications is what it takes to control the lasers accurately enough. The fiber speeds up from near zero at the center to maximum speed at the end of the scan. At the center of the spiral the tip is moving very slowly (near zero speed). If you turned the lasers on for the same amount of time and at the same brightness as at the periphery, pixels would be many times closer together and brighter at the center than at the periphery. The ML applications even recognize that increasing the resolution of a single electromechanical FSD is impossible for all practical purposes.

Remember that they are electromechanically vibrating one end of the fiber to cause the tip to move in a spiral to cover the area of a circle. There is a limit to how fast they can move the fiber and how well they can control it, and the fact that they want to fill a wide rectangular area means a lot of the circular scan area will be cut off.
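To put a number on how much of the circular scan would be wasted filling a wide rectangular image, here is a quick sketch. The 16:9 aspect ratio and the 250-spiral count are assumptions for illustration, not Magic Leap specifications.

```python
import math

# How much of a circular spiral scan is usable for a 16:9 rectangular image,
# and what the nominal pixel count across the full circle would be.
# The spiral count below is an illustrative assumption.

def usable_fraction(aspect_w=16, aspect_h=9):
    # The largest aspect_w:aspect_h rectangle inscribed in a circle has its
    # diagonal equal to the circle's diameter.
    diag = math.hypot(aspect_w, aspect_h)
    w = 2 * aspect_w / diag       # rectangle width for a unit-radius circle
    h = 2 * aspect_h / diag       # rectangle height
    return (w * h) / math.pi      # rectangle area / circle area (R = 1)

spirals = 250                      # assumed number of spiral orbits per frame
pixels_across_circle = 2 * spirals # nominal "pixels" across the full diameter

frac = usable_fraction()
print(f"usable area of the circular scan: {frac:.0%}")          # ~54%
print(f"nominal pixels across the circle: {pixels_across_circle}")
```

In other words, under these assumptions nearly half of the already-limited spiral scan never contributes to the rectangular image.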

Looking through everything I could find that was published on the FSD, including Schowengerdt (ML co-founder and Chief Scientist) et al.'s SID 2009 paper "1-mm Diameter, Full-color Scanning Fiber Pico Projector" and SID 2010 paper "Near-to-Eye Display using Scanning Fiber Display Engine," only low resolution still images are available and no videos. Below are two images from the SID 2009 paper along with the "Lenna" standard image reproduced in one of them; perhaps sadly, these are the best FSD images I could find anywhere. What's more, there has never been a public demonstration of it producing video, which I believe would show additional temporal and motion problems.

What you can see in both of the actual FSD images is that the center is much brighter than the periphery. From the Lenna FSD image you can see how distorted the image is, particularly in the center (look at Lenna's eye in the center and the brim of the hat, for example). Even the outer parts of the image are pretty distorted. They don't even have decent brightness control of the pixels and didn't even attempt to show color reproduction (which requires extremely precise laser control). Yes, the images are old, but there are a series of extremely hard problems outlined above that are likely not solvable, which is likely why we have not seen any better pictures of an FSD from ANYONE (ML or others) in the last 7 years.

While ML may have improved upon the earlier University of Washington work, there is obviously nothing they are proud enough to publish, much less a video of it working. It is obvious that none of the released ML videos use an FSD.

Maybe ML had improved it enough to show some promise to get investors to believe it was possible (just speculating). But even if they could perfect the basic FSD, by their own admission in the patent applications, the resolution would be too low to support a high resolution near eye display. They would need to come up with a plausible way to further increase the effective resolution to meet the Magic Leap hype of “50 Mega Pixels.”

"Dynamic Region Resolution" (DRR) – 50 Megapixels???

Magic Leap on more than one occasion has talked about the need for 50 megapixels to support the field of view (FOV) they want with the angular resolution of 1 arcminute/pixel that they say is desirable. Suspending disbelief that they could even make a good low resolution FSD, they doubled down with what they call "Dynamic Region Resolution" (DRR).

US 2016/0328884 (‘884) “VIRTUAL/AUGMENTED REALITY SYSTEM HAVING DYNAMIC REGION RESOLUTION” shows the concept. This would appear to answer the question of how ML convinced investors that having a 50 megapixel equivalent display could be plausible (but not possible).

The application shows what could be considered a "foveated display," where various areas of the display vary in pixel density based on where they will be projected onto the human retina. The idea is to have high pixel density where the image will project onto the highest resolution part of the eye, the fovea, and not to "waste" resolution on the parts of the eye that can't resolve it.

The concept is simple enough, as shown in '884's figures 17a and 17b (left): track the pupil to see where the eye is looking (indicated by the red "X" in the figures) and then adjust the scan speed, line density, and sequential pixel density based on where the eye is looking. Fig. 17a shows the pattern for when the eye is looking at the center of the image, where they would accelerate more slowly in the center of the scan. In Fig. 17b they show the scanning density being higher where the eye is looking at some point in the middle of the image. They increase the line density in a ring that covers where the eye is looking.

Starting at the center, the fiber tip is always accelerating. For denser lines they just accelerate less; for less dense areas they accelerate at a higher rate, so this sounds plausible. The devil is in the details of how the fiber tip behaves as its acceleration rate changes.
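Here is a minimal sketch of the DRR idea as I read it from the '884 figures: pack the spiral orbits more densely in a radial band around where the eye tracker says the user is looking, and more sparsely elsewhere. All the parameters (band width, orbit pitches) are hypothetical values for illustration only.

```python
# Minimal sketch of the Dynamic Region Resolution (DRR) idea from the '884
# figures as I read them: pack the spiral lines more densely in a radial band
# around the tracked gaze point and more sparsely elsewhere.
# All parameters are illustrative assumptions.

def radial_pitch(r, gaze_r, band=0.15, fine=0.002, coarse=0.008):
    """Spacing between successive spiral orbits at normalized radius r.

    gaze_r : normalized radius where the eye is looking (from eye tracking)
    band   : half-width of the high-resolution ring
    fine   : orbit spacing inside the ring (dense lines)
    coarse : orbit spacing outside the ring (sparse lines)
    """
    return fine if abs(r - gaze_r) <= band else coarse

def build_scan(gaze_r):
    """Return the list of orbit radii for one frame, center outward."""
    radii, r = [], 0.0
    while r <= 1.0:
        radii.append(r)
        r += radial_pitch(r, gaze_r)   # smaller pitch = slower radial acceleration
    return radii

center_gaze = build_scan(gaze_r=0.0)   # eye looking at the center (like Fig. 17a)
offset_gaze = build_scan(gaze_r=0.5)   # eye looking mid-field (like Fig. 17b)
print(len(center_gaze), "orbits with gaze at the center")
print(len(offset_gaze), "orbits with gaze at mid-field")
```

The sketch only shows the scheduling idea; it says nothing about whether a real fiber tip could actually follow these changing acceleration profiles repeatably, which is exactly the problem raised above.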

Tracking the pupil accurately enough seems very possible with today's technology. The patent application discusses how wide the band of high resolution needs to be to cover a reasonable range of eye movement from frame to frame, which makes it sound plausible. Some of the obvious fallacies with this approach include:

  1. Controlling a wiggling fiber with enough precision to meet the high resolution, and doing it repeatably from scan to scan. They can't even do it at low resolution with constant acceleration.
  2. Stability/tracking of the fiber as it increases and decreases its acceleration.
  3. Controlling the laser brightness accurately in both the highest and lowest resolution regions. This will be particularly tricky as the fiber increases or decreases its acceleration rate.
  4. The rest of the optics, including any lenses and waveguides, must support the highest resolution possible for the user to be able to see it. This means that the other optics need to be extremely high precision (and expensive).
What about Focus Planes?

Beyond the above is the need to support ML's whole focus plane ("poor person's light field") concept. To support focus planes they need 2 to 6 or more images per eye per frame time (say 1/60th of a second). The fiber scanning process is so slow that even producing a single low resolution and highly distorted image in 1/60th of a second is barely possible, much less multiple images per 1/60th of a second to support the focus plane concept. So to support focus planes they would need an FSD per focus plane with all its associated lasers and control circuitry; the size and cost to produce would become astronomical.
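Some rough arithmetic on the scan-rate problem, using the ~50Hz single-spiral frame rate reported in the Yeoh dissertation (discussed in the follow-up article above) as a reference point; the 60Hz target and the plane counts follow the text.

```python
# Rough arithmetic on why focus planes are hard for a fiber scanned display.
# The ~50 Hz single-spiral rate comes from the Yeoh dissertation discussed
# above; the 60 Hz frame rate and plane counts follow the text.

frame_rate = 60                 # target display frames per second
demonstrated_spiral_rate = 50   # full spirals per second demonstrated (approx.)

for planes in (2, 6):
    spirals_needed = frame_rate * planes     # spiral scans per eye per second
    shortfall = spirals_needed / demonstrated_spiral_rate
    print(f"{planes} focus planes -> {spirals_needed} spirals/sec per eye, "
          f"~{shortfall:.0f}x the demonstrated spiral rate")
```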

Conclusion – A Way to Convince the Gullible

The whole FSD appears to me to be a dead end other than to convince the gullible that it is plausible. Even getting a FSD to produce a single low resolution image would take more than one miracle.  The idea of a DRR just doubles down on a concept that cannot produce a decent low resolution image.

The overall impression I get from the ML patent applications is that they were written to impress people (investors?) that didn't look at the details too carefully. I can see how one can get sucked into the whole DRR concept, as the applications give numbers and graphs that try to show it is plausible; but they ignore the huge issues that they have not figured out.

Magic Leap – The Display Technology Used in their Videos

So, what display technology is Magic Leap (ML) using, at least in their posted videos? I believe the videos rule out a number of the possible display devices, and by a process of elimination it leaves only one likely technology. Hint: it is NOT the laser fiber scanning prominently shown in a number of ML patents and articles about ML.

Qualifiers

Magic Leap could be posting deliberately misleading videos that show different technology, and/or deliberately bad videos, to throw off people analyzing them; but I doubt it. It is certainly possible that the display technology shown in the videos is a prototype that uses different technology from what they are going to use in their products. I am hearing that ML has a number of different levels of systems, so what is being shown in the videos may or may not be what they go to production with.

A “Smoking Gun Frame” 

So with all the qualifiers out of the way, below is a frame capture from Magic Leap's "A New Morning" while they are panning the headset and camera. The panning action causes a temporal (time based) frame shutter artifact in the form of partial ghost images, a result of the camera and the display running asynchronously and/or at different frame rates. This one frame, along with other artifacts you don't see when playing the video, tells a lot about the display technology used to generate the image.

If you look at the left red oval, you will see at the green arrow a double/ghost image starting and continuing below that point. This is where the camera caught the display in its update process. Also, if you look at the right side of the image, you will notice that the lower 3 circular icons (in the red oval) have double images where the top one does not (the 2nd from the top has a faint ghost as it is at the top of the field transition). By comparison, there is not a double image of the real world's lamp arm (see center red oval), verifying that the roll bar is from the ML image generation.

Update 2016-11-10: I have uploaded the full frame for those that want to look at it. Click on the thumbnail at left to see the whole 1920×1080 frame capture (I left in the highlighting ovals that I overlaid).

Update 2016-11-14: I found a better "smoking gun" frame, below, at 1:23 in the video. In this frame you can see the transition from one frame to the next. When playing the video, the frame transition slowly moves up from frame to frame, indicating that the display and camera are asynchronous but at almost the same frame rate (or an integer multiple thereof, like 1/60th or 1/30th).


In addition to the "Smoking Gun Frame" above, I have looked at the "A New Morning" video as well as the "ILMxLAB and 'Lost Droids' Mixed Reality Test" and the early "Magic Leap Demo" that are stated to be "Shot directly through Magic Leap technology . . . without use of special effects or compositing." I was looking for any other artifacts that would be indicative of the various possible technologies.

Display Technologies it Can’t Be

Based on the image above and other video evidence, I think it is safe to rule out the following display technologies:

  1. Laser Fiber Scanning Display – either a single or an array of fiber scanning displays as shown in Magic Leap's patents and articles (and which their CTO is famous for working on prior to joining ML). A fiber scan display scans in a spiral (or, if arrayed, an array of spirals) with a "retrace/blanking" time to get back to the starting point. This blanking would show up as diagonal black line(s) and/or flicker in the video (sort of like an old CRT would show up with a horizontal black retrace line). Also, if it were laser fiber scanning, I would expect to see evidence of laser speckle, which is not there. Laser speckle will come through even if the image is out of focus. There is nothing to suggest in this image and its video that there is a scanning process with blanking or that lasers are being used at all. Through my study of Laser Beam Scanning (and I am old enough to have photographed CRTs) there is nothing in the still frame nor videos that is indicative of a scanning process that has a retrace.
  2. Field Sequential DLP or LCOS – There is absolutely no field sequential color rolling, flashing, or flickering in the video or in any still captures I have made. Field sequential displays show only one color at a time, very rapidly. When these rapid color field changes beat against the camera's scanning/shutter process, they show up as color variations and/or flicker, not as a simple double image. This is particularly important because it has been reported that Himax, which makes field sequential LCOS devices, is making projector engines for Magic Leap. So either they are not using Himax or they are changing technology for the actual product. I have seen many years of DLP and LCOS displays, both live and through many types of video and still cameras, and I see nothing that suggests field sequential color is being used.
  3. Laser Beam Scanning with a mirror – As with CRTs and fiber scanning, there has to be a blanking/retrace period between frames that will show up in the videos as a roll bar (dark and/or light) which would roll/move over time. I'm including this just to be complete, as this was never suggested anywhere with respect to ML.
UPDATE Nov 17, 2016

Based on other evidence that has recently come in, even though I have not found video evidence of field sequential color artifacts in any of the Magic Leap videos, I'm more open to thinking that it could be LCOS or (less likely) DLP, and maybe the camera sensor is doing more to average out the color fields than other cameras I have used in the past.

Display Technologies That it Could Be 

Below is a list of possible technologies that could generate video images consistent with what has been shown by Magic Leap to date, including the still frame above:

  1. Micro-OLED (about 10 known companies) – Very small OLEDs on silicon or similar substrates. A list of some of the known makers is given here at OLED-info (Epson has recently joined this list and I would bet that Samsung and others are working on them internally). Micro-OLEDs both A) are small enough to inject an image into a waveguide for a small headset and B) have display characteristics that behave the way the image in the video is behaving.
  2. Transmissive Color Filter HTPS (Epson) – While Epson was making transmissive color filter HTPS devices, their most recent headset has switched to a Micro-OLED panel, suggesting they themselves are moving away. Additionally, while Meta's first generation used Epson's HTPS, they moved to a large OLED (with a very large spherical reflective combiner). This technology is challenged in going to high resolution and small size.
  3. Transmissive Color Filter LCOS (Kopin) – Kopin is the only company making color filter transmissive LCOS, but they have not been very active of late as a component supplier, and they have serious issues with a roadmap to higher resolution and smaller size.
  4. Color Filter Reflective LCOS – I'm putting this in here more for completeness as it is less likely. While in theory it could produce the images, it generally has lower contrast (which would translate into a lack of transparency and a milkiness to the image) and lower color saturation. This would fit with Himax as a supplier, as they have color filter LCOS devices.
  5. Large Panel LCD or OLED – This would suggest a large headset that is doing something similar to the Meta 2. I would tend to rule this out because it would go against everything else Magic Leap shows in their patents and what they have said publicly. It's just that it could have generated the image in the video.
And the “Winner” is I believe . . . Micro-OLED (see update above) 

By a process of elimination, including getting rid of the "possible but unlikely" ones from above, it strongly points to it being a Micro-OLED display device. Let me say, I have no personal reason to favor it being Micro-OLED; one could argue it might be to my advantage, based on my experience, for it to be LCOS if anything.

Before I started any serious analysis, I didn't have an opinion. I started out doubtful that it was a field sequential or scanning (fiber/beam) device due to the lack of any indicative artifacts in the video, but it was the "smoking gun frame" that convinced me: if the camera was catching temporal artifacts, it should have been catching the other artifacts as well.

I'm basing this conclusion on the facts as I see them. Period, full stop. I would be happy to discuss this conclusion (if asked rationally) in the comments section.

Disclosure . . . I Just Bought Some Stock Based on My Conclusion and My Reasoning for Doing So

The last time I played this game of "what's inside" I was the first to identify that a Himax LCOS panel was inside Google Glass, which resulted in their market cap going up almost $100M in a couple of hours. I had zero shares of Himax when this happened; my technical conclusion now, as it was then, is based on what I saw.

Unlike my call on Himax in Google Glass, I have no idea which company makes the device Magic Leap appears to be using, nor whether Magic Leap will change technologies for their production device. I have zero inside information and am basing this entirely on the information I have given above (you have been warned). Not only is the information public, but it is based on videos that are many months old.

I looked at the companies on the OLED Microdisplay List by www.oled-info.com (who have followed OLED for a long time). It turned out all the companies were either part of a very large company or were private companies, except for one, namely eMagin.

I have known of eMagin since 1998, and they have been around since 1993. They essentially mirror Microvision, which does Laser Beam Scanning and was also founded in 1993, a time when you could go public without revenue. eMagin has spent/lost a lot of shareholder money and is worth about 1/100th of their peak in March 2000.

I have NOT done any serious technical due diligence or other stock analysis of eMagin, and I am not a stock expert.

I'm NOT saying that eMagin is in Magic Leap. I'm NOT saying that Micro-OLED is necessarily better than any other technology. All I am saying is that I think someone's Micro-OLED technology is being used in the Magic Leap prototype, and that Magic Leap is such a hotly followed company that it might (or might not) affect the stock price of companies making Micro-OLEDs.

So, unlike the Google Glass and Himax case above, I decided to place a small "stock bet" (for me) on my ability to identify the technology (but not the company) by buying some eMagin stock on the open market at $2.40 this morning, 2016-11-09 (symbol EMAN). I'm just putting my money where my mouth is, so to speak (and NOT, once again, giving stock advice), and playing a hunch. I'm just making a full disclosure in letting you know what I have done.

My Plans for Next Time

I have some other significant conclusions I have drawn from looking at Magic Leap’s video about the waveguide/display technology that I plan to show and discuss next time.

Near Eye AR/VR and HUD Metrics For Resolution, FOV, Brightness, and Eyebox/Pupil

I'm planning on following up on my earlier articles about AR/VR Head Mounted Displays (HMDs), which also relate to Heads Up Displays (HUDs), with some more articles, but first I would like to get some basic technical concepts out of the way. It turns out that the metrics we care about for projectors, while related, don't work for measuring HMDs and HUDs.

I'm going to try to give some "working man's" definitions rather than precise technical definitions. I will be giving a few real world examples and calculations to show you some of the challenges.

Pixels versus Angular Resolution

Pixels are pretty well understood, at least with today's displays that have physical pixels like LCDs, OLEDs, DLP, and LCOS. Scanning displays like CRTs and laser beam scanning generally have additional resolution losses due to imperfections in the scanning process, and as my other articles have pointed out, they have much lower resolution than the physical pixel devices.

When we get to HUDs and HMDs, we really want to consider the angular resolution, typically measured in "arc-minutes," which are 1/60th of a degree; simply put, this is the angular size that a pixel covers from the viewing position. Consumers in general haven't understood arc-minutes, and so many companies have in the past talked in terms of a certain size and resolution display viewed from a given distance; for example, a 60-inch diagonal 1080p display viewed at 6 feet. But since the size of the display, the resolution, and the viewing distance are all variables, it is hard to compare displays or to know what this even means with a near eye device.

A common “standard” for good resolution is 300 pixels per inch viewed at 12-inches (considered reading distance) which translates to about one-arc-minute per pixel.  People with very good vision can actually distinguish about twice this resolution or down to about 1/2 an arc-minute in their central vision, but for most purposes one-arc-minute is a reasonable goal.

One thing nice about the one-arc-minute per pixel goal is that the math is very simple.  Simply multiply the degrees in the FOV horizontally (or vertically) by 60 and you have the number of pixels required to meet the goal.  If you stray much below the goal, then you are into 1970’s era “chunky pixels”.

Field of View (FOV) and Resolution – Why 9,000 by 8,100 pixels per eye are needed for a 150 degree horizontal FOV.

As you probably know, the human eye's retina has variable resolution. The human eye has a roughly elliptical FOV of about 150 to 170 degrees horizontally by 135 to 150 degrees vertically, but the generally good discriminating FOV is only about 40 degrees (+/-20 degrees) wide, with the reasonably sharp vision of the macula covering about 17-20 degrees and the fovea, with the very best resolution, covering only about 3 degrees of the eye's visual field. The eye/brain processing is very complex, however, and the eye moves to aim the higher resolving part of the retina at a subject of interest; one would want something on the order of the one-arc-minute goal in the central part of the display (and since having a variable resolution display would be a very complex matter, it ends up being the goal for the whole display).

Going back to our 60-inch, 1080p display viewed from 6 feet, the pixel size in this example is ~1.16 arc-minutes and the horizontal field of view will be about 37 degrees, just about covering the generally good resolution part of the eye's retina.


Image from Extreme Tech

Now let's consider the latest Oculus Rift VR display. It specs 1200 x 1080 pixels with about a 94-degree horizontal by 93-degree vertical FOV per eye, or a very chunky ~4.7 arc-minutes per pixel; in terms of angular resolution this is roughly like looking at an iPhone 6 or 7 from 5 feet away (or conversely, like your iPhone's pixels being 5X as big). To get to the 1 arc-minute per pixel goal of, say, viewing today's iPhones at reading distance (say you want to virtually simulate your iPhone), they would need a 5,640 by 5,580 display per eye, or a single OLED display with about 12,000 by 7,000 pixels (allowing for a gap between the eyes for the optics)!!! If they wanted to cover the 150 by 135 degree FOV, we are then talking 9,000 by 8,100 pixels per eye, or about a 20,000 by 9,000 flat panel requirement.
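The arithmetic above is simple enough to put in a few lines; here is a small sketch that reproduces the numbers quoted in this section (the same two functions can be applied to the Hololens numbers below).

```python
# Angular resolution arithmetic from the text above: at the one-arc-minute
# goal, required pixels = FOV in degrees * 60.

def arcmin_per_pixel(fov_deg, pixels):
    return fov_deg * 60 / pixels

def pixels_for_goal(fov_deg, arcmin_goal=1.0):
    return fov_deg * 60 / arcmin_goal

# 60-inch 1080p TV viewed from 6 feet (~37 degree horizontal FOV)
print(f"60-inch 1080p at 6 ft : {arcmin_per_pixel(37, 1920):.2f} arc-min/pixel")

# Oculus Rift: 1200 x 1080 pixels over about 94 x 93 degrees per eye
print(f"Rift horizontal       : {arcmin_per_pixel(94, 1200):.1f} arc-min/pixel")
print(f"Rift at 1 arc-min goal: {pixels_for_goal(94):.0f} x {pixels_for_goal(93):.0f} per eye")

# Full 150 x 135 degree FOV at the 1 arc-min goal
print(f"150 x 135 deg goal    : {pixels_for_goal(150):.0f} x {pixels_for_goal(135):.0f} per eye")
```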

Not as apparent but equally important is that the optical quality needed to support these types of resolutions would be, if possible at all, exceedingly expensive. You need extremely high precision optics to bring the image into focus from such a short range. You can forget about the lower cost and weight Fresnel optics (with their "God ray" issues) used in the Oculus Rift.

We are into what I call "silly number territory" that will not be affordable for well beyond 10 years. There are even questions whether any known technology could achieve these resolutions in a size that could fit on a person's head, as there are a number of physical limits to the pixel size.

People in gaming are apparently living with this appallingly low (1970s-era TV game) angular resolution for games and videos (although the God rays can be very annoying depending on the content), but clearly it is not a replacement for a good high resolution display.

Now let's consider Microsoft's Hololens. Its most criticized issue is its smaller (relative to VR headsets such as the Oculus Rift) FOV of about 30 by 17.5 degrees. It has a 1268 by 720 pixel display per eye, which translates into about 1.41 arc-minutes per pixel, which while not horrible is short of the goal above. If they had used the 1920×1080 (full HD) microdisplay devices which are becoming available, then they would have been very near the 1 arc-minute goal at this FOV.

Let’s understand here that it is not as simple as changing out the display; they would also have to upgrade the “light guide” that they use as a combiner to support the higher resolution.   Still, this is all reasonably possible within the next few years.   Microsoft might even choose to grow the FOV to around 40 degrees horizontally and keep the lower angular resolution with a 1080p display.  Most people will not seriously notice a 1.4X difference in angular resolution (but they will at about 2X).
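The same per-pixel angle calculation applied to the Hololens figures above; again this is an illustrative sketch, with the 40 degree case simply restating the trade-off described in the previous paragraph:

```python
# Illustrative sketch of the Hololens angular resolution numbers quoted above.

def arcmin_per_pixel(fov_degrees: float, pixels: int) -> float:
    return fov_degrees * 60.0 / pixels

print(arcmin_per_pixel(30, 1268))   # ~1.42 arc-minutes per pixel (current display)
print(arcmin_per_pixel(30, 1920))   # ~0.94, near the goal with a 1080p microdisplay
print(arcmin_per_pixel(40, 1920))   # ~1.25, if the FOV grew to ~40 degrees with 1080p
```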

Commentary on FOV

I know people want everything, but I really don’t understand the criticism of the FOV of Hololens.  What we see here is a bit of “choose your poison.”  With existing affordable (or even not so affordable) technology, you can’t support a wide field of view and good angular resolution simultaneously; it is simply not realistic.   One can imagine optics that would let you zoom between a wide FOV with lower angular resolution and a smaller FOV with higher angular resolution.  This zooming function could perhaps be controlled by the content or by feedback from the user’s eyes and/or brain activity.

Lumens versus Candelas/Meter2 (cd/m2 or nits)

With an HMD or HUD, what we care about is the light that reaches the eye.   In a typical front projector system, only an extremely small percentage of the light that goes out of the projector reflects off the screen and makes it back to any person’s eye; the vast majority of the light goes to illuminating the room.   With an HMD or HUD, all we care about is the light that makes it into the eye.

Projector lumens, or luminous flux, simply put, are a measure of total light output, and for a projector are usually measured when outputting a solid white image.   To get the light that makes it to the eye, we have to account for the light that hits the screen and is then absorbed, scattered, and reflected back at an angle that will reach the eye.  Only an exceedingly small percentage (a small fraction of 1%) of the projected light will make it into the eye in a typical front projector setup.

With HMDs and HUDs we talk about brightness in terms of candelas per meter squared (cd/m2), also referred to as “nits” (while considered an obsolete term, it is still often used because it is easier to write and say).  Cd/m2, or luminance, is a measure of brightness in a given direction, which tells us how bright the light appears to the eye looking in a particular direction.   For a good quick explanation of lumens and cd/m2, I would recommend a Compuphase article.


Hololens appears to be “luminosity challenged” (lacking in cd/m2), and Microsoft has resorted to putting a sunglasses-like outer shield on it even for indoor use.  The light blocking shield is clearly a crutch to make up for a lack of brightness in the display.   Even with the shield, it can’t compete with bright light outdoors, which is 10 to 50 times brighter than a well-lit indoor room.

This of course is not an issue for VR headsets typified by the Oculus Rift, which totally block the outside light, but it is a serious issue for AR type headsets; people don’t normally wear sunglasses indoors.

Now let’s consider a HUD display.  A common automotive spec for a HUD in sunlight is 15,000 cd/m2, whereas a typical smartphone is between 500 and 600 cd/m2, or about 1/30th the luminance needed.  When you are driving a car down the road, you may be driving toward the sun, so you need a very bright display in order to see it.

The way HUDs work, you have a “combiner” (which may be the car’s windshield) that combines the generated image with the light from the real world.  A combiner typically only reflects about 20% to 30% of the light, which means that the display before the combiner needs to have on the order of 50,000 to 75,000 cd/m2 to support the 15,000 cd/m2 seen in the combiner.  When you consider that your smartphone or computer monitor only has about 400 to 600 cd/m2, it gives you some idea of the optical tricks that must be played to get a display image that is bright enough.
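A small sketch of the combiner arithmetic; the 15,000 cd/m2 target and the 20% to 30% reflectivity are the figures quoted above, and the function name is just my own for illustration:

```python
# How bright the image source behind a HUD combiner has to be, given that the
# combiner only reflects a fraction of the light toward the driver's eye.

def source_luminance_needed(target_cd_m2: float, combiner_reflectivity: float) -> float:
    return target_cd_m2 / combiner_reflectivity

for reflectivity in (0.30, 0.20):
    print(reflectivity, source_luminance_needed(15_000, reflectivity))
# -> 50,000 cd/m2 at 30% reflectivity, 75,000 cd/m2 at 20%
```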

You will see many “smartphone HUDs” that simply have a holder for a smartphone and a combiner (semi-mirror), such as the one pictured at right on Amazon or on crowdfunding sites, but rest assured they will NOT work in bright sunlight and are only marginal in typical daylight conditions. Even with combiners that block more than 50% of the daylight (not really much of a see-through display at this point), they don’t work in daylight.   There is a reason why companies are making purpose-built HUDs.

Cd/m2 is also a big issue for outdoor head mounted display use. Depending on the application, they may need 10,000 cd/m2 or more, and this can become very challenging with some types of displays while staying within the power and cooling budgets.

At the other extreme, at night or in a dark indoor setting, you might want the display to have less than 100 cd/m2 to avoid blinding the user to their surroundings.  Note that the SMPTE spec for movie theaters is only about 50 cd/m2, so even at 100 cd/m2 you would be about 2X the brightness of a movie theater.  If the device must go from bright sunlight to night use, you could be talking over a 1,500 to 1 dynamic range, which turns out to be a non-trivial challenge to do well with today’s LEDs or Lasers.
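For a rough sense of where a dimming range of that order comes from, here is a sketch; the 15,000 cd/m2 daylight figure is from the text, while the ~10 cd/m2 night-time level is my own assumption for a comfortable dark-environment brightness (well under the 100 cd/m2 ceiling mentioned above):

```python
# Illustrative only: the dimming range a display light source might need to span.
daytime_cd_m2 = 15_000     # bright-sunlight HUD requirement quoted in the text
nighttime_cd_m2 = 10       # assumed comfortable night-time level (not from the text)

print(daytime_cd_m2 / nighttime_cd_m2)   # 1500 -> roughly a 1,500:1 dimming range
```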

Eye-Box and Exit Pupil

Since AR HMDs and HUDs generate images for a user’s eye in a particular place, yet need to compete with the ambient light, the optical system is designed to concentrate light in the direction of the eye.  As a consequence, the image will only be visible within a given solid angle, the “eye-box” (with HUDs) or “exit pupil” (with near eye displays).   There is also a trade-off between the size of the eye-box or pupil and ease of use: the bigger the eye-box or pupil, the easier the device will be to use.

With HUD systems there is a fairly simple trade-off between eye-box size, cd/m2, and the lumens that must be generated.   Using some optical tricks can help keep from needing an extremely bright and power hungry light source.   Conceptually a HUD is in some ways like a head mounted display but with very long eye relief. With such large eye relief and the ability of the person to move their whole head, the eye-box for a HUD is significantly larger than the exit pupil of near eye optics.  Because the eye-box is so much larger, a HUD is going to need much more light to work with.

For near eye optical design, getting a large exit pupil is a more complex issue as it comes with trade-offs in cost, brightness, optical complexity, size, weight, and eye-relief (how far the optics are from the viewer’s eye).

With too small a pupil and/or more eye-relief, a near eye device is difficult to use, as any small movement of the device keeps you from seeing the whole image.  Most people’s first encounter with an exit pupil is with binoculars or a telescope and the way the image cuts off unless the optics are centered well on the user’s eye.

Conclusions

While I can see that people are excited about the possibilities of AR and VR technologies, I still have a hard time seeing how the numbers add up, so to speak, for what I would consider to be a mass market product.  I see people being critical of Hololens’ lower FOV without being realistic about how they could go higher without drastically sacrificing angular resolution.

Clearly there can be product niches where the device could serve, but I think people have unrealistic expectations for how fast the field of view can grow for a product like Hololens.   For “real work” I think the lower field of view and higher angular resolution approach (as with Hololens) makes more sense for more applications.   Maybe game players in the VR space are more willing to accept 1970’s type angular resolution, but I wonder for how long.

I don’t see any technology that will be practical in high volume (or even very expensive at low volume) that is going to simultaneously deliver the angular resolution and FOV that some people want. AR displays are often brightness challenged, particularly for outdoor use.  Layered on top of these issues are size, weight, cost, and power consumption, which we will have to save for another day.

 

Laser Beam Scanning Versus Laser-LCOS Resolution Comparison

cen-img_9783-celluon-with-uo

Side By Side Center Patterns (click on image for full size picture)

I apologize for being away for so long.  The pictures above and below were taken over a year ago and I meant to format and publish them back then but some other business and life events got in the way.

The purpose of this article is to compare the resolution of the Celluon PicoPro Laser Beam Scanning (LBS) projector and the UO Smart Beam Laser LCOS projector.   This is not meant to be a full review of either product, although I will make a few comments here and there; rather, it is to compare the resolution of the two products.  Both projectors claim 720P resolution, but only one of them actually has that “native/real” resolution.

This is in a way a continuation of the series I have written about the PicoPro, with optics developed by Sony and the beam scanning mirror and control by Microvision, in particular the articles http://wp.me/p20SKR-gY and http://wp.me/p20SKR-hf.  With this article I am now including some comparison pictures I took of the UO Smart Beam projector (https://www.amazon.com/UO-Smart-Beam-Laser-Projector-KDCUSA/dp/B014QZ4FLO).

As per my prior articles, the Celluon PicoPro has nowhere close to its stated 1920×720 (non-standard) resolution, nor even 1280×720 (720P).  The UO projector, while not perfect, does demonstrate 720P resolution reasonably well, but it does suffer from chroma aberrations (color separation) at the top of the image due to its 100% optical offset (this is to be expected to some extent).

Let me be up front: I worked on the LCOS panel used in the UO projector when I was at Syndiant, but I had nothing to do with the UO projector itself.   Take that as bias if you want, but I think the pictures tell the story.  I did not have any contact with either UO (nor Celluon for that matter) in preparing this article.

I also want to be clear that both the UO projector and the Celluon PicoPro tested are now over 1 year old and there may have been improvements since then.  I saw serious problems with both products, in particular with the color balance: the Celluon is too red (“white” is pink) and the UO very red deficient (“white” is significantly blue-green).   The color is so far off on the Celluon that it would be a show stopper for me ever wanting to buy one as a consumer (hopefully UO has fixed or will fix this).   Frankly, I think both projectors have serious flaws (if you want to know more, ask and I will write a follow-up article).

The UO Smart Beam has the big advantage of “100% offset,” which means that when placed on a table top, it projects upward without hitting the table and without any keystone.   The PicoPro has zero offset and shoots straight out; if you put it flat on a table, the lower half of the image will shoot into the tabletop. Celluon includes a cheap and rather silly monopod that you can use to have the projector “float” above the table surface, and then you can tilt it up and get a keystoned image.  To take the pictures, I had to mount the PicoPro on a much taller tripod and then shoot over the projector so the image would not be keystoned.

I understand that the next generation of the Celluon and the similar Sony MPCL1 projector (which has a “kickstand”) have “digital keystone correction,” which is not as good a solution as 100% offset since it reduces the resolution of the image; this is the “cheap/poor” way out, and they really should have 100% offset like the UO projector (interestingly, the earlier, lower resolution Microvision ShowWX projector did have 100% offset).

For the record – I like the Celluon PicoPro’s flatter form factor better; I’m not a fan of the UO cube, as it hurts the ability to put the projector in one’s pocket or a typical carrying bag.

Both the PicoPro with laser scanning and the Smart Beam with lasers illuminating an LCOS microdisplay have no focus knob and have a wide focus range (from about 50cm/1.5 feet to infinity), although both are less sharp at the closer range.  The PicoPro with LBS is a Class 3R laser product whereas the Smart Beam with laser “illumination” of LCOS is only Class 1.   The measured brightness of the PicoPro was about 32 lumens as rated when cold but dropped under 30 when heated up.  The UO, while rated at 60 lumens, was about 48 lumens when cold and about 45 when warmed up, or significantly below its “spec.”

Now on to the main discussion of resolution.  The picture at the top of this article shows the center crop from a 720P test pattern generated by both projectors, with the Smart Beam image on the left and the PicoPro on the right.   There is also an inset of the Smart Beam’s 1 pixel wide test pattern near the PicoPro’s 1 pixel wide pattern for comparison. This test pattern has a series of 1 pixel, 2 pixel, and 3 pixel wide horizontal and vertical lines.

What you should hopefully notice is that the UO clearly resolves even the 1 pixel wide lines, with the black lines staying black, whereas on the PicoPro the 1 pixel wide lines are at best blurry and even the 2 and 3 pixel wide lines don’t get to a very good black level (as in the contrast is very poor).  And the center is the very best case for the Celluon LBS, whereas for the UO, with its 100% offset, it is a middle case (its best case is the lower center).

The worst case for both projectors is one of the upper corners, and below is a similar comparison of their upper right corners.  As before, I have included an inset of the UO’s single pixel image.

ur-img_9783-celluon-with-uo-overlay

Side By Side Upper Right Corner Patterns (click on image for full size picture)

What you should notice is that while there are still distinct 1 pixel wide lines in both directions from the UO projector, the 1 pixel wide lines from the Celluon LBS are a blurry mess.  Clearly it can’t resolve 1 pixel wide lines at 720P.

Because of the 100% offset optics, the best case for the UO projector is at the bottom of the image (this is true for almost any 100% offset optics), and this case is not much different from the center case for the Celluon projector (see below):

lcen-celluon-with-uo-overlay

Below is a side by side picture I took (click on it for a full size image). The camera’s “white point” was set to an average between the two projectors (the Celluon is too red/blue&green deficient and the UO is red deficient). The image below is NOT what I used for the cropped test patterns above, as the 1 pixel features were too near the resolution limit of the Canon 70D camera (5472 by 3648 pixels).  So I used individual shots of each projector so that the camera would “sample” the projected image with about double the resolution.

side-by-side-img_0339-celluon-uo

For the Celluon PicoPro image I used the picture below (originally taken in RAW but digitally lens corrected, cropped, and later converted to JPG for posting – click on image for full size):

img_9783-celluon-with-uo-overlay

For the UO Smart Beam image, I used the following image (also taken in RAW, digitally lens corrected, straightened slightly, cropped, and later converted to JPG for posting):

img_0231-uo-test-chart

As is my usual practice, I am including the test pattern (in lossless PNG format) below for anyone who wants to verify and/or challenge my results:

interlace res-chart-720P G100A

I promise I will publish any pictures by anyone who can show better results with the PicoPro or any other LBS projector (or the UO projector for that matter) using the test pattern above or a similar one (I went to considerable effort to take the best possible PicoPro image that I could with a Canon 70D camera).

Celluon/Sony/Microvision Optical Path

Celluon Light Path Labled KGOnTech

Today I’m going to give a bit of a guided tour through the Celluon optical path.  This optical engine was developed by Sony, probably based on Microvision’s earlier work, and uses Microvision’s scanning mirror.   I’m going to give a “tour” of the optics and then comment on what I see in terms of efficiency (light loss) and cost.

Referring to the picture above and starting with the lasers at the bottom, there are 5 of them (two each of red and green and one blue) in a metal chassis (not visible in the picture).   Each laser goes to its own beam spreading and alignment lens set.  These lenses enlarge the diameter of each laser beam and are glued in place after alignment.  Note that the beams at this point are spread wider than the size of the scanning mirror and will be converged/focused back later in the optics.

Side Note: One reason for spreading the laser beams bigger than the scanning mirror is to reduce the precision required of the optical components (making very small, high precision optics with no/extremely-small defects becomes exponentially expensive).  But a better explanation is that it supports the despeckling process: with the wider beam they can pass the light through more different paths before focusing it back.  There is a downside to this, as seen in the Celluon output; namely, the beam is still too big when exiting the projector and thus the images are out of focus at short projection distances.

After the beam spreading lenses there is a glass plate at a 45 degree angle that splits part of the light from the lasers down to a light sensor for each laser.   The light sensors give feedback on the output of each laser so it can be adjusted as it changes with temperature and aging.

Side Note:  Laser heating and the resulting change in laser output is a big issue with laser scanning; the lasers change very quickly in temperature/output.  In tests I have done, you can see the effect of bright objects on one side of the screen affecting the color on the other side of the screen in spite of the optical feedback.

Most of the light from the sensor deflector continues to a complex structure of about 15 different pieces of optically coated solid glass elements glued together into a many faceted structure. There are about 3 times as many surfaces/components as would be required simply to combine 3 laser beams.   This structure is being used to combine the various colors into a single beam and has some speckle reducing structures.  As will be discussed later, having the light go through so many elements, each with its own optical losses (and cost), results in losing over half the light.

For reference, compare this to the optical structure shown in the Lenovo video for their prototype laser projector in a smartphone, at left (which uses an STMicro engine).  There are just 3 lenses, 1 mirror (for red), two dichroic plate combiners to combine the green and blue, and a flat window. The Celluon/Sony/Microvision engine by comparison uses many more elements, and instead of simple plate combiners it uses prisms which, while having better optical performance, are considerably more expensive.  The Lenovo/STM engine does not show/have the speckle reduction elements or the distortion correction elements (its two mirror scanning process inherently has less distortion) of the Celluon/Sony design.

Starting with the far left red laser light path, it goes to a “Half Mirror and 2nd Mirror” pair.   This two mirror assembly is likely there for speckle reduction.  Speckle is caused by light interfering with itself, and by having the light follow different path lengths (the light off the 2nd mirror follows a slightly longer path) it reduces the speckle.  The next element is a red-pass/green-reflect dichroic mirror that combines the left red and green lasers, followed by a red&green-pass/blue-reflect dichroic combiner.

Then working from the right, there is another speckle reduction half-mirror/2nd-mirror pair for the right hand green laser followed by a green-pass/red-reflect dichroic mirror to combine the right side green and red lasers.  A polarizing combiner is (almost certainly) used to combine the 3 lasers on the left with the two lasers on the right into a single beam.

After the polarizing combiner there is a mirror that directs the combined light through a filter encased between two glass plates.  Most likely this filter either depolarizes or circularly polarizes the light, because on exiting this section into the open air the previously polarized laser light has little if any linear polarization.   Next the light goes through a 3rd set of despeckling mirror pairs.   The light then reflects off another mirror and exits into a short air gap.

Following the air gap there is a “Turning Block” that is likely part of the despeckling.   The material in the block probably has some light scattering properties to vary the light path length slightly and thus reduce speckle, which would explain the size/thickness of the block.   There is a curved light entry surface that will have a lens effect.

Light exiting the Turning Block goes through a lens that focuses the spread-out light back into a smaller beam that will reflect off the beam scanning mirror.  This lens sets the way the beam diverges after it exits the projector.

After the converging lens, the light reflects off a mirror that sends it into the beam scanning mirror assembly.  The beam scanning mirror assembly, designed by Microvision, is its own complex structure and among other things has some strong magnets in it (supporting the magnetic mirror deflection).

Side Note: The STM/bTendo design in the Lenovo projector uses two simpler mirrors that each move in only one axis rather than a single complex mirror that has to move in two axes.  The STM mirrors both likely use a simple electrostatic-only design, whereas Microvision’s dual axis mirror uses electrostatic drive for one direction and electromagnetic for the other.

Finally, the light exits the projector via a Scanning Correction Lens that is made of plastic. It appears to be the only plastic optical element among all the elements that could be easily accessed.   Yes, even though this is a laser scanning projector, it still has a correction lens, in this case to correct the otherwise “bow-tie” distorted scanning process.

Cost Issues

In addition to the obvious cost of the lasers (and needing 5 of them rather than just 3) and the Scanning Mirror Assembly, there are a large number of optically coated glass elements.  Additionally, instead of using lower cost plate elements, the Celluon/Sony/Microvision engine uses much more expensive solid prisms for the combiner and despeckling elements.   Each of these has to be precisely made, coated, and glued together. The cost of each element is a function of its quality/optical efficiency, which can vary significantly, but I would think there would be at least $20 to $30 of raw cost in just the glass elements even at moderately high volumes (and it could be considerably more).

Then there is a lot to assemble, with precise alignment of all the various optics.  Finally, all of the lasers must be individually aligned after the unit with all the other elements has been assembled.

Optical Efficiency (>50% of the laser light is lost)

The light in the optical engine passes through and/or reflects off a large number of optical interfaces, and there are light losses at each of them.  It is “death by a thousand cuts”: while each element might have a 1% to 10% or more loss, the effects are multiplicative.   The use of solid rather than plate optics reduces the losses, but at added cost.  In the picture you can see, on the walls of the chassis, spots of colored light that have “escaped” the optical path and are lost.  You can also see light glowing off optical elements including the lens; all of this is lost light.  The light that goes to the light sensors is also lost.
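To make the multiplicative point concrete, here is a tiny sketch; the per-interface loss figures are assumptions within the 1% to 10% range mentioned above, not measurements of the Celluon engine:

```python
# Illustrative only: many individually small losses compound to a large total loss.
losses = [0.02] * 10 + [0.05] * 4 + [0.10]   # 15 assumed interfaces, 2% to 10% each

transmission = 1.0
for loss in losses:
    transmission *= (1.0 - loss)

print(f"light remaining: {transmission:.0%}")  # ~60%, before sensor tap-off and
                                               # scattering losses are even counted
```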

Celluon laser lable IMG_9715

Laser Warning Label From Celluon Case

Some percentage of the spread light will not be converged back onto the mirror.  Additionally, there are scattering losses in the Correction Lens and Turning Block and in the rest of the optics.

When it is multiplied out, more than 50% of the laser light is lost in the optics.

This 50% light loss percentage agrees with the package labeling (see picture at left) that says the laser light output for green is 50mW, even though they are using two green lasers, each of which likely outputs 50mW or more.

Next Time: Power Consumption

The Celluon system consumes ~2.6 Watts to put up a “black” image and ~6.1 Watts to put up a 32-lumen white image.  The delta between white and black is about 3.5 Watts, or about 9 lumens per delta Watt from black to white.  For reference, the newer DLP projectors using LEDs can produce about double the delta lumens per Watt.  Next time, I plan on drilling down into the power consumption numbers.
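For reference, the delta-lumens-per-Watt arithmetic from the measurements above (a simple sketch using the figures quoted in the text):

```python
black_watts = 2.6      # measured "black" image power from the text
white_watts = 6.1      # measured 32-lumen white image power from the text
white_lumens = 32

delta_watts = white_watts - black_watts     # ~3.5 W
print(white_lumens / delta_watts)           # ~9.1 lumens per delta Watt
```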

Lenovo’s STMicro Based Prototype Laser Projector (part 1)

At their Tech World event on May 27th, 2015, Lenovo showed a Laser Beam Scanning (LBS) projector integrated into a cell phone prototype (to be clear, a prototype and not a product).   While there has been no announcement of the maker of the LBS projector, there is no doubt that it is made by STM, as I will show below (to give credit where it is due, this was first shown on a blog by Paul Anderson focused on Microvision).

The comparison at left is based on a video by Lenovo that included exploded views of the projector, and on pictures of STM’s 720p projector from a Picoprojector-info.com article from Jan 18, 2013.   I have drawn lines comparing various elements such as the size and placement of connectors and other components, the size and placement of the 3 major I.C.s, and even the silk screened “STM” in the same place in both the Lenovo video and the STM article’s photo (circled in yellow).

While there are some minor differences, there are so many direct matches that there can be no doubt that Lenovo is using STM.

The next interesting thing to consider is how this design compares to the LBS design of Microvision and Sony in the Celluon projector.   The Lenovo video shows the projector as being about 34mm by 26mm by 5mm thick.  To check this, I took a photo from the Picoprojector-info.com article and was able to fit the light engine and electronics into a 34mm by 26mm rectangle arranged as they are in the Lenovo video (yet one more verification that it is STM).   I then scaled a picture I had taken of the Celluon board the same way and drew the same 34x26mm rectangle on it.   The STM optics plus electronics are about 1/4 the area and 1/5th the volume (STM is 5mm thick versus Microvision/Sony’s 7mm).
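As a quick sanity check on the size comparison, here is the arithmetic; the module dimensions and the roughly 4x area factor are the figures given above:

```python
stm_area_mm2 = 34 * 26                  # 884 mm^2 (34 x 26 mm STM module)
stm_volume_mm3 = stm_area_mm2 * 5       # 5 mm thick

mvis_sony_area_mm2 = stm_area_mm2 * 4   # ~4x the area, per the comparison above
mvis_sony_volume_mm3 = mvis_sony_area_mm2 * 7   # 7 mm thick

print(mvis_sony_volume_mm3 / stm_volume_mm3)    # ~5.6x, i.e. roughly 1/5th the volume
```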

The Microvision/Sony probably has about double the lumens/brightness of the STM module due to having two green and two red lasers, and I have not had a chance to compare the image quality.   Taking out the extra two lasers would make the Microvision/Sony engine’s optics/heat-sinking smaller by about 25% and have a smaller impact on the board space, but this would still leave it over 3X bigger than the STM.   The obvious next question is why.

One reason is that the STM either has a simpler electronics design, or is more integrated, or some combination thereof.  In particular, the Microvision/Sony design requires an external DRAM (the large rectangular chip in the Microvision/Sony).    STM probably still needs DRAM, but it is likely integrated into one of their chips.

There are not a lot of details on the STM optics (developed by bTendo of Israel before being acquired by STM).   But what we do know is that STM uses separate, simpler, and smaller horizontal and vertical mirrors versus Microvision’s significantly larger and more complex single mirror assembly.  Comparing the photos above, the Microvision mirror assembly alone is almost as big as STM’s entire optical engine with lasers.   The Microvision mirror assembly has a lot of parts other than the MEMS mirror, including some very strong magnets.  Generally, the optical path of the Microvision engine requires a lot of space for the light to enter and exit the Microvision mirror from the “right” directions.

btendo optics

On the right I have captured two frames from the Lenovo video showing the optics from two directions.  What you should notice is that the mirror assembly is perpendicular to the incoming laser light.  There appears to be a block of optics (pointed to by the red arrow in the two pictures) that redirects the light down to the first mirror and then returns it to the second mirror.  The horizontal scanning mirror is clearly shown in the video, but the location of the vertical scanning mirror is not clear (so I took an educated guess).

Also shown at right is bTendo patent 8,228,579 showing the path of light for their two scanning mirror design.   It does not show the more complex block of optics required to direct the light down to the vertical mirror and then redirect it back down to the horizontal mirror and then out, as would be required in the Lenovo design.    You might also notice that there is a flat, clear glass/plastic output cover shown at the 21 second point in the video; this is very different from the Microvision/Celluon/Sony design shown below.

Microvision mirror with measurements

Microvision Mirror Assembly and Exit Lens

Shown at left is the Microvision/Celluon beam scanning mirror and the “Exit” Lens.   First notice the size and complexity of the scanning mirror assembly with its magnets and coils.  You can see the single round mirror with its horizontal hinge (green arrow) and the vertical hinge (yellow arrow) on the larger oval yoke.   The single mirror/pivot point causes an inherently bow-tied image.  You can see how distorted the mirror looks through the Exit Lens (see red arrow); this is caused by the exit lens correcting for the bow-tie effect.  This significant corrective lens is also a likely source of chroma aberrations in the final image.

Conclusions

All the above does not mean that the Lenovo/STM is going to be a successful product.   I have not had a chance to evaluate the Lenovo projector, and I still have serious reservations about any embedded projector succeeding in a cell phone (I outlined my reasons in an August 2013 article and I think they still hold true).    Being less than 1/5th the volume of the Microvision/Sony design is necessary but I don’t think sufficient.

This comparison only shows that the STM design is much smaller than Microvision’s, that Microvision has made only relatively small incremental progress in size since the ShowWX (announced in 2009), and that Sony so far has not improved on it much.