Magic Leap “A Riddle Wrapped in an Enigma”

So what is Magic Leap doing?  That is the $1.4 billion question. I have been studying their patents as well as videos and articles about them, and frankly a lot of it does not add up.   The “Hype Factor” is clearly off the chart, with major mainstream and high-tech news/video outlets covering them and a major marketing machine spending part of the $1.4B, yet no device has been shown publicly, only a few “through the Magic Leap” online videos (6 months ago and 1 year ago).   Usually something this over-hyped ends up like the Segway (I’m not the first to make the Segway comparison to Magic Leap) or, more recently, Google Glass.

Magic Leap appears to be moving on many different technological fronts at once (high resolution fiber scanning display technology, multi-focus-combiner/light fields, and mega-processing to support the image processing required), which is almost always a losing strategy even for a large company, let alone a startup, albeit a well funded one. What’s more, and the primary subject of this article, they appear to be moving on many different fronts/technologies with respect to the multi-focus-combiner.

The image above is from Wired in April 2016; it and other articles talk about a “photonic chip,” a marketing name for their combiner that is not used in any of their patent applications that I could find.   By definition, a photonic device would have some optical property that is altered electronically, but based on other comments made by Magic Leap and looking at the patents, the so-called “chip” is just as likely a totally passive device.

It is also well known that Magic Leap is working on piezo scanned laser fiber displays, a display technology initially developed by Magic Leap’s CTO while at the University of Washington (click left for a bigger image). Note that it projects a spiraling cone of light.

A single scanning fiber display is relatively low resolution, so achieving Magic Leap’s resolution goals will require arrays of these scanning fibers, as outlined in their US Application 2015/0268415.

Magic Leap is moving in so many different directions at the same time. I plan on covering the scanning fiber display in much more detail in the near future.

Background – Nvidia and Stanford Light Fields

A key concept running through everything about Magic Leap is that their combiner supports multiple focus depths at the same time.   The term “Light Fields” is often used in connection with Magic Leap, but what they are doing is not classic light fields such as Nvidia has demonstrated (a very good article and video is here).   Nor is it what Stanford’s Gordon Wetzstein talks about with compressive light field displays (example here) and several of his YouTube videos, in particular this one that discusses light fields and the compressive display.   (More on this background at the end.)

A key thing to understand about “light fields” and Magic Leap’s multi-focus planes is that they are based on controlling the angles of the rays of light, since the ray angles determine the apparent focus distance.   The rays of light that will make it through the eye’s pupil from a point on a far away object arrive nearly parallel, whereas the rays from a nearby point span a wider range of angles.
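
To put rough numbers on this, here is a minimal sketch (my own illustrative calculation, not something from Magic Leap’s patents) of the half-angle of the cone of rays that a point at various distances presents to a roughly 4 mm eye pupil:

```python
import math

def ray_cone_half_angle_deg(distance_m, pupil_diameter_mm=4.0):
    """Half-angle (in degrees) of the cone of rays from a point source
    that can enter an eye pupil of the given diameter at the given distance."""
    pupil_radius_m = (pupil_diameter_mm / 1000.0) / 2.0
    return math.degrees(math.atan(pupil_radius_m / distance_m))

for d in (0.25, 0.5, 1.0, 2.0, 10.0, 100.0):  # viewing distances in meters
    print(f"{d:6.2f} m -> {ray_cone_half_angle_deg(d):.3f} deg half-angle")
# A point 0.25 m away fills the pupil with a ~0.46 deg half-angle cone of rays,
# while a point 100 m away sends rays that are parallel to within ~0.001 deg.
```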

Magic Leap Patents

Magic Leap’s patents show a mix of related and very different types of waveguide combiners.   Most in line with what Magic Leap talks about in the press and videos are the ones that include multi-plane waveguides and scanned laser fiber displays.   These include US patent applications US20150241705 (‘705) and the 490 page US20160026253 (‘253).  I have clipped out some of the key figures from each below (click on the images to see larger versions).

Fig. 8 from the ‘705 application uses a multi-layer electrically switched diffraction grating waveguide (but they don’t say what technology they expect to use to cause the switching). In addition to the switching, each diffraction grating layer makes the image focus at a different distance, as shown in Fig. 9.  While this “fits” with the “photonic chip” language from Magic Leap, I’m less inclined to believe this is what Magic Leap is doing based on the evidence to date (although Digilens has developed switchable SBGs in their waveguides).

Fig. 6 likely comes closer to what Magic Leap seems to be working on, at least in the long term. In this case there are one or more laser scanning fiber displays for each layer of the diffraction grating (similar to Fig. 8 but passive/fixed). The grating layers in this setup are passive; whichever display is “on” selects the grating layer and thus the focus.  Also note the “collimation element 6” between the scanning fibers 602a-e and the waveguide 122; it takes the cone of rays from the spiral scanning fiber and turns them into an array of parallel (collimated) rays. Below is a prototype from the June 2016 Wired article with two each of red, green, and blue fibers per eye (6 total), which would support two simultaneous focus points (in future articles I plan on going into more detail about the scanning fiber displays).

Above I have put together a series of figures from Magic Leap’s US patent application 2015/0346495.  Most of these are different approaches to accomplish essentially the same effect, namely to create 2 or more images in layers that appear to be in focus at different distances.  In some approaches they generate the various focused images time sequentially and rely on the eye’s persistence of vision to fuse them (the Stanford compressive display works sequentially).  You may note that some of the combiner technologies shown above are not that flat, including what is known as “free form optics” (Fig. 22G above), which would be compatible with a panel display (DLP, LCOS, or Micro-OLED).

And Now for something completely different


To the left is a figure from patent application 2015/0346495, which shows a very different optical arrangement with a totally different set of inventors from the prior applications.   This device supports multiple focus effects via a Variable Focus Element (VFE).   What they do is generate a series of images sequentially, change the focus between images, and use the persistence of the human visual system to fuse the various focused images.

This is a totally different approach to achieving the same effect.   It does require a very fast image generating device, which would tend to favor DLP and OLED over, say, LCOS as the display device.   I have questions as to how well the time sequential layers will work with a moving image and whether there would be a temporal breakup effect.

There are also a number of applications with totally different optical engines and totally different inventors (and not principals of Magic Leap) using free-form (very thick/non-flat) optics, 20160011419 and 20160154245, which would fit with using an LCOS (or DLP) panel instead of the laser fiber scanning display.

I have heard from more than one source that at least some early prototypes by Magic Leap used DLPs.  This would suggest some form of time sequential focusing.

Problems I See with the “Photonic Chip” Magic Leap Showed in the June 2016 Wired picture

“Edge injection” waveguide – There needs to be an area to inject the light.  All the waveguide structures in Magic Leap’s patents show “side/edge” injection of the image.  Compare to Microsoft’s Hololens (at right), which injects the image light in the face of the waveguide (highlighted with the green dots).   With an edge-injected waveguide, the waveguide would need to be thicker even for a single layer, let alone the multiple layers with multiple focus distances that Magic Leap requires.

Lumus (at left) has a series of exit prisms similar to a single layer of the Magic Leap ‘495 application Figs. 5H, 6A, 8A, and 10.  Lumus does edge injection, but at roughly a 45 degree angle (see circled edge), which gives more area to inject the image and gets the light started at an angle sufficient for Total Internal Reflection (TIR).  There is nothing like this in the Magic Leap chip.

Looking at the Magic Leap “chip” (right), there is no obvious place for light to be “injected.”  One would expect to see some discernible structure such as an angled edge, or a structure like the one in the ‘705 application Fig. 8, for injecting the light. Beyond this, what about injecting multiple images for the various focus layers?  There is a “tab” at the top which would seem to be either for mounting, or it could be a light injection area for face/surface injection like Hololens, but then I would expect to see some blurring/color or other evidence of a diffractive structure (as Hololens has) to cause the light to bend about 45 degrees for TIR in such a short distance.

Another concern is that you don’t see any structure other than some blurring/diffusion in the Magic Leap chip.  Notice in both the Lumus and Microsoft combiners you can see structures, a blurring/color change in the case of Hololens and the exit prisms in the case of Lumus.

Beyond this, if they are using their piezo scanned laser fiber display, it generates a spiraling angular cone of light that has to be collimated (the light rays made parallel, as shown in the patent applications) so they can make their focus effects work. There would need to be a structure for doing the collimation.   If they are using a more conventional display such as DLP, LCOS, or MicroOLED, they are going to need a larger light injection area.

My conclusion is that, at best, the Magic Leap chip shown is either part of their combiner (one layer) or just a mock-up of what they hope to make someday.   I haven’t had a chance to look at or through it, and anyone that has is under NDA, but based on the evidence I have, it seems unlikely that what is shown is functional.

Pupil/Eyebox

I’m curious to see how small/critical the pupil/eyebox will be for their combiner.   On the one hand they want light at the right angles to create the focusing effects, and on the other hand they will want divergent/diffused light to give a large enough pupil/eyebox, which could be at cross purposes.  I’m wondering how critical it will be to position the eye in precisely the right place.   This is a question and not a criticism per se.

What, Himax LCOS? Business Insider OCT 27, 2016 (“Magic Leap Lite”?)

I had been studying the various patents and articles for some time, and then last week’s Business Insider article (see: http://www.businessinsider.in/Magic-Leap-could-be-gearing-up-for-a-2017-launch/articleshow/55097808.cms) threw a big curve ball.  The article quotes KGI Securities analyst Ming-Chi Kuo as saying:

“the high cost of some of Magic Leap’s components, such as a micro projector from Himax that costs about $35 to $45 per unit.”

I have no idea whether this is true or not, but if true it suggests something very different.   Using a Himax LCOS device is inconsistent with just about everything Magic Leap has filed patents on. Even the sequentially focusing display would at best be tough with the Himax LCOS, as it has a significantly lower field sequential rate than DLP.

If true, it would suggest that Magic Leap is going to put out a “Magic Leap Very Lite” product based around some of their developments. Maybe this will be more of a software, user interface, and developer device. But I don’t see how they get close to what they have talked about to date.  The highest resolution Himax production device is 1366×768.

More Observations on Stanford’s Compressive Display and Magic Leap

Both are based on greatly reducing the image content from the general/brute force case so that a feasible system might be possible.  The Stanford approach is different from what Magic Leap appears to be doing.  The Stanford system has a display panel and a “modulator” panel that selects the light rays (by controlling the angle of light that gets through) from the display panel.  In contrast, Magic Leap generates multiple layers of images, with a different focus associated with each layer, in an additive manner.   This should mean that the two approaches will have to handle things like “occlusion,” where part of an image hides something behind it, differently (it would seem to be more easily dealt with in the Stanford approach, I would think).

A key point that Dr. Wetzstein makes is that brute force light fields (ala Nvidia, which hugely sacrifices resolution) are impractical (too much to display and too much to process), so you have to find ways to drastically reduce the display information.  Dr. Wetzstein also comments (in passing in the video) that the problems are greatly reduced if you can track the eye.  Reducing the necessary image content has to be at the heart of Magic Leap’s approach as well.  All the incarnations in the patent art, and Magic Leap’s comments, point to supporting two or more simultaneous focus points.   Eye tracking is another key point in Magic Leap’s patents.

One might wonder whether, if you can track the eye and tell the focus point of the eyes, you could eliminate the need for the light field display altogether and generate an image that appears focused and blurred based on the focus point of the eye.  Dr. Wetzstein points out that one of the big reasons for having light fields is to deal with the eye’s focus not agreeing with where the two eyes are aimed.

Conclusion

Summing it all up, I am skeptical that Magic Leap is going to live up to the hype, at least anytime soon.  $1.4B can buy a lot of marketing as well as technology development, but it looks to me that accomplishing what Magic Leap wants to do is not going to be feasible for a long time. Assuming they can make it work (I wonder about the fiber scanning display), there is then the issue of feasibility (the Concorde SST airplane was “possible” but it was not “feasible,” for example).

If they do enter the market in 2017 as some have suggested, it is almost certainly going to be a small subset of what they plan to do. It could be like Apple’s Newton that arguably was too far ahead of its time to fulfill its vision or it could be the next SST/Segway.

Next time I am planning on writing about Magic Leap’s scanning fiber display.

AR/MR Combiners Part 2 – Hololens


Microsoft’s Hololens is perhaps the most well known device using flat “waveguide” optics to “combine” the real world with computer graphics. Note there are no actual “holograms” anywhere in Hololens by the scientific definition.

At left is a picture from the Verge teardown of a Hololens SDK engine and a figure from US Patent Application 2016/0231568. I have added some red and green dots to the “waveguides” in the Verge picture to help you see their outlines.

A diffraction grating is a type of Diffractive Optical Element (DOE) and has a series of very fine linear structures with a period/repeated spacing on the order of the wavelengths of light (as in extremely small). A diffraction grating acts like a lens/prism to bend the light, and as an unwanted side effect the light is also split/separated by wavelength (see top figure at left) as well as having its polarization affected. A simple grating would split the light symmetrically in two directions (top figure at left), but as the patent points out, if the structure is tilted then more of the light will go in the desired direction (bottom figure at left).   This very small structure (on the order of the wavelength of the light) must be formed on the surface of the flat waveguide.
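
As a rough sanity check on why the grating pitch must be on the order of the wavelength of light, the standard first-order grating equation d·sin(θ) = m·λ gives the pitch needed to bend normally incident light by a given angle. This is my own illustrative calculation, not a figure from the Hololens patent:

```python
import math

def grating_pitch_nm(wavelength_nm, deflection_deg, order=1):
    """Solve the first-order grating equation d*sin(theta) = m*lambda for the pitch d,
    assuming light arriving normal to the grating (real designs are more complicated)."""
    return order * wavelength_nm / math.sin(math.radians(deflection_deg))

for wavelength in (450, 550, 620):  # rough blue, green, red wavelengths in nm
    print(f"{wavelength} nm bent by 45 deg -> pitch ~{grating_pitch_nm(wavelength, 45):.0f} nm")
# Pitches of roughly 640 to 880 nm: sub-micron structures, and a different ideal pitch
# per color, which is part of why diffractive combiners tend to separate colors.
```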

Optical waveguides use the fact that once light is inside glass or clear plastic and hits a surface at a sufficiently shallow (grazing) angle, it will totally reflect, what is known as Total Internal Reflection or TIR.  The TIR critical angle, measured from the surface normal, is roughly 36 to 42 degrees for the typical glasses and plastics (with their coatings) used in optics.
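
For reference, the critical angle follows directly from Snell’s law, θc = arcsin(n_outside/n_inside). A quick check with typical refractive indices (my own assumed values, not numbers from any patent):

```python
import math

def critical_angle_deg(n_inside, n_outside=1.0):
    """TIR critical angle (measured from the surface normal) from Snell's law:
    sin(theta_c) = n_outside / n_inside."""
    return math.degrees(math.asin(n_outside / n_inside))

for name, n in (("acrylic", 1.49), ("crown glass", 1.52),
                ("polycarbonate", 1.59), ("high-index glass", 1.70)):
    print(f"{name:16s} n={n:.2f} -> critical angle ~{critical_angle_deg(n):.1f} deg")
# Roughly 36 to 42 degrees from the normal; light hitting the surface at a more
# grazing angle than this stays trapped inside the waveguide.
```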

Hololens uses a diffraction grating (52 in Fig. 3B above) to bend or “incouple” the light so that it will TIR (see figure at right).   The light then TIRs off of the flat surfaces within the glass and hits a triangular “fold zone” (in Fig. 3B above) which causes the light to turn ~90 degrees down to the “exit zone” DOE (16 in Fig. 3B).  The exit zone DOE reduces the angle of the light so it will no longer TIR and can exit the glass toward the eye.

Another function of the waveguides, particularly the exit waveguide 16, is to perform “pupil expansion,” slightly diffusing the light so that the image can be viewed from a wider range of eye positions.   Additionally, it is waveguide 16 that the user sees the real world through, and invariably there is some negative effect from seeing the world through a slightly diffuse diffraction grating.

Hololens is far from the first to use DOEs to enter and exit a flat waveguide (there are many examples), and they appear to have acquired the basic technology from Nokia’s efforts of about 10 years ago.   Others have used holographic optical elements (HOEs), which perform similar functions to DOEs, and still others have used more prismatic structures in their waveguides, but each of these alternatives solves some issues at the expense of others.

A big issue for the flat combiners I have seen to date has been chromatic aberrations, the breaking up of white light into colors, along with out-of-focus and haze effects.   Bending the light by about 45 degrees is like going through a prism: the colors separate, follow slightly different paths through the waveguide, and are put back together by the exit grating.  The process is not perfect, and thus there is some error/haze/blur that can be multiple pixels wide. Additionally, as pointed out earlier, the user is invariably looking at the real world through the structure meant to cause the light to exit the waveguide toward the eye, and it has to have at least some negative effect.

There is a nice short 2013 article on flat combiners (one author being a Google employee) that discusses some of the issues with various combiners, including the Nokia one on which Hololens is based.  In particular they stated:

“The main problems of such architecture are the complexity of the master fabrication and mass replication as well as the small angular bandwidth (related to the resulting FOV). In order to mimic the holographic Bragg effect, sub-wavelength tilted structures with a high aspect ratio are needed, difficult to mass replicate for low cost volume production”  

Based on what I have heard from a couple of sources, the yield is indeed currently low and thus the manufacturing cost of the Hololens combiner is high.   This may or may not be a solvable problem (in terms of meeting a consumer-acceptable price) with volume production.

While the Hololens combiner is a marvel of optical technology, one has to go back and try to understand why they wanted a thin flat combiner rather than, say, the vastly simpler (and less expensive, maybe by over 10X) tilted flat combiner that Osterhout Design Group (ODG), for example, is currently using.   Maybe it is for some planned greater advantage in the long term, but when you look at the current Hololens flat combiner, the size/width of the combiner would seem to have little effect on the overall size of the resulting device.  Interestingly, Microsoft has spent about $150 million in licensing fees to ODG.

Conclusions

Now step back and look at the size of the whole Hololens structure, with the concentric bands going around the user’s head.  There is an inner band to grip the user’s head, while the electronics are held in the outer band.  There is a large nose bridge to distribute the weight on the person’s nose and a big curved shield (usually dark tinted) in front of the combiner.  You have to ask, did the flat optical combiner make a difference?

I don’t know the reasons/rationale/advantages for why Hololens has gone with a vastly more complex combiner structure.   Clearly, at present, it does not give a significant (if any) size advantage.   It almost looks like they had this high tech combiner technology and decided to use it regardless (maybe it was the starting point of the whole program).

Microsoft is likely investing several billion dollars into Hololens. Google likely spent over $1 billion on the comparatively very simple Google Glass (not to mention their investment in Magic Leap). Closely related, Facebook spent $2B to acquire Oculus. Certainly big money is being thrown around, but is it being spent wisely?

Side Comments: No Holograms Anywhere to be Found

“Holograms” is the marketing name Microsoft has given to its Mixed Reality (MR) imagery.   It is rather funny to see technical people that know better stumble around saying things like “holograms, but not really holograms, . . .”  Unfortunately, due to the size and marketing clout of Microsoft, others such as Metavision have started calling what they are doing “holograms” too (but this does not make it true).

Then again, probably over 99% of what the public thinks are “holograms” are not.  Usually they are simple optical combiner effects caused by partial reflections off of glass or plastic.

Perhaps ironically, while Microsoft talks of holograms and calls the product “Hololens,” there are, as best I can find, no holograms used, not even static ones that could have been used in the waveguide optics (they use diffraction gratings instead).

Also interestingly, the patent application is assigned to Microsoft Technology Licensing, LLC, a company recently separated from Microsoft.  This would appear to be in anticipation of future patent licensing/litigation (see for example).

Next Time on Combiners

Next time on this subject, I plan on discussing Magic Leap, the “startup” with $1.4 billion invested, and what it looks like they may be doing.   I was originally planning on covering it together with Hololens, but it became clear that it was too much to try and cover in one article.

AR/MR Optics for Combining Light for a See-Through Display (Part 1)

In general, people find the combining of an image with the real world somewhat magical; we see this with heads up displays (HUDs) as well as Augmented/Mixed Reality (AR/MR) headsets.   Unlike the Star Wars R2-D2 projection into thin air, which was pure movie magic (i.e., fake/impossible), light rays need something to bounce off of to redirect them from the image source into a person’s eye.  We call this optical device that combines the computer image with the real world a “combiner.”

In effect, a combiner works like a partial mirror.  It reflects or redirects the display light to the eye while letting light through from the real world.  This is not, repeat not, a hologram, which it is being mistakenly called by several companies today.  Over 99% of what people think of or call “holograms” today are not; they are simple optical combining (also known as the Pepper’s Ghost effect).

I’m only going to cover a few of the more popular/newer/more interesting combiner examples.  For a more complete and more technical survey, I would highly recommend a presentation by Kessler Optics. My goal here is not to make anyone an optics expert but rather to give insight into what companies are doing and why.

With headsets, the display device(s) is too near for the human eye to focus on, and there are other issues such as making a big enough “pupil/eyebox” so the alignment of the display to the eye is not overly critical. With one exception (the Meta 2), there are separate optics that move the apparent focus point out (usually they try to put it in a person’s “far” vision, as this is more comfortable when mixing with the real world). In the case of Magic Leap, they appear to be taking the focus issue to a new level with “light fields,” which I plan to discuss in the next article.

With combiners there is both the effect you want, i.e. redirecting the computer image into the person’s eye, and the potentially undesirable effects the combiner causes when seeing through it to the real world.  A partial list of the issues includes:

  1. Dimming
  2. Distortion
  3. Double/ghost images
  4. Diffraction effects of color separation and blurring
  5. Seeing the edge of the combiner

In addition to the optical issues, the combiner adds weight, cost, and size.  Then there are aesthetic issues, particularly how they make the user’s eyes look and whether they affect how others see the user’s eyes; humans are very sensitive to how other people’s eyes look (see the Epson BT-300 below as an example).

FOV and Combiner Size

There is a lot of desire to support a wide Field Of View (FOV), and for combiners a wide FOV means the combiner has to be big.  The wider the FOV and the farther the combiner is from the eye, the bigger the combiner has to get (there is no way around this fact, it is a matter of physics).   One way companies “cheat” is to not support a person wearing their glasses at all (as Google Glass did).

The simple (not taking everything into account) equation (in Excel) to compute the minimum width of a combiner is =2*TAN(RADIANS(A1/2))*B1, where A1 is the FOV in degrees and B1 is the distance from the eye to the farthest part of the combiner. Glasses are typically about 0.6 to 0.8 inches from the eye, and with the size of the glasses and the frames you want about 1.2 inches or more of eye relief. For a 40 degree wide FOV at 1.2 inches this translates to 0.9″, at 60 degrees 1.4″, and at 100 degrees it is 2.9″, which starts becoming impractical (typical lenses on glasses are about 2″ wide).

For very wide FOV displays (over 100 degrees), the combiner has to be so near your eye that supporting glasses becomes impossible. The formula above will let you try your own assumptions.
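
For those who prefer code to spreadsheets, here is the same minimum-width calculation as a small Python sketch, equivalent to the Excel formula above:

```python
import math

def min_combiner_width_inches(fov_deg, distance_inches):
    """Minimum combiner width = 2 * tan(FOV/2) * distance to the farthest part of
    the combiner (same simplified geometry as the Excel formula above)."""
    return 2.0 * math.tan(math.radians(fov_deg / 2.0)) * distance_inches

for fov in (40, 60, 100):
    print(f"{fov:3d} deg FOV at 1.2 in -> {min_combiner_width_inches(fov, 1.2):.1f} in wide")
# Prints ~0.9", ~1.4", and ~2.9", matching the numbers quoted above.
```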

Popular/Recent Combiner Types (Part 1)

Below, I am going to go through the most common beam combiner options.  I’m going to start with the simpler/older combiner technologies and work my way to the “waveguide” beam splitters of some of the newest designs in Part 2.  I’m going to try and hit on the main types, but there are many big and small variations within a type.

Solid Beam Splitter (Google Glass and Epson BT-300)

These are often used with a polarizing beam splitter when using LCOS microdisplays, but they can also be simple mirrors.  They generally are kept small due to weight and cost issues, as with the Google Glass at left.  Due to their small size, the user will see the blurry edges of the beam splitter in their field of view, which is considered highly undesirable.  Also, as seen in the Epson BT-300 picture (at right), they can make a person’s eyes look strange.  As seen with both the Google Glass and the Epson, they have been used with the projector engine(s) on the sides.

Google Glass has only about a 13 degree FOV (and did not support using a person’s glasses) and about 1.21 arc-minutes/pixel angular resolution, which is on the small end compared to most other headset displays.    The BT-300 has about a 23 degree horizontal FOV (and has enough eye relief to support most glasses) with a 1280×720 display per eye, giving it about a 1.1 arc-minutes/pixel angular resolution.  Clearly these are on the low end of what people are expecting in terms of FOV, and the solid beam splitter quickly becomes too large, heavy, and expensive as the FOV grows.  Interestingly, they are both on the small end in terms of apparent pixel size.
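
The angular resolution figures above come from a simple approximation: the FOV in arc-minutes divided by the number of pixels across it. A quick sketch using the numbers quoted above (the 640-pixel-wide panel for Google Glass is my assumption based on its published 640×360 display):

```python
def arcmin_per_pixel(fov_deg, pixels_across):
    """Approximate angular resolution: FOV in arc-minutes divided by the pixel count.
    Ignores distortion and assumes the pixels are spread evenly across the FOV."""
    return (fov_deg * 60.0) / pixels_across

print(f"Google Glass (13 deg / 640 px):  {arcmin_per_pixel(13, 640):.2f} arc-min/pixel")
print(f"Epson BT-300 (23 deg / 1280 px): {arcmin_per_pixel(23, 1280):.2f} arc-min/pixel")
```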

Spherical/Semi-Spherical Large Combiner (Meta 2)

While most of the AR/MR companies today are trying to make flatter combiners to support a wide FOV with small microdisplays for each eye, Meta has gone in the opposite direction with dual very large semi-spherical combiners with a single OLED flat panel to support an “almost 90 degree FOV”. Note in the picture of the Meta 2 device that there are essentially two hemispheres integrated together with a single large OLED flat panel above.

Meta 2 uses a 2560 by 1440 pixel display that is split between the two eyes.  Allowing for some overlap, there will be about 1200 pixels per eye to cover the 90 degree FOV, resulting in rather chunky/large (similar to Oculus Rift) 4.5 arc-minutes/pixel, which I find somewhat poor (a high resolution display would be closer to 1 arc-minute/pixel).

The effect of the dual spherical combiners is to act as a magnifying mirror that also moves the focus point out in space so the user can focus. The amount of magnification and the apparent focus point is a function of A) the distance from the display to the combiner, B) the distance from the eye to the combiner, and C) the curvature.   I’m pretty familiar with this optical arrangement since the optical design I did at Navdy had a similarly curved combiner, but because the distances from the display to the combiner and from the eye to the combiner were so much greater, the curvature was less (larger radius).

I wonder if their very low angular resolution was a result of their design choice of the large spherical combiner and the OLED displays available to them.   To get the “focus” correct they would need a smaller (more curved) radius for the combiner, which also increases the magnification and thus the big chunky pixels.  In theory they could swap out the display for something with higher resolution, but it would take over doubling the horizontal resolution to have a decent angular resolution.

I would also be curious how well this large a plastic combiner will keep its shape over time. It is a coated mirror, and thus any minor perturbations are doubled.  Additionally, any strain in the plastic (and there is always stress/strain in plastic) will cause polarization effect issues, say when viewing an LCD monitor through it.   It is interesting because it is so different, although the basic idea has been around for a number of years, such as from a company called Link (see picture on the right).

Overall, Meta is bucking the trend toward smaller and lighter, and I find their angular resolution disappointing. The image quality based on some online see-through videos (see for example this video) is reasonably good, but you really can’t tell angular resolution from the video clips I have seen.  I do give them big props for showing REAL/TRUE videos through their optics.

It should be noted that their system, at $949 for a development kit, is about 1/3 the price of the Hololens and the ODG R-7 (each with only 720p per eye), though higher than the BT-300 at $750.   So at least on a relative basis, they look to be much more cost effective, if quite a bit larger.

Tilted Thin Flat or Slightly Curved (ODG)

With a wide FOV tilted combiner, the microdisplay and optics are located above in a “brow” with the plate tilted (about 45 degrees), as shown at left on an Osterhout Design Group (ODG) model R-7 with a 1280 by 720 pixel microdisplay per eye.   The R-7 has about a 37 degree FOV and a comparatively OK 1.7 arc-minutes/pixel angular resolution.

Tilted plate combiners have the advantage of being the simplest and least expensive way to provide a large field of view while being relatively lightweight.

The biggest drawback of the plate combiner is that it takes up a lot of volume/distance in front of the eye since the plate is tilted at about 45 degrees from front to back.  As the FOV gets bigger, the volume/distance required also increases.
ODG is now talking about a next model called “Horizon” (early picture at left). Note in the picture how the combiner (see red dots) has become much larger. They claim to have a >50 degree FOV, and with a 1920 x 1080 display per eye this works out to an angular resolution of about 1.6 arc-minutes/pixel, which is comparatively good.

Their combiner is bigger than absolutely necessary for the ~50 degree FOV.  Likely this is to get the edges of the combiner farther into a person’s peripheral vision to make them less noticeable.

The combiner is still tilted but it looks like it may have some curvature to it which will tend to act as a last stage of magnification and move the focus point out a bit.   The combiner in this picture is also darker than the one in the older R-7 combiner and may have additional coatings on it.

ODG has many years of experience and has done many different designs (for example, see this presentation on LinkedIn).  They certainly know about the various forms of flat optical waveguides, such as the one Microsoft’s Hololens is using, that I am going to be talking about next time.  In fact, Microsoft licensed patents from ODG for about $150M US.

Today, flat or slightly curved thin combiners like the one ODG is using are probably the best all-around technology in terms of size, weight, cost, and, perhaps most importantly, image quality.   Plate combiners don’t require the optical “gymnastics” and the level of technology and precision that the flat waveguides require.

Next time — High Tech Flat Waveguides

Flat waveguides using diffraction (DOE) and/or holographic optical elements (HOE) are what many think will be the future of combiners.  They certainly are the most technically sophisticated. They promise to make the optics thinner and lighter but the question is whether they have the optical quality and yield/cost to compete yet with simpler methods like what ODG is using on the R-7 and Horizon.

Microsoft and Magic Leap are each spending literally over $1B US, and both are going with some form of flat, thin waveguide. This is a subject in itself that I plan to cover next time.

 

Near Eye AR/VR and HUD Metrics For Resolution, FOV, Brightness, and Eyebox/Pupil

I’m planning on following up on my earlier articles about AR/VR Head Mounted Displays (HMD), which also relate to Heads Up Displays (HUD), with some more articles, but first I would like to get some basic technical concepts out of the way.  It turns out that the metrics we care about for projectors, while related, don’t work for measuring HMDs and HUDs.

I’m going to try and give some “working man’s” definitions rather than precise technical definitions.  I will give a few real world examples and calculations to show you some of the challenges.

Pixels versus Angular Resolution

Pixels are pretty well understood, at least with today’s displays that have physical pixels like LCDs, OLEDs, DLP, and LCOS.  Scanning displays like CRTs and laser beam scanning generally have additional resolution losses due to imperfections in the scanning process, and as my other articles have pointed out, they have much lower resolution than the physical pixel devices.

When we get to HUDs and HMDs, we really want to consider the angular resolution, typically measured in “arc-minutes,” which are 1/60th of a degree; simply put, this is the angular size that a pixel covers from the viewing position. Consumers in general haven’t understood arc-minutes, and so many companies have in the past talked in terms of a certain size and resolution display viewed from a given distance; for example a 60-inch diagonal 1080p display viewed at 6 feet. But since the size of the display, the resolution, and the viewing distance are all variables, it is hard to compare displays or understand what this even means for a near eye device.

A common “standard” for good resolution is 300 pixels per inch viewed at 12-inches (considered reading distance) which translates to about one-arc-minute per pixel.  People with very good vision can actually distinguish about twice this resolution or down to about 1/2 an arc-minute in their central vision, but for most purposes one-arc-minute is a reasonable goal.

One thing nice about the one-arc-minute per pixel goal is that the math is very simple.  Simply multiply the degrees in the FOV horizontally (or vertically) by 60 and you have the number of pixels required to meet the goal.  If you stray much below the goal, then you are into 1970’s era “chunky pixels”.
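
Here is that rule of thumb as a tiny sketch, including a check that 300 pixels per inch at 12 inches really does work out to roughly one arc-minute:

```python
import math

def pixels_for_one_arcmin(fov_deg):
    """Pixels needed across a FOV to hit the one-arc-minute-per-pixel goal."""
    return fov_deg * 60

# Check that 300 ppi viewed at 12 inches is roughly the same one-arc-minute goal:
pixel_pitch_inches = 1.0 / 300.0
arcmin = math.degrees(math.atan(pixel_pitch_inches / 12.0)) * 60.0
print(f"300 ppi at 12 inches ~ {arcmin:.2f} arc-minutes per pixel")
print(f"40 deg FOV needs {pixels_for_one_arcmin(40)} pixels across; 100 deg needs {pixels_for_one_arcmin(100)}")
```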

Field of View (FOV) and Resolution – Why 9,000 by 8,100 pixels per eye are needed for a 150 degree horizontal FOV

As you probably know, the human eye’s retina has variable resolution.  The human eye has a roughly elliptical FOV of about 150 to 170 degrees horizontally by 135 to 150 degrees vertically, but the generally good discriminating FOV is only about 40 degrees (+/-20 degrees) wide, with the reasonably sharp region, the macula, covering about 17-20 degrees and the fovea, with the very best resolution, covering only about 3 degrees of the eye’s visual field.   The eye/brain processing is very complex, however, and the eye moves to aim the higher resolving part of the retina at a subject of interest; one would want something on the order of the one-arc-minute goal in the central part of the display (and since building a variable resolution display would be a very complex matter, it ends up being the goal for the whole display).

Going back to our 60″ 1080p display viewed from 6 feet, the pixel size in this example is ~1.16 arc-minutes and the horizontal field of view will be about 37 degrees, or just about covering the generally good resolution part of the eye’s retina.


Image from Extreme Tech

Now let’s consider the latest Oculus Rift VR display.  It specs 1200 x 1080 pixels with about a 94 degree horizontal by 93 degree vertical FOV per eye, or a very chunky ~4.7 arc-minutes per pixel; in terms of angular resolution this is roughly like looking at an iPhone 6 or 7 from 5 feet away (or conversely, like your iPhone pixels being 5X as big).   To get to the 1 arc-minute per pixel goal of, say, viewing today’s iPhones at reading distance (say you want to virtually simulate your iPhone), they would need a 5,640 by 5,580 display per eye, or a single OLED panel with about 12,000 by 7,000 pixels (allowing for a gap between the eyes for the optics)!!!  If they wanted to cover the 150 by 135 degree FOV, we are then talking 9,000 by 8,100 per eye, or about a 20,000 by 9,000 flat panel requirement.
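
Here are the Rift numbers above as a worked sketch, using the per-eye resolution and FOV quoted in the paragraph:

```python
def arcmin_per_pixel(fov_deg, pixels):
    return fov_deg * 60.0 / pixels

def pixels_for_goal(fov_deg, goal_arcmin=1.0):
    return round(fov_deg * 60.0 / goal_arcmin)

# Oculus Rift, per eye: ~1200 pixels across ~94 degrees horizontally (figures above)
print(f"Rift: ~{arcmin_per_pixel(94, 1200):.1f} arc-min/pixel")
print(f"1 arc-min goal at 94x93 deg:   {pixels_for_goal(94)} x {pixels_for_goal(93)} pixels per eye")
print(f"1 arc-min goal at 150x135 deg: {pixels_for_goal(150)} x {pixels_for_goal(135)} pixels per eye")
```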

Not as apparent but equally important is that the optics to support these types of resolutions would be, if even possible, exceedingly expensive.   You need extremely high precision optics to bring the image into focus from such short range.   You can forget about the lower cost and weight Fresnel optics (with their “God ray” issues) used in the Oculus Rift.

We are into what I call “silly number territory” that will not be affordable for well beyond 10 years.  There are even questions as to whether any known technology could achieve these resolutions in a size that could fit on a person’s head, as there are a number of physical limits on pixel size.

People in gaming are apparently living with this appallingly low (1970’s-era TV game) angular resolution for games and videos (although the God rays can be very annoying depending on the content), but clearly it is not a replacement for a good high resolution display.

Now let’s consider Microsoft’s Hololens. Its most criticized issue is its smaller (relative to VR headsets such as Oculus) FOV of about 30 by 17.5 degrees.  It has a 1268 by 720 pixel display per eye, which translates into about 1.41 arc-minutes per pixel, which while not horrible is short of the goal above.   If they had used 1920×1080 (full HD) microdisplay devices, which are becoming available, then they would have been very near the 1 arc-minute goal at this FOV.

Let’s understand here that it is not as simple as changing out the display; they would also have to upgrade the “light guide” that they use as a combiner to support the higher resolution.   Still, this is all reasonably possible within the next few years.   Microsoft might even choose to grow the FOV to around 40 degrees horizontally and keep the lower angular resolution with a 1080p display.  Most people will not seriously notice a 1.4X angular resolution difference (but they will at about 2X).

Commentary on FOV

I know people want everything, but I really don’t understand the criticism of the FOV of Hololens.  What we can see here is a bit of “choose your poison.”  With existing affordable (or even not so affordable) technology you can’t support a wide field of view and simultaneously good angular resolution; it is simply not realistic.   One can imagine optics that would let you zoom between a wide FOV with lower angular resolution and a smaller FOV with higher angular resolution.  The control of this zooming function could perhaps be driven by the content or by feedback from the user’s eyes and/or brain activity.

Lumens versus Candelas/Meter2 (cd/m2 or nits)

With an HMD or HUD, what we care about is the light that reaches the eye.   In a typical front projector system, only an extremely small percentage of the light that goes out of the projector reflects off the screen and makes it back to any person’s eye; the vast majority of the light goes to illuminating the room.   With an HMD or HUD, all we care about is the light that makes it into the eye.

Projector lumens, or luminous flux, simply put, are a measure of the total light output, and for a projector this is usually measured when outputting a solid white image.   To get the light that makes it to the eye, we have to account for the light hitting a screen and then being absorbed, scattered, and reflected back at an angle that will reach the eye.  Only an exceedingly small percentage (a small fraction of 1%) of the projected light will make it into the eye in a typical front projector setup.

With HMDs and HUDs we talk about brightness in terms of candelas per square meter (cd/m2), also referred to as “nits” (while considered an obsolete term, it is still often used because it is easier to write and say).  Cd/m2 (or luminance) is a measure of brightness in a given direction, which tells us how bright the light appears to the eye looking in a particular direction.   For a good quick explanation of lumens and cd/m2, I would recommend a Compuphase article.


Hololens appears to be “luminosity challenged” (lacking in cd/m2) and has resorted to putting a sunglasses-like outer shield on, even for indoor use.  The light blocking shield is clearly a crutch to make up for a lack of brightness in the display.   Even with the shield, it can’t compete with bright light outdoors, which is 10 to 50 times brighter than a well lit indoor room.

This of course is not an issue for the VR headsets typified by Oculus Rift, since they totally block the outside light, but it is a serious issue for AR type headsets; people don’t normally wear sunglasses indoors.

Now let’s consider a HUD display.  A common automotive spec for a HUD in sunlight is 15,000 cd/m2, whereas a typical smartphone is between 500 and 600 cd/m2, or about 1/30th the luminance of what is needed.  When you are driving a car down the road, you may be driving in the direction of the sun, so you need a very bright display in order to see it.

The way HUDs work, you have a “combiner” (which may be the car’s windshield) that combines the image being generated with the light from the real world.  A combiner typically only reflects about 20% to 30% of the light, which means that the display before the combiner needs to produce on the order of 50,000 to 75,000 cd/m2 to support the 15,000 cd/m2 seen in the combiner.  When you consider that your smartphone or computer monitor only has about 400 to 600 cd/m2, it gives you some idea of the optical tricks that must be played to get a display image that is bright enough.
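
A quick sketch of that arithmetic, with the combiner reflectivity values assumed to be in the 20% to 30% range mentioned above:

```python
def required_source_luminance(target_cd_m2, combiner_reflectivity):
    """Luminance the display must produce before the combiner so that the reflected
    virtual image reaches the target luminance (ignores all other optical losses)."""
    return target_cd_m2 / combiner_reflectivity

target = 15_000  # cd/m2, a common automotive HUD spec for sunlight
for reflectivity in (0.30, 0.20):
    print(f"{int(reflectivity * 100)}% reflective combiner -> "
          f"{required_source_luminance(target, reflectivity):,.0f} cd/m2 at the source")
# 50,000 cd/m2 at 30% reflectivity and 75,000 cd/m2 at 20%, versus the
# few hundred cd/m2 of a typical phone or monitor.
```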

You will see many “smartphone HUDs” that simply have a holder for a smartphone and a combiner (semi-mirror), such as the one pictured at right on Amazon or on crowdfunding sites, but rest assured they will NOT work in bright sunlight and are only marginal in typical daylight conditions. Even with combiners that block more than 50% of the daylight (not really much of a see-through display at this point), they don’t work in daylight.   There is a reason why companies are making purpose built HUDs.

The cd/m2 requirement is also a big issue for outdoor head mounted display use. Depending on the application, they may need 10,000 cd/m2 or more, and this can become very challenging with some types of displays while keeping within the power and cooling budgets.

At the other extreme, at night or in a dark room you might want the display to have less than 100 cd/m2 to avoid blinding the user to their surroundings.  Note the SMPTE spec for movie theaters is only about 50 cd/m2, so even at 100 cd/m2 you would be about 2X the brightness of a movie theater.  If the device must go from bright sunlight to night use, you could be talking over a 1,500 to 1 dynamic range, which turns out to be a non-trivial challenge to do well with today’s LEDs or lasers.

Eye-Box and Exit Pupil

Since AR HMDs and HUDs generate images for a user’s eye in a particular place, yet need to compete with the ambient light, the optical system is designed to concentrate light in the direction of the eye.  As a consequence, the image will only be visible within a given solid angle, the “eye-box” (with HUDs) or “pupil” (with near eye displays).   There is a trade-off between the size of the eye-box or pupil and ease of use: the bigger the eye-box or pupil, the easier the device is to use.

With HUD systems there can be a pretty simple trade-off between eye-box size, cd/m2, and the lumens that must be generated.   Using some optical tricks can help keep from needing an extremely bright and power hungry light source.   Conceptually, a HUD is in some ways like a head mounted display but with very long eye relief. With such large eye relief and the ability of the person to move their whole head, the eyebox for a HUD has to be significantly larger than the exit pupil of near eye optics.  Because the eyebox is so much larger, a HUD is going to need much more light to work with.

For near eye optical design, getting a large exit pupil is a more complex issue as it comes with trade-offs in cost, brightness, optical complexity, size, weight, and eye-relief (how far the optics are from the viewer’s eye).

With too small a pupil and/or more eye relief, a near eye device is difficult to use, as any small movement of the device causes you to lose sight of part of the image.  Most people’s first encounter with an exit pupil is with binoculars or a telescope and how the image cuts off unless the optics are centered well on the user’s eye.

Conclusions

While I can see that people are excited about the possibilities of AR and VR technologies, I still have a hard time seeing how the numbers add up so to speak for having what I would consider to be a mass market product.  I see people being critical of Hololens’ lower FOV without being realistic about how they could go higher without drastically sacrificing angular resolution.

Clearly there can be product niches where the device could serve, but I think people have unrealistic expectations for how fast the field of view can grow for product like Hololens.   For “real work” I think the lower field of view and high angular resolution approach (as with Hololens) makes more sense for more applications.   Maybe game players in the VR space are more willing to accept 1970’s type angular resolution, but I wonder for how long.

I don’t see any technology that will be practical in high volume (or even very expensive at low volume) that is going to simultaneously deliver the angular resolution and FOV that some people want. AR displays are often brightness challenged, particularly for outdoor use.  Layered on top of these issues are size, weight, cost, and power consumption, which we will have to save for another day.

 

Wrist Projector Scams – Ritot, Cicret, the new eyeHand

Wrist projectors are the crowdfunding scam that keeps on giving, with new ones cropping up every 6 months to a year. When I say scam, I mean that there is zero chance that they will ever deliver anything even remotely close to what they are promising. They have obviously Photoshopped/fake pictures to “show” projected images that are not even close to possible in the real world and violate the laws of physics (they are forever impossible). While I have pointed out in this blog where I believe that Microvision has lied and misled investors and shown very fake images with their laser beam scanning technology, even they are not total scammers like Ritot, Cicret, and eyeHand.

According to Ritot’s Indiegogo campaign, they have taken in $1,401,510 from 8917 suckers (they call them “backers”).   Cicret according to their website has a haul of $625,000 from 10,618 gullible people.

Just when you think that Ritot and Cicret had found all the suckers for wrist projectors, now CrowdFunder reports that eyeHand has raised $585,000 from individuals and claims to have raised another $2,500,000 in equity from “investors” (if they are real then they are fools, if not, then it is just part of the scam). A million here, $500K there, pretty soon you are talking real money.

Apparently Dell’s marketing is buying into these scams (I would hope their technical people know better) and has shown video ads with a similar impossible projector.  One thing I will give them is that they did a more convincing “simulation” (no projecting of “black”), and they say in the ads that these are “concepts” and not real products. See for example the following stills from Dell’s videos (click to see a larger image).  It looks to me like they combined a real projected image (with the projector off camera and perpendicular to the arm/hand) and then added fake projector rays to try and suggest it came from the dummy device on the arm:

Ritot was the first of these scams I was alerted to, and I helped contribute some technical content to the DropKicker article http://drop-kicker.com/2014/08/ritot-projection-watch/. I am the “Reader K” thanked in the author’s note at the beginning of the article.  A number of others have called out Ritot and Cicret as scams, but that did not keep them from continuing to raise money, nor has it stopped the new copycat eyeHand scam.

Some of the key problems with the wrist projectors:

  1. Very shallow angle of projection.  Projectors normally project on a surface that is perpendicular to the direction of projection, but the wrist projectors have to project onto a surface that is nearly parallel to the direction of projection.  Their concepts show a projector that is only a few (2 to 4) millimeters above the surface. When these scammers later show “prototypes” they radically change the projection distance and projection angle.
  2. Extremely short projection distance.  The near side of the projection is only a few millimeters away while the far side of the image could be 10X or 50X further away.  There is no optics or laser scanning technology on earth that can do this.  There is no way to get such a wide image at such a short distance from the projector.  As light falls off with the square of distance, this results in an impossible illumination problem, with the far side being over 100X dimmer than the near side (see the falloff sketch after this list).
  3. Projecting in ambient light – All three of the scammers show concept images where the projected image is darker than the surrounding skin.  This is absolutely impossible and violates the laws of physics.   The “black” of the image is set by the ambient light and the skin; the projector can only add light, it is impossible to remove light with a projector.  This shows ignorance and/or a callous disregard for the truth by the scammers.
  4. The blocking of the image by hairs, veins, and muscles.  At such a shallow angle (per #1 above) everything is in the way.
  5. There is no projector small enough.  Existing projector engines with their electronics are more than 20X bigger in volume than what would be required to fit.
  6. The size of the orifice through which the light emerges is too small to support the size of the image that they want to project.
  7.  The battery required to make them daylight readable would be bigger than the whole projector that they show.  These scammers would have you believe that a projector could work off a trivially small battery.
  8. Cicret and eyeHand show “touch interfaces” that won’t work due to the shallow angle.  The shadows cast by fingers working the touch interface would block the light to the rest of the image and make “multi-touch” impossible.   This also goes back to the shallow angle issue #1 above.
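
To illustrate item 2, here is a minimal sketch of the inverse-square falloff across such a projection; the near/far dimensions are illustrative values I chose, not numbers from any of the campaigns:

```python
def relative_brightness(near_mm, far_mm):
    """Relative illumination of the far edge vs. the near edge of the projected image,
    using the inverse-square law alone (the oblique projection angle makes it worse)."""
    return (near_mm / far_mm) ** 2

near, far = 4.0, 100.0  # millimeters; illustrative numbers, not from any campaign
ratio = relative_brightness(near, far)
print(f"Far edge gets ~{ratio * 100:.2f}% of the near-edge illumination (~{1 / ratio:.0f}x dimmer)")
# With these assumed dimensions the far edge is ~625x dimmer, consistent with the
# "over 100X dimmer" point in item 2 above.
```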

The issues above hold true whether the projection technology uses DLP, LCOS, or Laser Beam Scanning.

Cicret and Ritot have both made “progress reports” showing stills and videos using projectors more than 20 times bigger and positioned much higher and farther away (to make the projection angle less shallow) than the sleek wristwatch models they show in their 3-D CAD renderings.   Even then, they keep off-camera much/most of the electronics and the battery/power supply needed to drive the optics that they show.

The image below is from a Cicret “prototype” video from February 2015 where they simply strapped a Microvision ShowWX+ HDMI upside down to a person’s wrist (I wonder how many thousands of dollars they spent engineering this prototype). They goofed in the video and showed enough of the projector that I could identify (red oval) the underside of the Microvision projector (the video also shows the distinctive diagonal roll bar of a Microvision LBS projector).  I have shown the rest of the projector, roughly to scale, in the image below, which they cropped off when shooting the video.  What you can’t tell in this video is that the projector is also a couple of inches above the surface of the arm in order to project a reasonable image.


So you might think Cicret was going to use laser beam scanning, but no, their October 2016 “prototype” shows a panel (DLP or LCOS) projector.  Basically, it looks like they are just clamping whatever projector they find to a person’s wrist; there is no technology they are developing.  In this latest case, it looks like what they have done is found a small production projector, taken its guts out, and put them in a 3-D printed case.  Note that the top of the case is going to be approximately 2 inches above a person’s wrist, and note how far away the image is from the projector.


Ritot also has made updates to keep their suckers on the hook.   Apparently Indiegogo’s only rule is that you must keep lying to your “backers” (for more on the subject of how Indiegogo condones fraud, click here).  These updates at best show how little these scammers understood projection technology.   I guess one could argue that they were too incompetent to know they were lying.

On the left is a “demo” Ritot showed in 2014 after raising over $1M.  It is simply an off-the-shelf development system projector, and note there is no power supply.  Note they are showing it straight on/perpendicular to the wrist from several inches away.

By 2015 Ritot had their own development system and some basic optics.  Notice how big the electronics board is relative to the optics, and that even this does not show the power source.

By April 2016 they showed an optical engine (ONLY) strapped to a person’s wrist.  Cut off in the picture are all the video drive electronics (see the flex cable in the red oval), which are off camera and likely include a driver board similar to the one in the 2015 update, plus the power supplies/battery.

In the April 2016 picture you should notice how the person’s wrist is bent to make it more perpendicular to the direction of the projected image.  Also note that the image is distorted and about the size of an Apple Watch’s display.   I will also guarantee that you will not have a decent viewable image when it is used outdoors in daylight.

The eyeHand scam has not shown anything like a prototype, just a poorly faked (projecting black) image.  At the low angle they show in their fake image, the projection would be blocked by the base of the thumb even if the person held their hand flat.  To make it work at all they would have to move the projector well up the person’s arm and then bend the wrist, but then the person could not view it very well unless they held their arm at an uncomfortable angle.  Then you have the problem of keeping the person from moving/relaxing their wrist and losing the projection surface.   And of course it would not be viewable outdoors in daylight.

It is not as if others have not tried to point out that these projectors are scams.  Google search “Ritot scam” or “Cicret scam” and you will find a number of references.  As best I can find, this blog is the first to call out the eyeHand scam:

  • The most technically in depth article was by Drop-Kicker on the Ritot scam
  • Captain Delusional has a comic take on the Cicret scam on YouTube – He has some good insights on the issue of touch control but also makes some technical mistakes, such as his comments on laser beam scanning (you can't remove the laser scanning roll-bar by syncing the camera — also, laser scanning has the same fall-off in brightness due to the scanning process).
  • Geek Forever had an article on the Ritot Scam 
  • A video about the Ritot Scam on Youtube
  • KickScammed about Ritot from 2014

The problem with scam startups is that they tarnish all the other startups trying to find a way to get started.  Unfortunately, the best liars/swindlers often do the best with crowdfunding.  The more they are willing to lie/exaggerate, the better it makes their product sound.

Indiegogo has proven time and again to have extremely low standards (basically, if the company keeps posting lies, they are good to go – MANY people tried to tell Indiegogo about the Ritot scam before Ritot got the funds, but to no avail). Kickstarter has some standards, but the bar is not that high; at least I have not seen a wrist projector on Kickstarter yet. Since the crowdfunding sites get a cut of the action whether the project delivers or not, their financial incentives are on the side of the companies rather than the people funding them. There is no bar at all for companies that go with direct websites; it is purely caveat emptor.

I suspect that since the wrist projector scam has worked at least three (3) times so far, we will see others using it.   At least with eyeHand you have a good idea of what it will look like in two years (hint – like Ritot and Cicret).

Laser Beam Scanning Versus Laser-LCOS Resolution Comparison

cen-img_9783-celluon-with-uo

Side By Side Center Patterns (click on image for full size picture)

I apologize for being away for so long.  The pictures above and below were taken over a year ago and I meant to format and publish them back then but some other business and life events got in the way.

The purpose of this article is to compare the resolution of the Celluon PicoPro Laser Beam Scanning (LBS) projector and the UO Smart Beam Laser LCOS projector.   This is not meant to be a full review of both products, although I will make a few comments here and there, but rather, it is to compare the resolution between the two products.  Both projectors claim to have 720P resolution but only one of them actually has that “native/real” resolution.

This is in a way a continuation of the series I have written about the PicoPro, which has optics developed by Sony and a beam scanning mirror and control electronics by Microvision; see in particular the articles http://wp.me/p20SKR-gY and http://wp.me/p20SKR-hf.  With this article I am now including some comparison pictures I took of the UO Smart Beam projector (https://www.amazon.com/UO-Smart-Beam-Laser-Projector-KDCUSA/dp/B014QZ4FLO).

As per my prior articles, the Celluon PicoPro has nowhere close to its stated 1920×720 (non-standard) resolution, nor even 1280×720 (720P).  The UO projector, while not perfect, does demonstrate 720P resolution reasonably well, but it does suffer from chroma aberrations (color separation) at the top of the image due to its 100% optical offset (this is to be expected to some extent).

Let me be up front: I worked on the LCOS panel used in the UO projector when I was at Syndiant, but I had nothing to do with the UO projector itself.   Take that as bias if you want, but I think the pictures tell the story.  I did not have any contact with either UO or Celluon in preparing this article.

I also want to be clear that both the UO projector and the Celluon PicoPro tested are now over 1 year old and there may have been improvements since then.  I saw serious problems with both products, in particular with the color balance: the Celluon is too red ("white" is pink) and the UO is very red deficient ("white" is significantly blue-green).   The color is so far off on the Celluon that it would be a show stopper for me ever wanting to buy one as a consumer (hopefully UO has fixed or will fix this).   Frankly, I think both projectors have serious flaws (if you want to know more, ask and I will write a follow-up article).

The UO Smart Beam has the big advantage of "100% offset," which means that when placed on a tabletop, it projects upward without hitting the table and without any keystone distortion.   The PicoPro has zero offset and shoots straight out.  If you put it flat on a table, the lower half of the image will shoot into the tabletop. Celluon includes a cheap and rather silly monopod that you can use to have the projector "float" above the table surface, and then you can tilt it up and get a keystoned image.  To take the picture, I had to mount the PicoPro on a much taller tripod and then shoot over the projector so the image would not be keystoned.

I understand that the next generation of the Celluon and the similar Sony MPCL1 projector (which has a "kickstand") have "digital keystone correction," which is not as good a solution as 100% offset because it reduces the resolution of the image; this is the "cheap/poor" way out, and they really should have 100% offset like the UO projector (interestingly, the earlier, lower resolution Microvision ShowWX projector had 100% offset).

For the record – I like the Celluon PicoPro's flatter form factor better; I'm not a fan of the UO cube as it hurts the ability to put the projector in one's pocket or a typical carrying bag.

Both the PicoPro with laser scanning and the Smart Beam with lasers illuminating an LCOS microdisplay have no focus knob and have a wide focus range (from about 50cm/1.5 feet to infinity), although they are both less sharp at the closer range.  The PicoPro with LBS is a Class 3R laser product, whereas the Smart Beam with laser "illumination" of LCOS is only Class 1.   The measured brightness of the PicoPro was about 32 lumens (as rated) when cold but dropped under 30 when heated up.  The UO, while rated at 60 lumens, was about 48 lumens when cold and about 45 when warmed up, significantly below its spec.

Now on to the main discussion of resolution.  The picture at the top of this article shows the center crop from a 720P test pattern generated by both projectors, with the Smart Beam image on the left and the PicoPro on the right.   There is also an inset of the Smart Beam's 1-pixel-wide test pattern placed near the PicoPro's 1-pixel-wide pattern for comparison.  The test pattern shows a series of 1-pixel-, 2-pixel-, and 3-pixel-wide horizontal and vertical lines.
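For anyone who wants to generate a similar chart themselves, below is a minimal sketch in Python using the Pillow imaging library. The exact layout of my chart is not reproduced here, so the positions and group sizes are illustrative assumptions only.

```python
# Minimal sketch of a 720P line test chart (assumed layout, not the exact chart used above).
# Requires the Pillow library: pip install Pillow
from PIL import Image

W, H = 1280, 720
img = Image.new("L", (W, H), 255)          # start with a white field
px = img.load()

def vertical_lines(x0, y0, width, count=8, height=60):
    """Draw 'count' black vertical lines 'width' pixels wide, spaced by 'width' white pixels."""
    for i in range(count):
        for dx in range(width):
            x = x0 + i * 2 * width + dx
            for y in range(y0, y0 + height):
                px[x, y] = 0

def horizontal_lines(x0, y0, width, count=8, length=60):
    """Draw 'count' black horizontal lines 'width' pixels tall, spaced by 'width' white pixels."""
    for i in range(count):
        for dy in range(width):
            y = y0 + i * 2 * width + dy
            for x in range(x0, x0 + length):
                px[x, y] = 0

# Put 1-, 2-, and 3-pixel-wide groups near the center; copies in the corners
# could be added the same way to test the worst-case areas of a projector.
for n, xoff in zip((1, 2, 3), (540, 610, 680)):
    vertical_lines(xoff, 300, n)
    horizontal_lines(xoff, 380, n)

img.save("res_chart_720p_sketch.png")      # lossless PNG so the 1-pixel features survive
```

Saving in a lossless format matters: JPG compression would smear the 1-pixel features before they ever reached the projector.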

What you should notice is that the UO clearly resolves even the 1-pixel-wide lines and its black lines are black, whereas on the Celluon the 1-pixel-wide lines are at best blurry and even the 2- and 3-pixel-wide lines do not get to a very good black level (in other words, the contrast is very poor).  Also keep in mind that the center is the very best case for the Celluon LBS, whereas for the UO with its 100% offset it is a medium case (the UO's best case is the lower center).

The worst case for both projectors is one of the upper corners, and below is a similar comparison of their upper right corners.  As before, I have included an inset of the UO's single-pixel image.

ur-img_9783-celluon-with-uo-overlay

Side By Side Upper Right Corner Patterns (click on image for full size picture)

What you should notice is that while there are still distinct 1-pixel-wide lines in both directions from the UO projector, the 1-pixel-wide lines from the Celluon LBS are a blurry mess.  Clearly it can't resolve 1-pixel-wide lines at 720P.

Because of the 100% offset optics, the best case for the UO projector is at the bottom of the image (this is true for almost any 100% offset optics), and this case is not much different from the center case for the Celluon projector (see below):

lcen-celluon-with-uo-overlay

Below is a side-by-side picture I took (click on it for a full size image). The camera's "white point" was set to an average between the two projectors (the Celluon is too red/blue-and-green deficient and the UO is red deficient). The image below is NOT what I used for the cropped test patterns above, as the 1-pixel features were too near the resolution limit of the Canon 70D camera (5472 by 3648 pixels).  So instead I used individual shots of each projector, roughly doubling the camera's sampling of each projected image.
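To put rough numbers on the sampling issue, here is a small sketch; the framing fractions below are my own illustrative assumptions, not measurements from the actual photos.

```python
# Rough estimate of camera pixels available per projected pixel.
# Assumes the projected image spans ~90% of the camera frame width in an individual
# shot and ~45% in the side-by-side shot; these fractions are illustrative guesses.
CAMERA_W = 5472          # Canon 70D horizontal resolution in pixels
IMAGE_W = 1280           # 720P projected image width in pixels

for label, fill_fraction in (("individual shot", 0.90), ("side-by-side shot", 0.45)):
    camera_px_per_image_px = CAMERA_W * fill_fraction / IMAGE_W
    print(f"{label}: ~{camera_px_per_image_px:.1f} camera pixels per projector pixel")

# individual shot:   ~3.8 camera pixels per projector pixel (comfortable oversampling)
# side-by-side shot: ~1.9 camera pixels per projector pixel (too close to the sampling
# limit to fairly judge 1-pixel-wide features)
```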

side-by-side-img_0339-celluon-uo

For the Celluon PicoPro image I used the picture below (originally taken in RAW but digitally lens-corrected, cropped, and later converted to JPG for posting – click on image for full size):

img_9783-celluon-with-uo-overlay

For the UO Smart Beam image, I used the following image (also taken in RAW, digitally lens-corrected, straightened slightly, cropped, and later converted to JPG for posting):

img_0231-uo-test-chart

As is my usual practice, I am including the test pattern (in lossless PNG format) below for anyone who wants to verify and/or challenge my results:

interlace res-chart-720P G100A

I promise I will publish any pictures by anyone that can show better results with the PicoPro or any other LBS projector (or the UO projector for that matter) using the test pattern (or similar) above (I went to considerable effort to take the best possible PicoPro image that I could with a Canon 70D camera).

Celluon Laser Beam Scanning Power Consumption (Over 6 Watts at 32 Lumens)

Celluon Power Measurements

On the left is a series of power measurements I made on the Celluon PicoPro projector, which has an optical engine designed by Sony using a Microvision scanning mirror.  The power was calculated from the voltage and current coming from the battery while using the HDMI input.

The first 6 measurements were taken with a solid image of the black/white/color indicated.  For the last 3 measurements I used an image with the left half black and the right half white, an image with the top half black and the bottom half white, and a screen of 1-pixel-wide vertical stripes.    The reason for the various colors/patterns was to gain some additional insight into the power consumption (this will be covered in a future article).  In addition to the power (in Watts), I added a column with the delta power relative to the black image.

Celluon PicoPro Battery IMG_8069

Picture of Celluon PicoPro Battery

The Celluon PicoPro consumes 2.57 Watts for a fully black image (there are color lines at the bottom, presumably for laser brightness calibration) and 6.14W for a 32-lumen full white image.   When you consider that a smartphone running with GPS only consumes about 2.5W and a smartphone LCD at full brightness consumes about 1W to 1.5W, over 6W is a lot of power (Displaymate has an excellent article on smartphone displays that includes power consumption).   The Celluon has a 3260mAh / 12.3Wh battery, which is bigger than what goes in even large smartphones (and fills most of the left side of the case).

So why does the Celluon unit not need a fan?  The answer is that A) it only outputs 32 lumens and B) it uses a lot of thermal management built into the case to spread the heat from the projector.  In the picture below I have shown some of the key aspects of the thermal management.  I have flipped the projector over and indicated with dashed rectangles where the thermal pads (a light blue color) contact the projector unit.  In addition to the cast aluminum body that holds the lasers and the optics and acts as a heat sink to spread the heat, there is gray flexible heat-spreading material lining the entire top and bottom of the case, plus a more hidden heat sink essentially dedicated to the lasers, as well as aluminum fins around the sides of the case.

2015-07-22_Case Heat Sinking 003

The heat-spreading material on the left (as viewed) top of the case is pretty much dedicated to the battery, but all the rest of the heat spreading, particularly along the bottom of the case, goes to the projector.

The most interesting feature is that there is a dedicated heat path from the area where the lasers are held in the cast body to a hidden heat sink chamber, or what I have nicknamed "the thermal corset."   You should notice that there are three (3) light blue heat pads on the right side of the case top and that the middle one is isolated from the other two.  This middle one is also thicker and goes through a hole in the main case body to a chamber that is filled with heat sink material and then covered by an outer case.   This also explains why the Celluon unit looks like it is in two parts from the outside.

Don't get me wrong, having a fanless projector is desirable, but it is not due to the "magic" of using lasers.  Quite the contrary, the Celluon unit has comparatively poor lumens per Watt, taking about double the power of what a similar DLP projector would need for the same lumens.

You may notice in the table that if you add up the "delta" red, green, and blue, the total is a lot more than the delta white.  The reason for this is that the Celluon unit never puts out "pure," fully saturated primary colors.  It always mixes in a significant amount of the other two colors (I have verified this with several methods, including using color filters over the output and using a spectrometer).    This has to be done (and is done with LED projectors as well) so that the colors called for by standard movies and pictures are not over-saturated (if you don't do this, green grass, for example, will look like it is glowing).

Another interesting result is that the device consumes more power with a pattern where the left half is black and the right half is white than with the top half black and the bottom half white.   This probably has something to do with laser heating and the lasers not getting a chance to cool down between lines.

I also put up a pattern with alternating 1 pixel wide vertical lines and it should be noted that the power is between that of the left/right half screen image and the full white image.

So what does this mean in actual use?   With "typical" movie content, the image averages about 25% to 33% (depending on the movie) of full white, so the projector will be consuming about 4 Watts, which with a 12.3Wh battery gives roughly 3 hours of use.   But if you are web browsing, the content is often more like 90% of full white, so it will be consuming about 6W, or 4 to 6 times what a typical smartphone display consumes.    Note this is before you add in the power consumed in getting and processing the data (say, from the internet).
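As a back-of-the-envelope check on those run times, below is a small sketch that assumes power scales roughly linearly with average picture level between the measured black and full-white figures; the 29% and 90% picture levels are the same rough assumptions used above.

```python
# Rough run-time estimate for the Celluon PicoPro from the measured power numbers.
# Assumes power scales linearly with average picture level (APL) between black and white,
# which is only an approximation of real LBS behavior.
P_BLACK = 2.57        # Watts, measured with a black image
P_WHITE = 6.14        # Watts, measured with a full-white (32 lumen) image
BATTERY_WH = 12.3     # Watt-hours (3260 mAh pack)

def runtime_hours(avg_picture_level):
    power = P_BLACK + avg_picture_level * (P_WHITE - P_BLACK)
    return BATTERY_WH / power, power

for label, apl in (("movie content (~29% APL)", 0.29), ("web browsing (~90% APL)", 0.90)):
    hours, power = runtime_hours(apl)
    print(f"{label}: ~{power:.1f} W -> ~{hours:.1f} hours")

# movie content (~29% APL): ~3.6 W -> ~3.4 hours
# web browsing (~90% APL):  ~5.8 W -> ~2.1 hours
```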

Conclusion

The Celluon projector may be fanless, but not because it is efficient.  From a product perspective, it does do a good job with its "thermal corset" of hiding/managing the heat.

This study works from the "top down" by measuring the power and seeing where the heat goes in the case; next time I plan to work up some "bottom-up" numbers to help show what causes the high power consumption and how it might change in the future.

Celluon/Sony/Microvision Optical Path

Celluon Light Path Labled KGOnTech

Today I'm going to give a bit of a guided tour through the Celluon optical path.  This optical engine was developed by Sony, probably based on Microvision's earlier work, and uses Microvision's scanning mirror.   I'm going to give a "tour" of the optics and then make some comments on what I see in terms of efficiency (light loss) and cost.

Referring to the picture above and starting with the lasers at the bottom, there are 5 of them (two each of red and green and one blue) mounted in a metal chassis (and not visible in the picture).   Each laser goes to its own beam-spreading and alignment lens set.  These lenses enlarge the diameter of each laser beam, and they are glued in place after alignment.  Note that the beams at this point are spread wider than the size of the scanning mirror and will be converged/focused back later in the optics.

Side Note: One reason for spreading the laser beams wider than the scanning mirror is to reduce the precision required of the optical components (making very small, high precision optics with no/extremely small defects becomes exponentially expensive).  But a better explanation is that it supports the despeckling process: with the wider beam they can pass the light through more different paths before focusing it back.  There is a downside to this, as seen in the Celluon output, namely that the beam is still too big when exiting the projector and thus the image is out of focus at short projection distances.

After the beam-spreading lenses there is a glass plate at a 45-degree angle that splits part of the light from the lasers down to a light sensor for each laser.   The light sensors give feedback on the output of each laser so it can be adjusted to compensate for changes with temperature and aging.

Side Note:  Laser heating and the changing of the laser output is a big issue with laser scanning. The lasers very quickly change in temperature/output.  In tests I have done, you can see the effect of bright objects on one side of the screen affecting the color on the other side of the screen in spite of the optical feedback.   

Most of the light from the sensor deflector continues into a structure of about 15 different pieces of optically coated solid glass elements glued together into a complex, many-faceted assembly. There are about 3 times as many surfaces/components as would be required for simply combining 3 laser beams.   This structure is used to combine the various colors into a single beam and includes some speckle-reducing structures.  As will be discussed later, having the light go through so many elements, each with its own optical losses (and cost), results in losing over half the light.

lenovo 21s crop

For reference, compare this to the optical structure shown in the Lenovo video for their prototype laser projector in a smartphone, at left (which uses an STMicro engine, as discussed further below).  There are just 3 lenses, 1 mirror (for red), two dichroic plate combiners to combine the green and blue, and a flat window. The Celluon/Sony/Microvision engine by comparison uses many more elements, and instead of simple plate combiners it uses prisms which, while having better optical performance, are considerably more expensive.  The Lenovo/STM engine does not show/have the speckle reduction elements or the distortion correction elements (its two-mirror scanning process inherently has less distortion) of the Celluon/Sony design.

Starting with the far left red laser light path, it goes to a "Half Mirror and 2nd Mirror" pair.   This two-mirror assembly is likely there for speckle reduction.  Speckle is caused by light interfering with itself, and having the light follow different path lengths (the light off the 2nd mirror follows a slightly longer path) reduces the speckle.  The next element is a red-pass/green-reflect dichroic mirror that combines the left red and green lasers, followed by a red&green-pass/blue-reflect dichroic combiner.

Then working from the right, there is another speckle reduction half-mirror/2nd-mirror pair for the right hand green laser followed by a green-pass/red-reflect dichroic mirror to combine the right side green and red lasers.  A polarizing combiner is (almost certainly) used to combine the 3 lasers on the left with the two lasers on the right into a single beam.

After the polarizing combiner there is a mirror that directs the combined light through a filter encased between two glass plates.  Most likely this filter either depolarizes or circularly polarizes the light because on exiting this section into the open air the previously polarized laser light has little if any linear polarization.   Next the light goes through a 3rd set of despeckling mirror pairs.   The light reflects off another mirror and exits into a short air gap.

Following the air gap there is a "Turning Block" that is likely part of the despeckling.   The material in the block probably has some light-scattering properties to vary the light path length slightly and thus reduce speckle, which is likely the reason for the size/thickness of the block.   There is a curved light entry surface that will have a lens effect.

Light exiting the Turning Block goes through a lens that focuses the spread light back down to a smaller beam that will reflect off the beam scanning mirror.  This lens also sets the way the beam diverges after it exits the projector.

After the converging lens, the light reflects off a mirror that sends it into the beam scanning mirror assembly.  The beam scanning mirror assembly, designed by Microvision, is its own complex structure and among other things has some strong magnets in it (supporting the magnetic mirror deflection).

Side Note: The STM/bTendo design in the Lenovo projector uses two simpler mirrors that each move in only one axis rather than a single complex mirror that has to move in two axes.  The STM mirrors likely both use a simple electrostatic-only design, whereas Microvision's dual-axis mirror uses electrostatic drive for one direction and electromagnetic drive for the other.

Finally, the light exits the projector via a Scanning Correction Lens that is made of plastic. It appears to be the only plastic optical element, at least among the elements that could be easily accessed.   Yes, even though this is a laser scanning projector, it still has a correction lens, in this case to correct the otherwise "bow-tie" distorted scanning process.

Cost Issues

In addition to the obvious cost of the lasers (and needing 5 of them rather than just 3) and the Scanning Mirror Assembly, there are a large number of optically coated glass elements.  Additionally, instead of using lower cost plate elements, the Celluon/Sony/Microvision engine uses much more expensive solid prisms for the combiner and despeckling elements.   Each of these has to be precisely made, coated, and glued together. The cost of each element is a function of its quality/optical efficiency, which can vary significantly, but I would think there would be at least $20 to $30 of raw cost in just the glass elements even at moderately high volumes (and it could be considerably more).

Then there is a lot to assemble, with precise alignment of all the various optics.  Finally, all of the lasers must be individually aligned after the rest of the unit has been assembled.

Optical Efficiency (>50% of the laser light is lost)

The light in the optical engine passes through and/or reflects off a large number of optical interfaces, and there are light losses at each of these interfaces.  It is "death by a thousand cuts": while each element might have only a 1% to 10% (or more) loss, the effects are multiplicative.   The use of solid rather than plate optics reduces the losses, but at added cost.  In the picture you can see spots of colored light on the walls of the chassis that have "escaped" the optical path and are lost.  You can also see light glowing off optical elements, including the lens; all of this is lost light.  The light that goes to the light sensors is also lost.
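To illustrate the multiplicative "death by a thousand cuts" effect, here is a tiny sketch; the per-element loss numbers are illustrative assumptions rather than measurements of this particular engine.

```python
# Compound transmission through a chain of optical elements.
# The individual loss numbers below are illustrative assumptions only.
losses = [0.02] * 10 + [0.05] * 4 + [0.10] * 1   # 15 elements: most lose ~2%, a few lose more

transmission = 1.0
for loss in losses:
    transmission *= (1.0 - loss)

print(f"Overall transmission: {transmission:.0%}, light lost: {1 - transmission:.0%}")
# Overall transmission: 60%, light lost: 40% -- and that is before counting the light
# diverted to the sensors, light that misses the scanning mirror, and scattering losses,
# which is how the total ends up past 50%.
```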

Celluon laser lable IMG_9715

Laser Warning Label From Celluon Case

Some percentage of the light that is spread will not be converged back onto the mirror.  Additionally, there are scattering losses in the Correction Lens and Turning block and in the rest of the optics.

When it is multiplied out, more than 50% of the laser light is lost in the optics.

This 50% light loss percentage agrees with the package labeling (see picture on the left), which says the laser light output for green is 50mW even though they are using two green lasers, each of which likely outputs 50mW or more.

Next Time: Power Consumption

The Celluon system consumes ~2.6 Watts to put up a "black" image and ~6.1 Watts to put up a 32-lumen white image.  The delta between white and black is about 3.5 Watts, or about 9 lumens per delta Watt from black to white.  For reference, the newer DLP projectors using LEDs can produce about double the delta lumens per Watt.  Next time, I plan on drilling down into the power consumption numbers.
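For those who want to check the arithmetic, the delta-lumens-per-Watt figure follows directly from the measurements above (the LED/DLP line simply applies the rough factor of two stated above; it is not an independent measurement):

```python
# Delta lumens per Watt: how many lumens each additional Watt of "image" power buys.
P_BLACK = 2.6        # Watts, black image
P_WHITE = 6.1        # Watts, full-white image
LUMENS_WHITE = 32    # measured full-white output in lumens

delta_watts = P_WHITE - P_BLACK                  # ~3.5 W of incremental "image" power
lm_per_delta_watt = LUMENS_WHITE / delta_watts   # ~9.1 lumens per delta Watt

print(f"Delta power: {delta_watts:.1f} W, {lm_per_delta_watt:.1f} lm per delta Watt")
print(f"LED/DLP reference (roughly 2x better): ~{2 * lm_per_delta_watt:.0f} lm per delta Watt")
```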

Lenovo’s STMicro Based Prototype Laser Projector (part 1)

Lenovo Tech World Projector 001

Lenovo, at their Tech World on May 27th, 2015, showed a Laser Beam Scanning (LBS) projector integrated into a cell phone prototype (to be clear, a prototype and not a product).   While there has been no announcement of the maker of the LBS projector, there is no doubt that it is made by STM, as I will show below (to give credit where it is due, this was first shown on a blog by Paul Anderson focused on Microvision).

ST-720p- to Lenove comparison 2

The comparison at left is based on a video by Lenovo that included exploded views of the projector and on pictures of STM's 720p projector from a Picoprojector-info.com article of Jan 18, 2013.   I have drawn lines comparing various elements, such as the size and placement of connectors and other components, the size and placement of the 3 major I.C.s, and even the silk-screened "STM" in the same place in both the Lenovo video and the STM article's photo (circled in yellow).

While there are some minor differences, there are so many direct matches that there can be no doubt that Lenovo is using STM.

The next interesting thing to consider is how this design compares to the LBS design of Microvision and Sony in the Celluon projector.   The Lenovo video shows the projector as being about 34mm by 26mm by 5mm thick.  To check this, I took the photo from the Picoprojector-info.com article and was able to fit the light engine and electronics into a 34mm by 26mm rectangle arranged as they are in the Lenovo video (yet one more verification that it is STM).   I then took a picture I had taken of the Celluon board, scaled it the same, and show the same 34x26mm rectangle on it.   The STM optics plus electronics are about 1/4 the area and 1/5th the volume (STM is 5mm thick versus Microvision/Sony's 7mm).

STM to Celluon TO SCALE 003
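As a quick consistency check on the area/volume claim (the Celluon footprint below is back-computed from the "about 1/4 the area" statement rather than measured directly):

```python
# Consistency check of the STM vs Microvision/Sony size comparison.
# The Celluon footprint is inferred from the "about 1/4 the area" statement above.
stm_area_mm2 = 34 * 26            # 884 mm^2 footprint from the Lenovo video
stm_vol_mm3 = stm_area_mm2 * 5    # 5 mm thick -> ~4,420 mm^3

celluon_area_mm2 = stm_area_mm2 * 4      # assumed ~4x the footprint
celluon_vol_mm3 = celluon_area_mm2 * 7   # 7 mm thick -> ~24,750 mm^3

print(f"STM: {stm_vol_mm3} mm^3, Celluon: {celluon_vol_mm3} mm^3, "
      f"ratio ~{celluon_vol_mm3 / stm_vol_mm3:.1f}x")
# ratio ~5.6x, i.e. roughly the "1/5th the volume" figure quoted above
```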

The Microvision/Sony engine probably has about double the lumens/brightness of the STM module due to having two green and two red lasers, and I have not had a chance to compare the image quality.   Taking out the extra two lasers would make the Microvision/Sony engine's optics/heat-sinking about 25% smaller and have a smaller impact on the board space, but this would still leave it over 3X bigger than STM.   The obvious next question is why.

One reason is that STM either has a simpler electronics design, a more integrated one, or some combination thereof.  In particular, the Microvision/Sony design requires an external DRAM (the large rectangular chip in the Microvision/Sony photo).    STM probably still needs DRAM, but it is likely integrated into one of their chips.

There are not a lot of details on the STM optics (developed by bTendo of Israel before being acquired by STM).   But what we do know is that STM uses separate, simpler, and smaller horizontal and vertical mirrors versus Microvision's significantly larger and more complex single-mirror assembly.  Comparing the photos above, the Microvision mirror assembly alone is almost as big as STM's entire optical engine with lasers.   The Microvision mirror assembly has a lot of parts other than the MEMS mirror, including some very strong magnets.  Generally, the optical path of the Microvision engine requires a lot of space for the light to enter and exit the Microvision mirror from the "right" directions.

btendo optics

On the right I have captured two frames from the Lenovo video showing the optics from two directions.  What you should notice is that the mirror assembly is perpendicular to the incoming laser light.  There appears to be a block of optics (pointed to by the red arrow in the two pictures) that redirects the light down to the first mirror and then returns it to the second mirror.  The horizontal scanning mirror is clearly shown in the video, but the location of the vertical scanning mirror is not clear (so I took an educated guess).

Also shown at the right is bTendo patent 8,228,579 showing the path of light for their two-scanning-mirror design.   It does not show the more complex block of optics required to direct the light down to the vertical mirror, then redirect it back down to the horizontal mirror and then out, as would be required in the Lenovo design.    You might also notice that there is a flat, clear glass/plastic output cover shown at the 21-second point in the video; this is very different from the Microvision/Celluon/Sony design shown below.

Microvision mirror with measurements

Microvision Mirror Assembly and Exit Lens

Shown at left is the Microvision/Celluon beam scanning mirror and the "Exit" Lens.   First, notice the size and complexity of the scanning mirror assembly with its magnets and coils.  You can see the single round mirror with its horizontal hinge (green arrow) and the vertical hinge (yellow arrow) on the larger oval yoke.   The single mirror/pivot point causes an inherently bow-tied image.  You can see how distorted the mirror looks through the Exit Lens (see red arrow); this is caused by the exit lens correcting for the bow-tie effect.  This significant corrective lens is also a likely source of chroma aberrations in the final image.

Conclusions

All the above does not mean that the Lenovo/STM unit is going to be a successful product.   I have not had a chance to evaluate the Lenovo projector, and I still have serious reservations about any embedded projector succeeding in a cell phone (I outlined my reasons in an August 2013 article and I think they still hold true).    Being less than 1/5th the volume of the Microvision/Sony design is necessary but, I don't think, sufficient.

This comparison only shows that the STM design is much smaller than Microvision's.  Microvision has made only relatively small incremental progress in size since the ShowWX (announced in 2009), and Sony has not improved on it much, at least so far.

IRIS HUD on Indiegogo Appears to be Repackaged Pioneer HUD(s)

The startup IRIS has started an Indiegogo presale campaign for not just one (a major challenge for a new company) but two different HUD designs, one "laser" and one DLP based.    Their video and "story" talk about how they designed this HUD and even show some CAD pictures, 3-D printing (of what?), and a CNC milling machine (but not what is being made).

The problem is that this "new" unit looks almost identical at every point to the Pioneer HUD announced and shipped in Japan in 2012 (with a slightly updated version in 2013); see, for example, "The Verge" article from May 2012.   Pioneer's model was also a "Laser HUD" and used a Microvision beam scanning mirror and laser control electronics.

Then in late 2013 Pioneer introduced a less expensive model based on Texas Instruments' DLP that I wrote about on Seeking Alpha.   And lo and behold, IRIS also has a DLP version.  Where the Laser version was sold with Pioneer's proprietary navigation system, the DLP version was sold in Europe and connects to a smartphone.

According to IRIS’s Indiegogo campaign,

This limited quantity of Laser (30) and DLP (300) units are being assembled and will be ready to ship at the end of the campaign.  

Assuming IRIS is actually going to deliver these products (that is always a big "if" for a new high-tech product on Indiegogo), the only rational conclusion is that they are shipping Pioneer's unsold inventory of at least the Laser and DLP engines, if not whole systems.

Below are a series of comparison photos with alternating photos of the IRIS HUD and the Pioneer Laser HUD.   I have drawn lines connecting corresponding elements between the IRIS and Pioneer HUDs.   I will go into some more of the business issues after the photos.

IRIS Pioneer Comparison 003

IRIS does claim to be adding features that were not in either the Laser or DLP based Pioneer systems; specifically, they say they are adding "gesture recognition" and a connection to the OBD (on-board diagnostics) port.   Being, I think, most generous, it could be that they are taking the old unsold Pioneer units and modifying them.   I could be OK with this, but I am always a bit distrustful when I catch someone fudging on what they did.

Pioneer DLP hud2

Interestingly, the Pioneer DLP HUD (left), while it worked with smartphones as does IRIS's HUD, looks quite different and is optically different in just about every way but the combiner.   The Pioneer Laser HUD rear-projected onto a screen behind a large plastic lens that is then viewed via the combiner (the "combiner" is the large, curved, mostly transparent but slightly mirrored plastic lens at the front of the unit).  The Pioneer DLP HUD front-projects onto a screen that is then seen reflected in, and magnified by, the combiner.

Additionally, the Pioneer Laser HUD required you to remove your sun visor to mount the unit, whereas their DLP HUD strapped to the sun visor (see the photo above).   This got me curious how they could be selling two radically different designs that also mounted differently while showing a single product, so I posted the following question and got the response below on IRIS's Facebook page:

Karl Guttag: "Is the case and mounting the same for the Laser and the DLP versions of the product?"

IRIS: "Yes, Absolutely the same!"

I guess it is possible that they took Pioneer Laser HUD cases and reworked/redesigned them to fit the DLP and added gesture recognition and OBD.   That would seem to me to be a pretty major effort for a small team with little known funding.

Yet they say they are going to ship units at the end of their Indiegogo campaign this month which would suggest they have them in-stock.   If they are so close to having real product, then I would have expected them to be out there demonstrating them to reviewers and not just showing the carefully staged video on Indiegogo.   Maybe they have something, but maybe it does not work very well.   Something just does not seem to add up.

BTW, I have had the opportunity to see both the Pioneer Laser and DLP based HUDs.  Frankly, neither one seems very practical.  The Laser HUD requires you to remove your sun visor to mount it, and they give you a small replacement sun visor that only goes up and down (it can't block your side window and does not cover enough).  Additionally, unless you are very short, the combiner tends to cut through your critical forward vision.   The DLP version was worse in that it mounted below the sun visor and totally blocks the forward vision if you are tall and/or your seat adjusts to a high position.  The bottom line: there are reasons why the Pioneer units did not sell well.

Disclaimer:

I previously worked as CTO of Navdy, which is also developing an aftermarket HUD product and could be seen as a competitor to IRIS.   I currently have no financial interest in Navdy.   Because of my prior position at Navdy and knowledge of non-public information, it is not appropriate for me to comment on their product.