AR/MR Optics for Combining Light for a See-Through Display (Part 1)

In general, people find the combining of an image with the real world somewhat magical; we see this with heads-up displays (HUDs) as well as Augmented/Mixed Reality (AR/MR) headsets.  Unlike Star Wars' R2-D2 projecting into thin air, which was pure movie magic (i.e. fake/impossible), light rays need something to bounce off to redirect them into a person's eye from the image source.  We call the optical device that combines the computer image with the real world a "combiner."

In effect, a combiner works like a partial mirror.  It reflects or redirects the display light to the eye while letting light through from the real world.  This is not, repeat not, a hologram, which it is being mistakenly called by several companies today.  Over 99% of what people call "holograms" today are not holograms, but rather simple optical combining (also known as the Pepper's Ghost effect).

I'm only going to cover a few of the more popular/newer/more-interesting combiner examples.  For a more complete and more technical survey, I would highly recommend a presentation by Kessler Optics. My goal here is not to make anyone an optics expert but rather to give insight into what companies are doing and why.

With headsets, the display device(s) is too near for the human eye to focus, and there are other issues such as making a big enough "pupil/eyebox" so that the alignment of the display to the eye is not overly critical. With one exception (the Meta 2), there are separate optics that move the apparent focus point out (usually they try to put it in a person's "far" vision, as this is more comfortable when mixing with the real world).  In the case of Magic Leap, they appear to be taking the focus issue to a new level with "light fields" that I plan to discuss in the next article.

With combiners there is both the effect you want, i.e. redirecting the computer image into the person's eye, and the potentially undesirable effects the combiner will cause in seeing through it to the real world.  A partial list of the issues includes:

  1. Dimming
  2. Distortion
  3. Double/ghost images
  4. Diffraction effects of color separation and blurring
  5. Seeing the edge of the combiner

In addition to the optical issues, the combiner adds weight, cost, and size.  Then there are aesthetic issues, particularly how they make the user's eyes look and whether they affect how others see the user's eyes; humans are very sensitive to how other people's eyes look (see the Epson BT-300 below as an example).

FOV and Combiner Size

There is a lot of desire to support a wide Field Of View (FOV), and for combiners a wide FOV means the combiner has to be big.  The wider the FOV and the farther the combiner is from the eye, the bigger the combiner has to get (there is no way around this fact; it is a matter of physics).   One way companies "cheat" is to not support a person wearing their glasses at all (as Google Glass did).

The simple (not taking everything into effect) equation (in Excel) to compute the minimum width of a combiner is =2*TAN(RADIANS(A1/2))*B1 where A1 is the FOV in degrees and B1 is the distance from the eye to the farthest part of the combiner.  Glasses typically sit about 0.6 to 0.8 inches from the eye, and to clear the glasses and their frames you want about 1.2 inches or more of eye relief. For a 40 degree wide FOV at 1.2 inches this translates to 0.9″, at 60 degrees 1.4″, and for 100 degrees it is 2.9″, which starts becoming impractical (typical lenses on glasses are about 2″ wide).

For very wide FOV displays (over 100 degrees), the combiner has to be so near your eye that supporting glasses becomes impossible. The formula above will let you try your own assumptions.
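Here is the same calculation as a short Python sketch (a direct translation of the Excel formula above, using the eye-relief distances just discussed):

```python
import math

def min_combiner_width(fov_deg, distance_in):
    """Minimum combiner width (inches) for a given horizontal FOV (degrees)
    and distance from the eye to the farthest part of the combiner (inches)."""
    return 2 * math.tan(math.radians(fov_deg / 2)) * distance_in

# Examples from the text, with 1.2" of eye relief (enough to clear glasses)
for fov in (40, 60, 100):
    print(f"{fov} deg FOV -> {min_combiner_width(fov, 1.2):.1f} inches wide")
# 40 deg -> 0.9", 60 deg -> 1.4", 100 deg -> 2.9"
```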

Popular/Recent Combiner Types (Part 1)

Below, I am going to go through the most common beam combiner options.  I'm going to start with the simpler/older combiner technologies and work my way to the "waveguide" beam splitters of some of the newest designs in Part 2.  I'm going to try and hit on the main types, but there are many big and small variations within a type.

Solid Beam Splitter (Google Glass and Epson BT-300)

These are often used with a polarizing beam splitter when using LCOS microdisplays, but they can also use simple mirrors.  They are generally kept small due to weight and cost issues, as with the Google Glass at left.  Due to their small size, the user will see the blurry edges of the beam splitter in their field of view, which is considered highly undesirable.  Also, as seen in the Epson BT-300 picture (at right), they can make a person's eyes look strange.  As seen with both the Google Glass and the Epson, they have been used with the projector engine(s) on the sides.

Google Glass has only about a 13 degree FOV (and did not support using a person's glasses) and about 1.21 arc-minutes/pixel angular resolution, which is on the small end compared to most other headset displays.    The BT-300 is about 23 degrees horizontally (and has enough eye relief to support most glasses) and has dual 1280×720 pixel displays, one per eye, giving it a 1.1 arc-minutes/pixel angular resolution.  Clearly these are on the low end of what people are expecting in terms of FOV, and the solid beam splitter quickly becomes too large, heavy, and expensive as the FOV grows.  Interestingly, both are on the small end of their apparent pixel size.
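As a quick check on those angular resolution numbers, a minimal sketch (assuming the commonly reported 640×360 panel for Google Glass; the BT-300's 1280×720 is Epson's spec):

```python
def arcmin_per_pixel(fov_deg, pixels):
    """Angular resolution in arc-minutes per pixel, treating the FOV
    as spread evenly across the pixels (a simplification)."""
    return fov_deg * 60 / pixels

print(arcmin_per_pixel(13, 640))   # Google Glass: ~1.2 arc-min/pixel
print(arcmin_per_pixel(23, 1280))  # Epson BT-300: ~1.1 arc-min/pixel
```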

Spherical/Semi-Spherical Large Combiner (Meta 2)

While most of the AR/MR companies today are trying to make flatter combiners to support a wide FOV with small microdisplays for each eye, Meta has gone in the opposite direction with dual very large semi-spherical combiners and a single flat OLED panel to support an "almost 90 degree FOV". Note in the picture of the Meta 2 device that there are essentially two hemispheres integrated together with a single large OLED flat panel above.

Meta 2 uses a 2560 by 1440 pixel display that is split between the two eyes.  Allowing for some overlap, there will be about 1200 pixels per eye to cover the 90 degree FOV, resulting in rather chunky/large (similar to Oculus Rift) 4.5 arc-minutes/pixel, which I find somewhat poor (a high resolution display would be closer to 1 arc-minute/pixel).

The effect of the dual spherical combiners is to act as a magnifying mirror that also moves the focus point out in space so the user can focus. The amount of magnification and the apparent focus point are a function of (A) the distance from the display to the combiner, (B) the distance from the eye to the combiner, and (C) the curvature.   I'm pretty familiar with this optical arrangement since the optical design I did at Navdy had a similarly curved combiner, but because the distances from the display to the combiner and from the eye to the combiner were so much greater, the curvature was less (larger radius).

I wonder if their very low angular resolution was a result of their design choice of the large spherical combiner and the OLED displays available that they could use.   To get the "focus" correct they would need a smaller (more curved) radius for the combiner, which also increases the magnification and thus the big chunky pixels.  In theory they could swap out the display for something with higher resolution, but it would take over doubling the horizontal resolution to have a decent angular resolution.

I would also be curious how well this large a plastic combiner will keep its shape over time. It is a coated mirror, and thus any minor perturbations are doubled.  Additionally, any strain in the plastic (and there is always stress/strain in plastic) will cause polarization issues, say when viewing an LCD monitor through it.   It is interesting because it is so different, although the basic idea has been around for a number of years, such as from a company called Link (see picture on the right).

Overall, Meta is bucking the trend toward smaller and lighter, and I find their angular resolution disappointing. The image quality based on some online see-through videos (see for example this video) is reasonably good, but you really can't tell angular resolution from the video clips I have seen.  I do give them big props for showing real/true video through their optics.

It should be noted that their system at $949 for a development kit is about 1/3 the price of Hololens and the ODG R-7, which have only 720p per eye, but higher than the BT-300 at $750.   So at least on a relative basis, they look to be much more cost effective, if quite a bit larger.

Tilted Thin Flat or Slightly Curved (ODG)

With a wide FOV tilted combiner, the microdisplay and optics are located above in a "brow" with the plate tilted (about 45 degrees), as shown at left on an Osterhout Design Group (ODG) model R-7 with a 1280 by 720 pixel microdisplay per eye.   The R-7 has about a 37 degree FOV and a comparatively OK 1.7 arc-minutes/pixel angular resolution.

Tilted plate combiners have the advantage of being the simplest and least expensive way to provide a large field of view while being relatively light weight.

The biggest drawback of the plate combiner is that it takes up a lot of volume/distance in front of the eye since the plate is tilted at about 45 degrees from front to back.  As the FOV gets bigger, the volume/distance required also increases.
ODG is now talking about a next model called "Horizon" (early picture at left). Note in the picture how the combiner (see red dots) has become much larger. They claim a >50 degree FOV, and with a 1920 x 1080 display per eye this works out to an angular resolution of about 1.6 arc-minutes/pixel, which is comparatively good.

Their combiner is bigger than absolutely necessary for the ~50 degree FOV.  Likely this is to get the edges of the combiner farther into a person’s peripheral vision to make them less noticeable.

The combiner is still tilted, but it looks like it may have some curvature to it, which will tend to act as a last stage of magnification and move the focus point out a bit.   The combiner in this picture is also darker than the one on the older R-7 and may have additional coatings on it.

ODG has many years of experience and has done many different designs (for example, see this presentation on LinkedIn).  They certainly know about the various forms of flat optical waveguides, such as the one Microsoft's Hololens is using, that I am going to be talking about next time.  In fact, Microsoft licensed patents from ODG for about $150M US.

Today, flat or slightly curved thin combiners like ODG is using are probably the best all-around technology in terms of size, weight, cost, and perhaps most importantly image quality.   Plate combiners don't require the optical "gymnastics" and the level of technology and precision that the flat waveguides require.

Next time — High Tech Flat Waveguides

Flat waveguides using diffractive optical elements (DOE) and/or holographic optical elements (HOE) are what many think will be the future of combiners.  They certainly are the most technically sophisticated. They promise to make the optics thinner and lighter, but the question is whether they have the optical quality and yield/cost to compete yet with simpler methods like what ODG is using on the R-7 and Horizon.

Microsoft and Magic Leap are each spending literally over $1B US, and both are going with some form of flat, thin waveguide. This is a subject unto itself that I plan to cover next time.


Near Eye AR/VR and HUD Metrics For Resolution, FOV, Brightness, and Eyebox/Pupil

I'm planning on following up on my earlier articles about AR/VR Head Mounted Displays (HMD), which also relate to Heads-Up Displays (HUD), with some more articles, but first I would like to get some basic technical concepts out of the way.  It turns out that the metrics we care about for projectors, while related, don't work for measuring HMDs and HUDs.

I'm going to try and give some "working man's" definitions rather than precise technical definitions.  I'll be giving a few real-world examples and calculations to show some of the challenges.

Pixels versus Angular Resolution

Pixels are pretty well understood, at least with today's displays that have physical pixels like LCDs, OLEDs, DLP, and LCOS.  Scanning displays like CRTs and laser beam scanning generally have additional resolution losses due to imperfections in the scanning process, and as my other articles have pointed out, they have much lower resolution than the physical pixel devices.

When we get to HUDs and HMDs, we really want to consider the angular resolution, typically measured in "arc-minutes," which are 1/60th of a degree; simply put, this is the angular size that a pixel covers from the viewing position. Consumers in general haven't understood arc-minutes, and so many companies have in the past talked in terms of a certain size and resolution display viewed from a given distance; for example, a 60-inch diagonal 1080p display viewed at 6 feet. But since the size of the display, the resolution, and the viewing distance are all variables, it is hard to compare displays or even know what this means for a near eye device.

A common "standard" for good resolution is 300 pixels per inch viewed at 12 inches (considered reading distance), which translates to about one arc-minute per pixel.  People with very good vision can actually distinguish about twice this resolution, down to about 1/2 an arc-minute in their central vision, but for most purposes one arc-minute is a reasonable goal.

One thing nice about the one-arc-minute-per-pixel goal is that the math is very simple.  Simply multiply the degrees in the FOV horizontally (or vertically) by 60 and you have the number of pixels required to meet the goal.  If you stray much below the goal, then you are into 1970's era "chunky pixels".
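Here is that rule of thumb as a minimal sketch (the example FOVs are the Oculus Rift and full human FOV figures used below):

```python
def pixels_for_goal(fov_deg, arcmin_per_pixel=1.0):
    """Pixels needed across a FOV to hit a given arc-minutes/pixel goal."""
    return round(fov_deg * 60 / arcmin_per_pixel)

print(pixels_for_goal(94), pixels_for_goal(93))    # Rift-like per-eye FOV: 5640 x 5580
print(pixels_for_goal(150), pixels_for_goal(135))  # full human FOV: 9000 x 8100
```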

Field of View (FOV) and Resolution – Why 9,000 by 8,100 pixels per eye are needed for a 150 degree horizontal FOV

As you probably know, the human eye's retina has variable resolution.  The human eye has a roughly elliptical FOV of about 150 to 170 degrees horizontally by 135 to 150 degrees vertically, but the generally good discriminating FOV is only about 40 degrees (+/-20 degrees) wide; the reasonably sharp vision of the macula covers about 17-20 degrees, and the fovea, with the very best resolution, covers only about 3 degrees of the eye's visual field.   The eye/brain processing is very complex, however, and the eye moves to aim the higher-resolving part of the retina at a subject of interest; one would want something on the order of the one-arc-minute goal in the central part of the display (and since building a variable resolution display would be a very complex matter, it ends up being the goal for the whole display).

Going back to our 60″, 1080p display viewed from 6 feet: the pixel size in this example is ~1.16 arc-minutes, and the horizontal field of view will be about 37 degrees, or just about covering the generally good resolution part of the eye's retina.

(Oculus Rift image from Extreme Tech)

Now let's consider the latest Oculus Rift VR display.  It specs 1200 x 1080 pixels with about a 94 degree horizontal by 93 degree vertical FOV per eye, or a very chunky ~4.7 arc-minutes per pixel; in terms of angular resolution this is roughly like looking at an iPhone 6 or 7 from 5 feet away (or conversely, like your iPhone's pixels being 5X as big).   To get to the 1 arc-minute per pixel goal of, say, viewing today's iPhones at reading distance (say you want to virtually simulate your iPhone), they would need a 5,640 by 5,580 display per eye, or a single OLED display with about 12,000 by 7,000 pixels (allowing for a gap between the eyes for the optics)!!!  If they wanted to cover the 150 by 135 degree FOV, we are then talking 9,000 by 8,100 per eye, or about a 20,000 by 9,000 flat panel requirement.

Not as apparent but equally important is that the optics to support these types of resolutions would be, if even possible, exceedingly expensive.   You need extremely high precision optics to bring the image into focus from such short range.   You can forget about the lower cost and weight Fresnel optics (with their "God ray" issues) used in the Oculus Rift.

We are into what I call "silly number territory" that will not be affordable for well beyond 10 years.  There are even questions whether any known technology could achieve these resolutions in a size that could fit on a person's head, as there are a number of physical limits to the pixel size.

People in gaming are apparently living with this appallingly low (1970's era TV game) angular resolution for games and videos (although the God rays can be very annoying depending on the content), but clearly it is not a replacement for a good high resolution display.

Now let's consider Microsoft's Hololens. Its most criticized issue is its smaller (relative to VR headsets such as Oculus) FOV of about 30 by 17.5 degrees.  It has a 1268 by 720 pixel display per eye, which translates into about 1.41 arc-minutes per pixel; while not horrible, this is short of the goal above.   If they had used the 1920x1080 (full HD) microdisplay devices which are becoming available, they would have been very near the 1 arc-minute goal at this FOV.

Let's understand here that it is not as simple as changing out the display; they would also have to upgrade the "light guide" that they use as a combiner to support the higher resolution.   Still, this is all reasonably possible within the next few years.   Microsoft might even choose to grow the FOV to around 40 degrees horizontally and keep the lower angular resolution with a 1080p display.  Most people will not seriously notice a 1.4X angular resolution difference (but they will at about 2X).

Commentary on FOV

I know people want everything, but I really don't understand the criticism of the FOV of Hololens.  What we can see here is a bit of "choose your poison."  With existing affordable (or even not so affordable) technology you can't support a wide field of view and simultaneously good angular resolution; it is simply not realistic.   One can imagine optics that would let you zoom between a wide FOV with lower angular resolution and a smaller FOV with higher angular resolution.  The control of this zooming function could perhaps be driven by the content or by feedback from the user's eyes and/or brain activity.

Lumens versus Candelas/Meter2 (cd/m2 or nits)

With an HMD or HUD, what we care about is the light that reaches the eye.   In a typical front projector system, only an extremely small percentage of the light that goes out of the projector reflects off the screen and makes it back to any person's eye; the vast majority of the light goes to illuminating the room.   With an HMD or HUD, all we care about is the light that makes it into the eye.

Projector lumens, or luminous flux, simply put, are a measure of the total light output, usually measured for a projector when outputting a solid white image.   To get the light that makes it to the eye, we have to account for the light that hits the screen and is then absorbed, scattered, and reflected back at an angle that will reach the eye.  Only an exceedingly small percentage (a small fraction of 1%) of the projected light will make it into the eye in a typical front projector setup.

With HMDs and HUDs we talk about brightness in terms of candelas per square meter (cd/m2), also referred to as "nits" (while considered an obsolete term, it is still often used because it is easier to write and say).  Cd/m2 (luminance) is a measure of brightness in a given direction, which tells us how bright the light appears to the eye looking in a particular direction.   For a good quick explanation of lumens and cd/m2, I would recommend a Compuphase article.


Hololens appears to be "luminosity challenged" (lacking in cd/m2) and has resorted to putting a sunglasses-like outer shield on, even for indoor use.  The light blocking shield is clearly a crutch to make up for a lack of brightness in the display.   Even with the shield, it can't compete with bright light outdoors, which is 10 to 50 times brighter than a well lit indoor room.

This of course is not an issue for the VR headsets typified by the Oculus Rift, which totally block the outside light, but it is a serious issue for AR type headsets; people don't normally wear sunglasses indoors.

Now let's consider a HUD display.  A common automotive spec for a HUD in sunlight is 15,000 cd/m2, whereas a typical smartphone is between 500 and 600 cd/m2, or about 1/30th the luminosity of what is needed.  When you are driving a car down the road, you may be driving in the direction of the sun, so you need a very bright display in order to see it.

The way HUDs work, you have a "combiner" (which may be the car's windshield) that combines the image being generated with the light from the real world.  A combiner typically only reflects about 20% to 30% of the light, which means that the display before the combiner needs on the order of 50,000 to 75,000 cd/m2 to support the 15,000 cd/m2 as seen in the combiner.  When you consider that your smartphone or computer monitor only has about 400 to 600 cd/m2, it gives you some idea of the optical tricks that must be played to get a display image that is bright enough.
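A sketch of that combiner arithmetic (the 15,000 cd/m2 target and 20% to 30% reflectivity are the ballpark figures from above):

```python
def display_nits_needed(target_nits, combiner_reflectivity):
    """Luminance the display must produce before the combiner so the
    viewer sees target_nits after the partial reflection."""
    return target_nits / combiner_reflectivity

for r in (0.30, 0.20):
    print(f"{r:.0%} reflective -> {display_nits_needed(15_000, r):,.0f} cd/m2")
# 30% -> 50,000 cd/m2; 20% -> 75,000 cd/m2
```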

You will see many "smartphone HUDs" that simply have a holder for a smartphone and a combiner (semi-mirror), such as the one pictured at right on Amazon or those on crowdfunding sites, but rest assured they will NOT work in bright sunlight and will be only marginal in typical daylight conditions. Even with combiners that block more than 50% of the daylight (not really much of a see-through display at this point), they don't work in daylight.   There is a reason why companies are making purpose built HUDs.

The cd/m2 requirement is also a big issue for outdoor head mounted display use. Depending on the application, they may need 10,000 cd/m2 or more, and this can become very challenging with some types of displays while keeping within the power and cooling budgets.

At the other extreme, at night or in dark indoor conditions, you might want the display to have less than 100 cd/m2 to avoid blinding the user to their surroundings.  Note the SMPTE spec for movie theaters is only about 50 cd/m2, so even at 100 cd/m2 you would be about 2X the brightness of a movie theater.  If the device must go from bright sunlight to night use, you could be talking over a 1,500 to 1 dynamic range, which turns out to be a non-trivial challenge to do well with today's LEDs or lasers.

Eye-Box and Exit Pupil

Since AR HMDs and HUDs generate images for a user's eye in a particular place, yet need to compete with the ambient light, the optical system is designed to concentrate light in the direction of the eye.  As a consequence, the image will only be visible in a given solid angle, the "eye-box" (with HUDs) or "pupil" (with near eye displays).   There is also a trade-off between eye-box/pupil size and ease of use: the bigger the eye-box or pupil, the easier the device will be to use.

With HUD systems there can be a pretty simple trade-off between eye-box size, cd/m2, and the lumens that must be generated.   Using some optical tricks can help keep from needing an extremely bright and power hungry light source.   Conceptually a HUD is in some ways like a head mounted display but with very long eye relief. With such large eye relief and the ability of the person to move their whole head, the eye-box for a HUD is significantly larger than the exit pupil of near eye optics.  Because the eye-box is so much larger, a HUD is going to need much more light to work with.

For near eye optical design, getting a large exit pupil is a more complex issue as it comes with trade-offs in cost, brightness, optical complexity, size, weight, and eye-relief (how far the optics are from the viewer’s eye).

With too small a pupil and/or more eye-relief, a near eye device is difficult to use, as any small movement of the device causes you to lose sight of part of the image.  Most people's first encounter with an exit pupil is with binoculars or a telescope and how the image cuts off unless the optics are centered well on the user's eye.


While I can see that people are excited about the possibilities of AR and VR technologies, I still have a hard time seeing how the numbers add up, so to speak, for what I would consider to be a mass market product.  I see people being critical of Hololens' lower FOV without being realistic about how they could go higher without drastically sacrificing angular resolution.

Clearly there can be product niches where the device could serve, but I think people have unrealistic expectations for how fast the field of view can grow for a product like Hololens.   For "real work" I think the lower field of view and high angular resolution approach (as with Hololens) makes more sense for more applications.   Maybe game players in the VR space are more willing to accept 1970's type angular resolution, but I wonder for how long.

I don't see any technology that will be practical in high volume (or even very expensive at low volume) that is going to simultaneously solve the angular resolution and FOV that some people want. AR displays are often brightness challenged, particularly for outdoor use.  Layered on top of these are issues of size, weight, cost, and power consumption, which we will have to save for another day.


Wrist Projector Scams – Ritot, Cicret, the new eyeHand

Wrist projectors are the crowdfunding scams that keep on giving, with new ones cropping up every 6 months to a year. When I say scam, I mean that there is zero chance that they will ever deliver anything even remotely close to what they are promising. They have obviously "Photoshopped"/fake pictures to "show" projected images that are not even close to possible in the real world and violate the laws of physics (they are forever impossible). While I have pointed out in this blog where I believe that Microvision has lied and misled investors and showed very fake images with their laser beam scanning technology, even they are not total scammers like Ritot, Cicret, and eyeHand.

According to Ritot’s Indiegogo campaign, they have taken in $1,401,510 from 8917 suckers (they call them “backers”).   Cicret according to their website has a haul of $625,000 from 10,618 gullible people.

Just when you think that Ritot and Cicret had found all the suckers for wrist projectors, CrowdFunder now reports that eyeHand has raised $585,000 from individuals and claims to have raised another $2,500,000 in equity from "investors" (if they are real, then they are fools; if not, then it is just part of the scam). A million here, $500K there, pretty soon you are talking real money.

Apparently Dell's marketing is believing these scams (I would hope their technical people know better) and has shown video ads with similarly impossible projectors.  One thing I will give them is that they did a more convincing "simulation" (no projecting of "black") and they say in the ads that these are "concepts" and not real products. See for example the following stills from Dell's videos (click to see larger image).  It looks to me like they combined a real projected image (with the projector off camera and perpendicular to the arm/hand) and then added fake projector rays to try and suggest it came from the dummy device on the arm.

Ritot was the first of these scams I was alerted to, and I helped contribute some technical content to the DropKicker article; I am the "Reader K" that they thanked in the author's note at the beginning of the article.  A number of others have called out Ritot and Cicret as being scams, but that did not keep them from continuing to raise money, nor has it stopped the new copycat eyeHand scam.

Some of the key problems with the wrist projectors:

  1. Very shallow angle of projection.  Projectors normally project on a surface that is perpendicular to the direction of projection, but the wrist projectors have to project onto a surface that is nearly parallel to the direction of projection.  Their concepts show a projector that is only a few (2 to 4) millimeters above the surface. When these scammers later show “prototypes” they radically change the projection distance and projection angle.
  2. Extremely short projection distance.  The near side of the projection is only a few millimeters away while the far side of the image could be 10X or 50X further away.  There are no optics or laser scanning technology on earth that can do this.  There is no way to get such a wide image at such a short distance from the projector.  As light falls off with the square of distance, this results in an impossible illumination problem, with the far side being over 100X dimmer than the near side (see the sketch below).
  3. Projecting in ambient light.  All three of the scammers show concept images where the projected image is darker than the surrounding skin.  This is absolutely impossible and violates the laws of physics.   The "black" of the image is set by the ambient light and the skin; the projector can only add light, and it is impossible to remove light with a projector.  This shows ignorance and/or a callous disregard for the truth by the scammers.
  4. The blocking of the image by hairs, veins, and muscles.  At such a shallow angle (per #1 above) everything is in the way.
  5. There is no projector small enough.  The projector engines with their electronics that exist today are more than 20X bigger in volume than what would be required to fit.
  6. The size of the orifice through which the light emerges is too small to support the size of the image that they want to project.
  7.  The battery required to make them daylight readable would be bigger than the whole projector that they show.  These scammers would have you believe that a projector could work off a trivially small battery.
  8. Cicret and eyeHand show "touch interfaces" that won't work due to the shallow angle.  The shadows cast by fingers working the touch interface would block the light to the rest of the image and make "multi-touch" impossible.   This also goes back to the shallow angle issue #1 above.

The issues above hold true whether the projection technology uses DLP, LCOS, or Laser Beam Scanning.
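To put rough numbers on the inverse-square problem in issue #2, here is a minimal sketch (the millimeter distances are my illustrative assumptions, not any company's specs):

```python
def falloff_ratio(near_mm, far_mm):
    """How much dimmer the far edge of the image is than the near edge,
    from the inverse-square law alone (ignoring the even worse
    shallow-angle spreading of the light)."""
    return (far_mm / near_mm) ** 2

print(falloff_ratio(5, 50))   # far side 10x farther: 100x dimmer
print(falloff_ratio(5, 250))  # far side 50x farther: 2500x dimmer
```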

Cicret and Ritot have both made "progress reports" showing stills and videos using projectors more than 20 times bigger and much higher and farther away (to reduce the projection angle) than the sleek wristwatch models they show in their 3-D CAD models.   Even then, they keep off-camera much/most of the electronics and battery/power-supply needed to drive the optics that they show.

The image below is from a Cicret "prototype" video from February 2015 where they simply strapped a Microvision ShowWX+ HDMI projector upside down to a person's wrist (I wonder how many thousand dollars they spent engineering this prototype). They goofed in the video and showed enough of the projector that I could identify (red oval) the underside of the Microvision projector (the video also shows the distinctive diagonal roll bar of a Microvision LBS projector).  I have shown the rest of the projector roughly to scale in the image below that they cropped off when shooting the video.  What you can't tell in this video is that the projector is also a couple of inches above the surface of the arm in order to project a reasonable image.


So you might think Cicret was going to use laser beam scanning, but no, their October 2016 "prototype" shows a panel (DLP or LCOS) projector.  Basically it looks like they are just clamping whatever projector they find to a person's wrist; there is no technology they are developing.  In this latest case, it looks like what they have done is found a small production projector, taken its guts out, and put it in a 3-D printed case.  Note that the top of the case is going to be approximately 2 inches above a person's wrist, and note how far away the image is from the projector.


Ritot also has made updates to keep their suckers on the hook.   Apparently Indiegogo's only rule is that you must keep lying to your "backers" (for more on the subject of how Indiegogo condones fraud, click here).  These updates at best show how little these scammers understood projection technology.   I guess one could argue that they were too incompetent to know they were lying.

On the left is a "demo" Ritot showed in 2014 after raising over $1M.  It is simply an off-the-shelf development system projector, and note there is no power supply.  Note they are showing it straight on/perpendicular to the wrist from several inches away.

By 2015 Ritot had their own development system and some basic optics.  Notice how big the electronics board is relative to the optics, and that even this does not show the power source.

By April 2016 they showed an optical engine (ONLY) strapped to a person's wrist.  Cut off in the picture is all the video drive electronics (see the flex cable in the red oval) that is off camera, likely a driver board similar to the one in the 2015 update, and the power supplies/battery.

In the April 2016 picture you should notice how the person's wrist is bent to make it more perpendicular to the direction of the projected image.  Also note that the image is distorted and about the size of an Apple Watch's image.   I will also guarantee that you will not have a decent viewable image when used outdoors in daylight.

The eyeHand scam has not shown anything like a prototype, just a poorly faked (projecting black) image.  From the low angle they show in their fake image, the projection would be blocked by the base of the thumb even if the person held their hand flat.  To make it work at all they would have to move the projector well up the person's arm and then bend the wrist, but then the person could not view it very well unless they held their arm at an uncomfortable angle.  Then you have the problem of keeping the person from moving/relaxing their wrist and losing the projection surface.   And of course it would not be viewable outdoors in daylight.

It is not as if others haven't been trying to point out that these projectors are scams.  Google search "Ritot scam" or "Cicret scam" and you will find a number of references.  As best I can find, this blog is the first to call out the eyeHand scam:

  • The most technically in depth article was by Drop-Kicker on the Ritot scam
  • Captain Delusional has a comic take on the Cicret scam on YouTube – he has some good insights on the issue of touch control but also makes some technical mistakes, such as his comments on laser beam scanning (you can't remove the laser scanning roll-bar by syncing the camera — also laser scanning has the same fall-off in brightness due to the scanning process).
  • Geek Forever had an article on the Ritot Scam 
  • A video about the Ritot Scam on Youtube
  • KickScammed about Ritot from 2014

The problem with scam startups is that they tarnish all the other startups trying to find a way to get started.  Unfortunately, the best liars/swindlers often do the best with crowdfunding.  The more they are willing to lie/exaggerate, the better it makes their product sound.

Indiegogo has proven time and again to have extremely low standards (basically if the company keeps posting lies, they are good to go – MANY people tried to tell Indiegogo about the Ritot scam, but to no avail, before Ritot got the funds). Kickstarter has some standards, but the bar is not that high; at least I have not seen a wrist projector on Kickstarter yet. Since the crowdfunding sites get a cut of the action whether the project delivers or not, their financial incentives are on the side of the companies rather than the people funding them. There is no bar at all for companies that go with direct websites; it is purely caveat emptor.

I suspect that since the wrist projector scam has worked at least three (3) times so far, we will see others using it.   At least with eyeHand you have a good idea of what it will look like in two years (hint – like Ritot and Cicret).

Laser Beam Scanning Versus Laser-LCOS Resolution Comparison


Side By Side Center Patterns (click on image for full size picture)

I apologize for being away for so long.  The pictures above and below were taken over a year ago and I meant to format and publish them back then but some other business and life events got in the way.

The purpose of this article is to compare the resolution of the Celluon PicoPro Laser Beam Scanning (LBS) projector and the UO Smart Beam Laser LCOS projector.   This is not meant to be a full review of both products, although I will make a few comments here and there; rather, it is to compare the resolution of the two products.  Both projectors claim to have 720P resolution, but only one of them actually has that "native/real" resolution.

This is in a way a continuation of the series I have written about the PicoPro, with optics developed by Sony and the beam scanning mirror and control by Microvision.  With this article I am now including some comparison pictures I took of the UO Smart Beam projector.

As per my prior articles, the Celluon PicoPro has nowhere close to its stated 1920×720 (non-standard) resolution, nor even 1280×720 (720P).  The UO projector, while not perfect, does demonstrate 720P resolution reasonably well, but it does suffer from chromatic aberration (color separation) at the top of the image due to its 100% optical offset (this is to be expected to some extent).

Let me be up front: I worked on the LCOS panel used in the UO projector when I was at Syndiant, but I had nothing to do with the UO projector itself.   Take that as bias if you want, but I think the pictures tell the story.  I did not have any contact with either UO (nor Celluon for that matter) in preparing this article.

I also want to be clear that both the UO projector and the Celluon PicoPro tested are now over 1 year old and there may have been improvements since then.  I saw serious problems with both products, in particular with the color balance: the Celluon is too red ("white" is pink) and the UO is very red deficient ("white" is significantly blue-green).   The color is so far off on the Celluon that it would be a show stopper for me ever wanting to buy one as a consumer (hopefully UO has fixed or will fix this).   Frankly, I think both projectors have serious flaws (if you want to know more, ask and I will write a follow-up article).

The UO Smart Beam has the big advantage of "100% offset," which means that when placed on a table top, it will project upward, not hitting the table, without any keystoning.   The PicoPro has zero offset and shoots straight out; if you put it flat on a table, the lower half of the image will shoot into the tabletop. Celluon includes a cheap and rather silly monopod that you can use to have the projector "float" above the table surface and then tilt it up and get a keystoned image.  To take the picture, I had to mount the PicoPro on a much taller tripod and then shoot over the projector so the image would not be keystoned.

I understand that the next generation of the Celluon and the similar Sony MPCL1 projector (which has a "kickstand") have "digital keystone correction," which is not as good a solution as 100% offset as it reduces the resolution of the image; this is the "cheap/poor" way out, and they really should have 100% offset like the UO projector (interestingly, the earlier, lower resolution Microvision ShowWX projector had 100% offset).

For the record – I like the Celluon PicoPro's flatter form factor better; I'm not a fan of the UO cube, as it hurts the ability to put the projector in one's pocket or a typical carrying bag.

Both the PicoPro with laser scanning and the Smart Beam with lasers illuminating an LCOS microdisplay have no focus knob and have a wide focus range (from about 50cm/1.5 feet to infinity), although they are both less sharp at the closer range.  The PicoPro with LBS is a Class 3R laser product whereas the Smart Beam with laser "illumination" of LCOS is only Class 1.   The measured brightness of the PicoPro was about 32 lumens as rated when cold but dropped under 30 when heated up.  The UO, while rated at 60 lumens, was about 48 lumens when cold and about 45 when warmed up, significantly below its spec.

Now onto the main discussion of resolution.  The picture at the top of this article shows the center crop from a 720P test pattern generated by both projectors, with the Smart Beam image on the left and the PicoPro on the right.   There is also an inset of the Smart Beam's 1 pixel wide test pattern near the PicoPro's 1 pixel wide pattern for comparison. This test pattern shows a series of 1 pixel, 2 pixel, and 3 pixel wide horizontal and vertical lines.

What you should hopefully notice is that the UO clearly resolves even the 1 pixel wide lines, and the black lines are black, whereas on the PicoPro the 1 pixel wide lines are at best blurry and even the 2 and 3 pixel wide lines don't get to a very good black level (as in, the contrast is very poor).  And the center is the very best case for the Celluon LBS, whereas for the UO with its 100% offset it is a medium case (the best case is lower center).

The worst case for both projectors is one of the upper corners, and below is a similar comparison of their upper right corners.  As before, I have included an inset of the UO's single pixel image.


Side By Side Upper Right Corner Patterns (click on image for full size picture)

What you should notice is that while there are still distinct 1 pixel wide lines in both directions with the UO projector, the 1 pixel wide lines from the Celluon LBS are a blurry mess.  Clearly it can't resolve 1 pixel wide lines at 720P.

Because of the 100% offset optics, the best case for the UO projector is at the bottom of the image (this is true of almost any 100% offset optics), and this case is not much different from the center case of the Celluon projector (see below):


Below is a side by side picture I took (click on it for a full size image). The camera's "white point" was set to an average between the two projectors (the Celluon is too red/blue-green deficient and the UO is red deficient). The image below is NOT what I used for the cropped test patterns above, as the 1 pixel features were too near the resolution limit of the Canon 70D camera (5472 by 3648 pixels).  So I used individual shots of each projector so the camera would double "sample" the projected images.


For the Celluon PicoPro image I used the picture below (originally taken in RAW but digitally lens corrected, cropped, and later converted to JPG for posting – click on image for full size):


For the UO Smart Beam image, I used the following image (also taken in RAW, digitally lens corrected, straightened slightly, cropped, and later converted to JPG for posting):


As is my usual practice, I am including the test pattern (in lossless PNG format) below for anyone who wants to verify and/or challenge my results:

interlace res-chart-720P G100A

I promise I will publish any pictures by anyone that can show better results with the PicoPro or any other LBS projector (or the UO projector for that matter) with the test pattern (or similar) above (I went to considerable effort to take the best possible PicoPro image that I could with a Canon 70D camera).

Celluon Laser Beam Scanning Power Consumption (Over 6 Watts at 32 Lumens)

On the left are a series of power measurements I made on the Celluon PicoPro projector, with an optical engine designed by Sony using a Microvision scanning mirror.  The power was calculated from the voltage and current coming from the battery while using the HDMI input.

The first 6 measurements were with a solid image of the black/white/color indicated.  For the last 3 measurements I used an image that was half black on the left and half white on the right, an image with the top half black, and a screen of 1 pixel wide vertical stripes.    The reason for the various colors/patterns was to gain some additional insight into the power consumption (and will be covered in a future article).  In addition to the power (in Watts), I added a column with the delta power from the black image.

Picture of Celluon PicoPro Battery

The Celluon PicoPro consumes 2.57 Watts for a fully black image (there are color lines at the bottom, presumably for laser brightness calibration) and 6.14W for a 32 lumen full white image.   When you consider that a smartphone running with the GPS on consumes only about 2.5W, and a smartphone LCD on full brightness consumes about 1W to 1.5W, over 6W is a lot of power (Displaymate has an excellent article on smartphone displays that includes the power consumption).   The Celluon has a 3260mAh / 12.3Wh battery, which is bigger than what goes in even large smartphones (and fills most of the left side of the case).

So why does the Celluon unit not need a fan? The answer is (A) it only outputs 32 lumens and (B) it uses a lot of thermal management built into the case to spread the heat from the projector.  In the picture below I have shown some of the key aspects of the thermal management.  I have flipped over the projector and indicated with dashed rectangles where the thermal pads (a light blue color) go to the projector unit.  In addition to the cast aluminum body that holds the lasers and the optics and acts as a heat sink to spread the heat, there is gray flexible heat spreading material lining the entire top and bottom of the case plus, more hidden, a heat sink amalgamation essentially dedicated to the lasers, as well as aluminum fins around the sides of the case.


The heat spreading material on the left (as viewed) top of the case is pretty much dedicated to the battery, but all the rest of the heat spreading, particularly along the bottom of the case, goes to the projector.

The most interesting feature is that there is a dedicated heat path from the area where the lasers are held in the cast body to a "hidden chamber" heat sink, or what I have nicknamed "the thermal corset."   You should notice that there are three (3) light blue heat pads on the right side of the case top and that the middle one is isolated from the other two.  This middle one is also thicker and goes through a hole in the main case body to a chamber that is filled with a heat sink material and then covered with an outer case.   This also explains why the Celluon unit looks like it is in two parts from the outside.

Don't get me wrong, having a fanless projector is desirable, but it is not due to the "magic" of using lasers.  Quite to the contrary, the Celluon unit has comparatively poor lumens per Watt, about double the power of what a similar DLP projector would take for the same lumens.

You may want to notice in the table that if you add up the "delta" red, green, and blue, it totals a lot more than the delta white.  The reason for this is that the Celluon unit never puts out "pure" fully saturated primary colors.  It always mixes in a significant amount of the other two colors (I have verified this with several methods, including using color filters over the output and using a spectrometer).    This has to be done (and is done with LED projectors as well) so that the colors called for by standard movies and pictures are not over-saturated (if you don't do this, green grass, for example, will look like it is glowing).

Another interesting result is that the device consumes more power if I put up a pattern where the left half is black and the right half is white rather than having the top half black and the bottom half white.   This probably has something to do with laser heating and the lasers not getting a chance to cool down between lines.

I also put up a pattern with alternating 1 pixel wide vertical lines, and it should be noted that the power is between that of the left/right half screen image and the full white image.

So what does this mean in actual use?   With "typical" movie content, the image averages about 25% to 33% (depending on the movie) of full white, so the projector will be consuming about 4 Watts, which with a 12.3Wh battery will go about 3 hours.   But if you are web browsing, the content is often more like 90% of full white, so it will be consuming over 6W, or 4 to 6 times what a typical smartphone display consumes.    Note this is before you add in the power consumed in getting and processing the data (say, from the internet).
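A sketch of that battery-life arithmetic, linearly interpolating between the measured black and white power (a simplification; real content power won't track average picture level exactly):

```python
BLACK_W, WHITE_W = 2.57, 6.14   # measured Celluon PicoPro power draw
BATTERY_WH = 12.3               # rated battery capacity

def battery_hours(avg_picture_level):
    """Estimated run time for content at a given average picture level
    (0.0 = all black, 1.0 = all white)."""
    watts = BLACK_W + avg_picture_level * (WHITE_W - BLACK_W)
    return BATTERY_WH / watts

print(battery_hours(0.30))  # movie-like content: ~3.4 hours
print(battery_hours(0.90))  # web-browsing-like content: ~2.1 hours
```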


The Celluon projector may be fanless, but not because it is efficient.  From a product perspective, it does do a good job with its "thermal corset" of hiding/managing the heat.

This study works from the "top down" by measuring the power and seeing where the heat is going in the case; next time I plan to work some "bottom-up" numbers to help show what causes the high power consumption and how it might change in the future.

Celluon/Sony/Microvision Optical Path

Celluon Light Path Labeled (KGOnTech)

Today I'm going to give a bit of a guided tour through the Celluon optical path.  This optical engine was developed by Sony, probably based on Microvision's earlier work, and uses Microvision's scanning mirror.   I'm going to give a "tour" of the optics and then comment on what I see in terms of efficiency (light loss) and cost.

Referring to the picture above and starting with the lasers at the bottom, there are 5 of them (two each of red and green and one blue) in a metal chassis (and not visible in the picture).   Each laser goes to its own beam spreading and alignment lens set.  These lenses enlarge the diameter of each laser beam, and they are glued in place after alignment.  Note that the beams at this point are spread wider than the size of the scanning mirror and will be converged/focused back later in the optics.

Side Note: One reason for spreading the laser beams bigger than the scanning mirror is to reduce the precision required of the optical components (making very small high precision optics with no/extremely-small defects becomes exponentially expensive).  But a better explanation is that it supports the despeckling process.  With the wider beam they can pass the light through more different paths before focusing it back.  There is a downside to this, as seen in the Celluon output: the beam is still too big when exiting the projector, and thus the images are out of focus at short projection distances.

After the beam spreading lenses there is a glass plate at a 45 degree angle that splits part of the light from the lasers down to a light sensor for each laser.   The light sensors are used to give feedback on the output of each laser so they can be adjusted for how they change with temperature and aging.

Side Note:  Laser heating and the changing of the laser output is a big issue with laser scanning. The lasers change very quickly in temperature/output.  In tests I have done, you can see the effect of bright objects on one side of the screen affecting the color on the other side of the screen in spite of the optical feedback.

Most of the light from the sensor deflector continues to a complex structure of about 15 different pieces of optically coated solid glass elements glued together into a many faceted structure. There are about 3 times as many surfaces/components as would be required for simply combining 3 laser beams.   This structure is being used to combine the various colors into a single beam and has some speckle reducing structures.  As will be discussed later, having the light go through so many elements, each with its optical losses (and cost), results in losing over half the light.

For reference, compare this to the optical structure shown in the Lenovo video for their prototype laser projector in a smartphone at left (which uses an STMicro engine).  There are just 3 lenses, 1 mirror (for red), two dichroic plate combiners to combine the green and blue, and a flat window. The Celluon/Sony/Microvision engine by comparison is using many more elements, and instead of simple plate combiners they are using prisms, which, while having better optical performance, are considerably more expensive.  The Lenovo/STM engine does not show/have the speckle reduction elements nor the distortion correction elements (its two mirror scanning process inherently has less distortion) of the Celluon/Sony design.

Starting with the far left red laser light path, it goes to a "half mirror and 2nd mirror" pair.   This two-mirror assembly is likely there for speckle reduction.  Speckle is caused by light interfering with itself, and by having the light follow different path lengths (the light off the 2nd mirror will follow a slightly longer path) it will reduce the speckle.  The next element is a red-pass/green-reflect dichroic mirror that combines the left red and green lasers, followed by a red&green-pass/blue-reflect dichroic combiner.

Then working from the right, there is another speckle reduction half-mirror/2nd-mirror pair for the right hand green laser followed by a green-pass/red-reflect dichroic mirror to combine the right side green and red lasers.  A polarizing combiner is (almost certainly) used to combine the 3 lasers on the left with the two lasers on the right into a single beam.

After the polarizing combiner there is a mirror that directs the combined light through a filter encased between two glass plates.  Most likely this filter either depolarizes or circularly polarizes the light, because on exiting this section into the open air the previously polarized laser light has little if any linear polarization.   Next the light goes through a third despeckling mirror pair.   The light then reflects off another mirror and exits into a short air gap.

Following the air gap there is a "turning block" that is likely part of the despeckling.   The material in the block probably has some light scattering properties to vary the light path length slightly and thus reduce speckle, which would explain the size/thickness of the block.   There is a curved light entry surface that will have a lens effect.

Light exiting the turning block goes through a lens that focuses the spread light back to a smaller beam that will reflect off the beam scanning mirror.  This lens sets the way the beam diverges after it exits the projector.

After the converging lens, the light reflects off a mirror that sends it into the beam scanning mirror assembly.  The beam scanning mirror assembly, designed by Microvision, is its own complex structure and among other things has some strong magnets in it (supporting the magnetic mirror deflection).

Side Note: The STM/bTendo design in the Lenovo projector uses two simpler mirrors that each move in only one axis rather than a single complex mirror that has to move in two axes.  The STM mirrors likely both use a simple electrostatic-only design, whereas Microvision's dual axis mirror uses electrostatic drive for one direction and electromagnetic for the other.

Finally, the light exits the projector via a scanning correction lens that is made of plastic. It appears to be the only plastic optical element among all the elements that could be easily accessed.   Yes, even though this is a laser scanning projector, it still has a correction lens, in this case to correct the otherwise "bow-tie" distorted scanning process.

Cost Issues

In addition to the obvious cost of the lasers (needing 5 of them rather than just 3) and the scanning mirror assembly, there are a large number of optically coated glass elements.  Additionally, instead of using lower cost plate elements, the Celluon/Sony/Microvision engine uses much more expensive solid prisms for the combiner and despeckling elements.   Each of these has to be precisely made, coated, and glued together. The cost of each element is a function of the quality/optical efficiency, which can vary significantly, but I would think there would be at least $20 to $30 of raw cost in just the glass elements even at moderately high volumes (and it could be considerably more).

Then there is a lot to assemble, with precise alignment of all the various optics.  Finally, all of the lasers must be individually aligned after the unit with all the other elements has been assembled.

Optical Efficiency (>50% of the laser light is lost)

The light in the optical engine passes through and/or reflects off a large number of optical interfaces, and there are light losses at each of them.  It is "death by a thousand cuts": while each element might have a 1% to 10% or more loss, the effects are multiplicative.   The use of solid rather than plate optics reduces the losses, but at added cost.  You can see in the picture spots of colored light on the walls of the chassis that have "escaped" the optical path and are lost.  You can also see light glowing off optical elements, including the lens; all of this is lost light.  The light that goes to the light sensors is also lost.

Celluon laser lable IMG_9715

Laser Warning Label From Celluon Case

Some percentage of the light that is spread (for despeckling) will not be converged back onto the scanning mirror.  Additionally, there are scattering losses in the Correction Lens and Turning Block and in the rest of the optics.

When it is multiplied out, more than 50% of the laser light is lost in the optics.

This 50% light-loss figure agrees with the package labeling (see picture on the left), which says the laser light output for green is 50mW even though they are using two green lasers, each of which likely outputs 50mW or more: at least 100mW of green laser light generated for at most 50mW out.

Next Time: Power Consumption

The Celluon system consumes ~2.6 Watts to put up a “black” image and ~6.1 Watts to put up a 32-lumen white image.  The delta between white and black is about 3.5 Watts, or about 9 lumens per delta-Watt from black to white.  For reference, the newer DLP projectors using LEDs can produce about double the delta lumens per Watt.  Next time, I plan on drilling down into the power consumption numbers.
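
The arithmetic behind that figure, using the numbers above:

```python
# Delta lumens per Watt: the image-dependent part of the power draw.
p_black = 2.6        # Watts, "black" image
p_white = 6.1        # Watts, full-white image
lumens_white = 32.0  # measured white output

delta_w = p_white - p_black   # ~3.5 W spent actually making light
print(f"{lumens_white / delta_w:.1f} lumens per delta-Watt")  # ~9.1
```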

Lenovo’s STMicro Based Prototype Laser Projector (part 1)

Lenovo Tech World Projector 001

At their Tech World on May 27th, 2015, Lenovo showed a Laser Beam Scanning (LBS) projector integrated into a cell phone prototype (to be clear, a prototype and not a product).   While there has been no announcement of the maker of the LBS projector, there is no doubt that it is made by STM, as I will show below (to give credit where it is due, this was first shown on a blog by Paul Anderson focused on Microvision).

ST-720p- to Lenove comparison 2

The comparison at left is based on a video by Lenovo that included an exploded view of the projector and on pictures of STM’s 720p projector from an article from Jan 18, 2013.   I have drawn lines comparing various elements, such as the size and placement of connectors and other components, the size and placement of the 3 major I.C.s, and even the silk-screened “STM” in the same place on both the Lenovo video and the STM article’s photo (circled in yellow).

While there are some minor differences, there are so many direct matches that there can be no doubt that Lenovo is using STM.

The next interesting thing to consider is how this design compares to the LBS design of Microvision and Sony in the Celluon projector.   The Lenovo video shows the projector as being about 34mm by 26mm by 5mm thick.

STM to Celluon TO SCALE

To check this, I took a photo from the STM article and was able to fit the light engine and electronics into a 34mm by 26mm rectangle arranged as they are in the Lenovo video (yet one more verification that it is STM).   I then took a picture I had taken of the Celluon board to the same scale and show the same 34x26mm rectangle on it.   The STM optics plus electronics are about 1/4 the area and 1/5th the volume (STM is 5mm thick versus Microvision/Sony’s 7mm).
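
The volume arithmetic from those numbers (the ~4x footprint difference is my measurement from the to-scale photos, so treat it as approximate):

```python
# STM module dimensions as shown in the Lenovo video.
stm_area_mm2 = 34 * 26           # 884 mm^2 footprint
stm_vol_mm3 = stm_area_mm2 * 5   # 5 mm thick

# Microvision/Sony (Celluon): footprint measured at roughly 4x STM's
# from the to-scale photos (approximate), and 7 mm thick.
mvis_area_mm2 = 4 * stm_area_mm2
mvis_vol_mm3 = mvis_area_mm2 * 7

print(f"area ratio:   {mvis_area_mm2 / stm_area_mm2:.1f}x")  # 4.0x
print(f"volume ratio: {mvis_vol_mm3 / stm_vol_mm3:.1f}x")    # ~5.6x, the "1/5th" above rounded
```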

The Microvision/Sony probably has about double the lumens/brightness of the STM module due to having two green and two red lasers; I have not had a chance to compare the image quality.   Taking out the extra two lasers would make the Microvision/Sony engine optics/heat-sinking smaller by about 25% and have a smaller impact on the board space, but this would still leave them over 3X bigger than STM.   The obvious next question is why.

One reason is that the STM either has a simpler electronics design or is more integrated, or some combination thereof.  In particular, the Microvision/Sony design requires an external DRAM (the large rectangular chip in the Microvision/Sony photo).    STM probably still needs DRAM, but it is likely integrated into one of their chips.

There are not a lot of details on the STM optics (developed by bTendo of Israel before being acquired by STM).   But what we do know is that STM uses separate, simpler, and smaller horizontal and vertical mirrors versus Microvision’s significantly larger and more complex single mirror assembly.   Comparing the photos above, the Microvision mirror assembly alone is almost as big as STM’s entire optical engine with lasers.   The Microvision mirror assembly has a lot of parts other than the MEMS mirror, including some very strong magnets.   Additionally, the optical path of the Microvision engine requires a lot of space for the light to enter and exit the Microvision mirror from the “right” directions.

btendo optics

On the right I have captured two frames from the Lenovo video showing the optics from two directions.  What you should notice is that the mirror assembly is perpendicular to the incoming laser light.  There appears to be a block of optics (pointed to by the red arrow in the two pictures) that redirects the light down to the first mirror and then returns it to the second mirror.  The horizontal scanning mirror is clearly shown in the video, but the location of the vertical scanning mirror is not clear (so I took an educated guess).

Also shown at the right is bTendo patent 8,228,579 showing the path of light for their two-scanning-mirror design.   It does not show the more complex block of optics required to direct the light down to the vertical mirror and then redirect it back down to the horizontal mirror and then out, as would be required in the Lenovo design.    You might also notice that there is a flat, clear glass/plastic output cover shown at the 21-second point in the video; this is very different from the Microvision/Celluon/Sony design shown below.

Microvision mirror with measurements

Microvision Mirror Assembly and Exit Lens

Shown at left is the Microvision/Celluon beam scanning mirror and the “Exit” Lens.   First, notice the size and complexity of the scanning mirror assembly with its magnets and coils.  You can see the single round mirror with its horizontal hinge (green arrow) and the vertical hinge (yellow arrow) on the larger oval yoke.   The single mirror/pivot point causes an inherently bow-tied image.  You can see how distorted the mirror looks through the Exit Lens (see red arrow); this is caused by the exit lens correcting for the bow-tie effect.  This significant corrective lens is also a likely source of chromatic aberrations in the final image.


All the above does not mean that the Lenovo/STM is going to be a successful product.   I have not had a chance to evaluate the Lenovo projector, and I still have serious reservations about any embedded projector succeeding in a cell phone (I outlined my reasons in an August 2013 article and I think they still hold true).    Being less than 1/5th the volume of the Microvision/Sony design is necessary but, I think, not sufficient.

This comparison only shows that the STM design is much smaller than Microvision’s.   Microvision has made only relatively small incremental progress in size since the ShowWX (announced in 2009), and Sony so far has not improved on it much.

IRIS HUD on Indiegogo Appears to be Repackaged Pioneer HUD(s)

The startup IRIS has started an Indiegogo presale campaign for not just one (a major challenge for a new company) but two different HUD designs, one “laser” and one DLP based.    Their video and “story” talk about how they designed this HUD and even show some CAD pictures, 3-D printing (of what?), and a CNC milling machine (without showing what is being made).

The problem is that this “new” unit looks almost identical at every point to the Pioneer HUD announced and shipped in Japan in 2012 (with a slightly updated version in 2013); see, for example, The Verge article from May 2012.   Pioneer’s model was also a “Laser HUD” and used a Microvision beam scanning mirror and laser control electronics.

Then in late 2013 Pioneer introduced a less expensive model based on Texas Instruments’ DLP that I wrote about on Seeking Alpha.   And lo and behold, IRIS also has a DLP version.  Where the Laser version was sold with Pioneer’s proprietary navigation system, the DLP version was sold in Europe and connected to a smartphone.

According to IRIS’s Indiegogo campaign,

This limited quantity of Laser (30) and DLP (300) units are being assembled and will be ready to ship at the end of the campaign.  

Assuming IRIS actually delivers these products (always a big “if” for a new high-tech product on Indiegogo), the only rational conclusion is that they are shipping Pioneer’s unsold inventory of at least the Laser and DLP engines, if not whole systems.

Below are a series of comparison photos with alternating photos of the IRIS HUD and the Pioneer Laser HUD.   I have drawn lines connecting corresponding elements between the IRIS and Pioneer HUDs.   I will go into some more of the business issues after the photos.

IRIS Pioneer Comparison 003

IRIS does claim to be adding features that were not in either the Laser or DLP based Pioneer systems; specifically, they say they are adding “gesture recognition” and a connection to the OBD (on-board diagnostics) port.   Being most generous, it could be that they are taking the old unsold Pioneer units and modifying them.   I could be OK with this, but I am always a bit distrustful when I catch someone fudging on what they did.

Pioneer DLP hud2

Interestingly, while the Pioneer DLP HUD (left) worked with smartphones, as does IRIS’s HUD, it looks quite different and is optically different in just about every way but the combiner.   The Pioneer Laser HUD rear-projected on a screen behind a large plastic lens that is then viewed via the combiner (the “combiner” is that large, curved, mostly transparent but slightly mirrored plastic lens at the front of the unit).  The Pioneer DLP HUD front-projects on a screen that is then seen reflected in, and magnified by, the combiner.

Additionally, the Pioneer Laser HUD required you to remove your sun visor to mount the unit, whereas their DLP HUD strapped to the sun visor (see the photo above).   This got me curious how they could be selling two radically different designs that also mounted differently while showing a single product, so I posted the following question and got the response below on IRIS’s Facebook page:

Karl Guttag: “Is the case and mounting the same for the Laser and the DLP versions of the product?”

IRIS: “Yes, Absolutely the same!”

I guess it is possible that they took Pioneer Laser HUD cases and reworked/redesigned them to fit the DLP and added gesture recognition and OBD.   That would seem to me to be a pretty major effort for a small team with little known funding.

Yet they say they are going to ship units at the end of their Indiegogo campaign this month, which would suggest they have them in stock.   If they are so close to having a real product, then I would have expected them to be out demonstrating it to reviewers and not just showing the carefully staged video on Indiegogo.   Maybe they have something, but maybe it does not work very well.   Something just does not add up.

BTW, I have had the opportunity to see both the Pioneer Laser and DLP based HUDs.  Frankly, neither one seems very practical.  The Laser HUD requires you to remove your sun visor to mount it, and they give you a small replacement visor that only goes up and down (it can’t block your side window and does not cover enough).  Additionally, unless you are very short, the combiner tends to cut through your critical forward vision.   The DLP version was worse in that it mounted below the sun visor and totally blocked the forward vision if you are tall and/or your seat adjusts to a high position.  Bottom line, there are reasons why the Pioneer units did not sell well.


I previously worked as CTO of Navdy, which is also developing an aftermarket HUD product and could be seen as a competitor to IRIS.  I currently have no financial interest in Navdy.   Because of my prior position at Navdy and knowledge of non-public information, it is not appropriate for me to comment on their product.

Celluon LBS Analysis Part 2B – “Never In-Focus Technology” Revisit

Celluon alignment IMG_9775

Alignment target after re-alignment (click for bigger image)

I received concerns that the chromatic aberrations (color fringes) seen in the photos in Part 2 were caused by poor alignment of the lasers.   I had aligned the lasers per Celluon’s instructions before running the tests, but I decided to repeat the alignment to see if there would be a difference.

After my first redo of the alignment, I noticed that the horizontal resolution got slightly better in places but the vertical resolution got worse.   The problem I identified is that the alignment procedure does not make aligning the pairs of red and green lasers easy.  The alignment routine turns all 5 lasers on at once, which makes it very difficult to see pairs of lasers of the same color.

To improve on the procedure, I put a red color filter in front of the projector output to block the blue and two green lasers and then aligned the two red lasers to each other.  Then, using a green color filter, I aligned the two green lasers.  I did this both horizontally and vertically.   On this first pass I didn’t worry about the other colors.  On the next pass I moved the red pair, always by the same amount horizontally and vertically, and similarly for the green pair.  I went around this loop a few times trying for the best possible alignment (see picture of the alignment image above).

After the re-alignment I did notice some slightly better horizontal resolution in the vertical lines (but not that much and not everywhere) and some very slight improvement in the vertical resolution.   There were still large chromatic aberrations, particularly on the left side of the image (much less so on the right side), that some had claimed were “proof” that the lasers were horribly aligned (which they were not, even before the redo).   The likely cause of the chromatic aberrations is the output lens and/or angle error in the mechanical alignment of the lasers.

Below is the before-and-after comparison on the 72-inch diagonal image.

laser alignment comparison 2

Note the overall effect (and the key point of the earlier article) of the projected image going further out of focus at smaller image sizes.   Even at 72-inch diagonal, the image is far from what should be considered sharp/in-focus, even after the re-calibration.

Below shows the left and right sides of the 72-in diagonal image.  The green arrows show that there is minimal chromatic aberration on the right side but a significant issue on the left side.   Additionally, you may note that the sets of parallel horizontal lines have lost all definition on the left and right sides, and the 1-pixel-wide targets are not resolved (compare to the center target above).   This loss of resolution on the sides of the image is inherent in Microvision’s scanning process.

Celluon 72-in diag left-right targets

Center left and center right of 72-in diag. after re-alignment (click on thumbnail for full resolution image)

While the re-alignment did make some parts of the image a little better defined, the nature of the laser scanning process could not fully resolve other areas.   In a future article I hope to get into this some more.

One other small correction from the earlier article: the images labeled “24-inch diagonal” are actually closer to 22 inches in diagonal.

Below are the high-resolution (20-megapixel) images for the 72-in, 22-in, and 12-in images after calibration.  I used a slightly different test pattern, which is also below (click on the various images for the high-resolution versions).

Celluon 72-in diag  recalibrated IMG_9783

Celluon 72-in diag re-calibrated (click for full size image)

Celluon 22-in diag  recalibrated IMG_9864

Celluon 22-in diag re-calibrated (click for full size image)

Celluon 12-in diag recalibrated IMG_9807

Celluon 12-in diag re-calibrated (click for full size image)





interlace res-chart-720P G100A

Test Chart for 1280×720 resolution (click for full resolution)

Just to verify that my camera/lens combination was in no way limiting the visible resolution of the projected image, I also took some pictures of about 1/3 of the image (to roughly triple the effective resolution) with an 85mm F1.8 “prime” (non-zoom) lens shot at F6.3 so it would show extremely fine detail (including the texture of the white wall the image was projected onto).

Below are the images showing the Center-Left, Center, and Center-Right resolution targets of the test chart above.   Among other things, notice how the resolution of the projected image drops from the center to the left and right, and how the chromatic aberrations/color fringes are most pronounced in the center-left image.


Celluon 72-in diag 85mm Center-Left 9821

85mm Prime Lens Center Left Target and Lines (click for full size image)

Celluon 72-in diag 85mm lens center  9817

85mm Prime Lens Center Target and Lines (click for full size image)

Celluon 72-in diag 85mm center-right 9813

85mm Prime Lens Center-Right Target and Lines (click for full size image)


Celluon Laser Beam Steering Analysis Part 2 – “Never In-Focus Technology”

June 6th, 2015 – Note: I am in the process of updating this analysis with new photos.  The results are not dramatically different, but I was able to improve the horizontal resolution slightly and now have some better pictures.

Celluon image size comparison center crops

One of the first things I noticed when projecting test pattern images with the Celluon PicoPro was that the images were very blurry.   I later found out that the smaller the image, the blurrier it became.

To the left are high-resolution center crops of images taken at a 12-inch diagonal (about as big as you can get on a letter-size sheet of paper), a 24-inch diagonal (about as big as fits on a standard “B” size sheet of paper), and a 72-inch diagonal image I projected on a wall.   For reference, I have also included the same portion of the source image magnified 3x.

As you should notice, the 12-in diagonal image is completely blurry even at 1/2 the stated resolution.  With the 24-inch diagonal you can start to see some “modulation” of the single-pixel-size lines horizontally but not vertically.  With the 72-inch diagonal the horizontal lines are pretty clear, but the vertical lines are still pretty much a blur (on close visual inspection there is a little modulation of the single-pixel-wide lines).

What is happening is that the laser beams are larger than the pixel size for small images.  The beam diverges, but at a slower rate than the image grows, so eventually the laser beam becomes smaller than a “pixel” and you start to see separation between 1-pixel-wide horizontal lines.
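
A minimal sketch of that geometry (the beam numbers below are illustrative assumptions, not measurements of the PicoPro, whose beam profile is not published):

```python
import math

def pixel_pitch_mm(diag_in, h_pixels=1280, aspect=(16, 9)):
    """Pixel pitch of a 16:9, 1280-pixel-wide image with the given diagonal (inches)."""
    w, h = aspect
    width_mm = diag_in * w / math.hypot(w, h) * 25.4
    return width_mm / h_pixels

def beam_diameter_mm(diag_in, d0=1.0, growth_per_mm=0.0002):
    """Hypothetical beam: ~1 mm at the exit, diverging slowly with throw.
    Assumes ~1 inch of throw per inch of image diagonal; numbers are
    chosen only to illustrate the pixel-vs-beam crossover."""
    throw_mm = diag_in * 25.4
    return d0 + growth_per_mm * throw_mm

for diag in (12, 24, 72):
    p = pixel_pitch_mm(diag)
    b = beam_diameter_mm(diag)
    print(f"{diag:2d}-in diag: pixel {p:.2f} mm, beam ~{b:.2f} mm "
          f"({b / p:.1f} pixels wide)")
# 12-in: beam ~5 pixels wide (completely blurred)
# 72-in: beam ~1.1 pixels wide (single-pixel lines just start to resolve)
```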

As for the horizontal resolution, whatever is driving the lasers in their horizontal sweep is not able to fully modulate them at single pixel resolution.

For the next set of 3 images (plus a 2x-magnified source) I have scaled the images down so you can see more area.  Note that you need to click on an image to see it at its intended size and to see the detail.  In these pictures you can see the ruler, which both indicates the size of the image and shows that the camera was in focus and could have captured the detail had it been in the projected image.

On the 24-inch and 72-inch diagonal images I have drawn 3 ovals.  The left oval is around a set of 4 line pairs (see source image) of horizontal and vertical lines.   The middle and right ovals are each around 4 line pairs of vertical lines and two sets of 4 pairs of horizontal lines; where the horizontal and vertical lines cross there is a set of 9 white pixels (never visible in any of the projected images).

Looking at the 72-inch image, you may notice that you can barely make out the horizontal line pairs in the center oval but that they become blurry in the right oval.  This is due to the interlaced Lissajous scanning being done (for more detail, see my earlier article on Microvision’s interlaced scanning process).  The net effect of this scanning process is that vertical resolution is reduced from the center to the left and right sides, as the toy sketch below illustrates.
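
The sketch below is a toy model, not Microvision’s actual interlaced Lissajous parameters: a bi-directional sinusoidal horizontal scan with a slow linear vertical scan.  Even this simple version shows why the vertical spacing between beam passes becomes uneven toward the left and right edges.

```python
import math

F_H = 1000.0   # horizontal scan frequency (arbitrary units)
V_RATE = 1.0   # vertical position advances linearly with time

def y_gaps_at_column(x0, n_periods=4, steps=20000):
    """Vertical gaps between successive beam passes at horizontal position x0."""
    ys, x_prev = [], 0.0
    for i in range(1, steps):
        t = i * n_periods / (F_H * steps)
        x = math.sin(2 * math.pi * F_H * t)
        if (x_prev - x0) * (x - x0) < 0:   # sign change => beam crossed x0
            ys.append(V_RATE * t)          # nearest-sample estimate of y
        x_prev = x
    return [ys[i + 1] - ys[i] for i in range(len(ys) - 1)]

for x0 in (0.0, 0.95):
    gaps = ", ".join(f"{g * F_H:.2f}" for g in y_gaps_at_column(x0)[:4])
    print(f"x = {x0:4.2f}: gaps between passes (in scan periods): {gaps}")
# At x = 0 (center) the gaps are uniform (~0.50 each); at x = 0.95 (edge)
# they alternate ~0.10 / ~0.90, i.e. passes pair up near the edges and the
# effective vertical resolution drops.
```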

Image Size Comparison

The 5-year-old Microvision ShowWX had this same blurring issue with small images.  In looking inside at the optics with the lasers on, I noticed that the laser spot sizes were larger than expected.  I’m left wondering if the larger laser spot sizes were at least in part caused by efforts to reduce speckle, or by something else.

Next time, I plan on giving a little “tour” of the optics.

Addendum – How the pictures were taken, full resolution images, and source pattern used

All the pictures were taken with a Canon 70D (5472 by 3648 pixel) DSLR.  By framing the pictures so that the projected image filled roughly 90% of the width, there were roughly 4 camera-pixel “samples” per pixel in the projected image.   The ruler in the picture was there both to keep track of the size of the image and to make sure the camera was in focus and could resolve single pixels (if they were there).
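
The sampling arithmetic, for anyone checking:

```python
# Camera samples per projected pixel when the 1280-pixel-wide image
# fills ~90% of the 5472-pixel-wide sensor.
sensor_width_px = 5472
image_fill = 0.90
projected_width_px = 1280

samples = sensor_width_px * image_fill / projected_width_px
print(f"~{samples:.1f} camera samples per projected pixel")  # ~3.8
```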

I did selectively zoom in with the camera on smaller regions to see if it made any measurable difference in resolving features in the images and it did not.  I have included the test pattern I used and would welcome anyone using it to verify what I have shown.

By clicking on the thumbnails below, you will bring up the full-size image (depending on your browser, it may not display full size until after you click on the magnifying glass).  You can then right-click to download the images.   Each image is about 8 to 9 megabytes and is stored in a high-quality (low-compression) JPG format.   The source test pattern is stored as a lossless PNG.

12-inch Diag Celluon_8572

12-in Diagonal Celluon Image (20 megapixels-click to see full size image)

24-inch Diag Celluon_8452

24-in Diagonal Celluon Image (20 megapixels click to see full size image)

72-inch Diag Celluon_8205

72-in Diagonal Celluon Image (20 megapixels click to see full size image)

Basic res-chart-720P

Test Pattern Source (1280×720 pixels PNG format, click for full size image)