Archive for LCOS

Wrist Projector Scams – Ritot, Cicret, the new eyeHand

ritot-cicret-eyehand-001Wrist Projectors are the crowdfund scams that keep on giving, with new ones cropping up every 6 months to a year. When I say scam, I mean that there is zero chance that they will ever deliver anything even remotely close to what they are promising. They have obviously “Photoshopped”/fake pictures to “show” projected images that are not even close to possible in the real world and violate the laws of physics (they are forever impossible). While I have pointed out in this blog where I believe that Microvision has lied to and misled investors and showed very fake images with their laser beam scanning technology, even they are not total scammers like Ritot, Cicret, and eyeHand.

According to Ritot’s Indiegogo campaign, they have taken in $1,401,510 from 8,917 suckers (they call them “backers”). Cicret, according to their website, has a haul of $625,000 from 10,618 gullible people.

Just when you would think that Ritot and Cicret had found all the suckers for wrist projectors, CrowdFunder now reports that eyeHand has raised $585,000 from individuals and claims to have raised another $2,500,000 in equity from “investors” (if they are real, then they are fools; if not, then it is just part of the scam). A million here, $500K there; pretty soon you are talking real money.

Apparently Dell’s marketing is believing these scams (I would hope their technical people know better) and has shown video ads with similar impossible projectors. One thing I will give them is that they did a more convincing “simulation” (no projecting “black”), and they say in the ads that these are “concepts” and not real products. See for example the following stills from Dell’s videos (click to see larger image). It looks to me like they combined a real projected image (with the projector off camera and perpendicular to the arm/hand) and then added fake projector rays to try and suggest it came from the dummy device on the arm: dell-ritots-three

Ritot was the first of these scams I was alerted to, and I helped contribute some technical content to the DropKicker article http://drop-kicker.com/2014/08/ritot-projection-watch/. I am the “Reader K” thanked in the author’s note at the beginning of the article. A number of others have called out Ritot and Cicret as being scams, but that has not kept them from continuing to raise money, nor has it stopped the new copycat eyeHand scam.

Some of the key problems with wrist projectors:

  1. Very shallow angle of projection.  Projectors normally project onto a surface that is perpendicular to the direction of projection, but the wrist projectors have to project onto a surface that is nearly parallel to the direction of projection.  Their concepts show a projector that is only a few (2 to 4) millimeters above the surface. When these scammers later show “prototypes,” they radically change the projection distance and angle.
  2. Extremely short projection distance.  The near side of the projection is only a few millimeters away, while the far side of the image can be 10X or 50X further away.  No optics or laser scanning technology on earth can do this; there is no way to get such a wide image at such a short distance from the projector.  Because light falls off with the square of distance, this creates an impossible illumination problem (the far side would be over 100X dimmer than the near side).
  3. Projecting in ambient light.  All three of the scammers show concept images where the projected image is darker than the surrounding skin.  This is absolutely impossible and violates the laws of physics.  The “black” of the image is set by the ambient light and the skin; the projector can only add light, and it is impossible to remove light with a projector.  This shows ignorance of and/or a callous disregard for the truth by the scammers.
  4. The blocking of the image by hairs, veins, and muscles.  At such a shallow angle (per #1 above) everything is in the way.
  5. There is no projector small enough.  The projector engines (with their electronics) that exist are more than 20X bigger in volume than what would be required to fit.
  6. The orifice through which the light emerges is too small to support the size of the image they want to project.
  7.  The battery required to make them daylight readable would be bigger than the whole projector that they show.  These scammers would have you believe that a projector could work off a trivially small battery.
  8. Cicret and eyeHand show “touch interfaces” that won’t work due to the shallow angle.  The shadows cast by fingers working the touch interface would block the light to the rest of the image and make “multi-touch” impossible.  This also goes back to the shallow angle issue #1 above.
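To put rough numbers on the illumination problem in point #2 above, here is a minimal sketch using the inverse-square law alone. The specific near/far distances are my own illustrative assumptions, not figures from any of these campaigns:

```python
def falloff_ratio(near_mm: float, far_mm: float) -> float:
    """How many times dimmer the far edge of the image is than the
    near edge, from the inverse-square law alone.  This ignores the
    additional cosine losses from hitting the skin at a grazing
    angle, so the real situation is even worse."""
    return (far_mm / near_mm) ** 2

# Illustrative distances: near edge 4 mm from the lens, far edge
# 10X and 50X further away (the 10X/50X cases mentioned above).
print(falloff_ratio(4, 40))   # 100.0  -> over 100X dimmer
print(falloff_ratio(4, 200))  # 2500.0
```

No projector can come close to evening out a 100X (let alone 2500X) brightness difference across one image.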

The issues above hold true whether the projection technology uses DLP, LCOS, or Laser Beam Scanning.

Cicret and Ritot have both made “progress reports” showing stills and videos using projectors more than 20 times bigger, positioned much higher and farther away (to reduce the projection angle problem), than the sleek wristwatch models in their 3-D CAD renderings.  Even then, they keep off-camera much/most of the electronics and the battery/power supply needed to drive the optics that they show.

The image below is from a Cicret “prototype” video from February 2015 where they simply strapped a Microvision ShowWX+ HDMI upside down to a person’s wrist (I wonder how many thousands of dollars they spent engineering this prototype). They goofed in the video and showed enough of the projector that I could identify (red oval) the underside of the Microvision projector (the video also shows the distinctive diagonal roll bar of a Microvision LBS projector).  In the image below I have shown, roughly to scale, the rest of the projector that they cropped off when shooting the video.  What you can’t tell from the video is that the projector is also a couple of inches above the surface of the arm in order to project a reasonable image.

cicret-001b

So you might think Cicret was going to use laser beam scanning, but no, their October 2016 “prototype” shows a panel-based (DLP or LCOS) projector.  Basically it looks like they are just clamping whatever projector they can find to a person’s wrist; there is no technology they are developing.  In this latest case, it looks like they found a small production projector, took its guts out, and put them in a 3-D printed case.  Note that the top of the case is going to be approximately 2 inches above a person’s wrist, and note how far away the image is from the projector.

cicret-002e

Ritot has also made updates to keep their suckers on the hook.  Apparently Indiegogo’s only rule is that you must keep lying to your “backers” (for more on how Indiegogo condones fraud, click here).  These updates at best show how little these scammers understood projection technology.  I guess one could argue that they were too incompetent to know they were lying.  ritot-demo-2014

On the left is a “demo” Ritot showed in 2014 after raising over $1M.  It is simply an off-the-shelf development-system projector, and note there is no power supply.  Note also that they are showing it straight-on/perpendicular to the wrist from several inches away.

ritot-2015By 2015 Ritot had their own development system and some basic optics.  Notice how big the electronics board is relative to the optics, and that even this does not show the power source.

By April 2016 they showed an optical engine (ONLY) strapped to a person’s wrist.  ritot-2016-04-20-at-25sCut off in the picture is all the video drive electronics (see the flex cable in the red oval), which is off camera and likely a driver board similar to the one in the 2015 update, plus the power supplies/battery.

In the April 2016 picture you should notice how the person’s wrist is bent to make it more perpendicular to the direction of the projected image.  Also note that the image is distorted and about the size of an Apple Watch’s image.  I will also guarantee that you would not have a decent viewable image when used outdoors in daylight.

The eyeHand scam has not shown anything like a prototype, just a poorly faked (projecting black) image.  From the low angle they show in their fake image, the projection would be blocked by the base of the thumb even if the person held their hand flat.  To make it work at all, they would have to move the projector well up the person’s arm and then bend the wrist, but then the person could not view it very well unless they held their arm at an uncomfortable angle.  Then you have the problem of keeping the person from moving/relaxing their wrist and losing the projection surface.  And of course it would not be viewable outdoors in daylight.

It is not as if others have not been trying to point out that these projectors are scams.  Google “Ritot scam” or “Cicret scam” and you will find a number of references.  As best I can find, this blog is the first to call out the eyeHand scam:

  • The most technically in depth article was by Drop-Kicker on the Ritot scam
  • Captain Delusional has a comic take on the Cicret scam on YouTube.  He has some good insights on the issue of touch control but also makes some technical mistakes, such as his comments on laser beam scanning (you can’t remove the laser scanning roll-bar by syncing the camera; also, laser scanning has the same fall-off in brightness due to the scanning process).
  • Geek Forever had an article on the Ritot Scam 
  • A video about the Ritot Scam on Youtube
  • KickScammed about Ritot from 2014

The problem with scam startups is that they tarnish all the other startups trying to find a way to get started.  Unfortunately, the best liars/swindlers often do the best with crowdfunding.  The more they are willing to lie/exaggerate, the better it makes their product sound.

Indiegogo has proven time and again to have extremely low standards (basically, as long as the company keeps posting lies, they are good to go; MANY people tried to warn Indiegogo about the Ritot scam before Ritot got the funds, but to no avail). Kickstarter has some standards, but the bar is not that high; at least I have not seen a wrist projector on Kickstarter yet. Since the crowdfunding sites get a cut of the action whether the project delivers or not, their financial incentives are on the side of the companies rather than the people funding them. There is no bar at all for companies that go with direct websites; it is purely caveat emptor.

I suspect that since the wrist projector scam has worked at least three (3) times so far, we will see others using it.  At least with eyeHand you have a good idea of what it will look like in two years (hint: like Ritot and Cicret).

Laser Beam Scanning Versus Laser-LCOS Resolution Comparison

cen-img_9783-celluon-with-uo

Side By Side Center Patterns (click on image for full size picture)

I apologize for being away for so long.  The pictures above and below were taken over a year ago and I meant to format and publish them back then but some other business and life events got in the way.

The purpose of this article is to compare the resolution of the Celluon PicoPro Laser Beam Scanning (LBS) projector and the UO Smart Beam Laser LCOS projector.   This is not meant to be a full review of both products, although I will make a few comments here and there, but rather, it is to compare the resolution between the two products.  Both projectors claim to have 720P resolution but only one of them actually has that “native/real” resolution.

This is in a way a continuation of the series I have written about the PicoPro, with optics developed by Sony and the beam scanning mirror and control by Microvision, in particular the articles http://wp.me/p20SKR-gY and http://wp.me/p20SKR-hf.  With this article I am now including some comparison pictures I took of the UO Smart Beam projector (https://www.amazon.com/UO-Smart-Beam-Laser-Projector-KDCUSA/dp/B014QZ4FLO).

As per my prior articles, the Celluon PicoPro has nowhere close to its stated 1920×720 (non-standard) resolution, nor even 1280×720 (720P).  The UO projector, while not perfect, does demonstrate 720P resolution reasonably well, but it does suffer from chromatic aberrations (color separation) at the top of the image due to its 100% optical offset (this is to be expected to some extent).

Let me be up front: I worked on the LCOS panel used in the UO projector when I was at Syndiant, but I had nothing to do with the UO projector itself.  Take that as bias if you want, but I think the pictures tell the story.  I did not have any contact with either UO (nor Celluon for that matter) in preparing this article.

I also want to be clear that both the UO projector and the Celluon PicoPro tested are now over 1 year old, and there may have been improvements since then.  I saw serious problems with both products, in particular with the color balance: the Celluon is too red (“white” is pink) and the UO is very red deficient (“white” is significantly blue-green).  The color is so far off on the Celluon that it would be a show stopper for me ever wanting to buy one as a consumer (hopefully UO has fixed or will fix this).  Frankly, I think both projectors have serious flaws (if you want to know more, ask and I will write a follow-up article).

The UO Smart Beam has the big advantage of “100% offset,” which means that when placed on a tabletop, it will project upward without hitting the table and without any keystone.  The PicoPro has zero offset and shoots straight out; if you put it flat on a table, the lower half of the image will shoot into the tabletop. Celluon includes a cheap and rather silly monopod that you can use to have the projector “float” above the table surface, and then you can tilt it up and get a keystoned image.  To take the picture, I had to mount the PicoPro on a much taller tripod and then shoot over the projector so the image would not be keystoned.

I understand that the next generation of the Celluon and the similar Sony MPCL1 projector (which has a “kickstand”) have “digital keystone correction,” which is not as good a solution as 100% offset because it reduces the resolution of the image; it is the “cheap/poor” way out, and they really should have 100% offset like the UO projector (interestingly, the earlier, lower-resolution Microvision ShowWX projector had 100% offset).

For the record, I like the Celluon PicoPro’s flatter form factor better; I’m not a fan of the UO cube, as it hurts the ability to put the projector in one’s pocket or a typical carrying bag.

Both the PicoPro with laser scanning and the Smart Beam with lasers illuminating an LCOS microdisplay have no focus knob and have a wide focus range (from about 50cm/1.5 feet to infinity), although both are less sharp at the closer range.  The PicoPro with LBS is a Class 3R laser product, whereas the Smart Beam with laser “illumination” of LCOS is only Class 1.  The measured brightness of the PicoPro was about 32 lumens as rated when cold but dropped under 30 when heated up.  The UO, while rated at 60 lumens, was about 48 lumens when cold and about 45 when warmed up, significantly below its “spec.”

Now onto the main discussion of resolution.  The picture at the top of this article shows the center crop from a 720P test pattern generated by both projectors, with the Smart Beam image on the left and the PicoPro on the right.  There is also an inset of the Smart Beam’s 1-pixel-wide test pattern near the PicoPro’s 1-pixel-wide pattern for comparison.  The test pattern shows a series of 1-pixel, 2-pixel, and 3-pixel wide horizontal and vertical lines.

What you should hopefully notice is that the UO clearly resolves even the 1-pixel-wide lines and its black lines are black, whereas the PicoPro’s 1-pixel-wide lines are at best blurry and even its 2- and 3-pixel-wide lines do not get to a very good black level (in other words, the contrast is very poor).  And the center is the very best case for the Celluon LBS, whereas for the UO, with its 100% offset, it is a medium case (the best case is lower center).

The worst case for both projectors is one of the upper corners, and below is a similar comparison of their upper right corners.  As before, I have included an inset of the UO’s single-pixel image.

ur-img_9783-celluon-with-uo-overlay

Side By Side Upper Right Patterns (click on image for full size picture)

What you should notice is that while there are still distinct 1-pixel-wide lines in both directions from the UO projector, the 1-pixel-wide lines from the Celluon LBS are a blurry mess.  Clearly it can’t resolve 1-pixel-wide lines at 720P.

Because of the 100% offset optics, the best case for the UO projector is at the bottom of the image (this is true of almost any 100% offset optics), and this case is not much different from the center case for the Celluon projector (see below):

lcen-celluon-with-uo-overlay

Below is a side-by-side picture I took (click on it for a full size image). The camera’s “white point” was set to an average between the two projectors (the Celluon is too red/blue-and-green deficient and the UO is red deficient). The image below is NOT what I used for the cropped test patterns above, as the 1-pixel features were too near the resolution limit of the Canon 70D camera (5472 by 3648 pixels).  So I used individual shots of each projector so the camera would double-“sample” the projected images.

side-by-side-img_0339-celluon-uo

For the Celluon PicoPro image I used the picture below (originally taken in RAW, then digital-lens-corrected, cropped, and later converted to JPG for posting; click on image for full size):

img_9783-celluon-with-uo-overlay

For the UO Smart Beam image, I used the following image (also taken in RAW, digital-lens-corrected, straightened slightly, cropped, and later converted to JPG for posting):

img_0231-uo-test-chart

As is my usual practice, I am including the test pattern (in lossless PNG format) below for anyone who wants to verify and/or challenge my results:

interlace res-chart-720P G100A

I promise I will publish any pictures from anyone who can show better results with the PicoPro or any other LBS projector (or the UO projector, for that matter) using the test pattern above (or similar).  I went to considerable effort to take the best possible PicoPro image that I could with a Canon 70D camera.

Desperately Seeking the Next Big Thing – Head Mounted Displays (HMDs) — Part 1

Untitled-2With Microsoft’s big announcement of HoloLens and spending a reported $150 million just for HMD IP from the small Osterhout Design Group, reports of Facebook spending about $2 billion for Oculus Rift, and the mega publicity surrounding Google Glass and the hundreds of millions they have spent, Head Mounted Displays (HMD) are certainly making big news these days.

Most of the articles I have seen pretty much just parrot the company press releases and hype these up as being the next big thing.  Many of the articles have, to say the least, dubious technical content, and at worst give misinformation.  My goal is to analyze the technology, and much of what I am seeing and hearing does not add up.

The question is whether these are lab experiments with big budgets, with companies jumping the gun and chasing each other, or whether HMDs really are going to be big in terms of everyone using them.  Or are the companies just running scared that they might miss the next big thing after cell phones and tablets?  Will they reach numbers rivaling cell phones (or at least a significant fraction)?  Or perhaps is there a “consolation prize market,” which for HMDs would be taking a significant share of the game market?

Let me get this out of the way:  Yes, I know there is a lot of big money and many smart people working on the problem.  The question is whether the problem is bigger than what is solvable.  I know I will hear from all the people with 20/20 hindsight citing all the successful analogies (often Apple), but for every success there are many more that failed to catch on in a big way or had minor success and then dived.  As examples, consider the investment in artificial intelligence (AI) and related computing in the 1980’s, or the Intel iAPX 432 (once upon a time Intel was betting the farm on the 432 as the replacement for the 8086, until the IBM PC took off).  More recently, and more directly related, 3-D TV has largely failed.  My point here is that big companies and lots of smart people make the wrong call on future markets all the time; sometimes the problem is bigger than all the smart people and money can solve.

Let me be clear, I am not talking about HMDs used in niche/dedicated markets.  I definitely see uses for HMDs in applications where hands-free use is essential.  A classic example is military applications, where a soldier has to keep his hands free, is already wearing a helmet that messes up his hair, doesn’t care what he looks like, and spends many hours in training.  There are also uses for HMDs in the medical field as a visual aid for doctors and for helping people with impaired vision.  What I am talking about is whether we are on the verge of mass adoption.

Pardon me for being a bit skeptical, but on the technical side I still see some tremendous obstacles for HMDs.  As I pointed out on this blog soon after Google Glass was announced (http://www.kguttag.com/2012/03/03/augmented-reality-head-mounted-displays-part-1-real-or-not/), HMDs have a very long history of not living up to expectations.

I personally started working on an HMD in 1998 and learned about many of the issues and problems associated with them.  There are the obvious measurable issues like size, weight, fit/comfort (and whether you can wear them with your glasses), display resolution, brightness, ruggedness, storage, and battery life.  Then there are what I call the “social issues,” like how geeky they look, whether they mess up a person’s hair, and taking video (a particularly hot topic with Google Glass).  But perhaps the most insidious problems are what I lump into the “user interface” category, which includes input/control, distraction/safety, nausea/disorientation, and what I loosely refer to as “it just doesn’t work right.”  These issues only just touch on what I sometimes jokingly refer to as “the 101 problems with HMDs.”

A lot is made of the display device itself, be it a transmissive LCD, liquid crystal on silicon (LCOS), OLED, or TI’s DLP.  I have about 16 years of history working on display devices, particularly LCOS, and I know the pros and cons of each one in some detail.  But as it turns out, the display device and its performance is among the least of the issues with HMDs; I had a very good LCOS device way back in 1998.  As with icebergs, the biggest problems are the ones below the surface.

This first article is just to set up the series.  My plan is to go into the various aspects and issues with HMDs, trying to be as objective as I can, with a bit of technical analysis.  My next article will be on the subject of “one eye, two eyes, transparent or not.”

Whatever happened to pico projectors embedding in phones?

iPad smBack around 2007, when I was at Syndiant, we started looking at the pico projector market.  We talked to many of the major cell phone companies as well as a number of PC companies, and almost everyone had at least an R&D program working on pico projectors.  Additionally, there were market forecasts for rapid growth of embedded pico projectors in 2009 and beyond.  This convinced us to develop small liquid crystal on silicon (LCOS) microdisplays for embedded pico projectors.  With so many companies saying they needed pico projectors, it seemed like a good idea at the time.  How could so many people be wrong?

Here we are 6 years later, and there are almost no pico projectors embedded in cell phones or much else for that matter.  So what happened?  Well, just about the same time we started working on pico projectors, Apple introduced their first iPhone.  The iPhone overnight roughly tripled the display screen size of a smartphone such as a Blackberry.  Furthermore, Apple introduced ways to control the screen (pinch/zoom, double-tapping to zoom in on a column, etc.) to make better use of what was still a pretty small display.  Then, to make matters much worse, Apple introduced the iPad, and the tablet market took off almost instantaneously.  Today we have larger phones, so-called “phablets,” and small tablets filling in just about every size in between.

Additionally, as I have written about before, the use model for a cell phone pico projector shooting on a wall doesn’t work.  There is very rarely, if ever, a dark enough place with something that will work well as a screen in a place that is convenient.

I found that to use a pico projector I had to carry a screen (at least a white piece of paper mounted on a stiff board in a plastic sleeve to keep it clean and flat) with me.  Then you have the issue of holding the screen up so you can project on it, and of finding a dark enough place that the image looks good.  By the time you carry a pico projector and a screen with you, a thin iPad/tablet works better; you can carry it around the room with ease, and you don’t need a very dark environment.

The above is the subjective analysis, and the rest of this article will give some more quantitative numbers.

The fundamental problem with a front projector is that it has to compete with ambient light, whereas flat panels have screens that absorb generally 91% to 96% of the ambient light (thus they look dark when off).  While display makers market contrast numbers, these very high contrast numbers assume a totally dark environment; in the real world what counts is the net contrast, that is, the contrast factoring in ambient light.

Displaymate has an excellent set of articles (including SmartPhone Brightness Shootout, Mobile Brightness Shootout 2, and Smartphone Shootout 2) on the subject of what they call “Contrast Rating for High Ambient Light” (CRHAL), which they define as the display brightness per unit area (in candelas per meter squared, also known as “nits”) divided by the percentage of ambient light reflected by the display.

Displaymate’s CRHAL is not a “contrast ratio,” but it gives a good way to compare displays in reasonable ambient light.  Also important is that for a front projector it does not take much ambient light to dominate the contrast.  For a front projector, even dim room light is “high ambient light.”

The total light projected out of a projector is given in lumens, so to compare it to a cell phone or tablet we have to know how big the projected image will be and the type of screen.  We can then compute the reflected light in “nits,” which is calculated by the following formula: candelas/meter2 = nits = Gn x (lumens/m2)/PI (where Gn is the gain of the screen and PI = ~3.1416).  If we assume a piece of white paper with a gain of 1 (about right for a piece of good printer paper), then all we have to do is calculate the screen area in square meters, divide the lumens by the area, and divide by PI.

A pico projector projecting a 16:9 (HDTV aspect ratio) image on a white sheet of notebook paper (with a gain of 1) results in an 8.8-inch by 5-inch image with an area of 0.028 m2 (about the same area as an iPad2, which I will use for comparison).  Plugging a 20-lumen projector into the equation above with a screen of 0.028 m2 and a gain of 1.0, we get 227 nits.  The problem is that the same screen/paper will reflect (diffusely) about 100% of the ambient light.  Using Displaymate’s CRHAL we get 227/100 = 2.27.

Now compare the pico projector numbers to an iPad2 of the same display area, which according to Displaymate has 410 nits and reflects only 8.7% of the ambient light.  The CRHAL for the iPad2 is 410/8.7 = 47.  What really crushes the pico projector, by about 20 to 1 in the CRHAL metric, is that the flat panel display reflects less than a 10th of the ambient light, whereas the pico projector’s image has to fight with 100% of the ambient light.
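The nits and CRHAL arithmetic above is easy to check.  Here is a minimal sketch (the function names are my own; the formula and the numbers are the ones from the text and Displaymate):

```python
import math

def reflected_nits(lumens: float, area_m2: float, gain: float = 1.0) -> float:
    """Reflected luminance in nits: Gn * (lumens / m^2) / pi."""
    return gain * (lumens / area_m2) / math.pi

def crhal(nits: float, reflectivity_percent: float) -> float:
    """Displaymate's Contrast Rating for High Ambient Light:
    brightness in nits divided by percent ambient reflectivity."""
    return nits / reflectivity_percent

# 20-lumen pico projector on a 0.028 m^2, gain-1 paper screen,
# which reflects ~100% of the ambient light:
pico = reflected_nits(20, 0.028)
print(round(pico), round(crhal(pico, 100), 2))  # 227 2.27

# iPad2 per Displaymate: 410 nits, 8.7% reflectivity:
print(round(crhal(410, 8.7)))                   # 47
```

The roughly 20-to-1 CRHAL gap (47 vs. 2.27) is the whole story in one line of arithmetic.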

In terms of contrast, to get a barely “readable” B&W image, you need at least 1.5:1 contrast (the “white” needs to be 1.5X brighter than the black) and preferably more than 2:1.  To have moderately good (but not great) colors you need 10:1 contrast.

A well-lit room has about 100 to 500 lux (see Table 1 at the bottom of this article), and a bright “task area” up to 1,500 lux.  If we take 350 lux as a “typical” room, then for the sheet-of-paper screen there are about 10 lumens of ambient light falling on our 0.028 m2 image from above.  Thus our 20-lumen projector on top of the 10 lumens of ambient light has a contrast ratio of 30/10, or about 3 to 1, which means the colors will be pretty washed out but black-on-white text will be readable.  To get reasonably good (but not great) colors with a contrast ratio of 10:1, we would need about 80 lumens.  By the same measure, the iPad2 in the same lighting would have a contrast ratio of about 40:1, or over 10X the contrast of a 20-lumen pico projector.  And the brighter the lighting environment, the worse the pico projector will compare.  Even if we double or triple the lumens, the pico projector can’t compete.
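The ambient-contrast calculation above can be sketched the same way.  The function name is mine; note the sketch gives ~88 lumens for 10:1 contrast rather than the rounder ~80 figure, simply because it does not round the ambient light to 10 lumens:

```python
def net_contrast(projector_lumens: float, ambient_lux: float,
                 area_m2: float) -> float:
    """Net contrast on a gain-1 diffuse screen:
    white = projector light + reflected ambient,
    black = reflected ambient only."""
    ambient_lumens = ambient_lux * area_m2  # ambient falling on the image
    return (projector_lumens + ambient_lumens) / ambient_lumens

# 350 lux "typical" room, 0.028 m^2 image -> ~9.8 lumens of ambient:
print(round(net_contrast(20, 350, 0.028), 1))  # 3.0, i.e. about 3:1

# Lumens needed for 10:1 (color-worthy) contrast in the same room:
print(round((10 - 1) * 350 * 0.028))           # 88
```

Plug in brighter rooms or bigger images and the required lumens climb out of reach of any pocketable battery.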

With the information above, you can plug in whatever numbers you want for brightness and screen size, and no matter what reasonable numbers you use, you will find that a pico projector can’t compete with a tablet even in moderate lighting conditions.

And all this is before considering the power consumption and space a pico projector would take.  After working on the problem for a number of years, it became clear that rather than adding a pico projector with its added battery, phone makers would be better off just making the display bigger (ala the Galaxy S3 and S4 or even the Note).  The microdisplay devices created would have to look for other markets, such as near-eye displays (for example, Google Glass) and automotive heads-up displays (HUD).

Table 1.  Typical Ambient Lighting Levels (from Displaymate)

Brightness Range          Description
0 – 100 lux               Pitch black to dim interior lighting
100 – 500 lux             Residential indoor lighting
500 – 1,500 lux           Bright indoor lighting: kitchens, offices, stores
1,000 – 5,000 lux         Outdoor lighting in shade or an overcast sky
3,000 – 10,000 lux        Shadow cast by a person in direct sunlight
10,000 – 25,000 lux       Full daylight not in direct sunlight
20,000 – 50,000 lux       Indoor sunlight falling on a desk near a window
50,000 – 75,000 lux       Indoor direct sunlight through a window
100,000 – 120,000 lux     Outdoor direct sunlight

Himax FSC LCOS in Google Glass — Seeking Alpha Article

Catwig to Himax ComparisonThis blog was the first to identify that there was a Himax panel in an early Google Glass prototype and the first to identify that there was a field sequential color LCOS panel inside Google Glass.  Given the connection, it was reasonable to speculate, but there was no proof, that Himax was in Google Glass.

Then when Catwig published a teardown of Google Glass last week (and my inbox lit up with people telling me about the article), there were no Himax logos to be seen, which started people wondering whether there was indeed a Himax display inside.  As a result of my prior exclusive finds on Himax, LCOS, and Google Glass, I was asked to contribute to Seeking Alpha, and I just published an article that details my proof that there is a Himax LCOS display inside the current Google Glass.  In that article, I also discounted some recent speculation that Google Glass will switch to a Samsung OLED microdisplay anytime soon.


Extended Temperature Range with LC Based Microdisplays

cookies and freezing

Extreme Car Temperatures

A reader, Doug Atkinson, asked a question about meeting extended temperature ranges with LC-based microdisplays, particularly with respect to Kopin.  He asked the classic “car dash in the desert and the trunk in Alaska” question. I thought the answer would have broader interest, so I decided to answer it here.

Kopin wrote a good paper on the subject in 2006 titled “A Normally Black, High Contrast, Wide Symmetrical Viewing Angle AMLCD for Military Head Mounted Displays (HMDs) and Other Viewer Applications”. This paper is the most detailed one readily available describing how Kopin’s transmissive panels meet the military temperature and shock requirements. It is not clear that Kopin uses this same technology for their consumer products, as the paper specifically addresses what Kopin did for military products.

With respect to LC microdisplays in general, it should be realized that in most cases there is not a huge difference in the technical specs between the liquid crystals used in small panel microdisplays and those used in large flat panels. They often just use different “blends” of very similar materials. There are some major LC differences, including TN (twisted nematic), VAN (vertically aligned nematic), and others. Field sequential color designs are biased toward faster-switching LC blends.

In general, anywhere a large flat panel LC can go, a microdisplay LC can go. The issue is designing the seals and other materials/structures to withstand the temperature cycling and mechanical shock, which requires testing, experimentation, and development.

The liquid crystals themselves generally go through different phases, from freezing (which is generally fatal to the device) up to the “clearing point” where the display stops working (but generally recovers). There is also a different spec for “storage temperature range” versus “operating temperature range.” Generally it is assumed the device only has to operate in a temperature range in which a human could survive.

At low temperatures the LC gets “sluggish” and does not operate well, but this can be cured by various heater mechanisms, including heating elements designed into the panel itself. The liquid crystal blends are often designed/picked to work best at the higher end of the temperature range because it is easier to heat than to cool.

Field sequential color LCOS is more affected by temperature change because temperature affects not only the LC’s optical characteristics but also its switching speed. Once again, this can be dealt with by designing for the higher temperature range and then heating when necessary.

As far as Kopin’s “brightness” goes (another of Doug’s questions), a big factor is how powerful/bright the back light has to be. The Kopin panel blocks something like 98.5% of the light by their own spec’s. What you can get away with in a military headset is different than what you may accept in a consumer product in terms of size, weight, and power consumption. Brightness in daylight is a well known (inside the industry) issue for Kopin’s transmissive panels and one reason that near eye display makers have sought out LCOS.

[As an aside for completeness about FLC] Displaytech, which was sold to Micron and then to Citizen Finetech Miyota, and Forth Dimension Displays (FDD), which Kopin bought, both use ferroelectric LC (FLC/FLCOS), which has a dramatically different temperature profile: it is very near “freezing” (going into a solid state) a little below 0°C, which would destroy the device. Displaytech claimed (I don’t know about FDD) to have extended the low temperature range, but I don’t know by how much. The point is that the temperature range of FLC is so different that meeting military specs is much more difficult.

AR Display Device of the Future: Color Filter, Field Sequential, OLED, LBS and other?

I’m curious what people think will be the near eye microdisplay of the future.   Each technology has its own drawbacks and advantages that are well known.   I thought I would start by listing summarizing the various options:

Color filter transmissive LCD – large pixels with 3 sub-pixels, and lets through only 1% to 1.5% of the light (depending on pixel size and other factors). Scaling down is limited by the colors bleeding together (LC effects) and light throughput. Low power to the panel but very inefficient use of the illumination light.

Color filter reflective (LCOS) – same as CF-transmissive except the sub-pixels (color dots) can be smaller; still limited scaling due to needing 3 sub-pixels and color bleeding. Light throughput on the order of 10%. More complicated optics than transmissive (requires a beam splitter), but shares the low power to the panel.

Field Sequential Color (LCOS) – Color breakup from sequential fields (“rainbow effect”), but the pixels can be very small (less than 1/3rd the size of color filter pixels). Light throughput on the order of 40% (assuming a 45% loss in polarization). Higher power to the panel due to the changing fields. Optical path similar to CF-LCOS, but taking advantage of the smaller size requires smaller but higher quality (high MTF) optics. Potentially mates well with lasers for a very large depth of focus, so that the AR image is in focus regardless of where the user’s eyes are focused.

Field Sequential Color (DLP) – Color breakup from FSC, but can go to higher field rates than LCOS to reduce the effects. Device and control are comparatively high powered, and the optical path is larger. The pixel size is bigger than FSC LCOS due to the physical movement of the DLP mirrors. Light throughput on the order of 80% (no polarization losses) but falls as the pixel gets smaller (the gap between mirrors is bigger than with LCOS). Not sure this is a serious contender due to cost, power of the panel/controller, and optical path size, and nobody I know of has used it for near eye, but I listed it for completeness.

OLED – Larger pixel due to 3 color sub-pixels. It is not clear how small this technology will scale in the foreseeable future. While OLED keeps improving, progress has been slow; it has been the “next great near eye technology” for 10 years. It has a very simple optical path and potentially high light efficiency, which has made it seem to many like the technology with the best future, but it is not clear how it scales to very small sizes and higher resolutions (the smallest OLED pixel I have found is still about 8 times bigger than the smallest FSC LCOS pixel). Also, its light is very diffuse, and therefore the depth of focus will be low.

Laser Beam Scanning – While this one sounds good to the ill-informed, the need to precisely combine 3 separate laser beams tends to make it not very compact, and it is ridiculously expensive today due to the special (particularly green) lasers required. Similar to field sequential color, there are breakup effects from having a raster scan (particularly with no persistence, unlike a CRT) on a moving platform (as in a head mounted display). While there are still optics involved in producing an image on the eye, it could have a large depth of focus. There are a lot of technical and cost issues that keep this from being a serious alternative any time soon, but it is in this list for completeness.
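The throughput numbers scattered through the list above can be lined up side by side (figures copied from the summaries; they are order-of-magnitude characterizations, not measured values):

```python
# Approximate light throughput per technology, from the summaries above
throughput = {
    "Color filter transmissive LCD": 0.015,
    "Color filter reflective LCOS": 0.10,
    "Field sequential color LCOS": 0.40,
    "Field sequential color DLP": 0.80,
}

baseline = throughput["Color filter transmissive LCD"]
for tech, t in throughput.items():
    print(f"{tech}: {t:.1%}, {t / baseline:.0f}x the CF-transmissive panel")
```

Even before considering pixel size, the gap between roughly 1.5% and 40%+ throughput is a major part of why field sequential devices dominate for daylight-viewable near eye displays.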

I found it particularly interesting that Google’s early prototype used a color filter LCOS and then they switched to field sequential LCOS. This seems to suggest that they chose size over the issues with field sequential color breakup. With the technologies I know of today, this is the trade-off at any given resolution; field sequential LCOS pixels are less than 1/3rd the linear size (typically closer to 1/9th the area) of any of the existing 3-color devices (color filter LCD/LCOS or OLED).

Olympus MEG4.0

Olympus MEG4.0 – Display Device Over Ear

It should also be noted that in an HMD, an extreme “premium” is put on size and weight in front of the eye (weight in front of the eye creates a series of ergonomic and design issues). This can be mitigated by using light guides to bring the image to the eye and locating a larger/heavier display device and its associated optics in a less critical location (such as near the ear), as Olympus has done with their MEG4.0 prototype (note, Olympus has been working at this for many years). But doing this has trade-offs with the optics and cost.

Most of this comparison boils down to size versus field sequential color versus color sub-pixels.    I would be curious what you think.

Kopin Displays and Near Eye (Followup to Seeking Alpha Article)

Kopin Pixel compared to LCOS

Kopin’s smallest transmissive color filter pixel is bigger than nine of the smallest field sequential color LCOS pixels

After posting my discovery of a Himax LCOS panel in a Google Glass prototype, I received a number of inquiries about Kopin, including a request from Mark Gomes of Seeking Alpha to give my thoughts on Kopin, which were published in “Will Kopin Benefit From the Glass Wars?” In this post I am adding more information to supplement what I wrote for the Seeking Alpha article.

First, a little background on their CyberDisplay® technology would be helpful. Back in the 1990’s, Kopin developed a unique “lift-off” process to transfer transistors and other circuitry from a semiconductor I.C. onto a glass plate to make a transmissive panel, which they call the CyberDisplay®. Kopin’s lift-off technology was amazing for its era. It allowed Kopin to put very small (for the day) transistors on glass to enable small transmissive devices that were used predominantly in video and still camera viewfinders. The transmissive panel has 3 color dots (red, green, blue) that produce a single color pixel, similar to a large LCD screen only much smaller. In the late 1990’s, Kopin could offer a simple optical design with the transmissive color panel that was smaller than existing black and white displays using small CRTs. This product was very successful for them, but it has become a commoditized (cheap) device these many years later.

CyberDisplay pixel is large and blocks 98.5% of the light

While the CyberDisplay let Kopin cost effectively address the market for what are now considered low resolution displays, the Achilles’ heel of the technology is that it does not scale well to higher resolutions because the pixels are so large relative to other microdisplay technologies. For example, Kopin’s typical transmissive panel pixel is 15 by 15 microns and is made up of three 5 by 15 micron color “dots” (as Kopin calls them). What makes matters worse, even these very large pixel devices have an extremely poor light throughput of 1.5% (blocking 98.5% of the light), and scaling the pixel down will block even more light!

While not listed on the website (but included in a news release), Kopin has an 8.7 x 8.7 micron color filter pixel (that I suspect is used in their Golden-i head mounted display), but it blocks even more light than the 15x15 pixel, since blocking increases as the pixel gets smaller. Also, to be fair, there are CyberDisplay pixels that block “only” 93.5% of the light, but they give up contrast and color purity in exchange for light throughput, which is not usually desirable.

There are many reasons why the transmissive color filter panel’s light throughput is so poor. To begin with, the color filters themselves block more than 2/3rds of the light (each filter blocks the other two primary colors, plus other losses). And because the panel is transmissive, the circuitry and the transistor that control each pixel block the light, which becomes significant as the pixel becomes small.

But perhaps the biggest factor (and the most complex to understand; I will only touch on it here) is that the electric field controlling the liquid crystal for a given color dot extends into the neighboring color dots, causing the colors to bleed together and lose color saturation/control. To reduce this problem, they can use less throughput-efficient liquid crystal materials that are less susceptible to neighboring electric fields, and use black masks (which block light) surrounding each color dot to hide the area where the colors bleed together.

Field Sequential Color – Small Pixels and 80+% light throughput

With reflective LCOS, all the wires and circuitry are hidden behind the pixel mirror so that none of the transistors and other circuitry block the light. Furthermore, the liquid crystal layer is usually less than half as thick, which limits the electric field spreading and allows pixels to be closer together without significantly affecting each other. And of course there are no color filters wasting more than 2/3rds of the light. The downside to field sequential color is color field breakup: when the display moves quickly relative to the eye, the colors may not line up for a split second. The color breakup effects can be reduced by going to higher field sequential rates.

Kopin’s pixesl are huge when compared to those of field sequential LCOS devices (from companies such as Himax, Syndiant, Compound Photonics, and Citizen Finetech Miyota) that today can easily have pixels 5 by 5 microns and with some that are smaller than 3 by 3 microns.   Therefore FSC LCOS can have about 9 times the pixel resolution for roughly the same size device!  And the light throughput of the LCOS devices is typically more than 80% which becomes particularly important for outdoor use.

So while a low resolution Kopin CyberDisplay might be able to produce a low resolution image in a headset as small as Google Glass, they would be stuck with a low resolution device in the future, which is not a good long-term plan. I’m guessing that the ability to scale to higher resolutions was at least one reason why Google went with a field sequential color device rather than starting with a transmissive panel that would have, at least initially, been easier to design with. Another important factor weighing in favor of LCOS over a transmissive panel is the light throughput, so that the display is bright enough for outdoor use.

I don’t want to be accused of ignoring Kopin’s 2011 acquisition of Forth Dimension Displays (FDD) which makes a form of LCOS.  This is clearly a move by Kopin move into reflective FSC LCOS.   It so happens back in 1998 and 1999 I did some cooperative work with CRL Opto (that later became FDD) and they even used I design I worked on for their silicon backplane in their first product.  The FSC LCOS that FDD makes is considerably different in both the design of the device and the manufacturing process required for a high volume product.

Through FDD’s many years of history (and several name changes), FDD has drifted to a high end specialized display technology with large 8+ micron pixels. For the low volume niche applications FDD is servicing, there was no need to develop more advanced silicon to support a very small device and drive electronics. Other companies aiming more at consumer products (such as Syndiant, where I was CTO) have put years of effort into building “smarter” silicon that minimized not only the size of the display, but also the number of connection wires going between the display and the controller, and reduced the controller to one small ASIC.

Manufacturing Challenge for Kopin

Cost effectively assembling small pixel LCOS devices requires manufacturing equipment and methods that are almost totally different from what Kopin does with their CyberDisplay or FDD does with their large pixel LCOS. Almost every step in the process is done with an eye to high volume manufacturing cost. And it is not as if a company can just buy the equipment and be up and running; it usually takes over a year from the time the equipment is installed to get yields up to an acceptable level. Companies such as Himax have reportedly spent around $300M developing their LCOS devices, and I know of multiple other companies having spent over $100M and many years of effort in the past.

Conclusion

For at least the reasons given above, I don’t see Kopin as currently positioned to build competitive, high volume head mounted displays that meet the future needs of the market, as I think all roads lead to higher resolution yet smaller devices. It would seem to me that they would need a lot of time, effort, and money to field a long-term competitive product.

Laser Illumination Could Cause LCOS to Win Out Over OLED in Near Eye AR

Steve Mann IEEE adapted

The conventional wisdom is that OLEDs will eventually become inexpensive and push out all other near eye technologies because they will be smaller and lighter with a simpler optical path. But in reading “Steve Mann: My ‘Augmediated’ Life” in IEEE Spectrum, I was struck by his comment, “It requires a laser light source and a spatial light modulator” (spatial light modulators are devices like LCOS, transmissive panels, and DLP). The reason he gives for needing a laser light source is to support a very high depth of focus. For those that don’t believe LCOS and lasers give a high depth of focus, you might want to look at my blog post from last year (and the included link to a video demonstration).

Steve Mann has “lived the dream” of Augmented Reality for 35 years and (with due affection) is a geek’s geek when it comes to wearing AR technology. He makes what I think are valid points about what he finds wrong with Google Glass, including the need to have the camera’s view concentric with the eye’s view, and the eye muscle strain caused by the Google Glass image sitting in the upper corner of your field of view.

But the part of Steve Mann’s article that really caught my attention is the need for laser illumination to give a high depth of focus and reduce eye strain: what you see in the images needs to be in focus at the same depth as what you see in the real world. Google Glass and other LED-illuminated AR devices generally set the focus of the display at what would be a person’s far vision. Steve Mann is saying that the focus of the display has to match that of the real world or there will be problems, and the only known way to do this is to use laser illumination.

This issue of laser light having a large depth of focus when used with a panel is an important “gem” that could have a big impact on the technology used in near eye AR in the future. LEDs, and that includes OLEDs, produce light with rays that are scattered and hard to focus, whereas lasers produce high f-number light that is easy to focus (and requires smaller optics as well). As I said at the top of this post, the conventional wisdom is that cost is the only factor keeping OLEDs out of near eye AR, but if Steve Mann is correct, they are also prevented from being good for AR by the physics of light. And the best technology I know of for near eye AR to mate with laser light is LCOS.
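The depth of focus argument can be sketched geometrically (a deliberate simplification that ignores diffraction; the divergence angles below are illustrative assumptions for “laser-like” versus “LED-like” light, not measured values):

```python
import math

def in_focus_range_mm(max_blur_mm, full_divergence_deg):
    """Distance over which a light cone with the given full divergence
    angle grows by no more than max_blur_mm (geometric optics only)."""
    return max_blur_mm / math.tan(math.radians(full_divergence_deg))

# Allowing 0.1 mm of blur growth:
print(in_focus_range_mm(0.1, 0.05))  # collimated, laser-like beam: ~115 mm
print(in_focus_range_mm(0.1, 20.0))  # diffuse, LED/OLED-like light: ~0.27 mm
```

The orders-of-magnitude gap in the in-focus range is the geometric intuition behind why narrow, well-collimated laser illumination stays acceptably sharp over a wide range of eye focus while diffuse emitters do not.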

Google Glass Is Using Field Sequential Color (FSC) LCOS (Likely Himax)

GG DVF 40-42 RGB (2)

Sequential Red, Green, and Blue Fields Captured From Google YouTube Video DVF [through Glass]

I’m going to have to eat some crow because up until Saturday night, I honestly thought Google was using a transmissive panel based on the shape of the newer Google Glass headset.  I hadn’t seen anything that showed it used Field Sequential Color (FSC) and I had looked for it in several videos before that didn’t appear to show it.  With FSC the various (red, green, blue and perhaps other colors) are presented to the eye in sequence rather than all at the same time and this can show up in videos (usually) and in sometimes in still pictures.

But on Saturday (March 9th) I watched the Google-produced video DVF [through Glass] from way back in September 2012. A careful frame by frame analysis of the video (see above for the images from 3 frames) proves that the newer Google Glass design uses a Field Sequential Color (FSC) display. Note in the picture above, captured at 3 separate times, there are red, green, and blue images in the Google Glass, which is indicative of FSC. Based on the size, the shape, and some other technical factors (too much to go into here), it has to be a reflective Liquid Crystal on Silicon (LCOS) device, most likely made by Himax.

BTW, further visual evidence of it being an FSC device (there are a couple more examples in the video, but this one is to me the clearest) comes later in the video at 3:30, when Google Co-Founder (and part-time actor?) Sergey Brin, wearing Google Glass, stands up to applaud and there is a classic FSC color breakup, as captured in the picture below, recognizable to anyone that has looked into an FSC projector. Seeing separate color fields when the projector moves is a classic FSC effect.

GG man jumping up

Sergey Brin Stands Up Rapidly and Reveals Color Sequential Breakup

This (new) evidence largely confirms Seeking Alpha blogger Mark Gomes’ conclusion that Himax is in both the old and the newer Google Glass designs (see also his instablog response to my comments). Back last week I was not convinced and commented that I still thought it was a transmissive panel, and Mr. Gomes and I had some cordial back and forth public discussion in each other’s blogs on Seeking Alpha and this blog. But with the proof that it is using field sequential color, there is only one conclusion: it is a reflective field sequential color LCOS device. This also explains why the earlier prototype used a Himax color filter LCOS device when it would have been simpler and smaller to have used a transmissive panel at that time. Apparently the color filter LCOS was a “stand-in” waiting for the smaller field sequential color device and/or optics.

Additionally, I had dismissed the Digitimes Himax and Google Glass article as confirmation because it appeared a couple of days after Mark Gomes’ article, so I thought it was just an “echo” of what he and I had written. But in public comments Mr. Gomes pointed out that it added some more details.

So why do I now agree with Mr. Gomes that Google Glass most likely uses a Himax panel? The evidence is overwhelming that it is field sequential color, and Himax is the obvious candidate: my first blog post on the subject, which appeared Feb 28, 2012, clearly identified Himax as supplying the earlier Google Glass prototype, and they have had FSC LCOS devices for about 6 years. This is further reinforced by what Mark Gomes has posted, as well as the Digitimes article. Both the technical and the financial/business analyses agree.

There are a few other, but IMO much less likely, candidates. My old company Syndiant has digital FSC LCOS technology that, last I knew, was technically superior to Himax’s analog LCOS technology, but I don’t think Syndiant would be ready for a Google sized order yet (and the announced JVC-Kenwood deal happened too recently). Citizen Finetech Miyota (CFM) recently bought FSC LCOS technology from Micron, but I can’t see why Micron would have sold the technology to CFM if a deal with Google was in the works. Omnivision bought the FSC LCOS technology of Aurora Systems, but it was not very good technology IMO, and so far I only know of them continuing to make the old Aurora devices, which are aimed at front projectors. Then there is Compound Photonics, who bought the FSC assets of the now defunct Brillian, but they have stated that they are working on laser pico projectors.

Also, please don’t give me the conspiracy and collusion theories.   The video I watched on March 9th was the first one I had seen that proved Google Glass was field sequential color.  Additionally, I never corresponded with or even knew of Mark Gomes before the Seeking Alpha article came out mentioning my blog and I was legitimately concerned that he may have ignored some of my original article and only considered the parts that supported his position so I wanted to correct the record.  Mark Gomes for his part was very respectful, yet emphatic in his position based on his research which now appears to me to have been largely correct (although I still say the Himax web site looks abandoned and Himax did give the appearance of having given up on FSC LCOS back around 2010).   Frankly, I was as surprise as anyone at the wild swings in Himax stock and didn’t buy any before my first article.

Full Disclosure: I never traded in Himax stock before today (or any other stock discussed on this blog, other than being a well known holder of stock in the private company Syndiant as a former Founder, CTO, and investor). But seeing how the Google Glass news last week affected the stock, and based on Mr. Gomes’ articles combined with this new evidence, I decided to put some money where my mouth is and just bought some Himax (HIMX) to see what happens.

Appendix (For Those That Want to Duplicate My Findings)

That Google Glass used FSC would have been instantly recognizable to anyone that got to use the newer device, but I didn’t have one to play with, so I was working from the available on-line videos and pictures. The crafted Google videos that give the appearance of looking through Google Glass don’t show this because they use a simulation of the display. And in most of the videos the image in the Google Glass was not visible and/or the camera exposure and other settings didn’t pick up the FSC effects. Perhaps ironically, it appears that the camera in Google Glass itself tends to pick up the FSC effect more than the other cameras used to shoot pictures of people wearing it.

Some video cameras more than others will tend to pick up the signature color breakup of FSC. The camera angle also has to be right so that you can see the image when videoing someone wearing Google Glass. And perhaps most importantly, the exposure of the camera, which is usually based on the overall scene, has to be such that the sequential colors from the small spot of light in the viewfinder (I haven’t ever seen a close-up of the viewfinder) do not over-expose and wash out (in which case you may notice a more white flicker).

All I did was play the video DVF [through Glass] on my PC and keep pausing and un-pausing it. It is tricky to catch the frames that show FSC. One reason is that the video has many frames per second, and the YouTube player does not support frame-by-frame shuttle/jog. One could download the video and play it frame by frame, but that is not necessary. I just kept going over the time around 0:38 to 0:44 a few times to capture the images, and similarly went through the video at about 3:30 to get the FSC breakup with Sergey Brin.

Note that you will not always see a red, green, or blue color when you capture a frame. When colors get too bright in the image, they saturate the camera sensor and result in white. I don’t believe there is a “white field” in the Google Glass; rather, the camera is just not picking up the colors due to over-saturation.

I should also add that FSC effects show up differently on different cameras and in different lighting and camera exposures. I have looked previously at other Google Glass stills and videos trying to find the FSC effect and did not find it. Unless the camera angle and the exposure are right, you just aren’t going to see the colors. Even in this whole video, I only found a few seconds that demonstrated FSC.
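The pause-and-capture hunt described above can be partially automated by classifying each frame by its dominant color channel. A sketch (OpenCV is only needed to read the video; the filename, the region of interest, and the dominance threshold are all placeholders/assumptions to be tuned by eye for a given video):

```python
import numpy as np

def dominant_field(frame_bgr, threshold=1.5):
    """Classify a BGR frame region as a 'red', 'green', or 'blue' field
    when one channel's mean clearly dominates; return None for washed-out
    (over-saturated, near-white) or mixed frames."""
    b, g, r = (float(frame_bgr[..., i].mean()) for i in range(3))
    means = {"red": r, "green": g, "blue": b}
    best = max(means, key=means.get)
    runner_up = max(v for k, v in means.items() if k != best)
    return best if means[best] > threshold * max(runner_up, 1e-6) else None

def scan_video(path, roi=(slice(0, 80), slice(-120, None))):
    """Print the dominant color field, frame by frame, for a video of
    someone wearing Google Glass (roi is a guess at where the display
    appears in frame; adjust it per video)."""
    import cv2  # pip install opencv-python; imported here so the
    # classifier above can be used without OpenCV installed
    cap = cv2.VideoCapture(path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        field = dominant_field(frame[roi])
        if field:
            print(f"frame {idx}: {field} field")
        idx += 1
    cap.release()

# scan_video("dvf_through_glass.mp4")  # placeholder filename
```

This mirrors the manual process: frames where the small display region reads strongly red, green, or blue are the FSC “smoking gun” frames, while over-exposed frames come back as None, matching the white wash-out described above.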