Desperately Seeking the Next Big Thing – Head Mounted Displays (HMDs) — Part 1

With Microsoft’s big announcement of HoloLens and a reported $150 million spent just for HMD IP from the small Osterhout Design Group, reports of Facebook spending about $2 billion for Oculus Rift, and the mega publicity (and hundreds of millions of dollars spent) surrounding Google Glass, head mounted displays (HMDs) are certainly making big news these days.

Most of the articles I have seen pretty much just parrot the company press releases and hype these up as the next big thing.  Many of the articles have, to say the least, dubious technical content, and at worst they give misinformation.  My goal is to analyze the technology, and much of what I am seeing and hearing does not add up.

The question is whether these are lab experiments with big budgets, with companies jumping the gun and chasing each other, or whether HMDs really are going to be big in the sense of everyone using them.  Or are the companies just running scared that they might miss the next big thing after cell phones and tablets?  Will they reach numbers rivaling cell phones (or at least a significant fraction)?  Or perhaps is there a “consolation prize market,” which for HMDs would be to take a significant share of the game market?

Let me get this out of the way: yes, I know there is a lot of big money and many smart people working on the problem.  The question is whether the problem is bigger than what is solvable.  I know I will hear from all the people with 20/20 hindsight citing the successful analogies (often Apple), but for every success there are many more that failed to catch on in a big way or had minor success and then dived.  As examples, consider the investment in artificial intelligence (AI) and related computing in the 1980s, or the Intel iAPX 432 (once upon a time Intel was betting the farm on the 432 as the replacement for the 8086, until the IBM PC took off).  More recently and more directly related, 3-D TV has largely failed.  My point here is that big companies and lots of smart people make the wrong call on future markets all the time; sometimes the problem is bigger than all the smart people and money can solve.

Let me be clear, I am not talking about HMDs used in niche/dedicated markets.  I definitely see uses for HMDs in applications where hands-free use is a must.  A classic example is military applications, where a soldier has to keep his hands free, is already wearing a helmet that messes up his hair, doesn’t care what he looks like, and spends many hours in training.  There are also uses for HMDs in the medical field, for doctors as a visual aid and for helping people with impaired vision.  What I am talking about is whether we are on the verge of mass adoption.

Pardon me for being a bit skeptical, but on the technical side I still see some tremendous obstacles for HMDs.  As I pointed out on this blog soon after Google Glass was announced (http://www.kguttag.com/2012/03/03/augmented-reality-head-mounted-displays-part-1-real-or-not/), HMDs have a very long history of not living up to expectations.

I personally started working on an HMD in 1998 and learned about many of the issues and problems associated with them.  There are the obvious measurable issues like size, weight, fit/comfort (can you wear them with your glasses?), display resolution, brightness, ruggedness, storage, and battery life.  Then there are what I call the “social issues,” like how geeky it looks, whether it messes up a person’s hair, and taking video (a particularly hot topic with Google Glass).  But perhaps the most insidious problems are what I lump into the “user interface” category, which includes input/control, distraction/safety, nausea/disorientation, and what I loosely refer to as “it just doesn’t work right.”  These issues only touch on what I sometimes jokingly refer to as “the 101 problems with HMDs.”

A lot is made of the display device itself, be it a transmissive LCD, liquid crystal on silicon (LCOS), OLED, or TI’s DLP.  I have about 16 years of history working on display devices, particularly LCOS, and I know the pros and cons of each in some detail.  But as it turns out, the display device and its performance are among the least of the issues with HMDs; I had a very good LCOS device way back in 1998.  As with icebergs, the biggest problems are the ones below the surface.

This first article just sets up the series.  My plan is to go into the various aspects and issues with HMDs, trying to be as objective as I can with a bit of technical analysis.  My next article will be on the subject of “One eye, two eyes, transparent or not.”

I’m Back

Hi everyone,

Just a quick note today to say that I am no longer at Navdy and now have some time to get back to posting on this blog.  All the travel and work at a start-up was pretty all-consuming.

I have some ideas for topics, particularly related to the various head mounted display goings-on from Google “Glass” to Microsoft’s recent “HoloLens.”  I also want to write more on computer and game system history.  I like answering questions and would like suggestions for topics, so please feel free to write.

The one thing I ask is that there be no questions directly related to Navdy, as it would not be appropriate for me to answer them.  It would not make good reading for me to have to repeatedly reply with “no comment” or the like.  Almost everything else related to technology, projection, head mounted displays, lasers, and computer/video-game history is fair game.

Oh yes, one more thing: I am available again for consulting work.

Karl

Navdy Launches Pre-Sale Campaign Today

Bring Jet-Fighter Tech to Your Car with Navdy

It’s LAUNCH day for Navdy as our pre-sale campaign starts today.  You can go to the Navdy site to see the video.  It was a little over a year ago that Doug Simpson contacted me via this blog asking how to make an aftermarket heads-up display (HUD) for automobiles.  We went through an incubator program called Highway1, sponsored by PCH International, that I discussed in my last blog entry.

The picture above is a “fancy marketing image” that tries to simulate what the eye sees (which, as it turns out, is impossible to do with a camera).  We figured out how to do some pretty interesting stuff, and the optics work better than I thought was possible when we started.  The image focuses beyond the “combiner/lens” to help the driver see the images in their far vision, and it is about 40 times brighter than an iPhone (for use in bright sunlight) while being very efficient.

Navdy Office

Being CTO at a new start-up has kept me away from this blog (a start-up is very time consuming).  We have raised some significant initial venture capital to get the program off the ground, and the pre-sale campaign takes it to the next level to get products to market.  In the early days it was just me and Doug, but now we have about a dozen people and growing.

Karl

Highway1 Incubator

Those who follow my blog are probably wondering what has happened to me these past months.  I have been away from home for most of the last 4 months at an “incubator” program for start-ups called Highway1.  Navdy, for which I recently became CTO, was selected as one of 11 companies from over 100 applicants for the very first class of the Highway1 program, sponsored by PCH International.

What makes Highway1 different from almost all other incubator programs these days is that it is totally focused on helping hardware start-ups.  Highway1 recognizes that hardware start-ups have special needs, are more difficult to get started, and have to deliver a physical product, unlike software companies.

The Highway1 office, where most of the time is spent, is in the Mission District of San Francisco, but the program also includes spending two weeks in Shenzhen, China, where many of the electronic products used around the world are made.  During the program, companies are introduced to mentors from other companies and experts in the field, as well as helped with introductions to angel and venture investment firms.

While in Shenzhen, the companies were introduced to manufacturers who could eventually be making their products.   Additionally our company received some very crucial support from PCH in Shenzhen in locating a company that could manufacture a critical component of our system.

Along the way, the people at the 11 companies became friends and helped each other out.  Respecting each other was particularly important as the companies cranked out prototypes sharing first one and later two 3-D printers (as demo day neared, those 3-D printers were pretty much running non-stop).  There was some incredible talent, technically, marketing-wise, and business-wise, at these companies.

At the end of the program was “Demo Day,” where more than 200 venture capitalists, investors, press, and technologists packed a large room at PCH’s U.S. headquarters in San Francisco.  It was a chance for investors and the press to see what the companies had developed.  While Navdy presented, details of our product and plans were not released to the press because we are planning on launching our product later this year.  Navdy did receive serious interest from a number of VCs with our demo after the formal presentations.

The whole Highway1 program was the dream of Liam Casey, the founder and CEO of PCH, a company with over $700M in revenue.  You may not know the PCH name, but it is very likely that you have brand name products that they helped get to your home or office (be it anywhere in the world).  Liam was personally there to greet us at the beginning of the program and at key points along the way, and he told some great business stories.  The whole of the PCH team, be it the people from San Francisco, China, or Ireland, were always awesome to work with and incredibly nice, reflecting PCH’s founder.

Comment: I don’t usually use the word “awesome” but the word was ubiquitous in San Francisco and it seemed to fit the people at PCH.

“If you haven’t tested it, it doesn’t work”


Derek Roskell (circa 1994) of TI MOS Design Bedford, UK (Formal Photo – not how I remember him)

When I started this blog, I intended to write about more than displays and to include some of my personal IC history.  Today’s story is about Derek Roskell of Texas Instruments, who led the UK-based design teams I worked with between 1979 and 1997 on a number of the most complex ICs done up to that point, including the 9995 16-bit CPU, the 34010 and 34020 graphics CPUs, and the extremely complex 320C80 and 320C82 image processors with a 32-bit RISC CPU and four (C80) or two (C82) advanced DSP processors on one chip.  Every one of these designs quickly went from first silicon to product.

Having one successful design after another may not seem so special in today’s era of logic synthesis and all the other computer tools, but back in 1979 we drew logic on paper and transistors on sheets of frosted Mylar plastic with colored pencils, which were then digitized by hand.  We then printed large “composite” plots on giant flat-bed pen plotters (with each layer of the IC in a different color) and verified all the circuitry by hand and eye (thank goodness by the mid 1980s we got computer schematic verification).

In those days it all could go very wrong, and it did for a 16-bit CPU called the 9940 and a spinoff version, the 9985, that were designed in Houston, Texas in 1977-1978.  It went so badly that both the 9940 and 9985 were never fully functional, causing the designer to be discredited (whether at fault or not) and many people to leave.

In the wake of the 9940/9985 disaster, in 1979 management picked me, the young hotshot only 1.5 years out of college, to lead the architecture and logic design of a new CPU, the TMS9995, to replace the failed TMS9985.  There was one hitch: they wanted to use a TI design group in Bedford, England.  So after some preliminary work, I packed up for a 6-month assignment in Bedford, where I first met Derek Roskell.


Derek more “In Character” but taken years later

To say Derek is self-deprecating is a gross understatement.  The U.S. managers at TI at the time were more the self-assertive, aggressive, “shoot from the hip,” corner-cutting type (which is what resulted in the 9940/9985 debacle) and generally didn’t take well to Derek’s “English working class” style (said with great affection), with the all too frequent laugh at the “wrong” time.

When I first met Derek, he was this “funny old guy” who had worked on “ancient” TTL technology.  He was around 40 and seemed like an old man in a world of engineers in their 20s and early 30s whom he led.  As it turned out, Derek was the steady hand that guided a number of brilliant people who worked under him.  He made sure my “brilliant” architecture and logic design actually worked.  You don’t have one successful design after another, particularly back then, by accident.

Upper management was always pressuring us to get things done faster, which could only be accomplished by cutting corners.  They called Bedford a “country club” for resisting the pressure.  Derek was willing to take the heat and do things the “right way” because he understood the consequences of cutting corners.

For most engineers, the fun part of engineering is doing the original design work.  That is the “creative stuff” and the stuff that gets you noticed.  Also, most engineers have big egos and think, “of course what I designed works.”  But when you are designing these massive ICs with hundreds of thousands and later millions of transistors, even if 99.99% of the design is correct, there will be a hopeless number of errors to debug and correct.  Most of what it takes to make sure a design works is the tedious process of “verification.”

A couple of months back I had a small reunion in Bedford with some friends from the old days, including Derek.  Everyone remembered Derek for one thing he constantly chided the designers with: “If you haven’t tested it, it doesn’t work.”  Pretty good advice.

Epilog

TI, like most companies today in their search for “shareholder value,” closed the large Bedford UK site around 1995 but kept the Bedford MOS designers who had so many proven successes, moving them to a rented building in Northampton.  Through the years TI kept “consolidating/downsizing,” and finally in 2011 it shut down the last vestiges of its design operation in England, losing a number of extremely talented (and by then) senior people.

Below is a picture taken of the design team in Bedford that worked with me on the 320C80.


320C80 Bedford Design Team (1994)

Whatever happened to pico projectors embedding in phones?

Back around 2007 when I was at Syndiant, we started looking at the pico projector market.  We talked to many of the major cell phone companies as well as a number of PC companies, and almost everyone had at least an R&D program working on pico projectors.  Additionally, there were market forecasts for rapid growth of embedded pico projectors in 2009 and beyond.  This convinced us to develop a small liquid crystal on silicon (LCOS) microdisplay for embedded pico projectors.  With so many companies saying they needed pico projectors, it seemed like a good idea at the time.  How could so many people be wrong?

Here we are 6 years later, and there are almost no pico projectors embedded in cell phones, or much else for that matter.  So what happened?  Well, just about the same time we started working on pico projectors, Apple introduced their first iPhone.  The iPhone overnight roughly tripled the screen size of a smartphone such as a Blackberry.  Furthermore, Apple introduced ways to control the screen (pinch/zoom, double clicking to zoom in on a column, etc.) to make better use of what was still a pretty small display.  Then, to make matters much worse, Apple introduced the iPad, and the tablet market took off almost instantaneously.  Today we have larger phones, so-called “phablets,” and small tablets filling in just about every size in between.

Additionally, as I have written before, the use model for a cell phone pico projector shooting on a wall doesn’t work.  There is very rarely, if ever, a dark enough place with something that will work well as a screen in a convenient location.

I found that to use a pico projector I had to carry a screen with me (at least a white piece of paper mounted on a stiff board in a plastic sleeve to keep it clean and flat).  Then you have the issue of holding the screen up so you can project on it, and then finding a dark enough place that the image looks good.  By the time you carry a pico projector and screen with you, a thin iPad/tablet works better: you can carry it around the room with ease, and you don’t need a very dark environment.

The above is the subjective analysis, and the rest of this article will give some more quantitative numbers.

The fundamental problem with a front projector is that it has to compete with ambient light, whereas flat panels have screens that absorb generally 91% to 96% of the ambient light (thus they look dark when off).  While display makers market contrast numbers, these very high contrast numbers assume a totally dark environment; in the real world what counts is the net contrast, that is, the contrast factoring in ambient light.

Displaymate has an excellent set of articles (including SmartPhone Brightness Shootout, Mobile Brightness Shootout 2, and Smartphone Shootout 2) on the subject of what they call “Contrast Rating for High Ambient Light” (CRHAL), which they define as the display brightness per unit area (in candelas per meter squared, also known as “nits”) divided by the percentage of ambient light the display reflects.

Displaymate’s CRHAL is not a “contrast ratio,” but it gives a good way to compare displays in reasonable ambient light.  Also important is that for a front projector it does not take much ambient light to dominate the contrast.  For a front projector, even dim room light is “high ambient light.”

The total light projected out of a projector is given in lumens, so to compare it to a cell phone or tablet we have to know how big the projected image will be and the type of screen.  We can then compute the reflected light in “nits,” which is calculated by the following formula: candelas/meter² = nits = Gn × (lumens/m²)/π (where Gn is the gain of the screen and π ≈ 3.1416).  If we assume a piece of white paper with a gain of 1 (about right for a piece of good printer paper), then all we have to do is calculate the screen area in square meters, divide the lumens by that area, and divide by π.

A pico projector projecting a 16:9 (HDTV aspect ratio) image on a white sheet of notebook paper (with a gain of, say, 1) gives an 8.8-inch by 5-inch image with an area of 0.028 m² (about the same area as an iPad2, which I will use for comparison).  Plugging a 20 lumen projector into the equation above with a screen of 0.028 m² and a gain of 1.0, we get 227 nits.  The problem is that the same screen/paper will reflect (diffusing it) about 100% of the ambient light.  Using Displaymate’s CRHAL, we get 227/100 = 2.27.

Now compare the pico projector numbers to an iPad2 of the same display area, which according to Displaymate has 410 nits and reflects only 8.7% of the ambient light.  The CRHAL for the iPad2 is 410/8.7 = 47.  What really crushes the pico projector, by about 20 to 1 on the CRHAL metric, is that the flat panel display reflects less than a 10th of the ambient light, whereas the pico projector’s image has to fight with 100% of the ambient light.
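The arithmetic above is easy to reproduce.  Here is a small Python sketch of the nits and CRHAL calculations; the 20 lumen / 0.028 m² / gain-1 projector and the iPad2’s 410 nits and 8.7% reflectance are the figures from the text:

```python
import math

def projected_nits(lumens, screen_area_m2, gain=1.0):
    """Reflected luminance (nits) of a front-projected image:
    nits = gain * (lumens / area) / pi."""
    return gain * (lumens / screen_area_m2) / math.pi

def crhal(nits, reflectance_percent):
    """Displaymate's Contrast Rating for High Ambient Light."""
    return nits / reflectance_percent

# 20 lumen pico projector on paper (gain ~1, reflects ~100% of ambient light)
pico_nits = projected_nits(20, 0.028)    # ~227 nits
pico_crhal = crhal(pico_nits, 100)       # ~2.3

# iPad2 figures from Displaymate: 410 nits, 8.7% ambient reflectance
ipad_crhal = crhal(410, 8.7)             # ~47

print(round(pico_nits), round(pico_crhal, 2), round(ipad_crhal))
```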

In terms of contrast, to get a barely “readable” B&W image, you need at least 1.5:1 contrast (the “white” needs to be 1.5 times brighter than the black) and preferably more than 2:1.  To have moderately good (but not great) color, you need 10:1 contrast.

A well-lit room has about 100 to 500 lux (see Table 1 at the bottom of this article) and a bright “task area” up to 1,500 lux.  If we take 350 lux as a “typical” room, then for the sheet-of-paper screen there are about 10 lumens of ambient light on our 0.028 m² image from above.  Thus our 20 lumen projector on top of the 10 lumens of ambient light has a contrast ratio of 30/10, or about 3 to 1, which means the colors will be pretty washed out but black-on-white text will be readable.  To get reasonably good (but not great) color with a contrast ratio of 10:1, we would need about 80 lumens.  By the same measure, the iPad2 in the same lighting would have a contrast ratio of about 40:1, or over 10x the contrast of a 20 lumen pico projector.  And the brighter the lighting environment, the worse the pico projector compares.  Even if we double or triple the lumens, the pico projector can’t compete.
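The net-contrast estimate above can be sketched the same way (a minimal sketch assuming a gain-1 screen that reflects all the ambient light, with lux converted to lumens by multiplying by the screen area):

```python
def net_contrast(projector_lumens, ambient_lux, screen_area_m2):
    """Net contrast of a front projector on a gain-1 diffuse screen:
    (projected + ambient light) versus ambient light alone."""
    ambient_lumens = ambient_lux * screen_area_m2  # lux = lumens per m^2
    return (projector_lumens + ambient_lumens) / ambient_lumens

# 350 lux "typical" room, 0.028 m^2 image (the figures from the text)
print(round(net_contrast(20, 350, 0.028), 1))  # ~3.0, washed-out color
print(round(net_contrast(80, 350, 0.028), 1))  # ~9.2, approaching 10:1
```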

With the information above, you can plug in whatever numbers you want for brightness and screen size, and no matter what reasonable numbers you choose, you will find that a pico projector can’t compete with a tablet even in moderate lighting conditions.

And all this is before considering the power consumption and space a pico projector would take.  After working on the problem for a number of years, it became clear that rather than adding a pico projector with its added battery, phone makers would be better off just making the display bigger (à la the Galaxy S3 and S4 or even the Note).  The microdisplay devices created would have to look for other markets such as near eye (for example, Google Glass) and automotive heads-up displays (HUDs).

Table 1.  Typical Ambient Lighting Levels (from Displaymate)

Brightness Range – Description

0 – 100 lux:  Pitch black to dim interior lighting
100 – 500 lux:  Residential indoor lighting
500 – 1,500 lux:  Bright indoor lighting: kitchens, offices, stores
1,000 – 5,000 lux:  Outdoor lighting in shade or an overcast sky
3,000 – 10,000 lux:  Shadow cast by a person in direct sunlight
10,000 – 25,000 lux:  Full daylight not in direct sunlight
20,000 – 50,000 lux:  Indoor sunlight falling on a desk near a window
50,000 – 75,000 lux:  Indoor direct sunlight through a window
100,000 – 120,000 lux:  Outdoor direct sunlight

Himax FSC LCOS in Google Glass — Seeking Alpha Article

Catwig to Himax Comparison

This blog was the first to identify that there was a Himax panel in an early Google Glass prototype, and the first to identify that there was a field sequential color LCOS panel inside Google Glass.  Given the connection, it was reasonable speculation, but there was no proof that Himax was in Google Glass.

Then when Catwig published a teardown of Google Glass last week (and my inbox lit up with people telling me about the article), there were no Himax logos to be seen, which got people wondering whether there was indeed a Himax display inside.  As a result of my prior exclusive finds on Himax, LCOS, and Google Glass, I was asked to contribute to Seeking Alpha, and I just published an article that details my proof that there is a Himax LCOS display inside the current Google Glass.  In that article, I also discounted some recent speculation that Google Glass is going to use a Samsung OLED microdisplay anytime soon.


Extended Temperature Range with LC Based Microdisplays


Extreme Car Temperatures

A reader, Doug Atkinson, asked a question about meeting extended temperature ranges with LC-based microdisplays, particularly with respect to Kopin.  He asked the classic “car dash in the desert and the trunk in Alaska” question.  I thought the answer would have broader interest, so I decided to answer it here.

Kopin wrote a good paper on the subject in 2006 titled “A Normally Black, High Contrast, Wide Symmetrical Viewing Angle AMLCD for Military Head Mounted Displays (HMDs) and Other Viewer Applications.”  This paper is the most detailed one readily available describing how Kopin’s transmissive panels meet military temperature and shock requirements.  It is not clear that Kopin uses this same technology for their consumer products, as this paper specifically addresses what Kopin did for military products.

With respect to LC microdisplays in general, it should be realized that in most cases there is not a huge difference between the technical specs of the liquid crystals used in small microdisplay panels and those used in large flat panels.  They often just use different “blends” of very similar materials.  There are some major LC differences, including TN (twisted nematic), VAN (vertically aligned nematic), and others.  Field sequential color designs are biased toward faster-switching LC blends.

In general, anywhere a large flat panel LC can go, a microdisplay LC can go.  The issue is designing the seals and other materials/structures to withstand the temperature cycling and mechanical shock, which requires testing, experimentation, and development.

The liquid crystals themselves generally go through different phases, from freezing (which is generally fatal) to heating up to the “clearing point,” where the display stops working (but generally recovers).  There is also a different spec for “storage temperature range” versus “operating temperature range.”  Generally it is assumed the device only has to work in a temperature range in which a human could survive.

At low temperature the LC gets “sluggish” and does not operate well, but this can be cured by various heater mechanisms, including heating designed into the panel itself.  The liquid crystal blends are often designed/picked to work best at a higher temperature range because it is easier to heat than to cool.

Field sequential color LCOS is more affected by temperature change because temperature affects not only the LC characteristics but also the switching speed.  Once again, this can be dealt with by designing for the higher temperature range and then heating if necessary.

As far as Kopin’s “brightness” goes (another of Doug’s questions), a big factor is how powerful/bright the backlight has to be.  The Kopin panel blocks something like 98.5% of the light by their own specs.  What you can get away with in a military headset is different from what you may accept in a consumer product in terms of size, weight, and power consumption.  Brightness in daylight is a well-known (inside the industry) issue for Kopin’s transmissive panels and one reason that near eye display makers have sought out LCOS.

[As an aside, for completeness about FLC]  Displaytech (which was sold to Micron and then sold to Citizen Finetech Miyota) and Forth Dimension Displays (FDD, which Kopin bought) both use ferroelectric LC (FLC/FLCOS), which does have a dramatically different temperature profile: it is very near “freezing” (going into a solid state) a little below 0°C, which would destroy the device.  Displaytech claimed (I don’t know about FDD) that they had extended the low temperature range, but I don’t know by how much.  The point is that the temperature range of FLC is so different that meeting military specs is much more difficult.

AR Display Device of the Future: Color Filter, Field Sequential, OLED, LBS and other?

I’m curious what people think will be the near eye microdisplay of the future.  Each technology has its own well-known drawbacks and advantages.  I thought I would start by summarizing the various options:

Color filter transmissive LCD – Large pixels with 3 sub-pixels, and it lets through only 1% to 1.5% of the light (depending on pixel size and other factors).  Scaling down is limited by the colors bleeding together (LC effects) and by light throughput.  Low power to the panel, but very inefficient use of the illumination light.

Color filter reflective (LCOS) – Same as CF-transmissive, but the sub-pixels (color dots) can be smaller; still limited scaling due to needing 3 sub-pixels and color bleeding.  Light throughput on the order of 10%.  More complicated optics than transmissive (requires a beam splitter), but shares the low power to the panel.

Field Sequential Color (LCOS) – Color breakup from sequential fields (the “rainbow effect”), but the pixels can be very small (less than 1/3rd the size of color filter pixels).  Light throughput on the order of 40% (assuming a 45% loss in polarization).  Higher power to the panel due to changing fields.  Optical path similar to CF-LCOS, but taking advantage of the smaller size requires smaller but higher quality (high MTF) optics.  Potentially mates well with lasers for a very large depth of focus, so that the AR image is in focus regardless of where the user’s eyes are focused.

Field Sequential Color (DLP) – Color breakup from FSC, but it can go to higher field rates than LCOS to reduce the effects.  The device and its control are comparatively high powered, and it has a larger optical path.  The pixel size is bigger than FSC LCOS due to the physical movement of the DLP mirrors.  Light throughput on the order of 80% (it does not have the polarization losses), but this falls as the pixel gets smaller (the gap between mirrors is bigger than for LCOS).  I’m not sure this is a serious contender due to cost, power of the panel/controller, and optical path size, and nobody I know of has used it for near eye, but I list it for completeness.

OLED – Larger pixel due to 3 color sub-pixels.  It is not clear how small this technology will scale in the foreseeable future.  While OLED is improving, progress has been slow; it has been the “next great near eye technology” for 10 years.  It has a very simple optical path and potentially high light efficiency, which has made it seem to many like the technology with the best future, but it is not clear how it scales to very small sizes and higher resolution (the smallest OLED pixel I have found is still about 8 times bigger than the smallest FSC LCOS pixel).  Also, the light is very diffuse, and therefore the depth of focus will be low.

Laser Beam Steering – While this one sounds good to the ill-informed, the need to precisely combine 3 separate laser beams tends to make it not very compact, and it is ridiculously expensive today due to the special (particularly green) lasers required.  Similar to field sequential color, there are breakup effects from having a raster scan (particularly with no persistence, unlike a CRT) on a moving platform (as in a head mounted display).  While there are still optics involved to produce an image on the eye, it could have a large depth of focus.  There are a lot of technical and cost issues that keep this from being a serious alternative any time soon, but it is in this list for completeness.

I found it particularly interesting that Google’s early prototype used a color filter LCOS and then they switched to field sequential LCOS.  This seems to suggest that they chose size over the issues with field sequential color breakup.  With the technologies I know of today, this is the trade-off for any given resolution: field sequential LCOS pixels are less than 1/3rd the size (and typically closer to 1/9th the size) of any of the existing 3-color devices (color filter LCD/LCOS or OLED).
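To make the 1/3rd-versus-1/9th point concrete: a pixel 1/3rd the width and height takes 1/9th the silicon area, and that compounds directly into panel size for a given resolution.  The sketch below uses 15 µm and 5 µm pixel pitches as assumed round numbers for illustration, not any specific vendor’s specs:

```python
# Active area of a microdisplay panel as a function of pixel pitch.
# Pitches (15 um color filter vs. 5 um FSC LCOS) are illustrative assumptions.

def active_area_mm2(h_pixels, v_pixels, pitch_um):
    """Active area in mm^2 for a given resolution and square pixel pitch."""
    return (h_pixels * pitch_um / 1000) * (v_pixels * pitch_um / 1000)

cf_area = active_area_mm2(1280, 720, 15)   # color filter panel at 720p
fsc_area = active_area_mm2(1280, 720, 5)   # field sequential LCOS at 720p

print(round(cf_area), round(fsc_area), round(cf_area / fsc_area))
```

A 1/3rd linear pitch yields a 9x smaller active area, which is the whole size argument in one line of arithmetic.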


Olympus MEG4.0 – Display Device Over Ear

It should also be noted that in an HMD, an extreme “premium” is put on size and weight in front of the eye (weight in front of the eye creates a series of ergonomic and design issues).  This can be mitigated by using light guides to bring the image to the eye, locating a larger/heavier display device and its associated optics in a less critical location (such as near the ear), as Olympus has done with their MEG4.0 prototype (note, Olympus has been working at this for many years).  But doing this involves trade-offs with the optics and cost.

Most of this comparison boils down to size versus field sequential color versus color sub-pixels.  I would be curious what you think.

Kopin Displays and Near Eye (Followup to Seeking Alpha Article)

Kopin's smallest transmissive color filter pixel is bigger than nine of the smallest field sequential color LCOS pixels

After posting my discovery of a Himax LCOS panel on a Google Glass prototype, I received a number of inquiries about Kopin, including a request from Mark Gomes of Seeking Alpha to give my thoughts about Kopin, which were published in "Will Kopin Benefit From the Glass Wars?"  In this post I am adding more information to supplement what I wrote for the Seeking Alpha article.

First, a little background on their CyberDisplay® technology would be helpful.   Back in the 1990's Kopin developed a unique "lift-off" process to transfer transistors and other circuitry from a semiconductor I.C. onto a glass plate to make a transmissive panel, which they call the CyberDisplay®.  Kopin's "lift-off" technology was amazing for that era: it allowed Kopin to put very small (for its day) transistors on glass to enable small transmissive devices that were used predominantly in video and still camera viewfinders. The transmissive panel has 3 color dots (red, green, blue) that produce a single color pixel, similar to a large LCD screen only much smaller. In the late 1990's Kopin could offer a simple optical design with the transmissive color panel that was smaller than the existing black-and-white displays using small CRTs.  This product was very successful for them, but it has become a commoditized (cheap) device these many years later.

CyberDisplay pixel is large and blocks 98.5% of the light

While the CyberDisplay let Kopin cost-effectively address the market for what are now considered low-resolution displays, the Achilles' heel of the technology is that it does not scale well to higher resolution because the pixels are so large relative to other microdisplay technologies.  For example, Kopin's typical transmissive panel pixel is 15 by 15 microns and is made up of three 5 by 15 micron color "dots" (as Kopin calls them).    What makes matters worse, even these very large pixel devices have an extremely poor light throughput of 1.5% (they block 98.5% of the light), and scaling the pixel down will block even more light!
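To put those numbers in perspective, here is a quick back-of-the-envelope check using only the figures quoted above (pixel and dot dimensions, and the 1.5% throughput):

```python
# Back-of-the-envelope check of the figures quoted above.
# A CyberDisplay full-color pixel is 15 x 15 microns, built from
# three 5 x 15 micron color "dots" (red, green, blue).
pixel_area_um2 = 15 * 15          # 225 square microns per full-color pixel
dot_area_um2 = 5 * 15             # 75 square microns per color dot
assert pixel_area_um2 == 3 * dot_area_um2  # three dots tile one pixel exactly

# A light throughput of 1.5% means 98.5% of the illumination is thrown away.
throughput = 0.015
print(f"Light blocked: {(1 - throughput) * 100:.1f}%")  # -> Light blocked: 98.5%
```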

While not listed on their website (but included in a news release), Kopin has an 8.7 by 8.7 micron color filter pixel (which I suspect is used in their Golden-i head-mounted display), but it blocks even more light than the 15×15 pixel because throughput gets worse as the pixel gets smaller.    Also, to be fair, there are CyberDisplay pixels that block "only" 93.5% of the light, but they give up contrast and color purity in exchange for light throughput, which is not usually desirable.

There are many reasons why the transmissive color filter panel's light throughput is so poor.  To begin with, the color filters themselves block more than 2/3rds of the light (each filter blocks the other two primary colors, plus other losses).    And because the panel is transmissive, the circuitry and the transistor controlling each pixel block the light, which becomes significant as the pixel becomes small.

But perhaps the biggest factor (and the most complex to understand, so I will only touch on it here) is that the electric field controlling the liquid crystal for a given color dot extends into the neighboring color dots, causing the colors to bleed together and lose color saturation/control.  To reduce this problem they can use liquid crystal materials with lower light throughput that are less susceptible to the neighboring electric fields, and use black masks (which block light) surrounding each color dot to hide the area where the colors bleed together.
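These losses multiply together, which is how the throughput ends up so low. The sketch below is purely illustrative: only the ~1.5% end result comes from the text above; the individual loss factors are hypothetical round numbers I chose to show how plausible per-stage losses compound to that figure.

```python
# Illustrative (assumed) multiplicative loss budget for a transmissive
# color-filter panel.  Only the ~1.5% total is from the article; each
# individual factor below is a hypothetical round number.
color_filter = 1 / 3   # each filter passes roughly one of three primaries
aperture = 0.15        # fraction not blocked by transistors/wires/black mask (assumed)
polarizer_lc = 0.30    # polarizer and liquid-crystal losses combined (assumed)

total = color_filter * aperture * polarizer_lc
print(f"Total throughput: {total * 100:.1f}%")  # -> Total throughput: 1.5%
```

The exact split between the factors varies by design; the point is that three individually modest-looking losses multiply down to a few percent.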

Field Sequential Color – Small Pixels and 80+% light throughput

With reflective LCOS, all the wires and circuitry are hidden behind the pixel mirror so that none of the transistors and other circuitry block the light.  Furthermore, the liquid crystal layer is usually less than half as thick, which limits the electric field spreading and allows pixels to be closer together without significantly affecting each other.  And of course there are no color filters to waste more than 2/3rds of the light.    The downside to field sequential color is color field breakup: when the display moves quickly relative to the eye, the colors may not line up for a split second.   The color breakup effects can be reduced by going to higher field sequential rates.

Kopin's pixels are huge compared to those of field sequential LCOS devices (from companies such as Himax, Syndiant, Compound Photonics, and Citizen Finetech Miyota), which today can easily have pixels 5 by 5 microns, with some smaller than 3 by 3 microns.   Therefore FSC LCOS can have about 9 times the pixel resolution in roughly the same size device!  And the light throughput of the LCOS devices is typically more than 80%, which becomes particularly important for outdoor use.
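The "about 9 times" figure follows directly from the pixel dimensions given above. A rough sketch of the scaling argument (areas only, ignoring inter-pixel gaps and drive circuitry):

```python
# How many field-sequential-color (FSC) LCOS pixels fit in the area of
# one color-filter full-color pixel, using the dimensions quoted above.
# Gaps between pixels and peripheral circuitry are ignored.
kopin_pixel_um = 15.0   # CyberDisplay full-color pixel, microns per side
fsc_pixel_um = 5.0      # typical FSC LCOS pixel, microns per side

ratio = (kopin_pixel_um / fsc_pixel_um) ** 2
print(ratio)  # -> 9.0: about 9x the resolution in the same panel area

# With a ~3 micron FSC pixel, the advantage grows further:
print((kopin_pixel_um / 3.0) ** 2)  # -> 25.0
```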

So while a low-resolution Kopin CyberDisplay might be able to produce an image in a headset as small as Google Glass, they would be limited to low-resolution devices in the future, which is not a good long-term plan.  I'm guessing that the ability to scale to higher resolutions was at least one reason why Google went with a field sequential color device rather than starting with a transmissive panel that would have, at least initially, been easier to design with.  Another important factor weighing in favor of LCOS over a transmissive panel is the light throughput needed to make the display bright enough for outdoor use.

I don't want to be accused of ignoring Kopin's 2011 acquisition of Forth Dimension Displays (FDD), which makes a form of LCOS.  This is clearly a move by Kopin into reflective FSC LCOS.   It so happens that back in 1998 and 1999 I did some cooperative work with CRL Opto (which later became FDD), and they even used a design I worked on for the silicon backplane in their first product.  But the FSC LCOS that FDD makes is considerably different, in both the design of the device and the manufacturing process, from what is required for a high-volume product.

Through its many years of history (and several name changes), FDD has drifted toward a high-end specialized display technology with large 8+ micron pixels.   For the low-volume niche applications FDD is servicing, there was no need to develop more advanced silicon to support a very small device and drive electronics.  Other companies aiming more at consumer products (such as Syndiant, where I was CTO) have put years of effort into building "smarter" silicon that not only minimized the size of the display, but also reduced the number of connection wires going between the display and the controller and shrank the controller to one small ASIC.

Manufacturing Challenge for Kopin

Cost-effectively assembling small pixel LCOS devices requires manufacturing equipment and methods that are almost totally different from what Kopin does with their CyberDisplay or what FDD does with their large pixel LCOS.   Almost every step in the process is done with an eye to high-volume manufacturing cost.   And it is not as if they can just buy the equipment and be up and running; it usually takes over a year from the time the equipment is installed to get the yields up to an acceptable level.  Companies such as Himax have reportedly spent around $300M developing their LCOS devices, and I know of multiple other companies that have spent over $100M and many years of effort in the past.

Conclusion

For at least the reasons given above, I don't see Kopin as currently well positioned to build competitive, high-volume head-mounted displays that meet the future needs of the market, as I think all roads lead to higher resolution yet smaller devices.  It would seem to me that they would need a lot of time, effort, and money to field a long-term competitive product.