Archive for Use Model

CES 2017 AR, What Problem Are They Trying To Solve?

Introduction

First off, this post is a few weeks late. I got sick on returning from CES and then got busy with some other pressing activities.

At left is a picture of me next to the Lumus Maximus demo at CES from Imagineality’s “CES 2017: Top 6 AR Tech Innovations“. Unfortunately, they missed that in the Lumus booth at about the same time were people from Magic Leap and Microsoft’s Hololens (it turned out we all knew each other from prior associations).

Among Imagineality’s top 6 “AR Innovations” were ODG’s R-8/R-9 Glasses (#1) and Lumus’s Maximus 55 degree FOV waveguide (#3). From what I heard at CES and saw in the writeups, ODG and Lumus did garner a lot of attention. But by necessity, these types of lists are pretty shallow in their evaluations; what I try to do on this blog is go a bit deeper into the technology and how it applies to the market.

The near eye display companies I looked at during CES include Lumus, ODG, Vuzix, Real Wear, Kopin, Wave Optics, Syndiant, Cremotech, QD Laser, and Blaze (a division of eMagin), plus several companies I met with privately. As interesting to me as their technologies were their different takes on the market.

For this article, I am mostly going to focus on the industrial / enterprise market. This is where most of the AR products are shipping today. In future articles, I plan to go into other markets and do more of a deep dive on the technology.

What Is the Problem They Are Trying to Solve?

I have had a number of people ask me what was the best or most interesting AR thing I saw at CES 2017, and I realized that this was at best an incomplete question. You first need to ask, “What problem are they trying to solve?” Which leads to “how well does it solve that problem?” and “how big is that market?”

One big takeaway I had at CES, having talked to a number of different companies, is that the various headset designs were, intentionally or not, often aimed at very different applications and use cases. It’s pretty hard to compare a headset that almost totally blocks a user’s forward view but has a high resolution display to one that is a lightweight information device that is highly see-through but with a low resolution image.

Key Characteristics

AR means a lot of different things to different people. In talking to a number of companies, I found they were worried about different issues. Broadly, you can separate them into two classes:

  1. Mixed Reality – ex. Hololens
  2. Informational / “Data Snacking” – ex. Google Glass

Most of the companies were focused on industrial / enterprise / business uses, at least for the near future, and in this market the issues include:

  1. Cost
  2. Resolution/Contrast/Image Quality
  3. Weight/Comfort
  4. See-through and/or look over
  5. Peripheral vision blocking
  6. Field of view (small)
  7. Battery life per charge

For all the talk about mixed reality (a la Hololens and Magic Leap), most of the companies selling product today are focused on helping people “do a job.” This is where they see the biggest market for AR today. It will seem “boring” to the people wanting the “world of the future” mixed reality being promised by Hololens and Magic Leap.

You have to step back and look at the market these companies are trying to serve. There are people working on a factory floor or maybe driving a truck where it would be dangerous to obscure a person’s vision of the real world. These users want 85% or more transparency, a very lightweight and comfortable headset that can be worn for 8 hours straight, and almost no blocking of peripheral vision. If these products are going to reach a large market, they have to be cost effective, which generally means costing less than $1,000.

To meet the market requirements, they sacrifice field of view and image quality. In fact, they often want a narrow FOV so it does not interfere with the user’s normal vision. They are not trying to watch movies or play video games; they are trying to give necessary information to a person doing a job and then get out of the way.

Looking In Different Places For the Information

I am often a hard audience. I’m not interested in the marketing spiel; I’m looking for the target market/application, the facts and figures, and how it is being done. I want to measure things, while the demos in the booths are all about trying to dazzle the audience.

As a case in point, let’s take ODG’s R-9 headset. Most people were impressed with the image quality from ODG’s optics with a 1080p OLED display, which was reasonably good (though they still had some serious image problems caused by their optics that I will get into in future articles).

But what struck me was how dark the see-through/real world was when viewed in the demos. From what I could calculate, they are blocking about 95% of the real world light in the demos. They also are too heavy and block too much of a person’s vision compared to other products; in short they are at best going after a totally different market.

Industrial Market

Vuzix is representative of the companies focused on industrial / enterprise applications. They are using waveguides with about 87% transparency (although they often tint them or use photochromic light-sensitive tinting). Also, they locate the image toward the outside of the user’s view so that even when an image is displayed it stays out of the center of vision (note in the image below-right that the exit port of the waveguide is on the outside and not in the center as it would be on, say, a Hololens).

The images at right were captured from a Robert Scoble interview with Paul Travers, CEO of Vuzix. BTW, the first ten minutes of the video are relatively interesting on how Vuzix waveguides work, but after that there is a bunch of what I consider silly future talk and flights of fancy that I would take issue with. This video shows the “raw waveguides” and how they work.

Another approach in this category is Realwear. They have a “look-over” display that is not see-through, but their whole design is made to not block the rest of the user’s forward vision. The display is on a hinge so it can be totally swung out of the way when not in use.

Conclusion

What drew most of the media coverage of AR at CES was how “sexy” the technology was, and this usually meant FOV, resolution, and image quality. But the companies that were actually selling products were more focused on their users’ needs, which often don’t line up with what gets the most press and awards.

 

Everything VR & AR Podcast Interview with Karl Guttag About Magic Leap

With all the buzz surrounding Magic Leap and this blog’s technical findings about Magic Leap, I was asked to do an interview by the “Everything VR & AR Podcast” hosted by Kevin Harvell. The podcast is available on iTunes and by direct link to the interview here.

The interview starts with about 25 minutes of my background, beginning with my early days at Texas Instruments. So if you just want to hear about Magic Leap and AR you might want to skip ahead a bit. In the second part of the interview (about 40 minutes) we get into discussing how I went about figuring out what Magic Leap was doing. This includes discussing how the changes in the U.S. patent system signed into law in 2011 with the America Invents Act helped make the information available for me to study.

There should be no great surprises for anyone that has followed this blog. It puts in words and summarizes a lot that I have written about in the last 2 months.

Update: I listened to the podcast and noticed that I misspoke a few times; it happens in live interviews. An unfathomable mistake is that I talked about graduating college in 1972, but that was high school; I graduated from Bradley University with a B.S. in Electrical Engineering in 1976 and then received an MSEE from The University of Michigan in 1977 (and joined TI in 1977).

I also think I greatly oversimplified the contribution of Mark Harward as a co-founder at Syndiant. Mark did much more than just hire the designers; he was the CEO, an investor, and ran the company while I “played” with the technology, but I think Mark’s best skill was in hiring great people. Also, Josh Lund, Tupper Patnode, and Craig Waller were co-founders.

 

Magic Leap: Focus Planes (Too) Are a Dead End

What Magic Leap Appears to be Doing

For this article I would like to dive down on the most likely display and optics Magic Leap (ML) is developing for their Product Equivalent (PEQ). The PEQ was discussed in “The Information” story “The Reality Behind Magic Leap.” As I explained in my November 20, 2016 article Separating Magic and Reality (before the Dec 8th “The Information” story), the ML patent application US 2016/0327789 best fits the available evidence, and if anything the “The Information” article reinforces that conclusion. Recapping the evidence:

  1. ML uses a “spatial light modulator” as stated in “The Information”
  2. Most likely an LCOS spatial light modulator; the Oct. 27th, 2016 Business Insider article citing “KGI Securities analyst Ming-Chi Kuo, who has a reputation for being tapped into the Asian consumer electronics supply chain” claims ML is using a Himax LCOS device.
  3. Focus planes to support vergence/accommodation per many ML presentations and their patent applications
  4. Uses waveguides which fit the description and pictures of what ML calls a “Photonics Chip”
  5. Does not have a separate focus mechanism as reported in the “The Information” article.
  6. Could fit the form factor as suggested in “The Information”
  7. It’s the only patent application that shows a serious optical design that also uses what could be considered a “Photonics chip.”

I can’t say with certainty that the optical path is that of application 2016/0327789. It is just the only optical path in the ML patent applications that fits all the available evidence and has a chance of working.

Field of View (FOV)

Rony Abovitz, ML CEO, is claiming a larger FOV. I would think ML would not want to have lower angular resolution than Hololens. Keeping the same 1.7 arc minutes per pixel angular resolution as Hololens and ODG’s Horizon, a 1080p (1920 pixel wide) device would give a horizontal FOV of about 54.4 degrees.

Note, there are rumors that Hololens is going to be moving to a 1080p device next year, so ML may still not have an advantage by the time they actually have a product. There is a chance that ML will just use a 720p device, at least at first, and accept a lower angular resolution of say 2.5 arc minutes per pixel or greater to get into the 54+ degree FOV range. Supporting a larger FOV is no small trick with waveguides and is one thing that ML might have over Hololens; but then again, Hololens is not standing still.
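
As a sanity check on the numbers above, here is a quick back-of-the-envelope calculation (just a sketch; the 1920 and 1280 pixel widths for 1080p and 720p panels and the 1.7 arc minutes per pixel figure are the assumptions discussed above):

```python
# Back-of-the-envelope FOV math for the numbers discussed above.
# Assumptions: a 1080p panel is 1920 pixels wide, a 720p panel is 1280 pixels
# wide, and Hololens-like angular resolution is ~1.7 arc minutes per pixel.

def horizontal_fov_deg(pixels_wide, arcmin_per_pixel):
    """Horizontal field of view in degrees (60 arc minutes per degree)."""
    return pixels_wide * arcmin_per_pixel / 60.0

def arcmin_per_pixel(fov_deg, pixels_wide):
    """Angular resolution needed to spread a given FOV across a panel."""
    return fov_deg * 60.0 / pixels_wide

print(horizontal_fov_deg(1920, 1.7))   # ~54.4 degrees for a 1080p panel
print(arcmin_per_pixel(54.4, 1280))    # ~2.55 arc minutes/pixel for a 720p panel
```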

Sequential Focus Planes Domino Effect

The support of vergence/accommodation appears to be a paramount issue with ML. Light fields are woefully impractical for any reasonable resolution, so ML in their patent applications and some of their demo videos show the concept of “focus planes.” But for every focus plane, an image has to be generated and displayed.

Having more than one display per eye, including the optics to combine the multiple displays, would be both very costly and physically large. So the only rational way ML could support focus planes is to use a single display device and sequentially display the focus planes. But as I will outline below, using sequential focus planes to address vergence/accommodation comes at the cost of creating other visual comfort issues.

Expect Field Sequential Color Breakup If Magic Leap Supports “Focus Planes”

Both high resolution LCOS and DLP displays use “field sequential color,” where they have a single set of mirrors that display a single color plane at a time. To get the colors to fuse together in the eye they repeat the same colors multiple times per frame of an image. Where I have serious problems with ML using Himax LCOS is that instead of repeating colors to reduce the color breakup, they will instead be showing different images to support sequential focus planes. Even if they have just two focus planes as suggested in “The Information,” it means the rate at which colors are repeated to help them fuse in the eye is cut in half.

On the Hololens, which also uses a field sequential color LCOS device, one can already detect breakup. Cutting the color update rate by 2 or more will make this problem significantly worse.
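
To illustrate the halving, here is a rough sketch. The 360 color-fields-per-second device limit is purely an assumed round number for illustration, not a known Himax or Magic Leap spec:

```python
# Illustrative only: how sequential focus planes eat into color-field repeats.
# The 360 color-fields-per-second limit is an assumed round number, not a spec.

DEVICE_COLOR_FIELDS_PER_SEC = 360   # assumed panel limit (R, G, B fields counted separately)
FRAME_RATE = 60                     # frames per second per eye

def rgb_repeats_per_image(focus_planes):
    """How many times the full R, G, B set can be repeated for one image."""
    fields_per_frame = DEVICE_COLOR_FIELDS_PER_SEC / FRAME_RATE   # 6 fields per frame
    fields_per_plane = fields_per_frame / focus_planes
    return fields_per_plane / 3                                   # repeats of the RGB set

print(rgb_repeats_per_image(1))  # 2.0 RGB repeats per image with a single plane
print(rgb_repeats_per_image(2))  # 1.0 RGB repeat per image with two focus planes
```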

Another interesting factor is that field sequential color breakup tends to be more noticeable by people’s peripheral vision which is more motion/change sensitive. This means the problem will tend to get worse as the FOV increases.

I have worked many years with field sequential display devices, specifically LCOS. Based on this experience I expect that the human vision system  will do a poor job of “fusing” the colors at such slow color field update rates and I would expect people will see a lot of field sequential color breakup particularly when objects move.

In short, I expect a lot of color breakup to be noticeable if ML support focus planes with a field sequential color device (LCOS or DLP).

Focus Planes Hurt Latency/Lag and Will Cause Double Images

An important factor in human comfort is the latency/lag between any head movement and the display reacting; too much lag causes user discomfort. A web search will turn up thousands of references on this problem.

To support focus planes ML must use a display fast enough to support at least 120 frames per second. But to support just two focus planes it will take them 1/60th of a second to sequentially display both focus planes. Thus they will have increased the total latency/lag from the time they sense movement until the display is updated by ~8.33 milliseconds, and this is on top of any other processing latency. So really, focus planes trade off one discomfort issue, vergence/accommodation, for another, latency/lag.
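
A quick sketch of that arithmetic, assuming a device that can show 120 single-plane images per second as discussed above:

```python
# Added latency from displaying two focus planes sequentially,
# assuming a display that can show 120 images per second.

IMAGE_RATE_HZ = 120.0            # single-plane images the device can show per second

def time_to_show_all_planes_ms(num_planes):
    return num_planes * (1000.0 / IMAGE_RATE_HZ)

single_plane = time_to_show_all_planes_ms(1)   # ~8.33 ms
two_planes   = time_to_show_all_planes_ms(2)   # ~16.67 ms (1/60th of a second)
print(two_planes - single_plane)               # ~8.33 ms of extra lag, before any other processing
```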

Another issue which concerns me is how well sequential focus planes are going to fuse in the eye. With fast movement the eye/brain visual system takes its own asynchronous “snapshots” and tries to assemble the information and line it up. But as with field sequential color, it can put together time sequential information wrong, particularly if some objects in the image move and others don’t. The result will be double images; getting double images with sequential focus planes would be unavoidable with fast movement, either in the virtual world or when a person moves their eyes. These problems will be compounded by field sequential color breakup.

Focus Planes Are a Dead End – Might Magic Leap Have Given Up On Them?

I don’t know all the behind-the-scenes issues with what ML told investors, and maybe ML has been hemmed in by their own words and demos to investors. But as an engineer with most of my 37 years in the industry working with image generation and display, it looks to me like focus planes cause bigger problems than they solve.

What gets me is that they should have figured out that focus planes were hopeless in the first few months (in much less time if someone who knew what they were doing was there). Maybe they were ego driven and/or they built too much around the impression they made with their “Beast” demo system (a big system using DLPs). Then maybe they hand waved away the problems sequential focus planes cause, thinking they could fix them somehow, or hoped that people won’t notice the problems. It would certainly not be the first time that a company committed to a direction and then felt it had gone too far to change course. Then there is always the hope that “dumb consumers” won’t see the problems (in this case I think they will).

It is clear to me that, like Fiber Scan Displays (FSD), focus planes are a dead end, period, full-stop. Vergence/accommodation is a real issue, but only for objects that get reasonably close to the user. I think a much more rational way to address the issue is to use sensors to track the eyes/pupils and adjust the image accordingly; since the eye’s focus changes relatively slowly, it should be possible to keep up. In short, move the problem from the physical display and optics domain (which will remain costly and problematical) to the sensor and processing domain (which will more rapidly come down in cost).

If I’m at Hololens, ODG, or any other company working on AR/MR systems and accept that vergence/accommodation is a problem that needs to be solved, I’m going to solve it with eye/pupil sensing and processing, not by screwing up everything else by doing it with optics and displays. ML’s competitors have had enough warning to already be well into developing solutions, if they weren’t prior to ML making such a big deal about the already well known issue.

The question I’m left with is whether and when Magic Leap figured this out, and whether they were too committed by ego or by what they told investors to change course at that point. I have not found evidence so far in their patent applications that they tried to change course, but patent applications will be about 18 months or more behind what they decided to do. Still, if they don’t use focus planes, they would have to admit that they are much closer to Hololens and other competitors than they would like the market to think.

Evergaze: Helping People See the Real World

Real World AR

Today I would like to forget about all the hyped and glamorous near eye products for having fun in a virtual world. Instead I’m going to talk about a near eye device aimed at helping people see and live in the real world. The product is called the “seeBoost®” and it is made by the startup Evergaze in Richardson, Texas. I happen to know the founder and CEO Pat Antaki from working together on a near eye display back in 1998, long before it was fashionable. I’ve watched Pat bootstrap this company from its earliest days and asked him if I could be the first to write about seeBoost on my blog.

The Problem

Imagine you get Age Related Macular Degeneration (AMD) or Diabetic Retinopathy. All your high-resolution vision and best color vision of the macula (where the high resolution fovea resides) is gone and you see something like the picture on the right. All you can use is your peripheral vision, which is low in resolution, contrast, and color sensitivity. There are over 2 million people in the U.S. that can still see but have worse than 20/60 vision in their better eye.

What would you pay to be able to read a book again and do other normal activities that require the ability to have “functional vision?” So not only is Evergaze aiming to help a large number of people, they are going after a sizable and growing market.

seeBoost Overview

seeBoost has 3 key parts: a lightweight near-to-eye display, a camera with high speed autofocus, and proprietary processing in an ASIC that remaps what the camera sees onto the functioning part of the user’s vision. They put the proprietary algorithms in hardware so that the image remapping and contrast enhancement are performed with extremely low latency, and there is no perceptible delay when a person moves their head. As anyone that has used VR headsets will know, this is important for wearing the device for long periods of time without headaches and nausea.

A perhaps subtle but important point is that the camera and display are perfectly coaxial, so there is no parallax error as you move the object closer to your eye. The importance of centering the camera with the user’s eye for long term comfort was a major point made by AR headset user and advocate Steve Mann in his March 2013 IEEE Spectrum article, “What I’ve learned from 35 years of wearing computerized eyewear”. Quoting from the article, “The slight misalignment seemed unimportant at the time, but it produced some strange and unpleasant result.” And in commenting on Google Glass, Mr. Mann said, “The current prototypes of Google Glass position the camera well to the right side of the wearer’s right eye. Were that system to overlay live video imagery from the camera on top of the user’s view, the very same problems would surely crop up.”

Unlike traditional magnifying optics such as a magnifying glass, in addition to being able to remap the camera image to the parts of the eye that can see, the depth of field and magnification amount are decoupled: you can get any magnification (from 1x to 8x) at any distance (2 inches to infinity). It also has digital image color reversal (black-to-white reversal, useful for reading pages with a lot of white). The device is very lightweight at 0.9 oz. including the cable. The battery pack supports 6 hours of continual use on a single charge.

Use Case

Imagine this use scenario: playing bridge with your friends. To look at the cards in your hand you may need 2x magnification at 12 inches’ distance. The autofocus allows you to merely move the cards as close to your face as you like, the way a person would naturally make something look larger. Having the camera coaxial with the display makes this all seem natural versus, say, having a camera above the eye. Looking at the table to see what cards are placed there, maybe you need 6x magnification at 2 feet. To see other people’s eyes and facial expressions around the table, you need 1-2x at 3-4 feet.

seeBoost is designed to help people see so they can better take part in the simple joys of normal life. The lightweight design mounts on top of a user’s prescription glasses and can help while walking, reading signs and literature, shopping, watching television, recognizing faces, cooking, and even playing sports like golf.

Another major design consideration was keeping the device narrow so that it does not cover up the lateral and downward peripheral vision of the eye. This turns out to be important for people who don’t want to further lose peripheral vision. In this application, a monocular (single eye) design gives better situational awareness and peripheral vision.

seeBoost is a vision enhancement device rather than essentially a computer (or cell phone) monitor that you must plug into something. The user simply looks at the world (through seeBoost), and seeBoost improves their vision for whatever they’re looking at, be it an electronic display or their grandchildren’s faces.

Assembled in the USA and Starting to Ship

This is not just some Kickstarter concept either. Evergaze has been testing prototypes with vision impaired patients for over a year and has already finished a number of studies. What’s more, they recently started shipping product. To the left is an image that was taken through the seeBoost’s camera, display, and optics.

What’s more, this product is manufactured in the USA on a production line Evergaze set up in Richardson, TX. If you want to find out more about the company you can go to their YouTube Channel, or if you know someone that needs a seeBoost, you can contact Pat Antaki via email: pantaki@evergaze.com

Navdy Launches Pre-Sale Campaign Today

Bring Jet-Fighter Tech to Your Car with Navdy

It’s LAUNCH day for Navdy as our presale campaign starts today. You can go to the Navdy site to see the video. It was a little over a year ago that Doug Simpson contacted me via this blog asking about how to make an aftermarket heads up display (HUD) for automobiles. We went through an incubator program called Highway1, sponsored by PCH International, which I discussed in my last blog entry.

The picture above is a “fancy marketing image” that tries to simulate what the eye sees (which is impossible to do with a camera, as it turns out). We figured out how to do some pretty interesting stuff, and the optics work better than I thought was possible when we started. The image focuses beyond the “combiner/lens” to help the driver see the images in their far vision, and it is about 40 times brighter (for use in bright sunlight) than an iPhone while being very efficient.

Navdy Office

Being CTO at a new start-up has kept me away from this blog (a start-up is very time consuming). We have raised some significant initial venture capital to get the program off the ground, and the pre-sale campaign takes it to the next level to get products to market. In the early days it was just me and Doug, but now we have about a dozen people and growing.

Karl

Whatever happened to pico projectors being embedded in phones?

Back around 2007 when I was at Syndiant we started looking at the pico projector market. We talked to many of the major cell phone companies as well as a number of PC companies, and almost everyone had at least an R&D program working on pico projectors. Additionally, there were market forecasts for rapid growth of embedded pico projectors in 2009 and beyond. This convinced us to develop small liquid crystal on silicon (LCOS) microdisplays for embedded pico projectors. With so many companies saying they needed pico projectors, it seemed like a good idea at the time. How could so many people be wrong?

Here we are 6 years later and there are almost no pico projectors embedded in cell phones, or much else for that matter. So what happened? Well, just about the same time we started working on pico projectors, Apple introduced their first iPhone. The iPhone overnight roughly tripled the size of the display screen of a smartphone such as a Blackberry. Furthermore, Apple introduced ways to control the screen (pinch/zoom, double clicking to zoom in on a column, etc.) to make better use of what was still a pretty small display. Then, to make matters much worse, Apple introduced the iPad and the tablet market took off almost instantaneously. Today we have larger phones, so called “phablets,” and small tablets filling in just about every size in between.

Additionally, as I have written about before, the use model for a cell phone pico projector shooting an image on a wall doesn’t work. There is very rarely, if ever, a dark enough place with something that will work well as a screen in a place that is convenient.

I found that to use a pico projector I had to carry a screen (at least a white piece of paper mounted on a stiff board in a plastic sleeve to keep clean and flat) with me.   Then you have the issue of holding the screen up so you can project on it and then find a dark enough place that the image looks good.    By the time you carry a pico projector and screen with you, a thin iPad/tablet works better, you can carry it around the room with ease, and you don’t have to have very dark environment.

The above is the subjective analysis, and the rest of this article will give some more quantitative numbers.

The fundamental problem with a front projector is that it has to compete with ambient light, whereas flat panels have screens that generally absorb 91% to 96% of the ambient light (thus they look dark when off). While display makers market contrast numbers, these very high contrast numbers assume a totally dark environment; in the real world what counts is the net contrast, that is, the contrast factoring in ambient light.

Displaymate has an excellent set of articles (including SmartPhone Brightness Shootout, Mobile Brightness Shootout 2, and Smartphone Shootout 2) on the subject of what they call “Contrast Rating for High Ambient Light” (CRHAL), which they define as the display brightness per unit area (in candelas per meter squared, also known as “nits”) divided by the percentage of ambient light the display reflects.

Displaymate’s CRHAL is not a “contrast ratio,” but it gives a good way to compare displays in reasonable ambient light. Also important is that for a front projector it does not take much ambient light to end up dominating the contrast. For a front projector, even dim room light is “high ambient light.”

The total light projected out of a projector is given in lumens, so to compare it to a cell phone or tablet we have to know how big the projected image will be and the type of screen. We can then compute the reflected light in “nits,” which is calculated by the following formula: candelas/meter² = nits = Gn x (lumens/m²)/PI (where Gn is the gain of the screen and PI = ~3.1416). If we assume a piece of white paper with a gain of 1 (about right for a piece of good printer paper), then all we have to do is divide the lumens by the screen area in square meters and then divide by PI.

A pico projector projecting a 16:9 (HDTV aspect ratio) image on a white sheet of notebook paper (with a gain of say 1) results in an 8.8-inch by 5-inch image with an area of 0.028 m² (about the same area as an iPad2, which I will use for comparison). Plugging a 20 lumen projector into the equation above with a screen of 0.028 m² and a gain of 1.0, we get 227 nits. The problem is that the same screen/paper will reflect (diffusely) about 100% of the ambient light. Using Displaymate’s CRHAL we get 227/100 = 2.27.

Now compare the pico projector numbers to an iPad2 of the same display area, which according to Displaymate has 410 nits and only reflects 8.7% of the ambient light. The CRHAL for the iPad2 is 410/8.7 = 47. What really crushes the pico projector, by about 20 to 1 on the CRHAL metric, is that the flat panel display reflects less than a 10th of the ambient light, whereas the pico projector’s image has to fight with 100% of the ambient light.
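
Here is the same comparison worked out as a small calculation, using the figures from the text (20 lumens, 0.028 m², a screen gain of 1, ~100% paper reflectivity, and Displaymate’s 410 nits / 8.7% reflectance for the iPad2):

```python
import math

# Reproducing the comparison above with the figures used in the text.

def projected_nits(lumens, area_m2, gain=1.0):
    """nits = gain * (lumens per square meter) / pi"""
    return gain * (lumens / area_m2) / math.pi

pico_nits = projected_nits(20, 0.028)        # ~227 nits
pico_crhal = pico_nits / 100                 # paper reflects ~100% of ambient -> ~2.3
ipad2_crhal = 410 / 8.7                      # ~47

print(pico_nits, pico_crhal, ipad2_crhal)    # the roughly 20-to-1 gap described above
```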

In terms of contrast, to get a barely “readable” B&W image, you need at least 1.5:1 contrast (the “white” needs to be 1.5 times brighter than the black) and preferably more than 2:1. To have moderately good (but not great) colors you need 10:1 contrast.

A well lit room has about 100 to 500 lux (see Table 1 at the bottom of this article) and a bright “task area” up to 1,500 lux. If we take 350 lux as a “typical” room, then for the sheet-of-paper screen there are about 10 lumens of ambient light falling on the 0.028 m² image used above. Thus our 20 lumen projector on top of the 10 lumens of ambient has a contrast ratio of 30/10, or about 3 to 1, which means the colors will be pretty washed out but black-on-white text will be readable. To get reasonably good (but not great) colors with a contrast ratio of 10:1 we would need about 80 lumens. By the same measure, the iPad2 in the same lighting would have a contrast ratio of about 40:1, or over 10x the contrast of a 20 lumen pico projector. And the brighter the lighting environment, the worse the pico projector will compare. Even if we double or triple the lumens, the pico projector can’t compete.
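
The same room-light arithmetic as a short sketch (350 lux and the 0.028 m² image from above; the contrast convention is white, i.e. projector plus ambient, over black, i.e. ambient only, as in the 30/10 example):

```python
# Contrast in room light, following the example above: 350 lux falling on a
# 0.028 m^2 paper "screen" is about 10 lumens of ambient light mixed into the image.

AMBIENT_LUX = 350          # "typical" room lighting
IMAGE_AREA_M2 = 0.028      # same image area as the example above

ambient_lumens = AMBIENT_LUX * IMAGE_AREA_M2            # ~9.8 lumens

def contrast(projector_lumens, ambient_lumens):
    """White (projector + ambient) over black (ambient only)."""
    return (projector_lumens + ambient_lumens) / ambient_lumens

print(contrast(20, ambient_lumens))    # ~3:1 -> readable text, washed-out color
print(contrast(80, ambient_lumens))    # ~9:1 -> approaching the 10:1 "good color" target
```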

With the information above, you can plug in whatever numbers you want for brightness and screen size, and no matter what reasonable numbers you plug in, you will find that a pico projector can’t compete with a tablet even in moderate lighting conditions.

And all this is before considering the power consumption and space a pico projector would take. After working on the problem for a number of years it became clear that rather than adding a pico projector with its added battery, phone makers would be better off just making the display bigger (a la the Galaxy S3 and S4 or even the Note). The microdisplay devices created would have to look to other markets such as near eye (for example, Google Glass) and automotive Heads Up Display (HUD).

Table 1.  Typical Ambient Lighting Levels (from Displaymate)

Brightness Range          Description

0 – 100 lux               Pitch black to dim interior lighting
100 – 500 lux             Residential indoor lighting
500 – 1,500 lux           Bright indoor lighting: kitchens, offices, stores
1,000 – 5,000 lux         Outdoor lighting in shade or an overcast sky
3,000 – 10,000 lux        Shadow cast by a person in direct sunlight
10,000 – 25,000 lux       Full daylight not in direct sunlight
20,000 – 50,000 lux       Indoor sunlight falling on a desk near a window
50,000 – 75,000 lux       Indoor direct sunlight through a window
100,000 – 120,000 lux     Outdoor direct sunlight

Google Glass Prototype Using Color Filter LCOS


I was looking at a video that showed a Google Glass prototype the other day and it became obvious to me that they were using a color filter LCOS panel. I have seen speculation that Google was using DLP or OLED devices, but this is clearly wrong. And any speculation of it being a laser beam scanning device is simply silly (LBS is way too big and expensive).

Some recent videos and articles about Google’s Glass development show the guts of the prototype. It looked pretty clear there was an LCOS panel mounted on a PCB going through a beam splitter to the light guide for the see-through display. Below is a crop of the picture from the Verge article “I Used Google Glass . . . “

Frame Capture at 1:56 in Verge Video on YouTube

What caught my eye was that there were only two wires going to the LED illumination (in a white package — see picture above) which was indicative of a white LED.   A field sequential device would have to have separate wires for each LED (or laser).     To get a color display starting with a white light source, the device had to have color filters on it and so by a process of elimination, it had to be a color filter LCOS device.

Knowing that Himax made color filter LCOS, I searched through some pictures I took of Himax’s panels and found some from an electronics show in Hong Kong in October 2010 (see picture on the right above). The optical engine was for about a 10 lumen front projector, so the optical engine is a bit different (particularly the lens), but the panel in the Google prototype is a perfect match to the Himax color filter one used by Shiny Optics (then owned by Himax) in 2010. In the close up crop (and rotation) next to the Google prototype (see top images), the flex connector on the PC board, the PC board with its mounting holes (red arrows), and the PC-board silk screen markings (green and blue arrows) are a match.

Shiny Optics Exhibit at Hong Kong Electronics Fair (Oct. 2010)

Note, the resolution of the Himax display was only 320 by 240 pixels (by 3 colors/pixel) for this panel. This resolution may seem very low, but remember that Google Glass is only putting an image in the upper corner of a person’s field of view, so it only covers a small part of the viewing area. This is also consistent with the latest Google Glass video (frame captured below), which has a low resolution display (note the simple fonts and the few text characters across the screen).

What bothered me was that the panel was oriented the wrong way in the Verge video/article (the image would come out long in the vertical direction). But then a Fast Company Co.Design article, “Google’s Project Glass: Inside The Problem Solving And Prototyping” (see second picture below), had the panel rotated. Since the Fast Co.Design picture shows a smaller system, I assume it is a later prototype, but it uses the same panel (same telltale markings and a “white” LED).

From Fast Company Co. Design Article

But note, I would tend to doubt that Google is still using color filter LCOS. Himax has taken most of the information about LCOS down from their site and may be out of this business. But more importantly, for a low resolution and low brightness (near eye, rather than projector) application such as this one of only about 320 by 240 pixels (more on the resolution of the Google Glass demo in my next article), a smaller, more compact design could be done with a transmissive color filter panel, which is what I suspect Google is now using.

Lastly, I’m sorry to have been away so long. I got very busy with work and got out of the habit of posting. I have some things to write about, including more on Google Glass, green lasers, Heads Up Displays, and technology in general, so hopefully I will post a bit more frequently.

Karl

Augmented Reality and Google Project Glass Part 2

Google Glasses

Since my last post on Augmented Reality (AR) and near eye (head mounted) displays, Google put out some publicity on their Project Glass concept. Google made it abundantly clear that this was only for the purposes of concept testing and not a real product, but they also said that there would likely be some test products at the end of 2012.

Jason, “The Frugal Dad,” wrote saying he had seen my first article on AR and that he has a new “Infographic” that includes Google Glasses as a future “disruptive technology.” Unfortunately, most predictions about the future turn out to be wrong, and I believe nothing I have seen so far in the way of near eye AR, including Google Glasses, will meet consumer expectations and become pervasive. I’m not saying it won’t ever happen, but rather that there are still many major problems to be solved.

As I wrote before, I think there are many practical issues with near eye displays and augmented reality. The Google video “Project Glass: One day…” was obviously a “concept video,” and all the images in the display were “fake,” as in not what the actual display will look like.

Along these lines, the April 5, 2012 Wired had an article called “Google Glasses Face Serious Hurdles, Augmented-Reality Experts Say” which raises concerns that Google is over-touting the concept. The Wired article quotes Pranav Mistry, from the MIT Media Lab and one of the inventors of the SixthSense wearable computing system: “The small screen seen in the photos cannot give the experience the video is showing.” Also in the Wired article, Blair MacIntyre, director of the Augmented Environments Lab at Georgia Tech, raised concerns that Google is raising expectations too much. Both Dr. Mistry and Dr. MacIntyre are certainly proponents of AR. Their concerns, and mine as well, are that raising expectations too high could backfire on the AR concept in the long run.

Dr. Thrun on Charlie Rose Looking Up

Sebastian Thrun, Google Fellow and Stanford professor, was on Charlie Rose on April 25, 2012 wearing a working Google Glasses prototype. The first 4 and a half minutes of the Charlie Rose video discuss Google Glasses and give some insight into the issues, not the least of which is whether people are really going to wear something like this.

To see the images in the “glasses” he has to look up, whereas the Google concept video suggests the images are right in front of you all the time. So he can’t see the person he is talking to and the computer image at the same time. Imagine talking to somebody wearing these when they are clearly looking up while talking to you (particularly notice Dr. Thrun’s eyes in the picture above at 1:11 into the Charlie Rose video). By instinct, humans are very sensitive to eye behavior, and someone constantly looking away (up) is a distracting behavior. Now imagine you are walking down the street and searching for something on your glasses and a truck comes by — big oops.

The most insightful comment by Dr. Thrun was “we haven’t yet found this [augmented reality] to be the compelling use case,” but he didn’t elaborate as to why. This does indicate that Google is still trying to figure out whether AR is really compelling. Dr. Thrun did say that “the compelling use case is the sharing experience” and commented on sharing pictures as being something they enjoyed — I guess this is Tweeting on steroids where all your friends can see what you are doing as you do it. In this case the glasses become a hands free video camera.

The Google video has inspired some funny spoofs of it that in their own way make some of the points above:

ADmented Reality — Google Glasses Parody extrapolates on what could happen with advertising gone wild.

Google Glasses: A New Way to Hurt Yourself and a video shown on Jimmy Kimmel Live demonstrate the dangers of “distractive walking”

The next time on the subject of AR, I plan to talk about more of the technical issues with AR.

 

 

Augmented Reality / Head Mounted Displays (Part 1 Real or Not?)

Augmented Reality (AR) Head Mounted Displays (HMD) [aka Near Eye Displays, Wearable Computing, and many other names] have gotten a big boost in the public mindset with the Feb. 22, 2012 New York Times (NYT) article/leak about Google Glasses with Android. The NYT article resulted in a flurry of commentary on the web and television (Google search “Google Glasses Augmented Reality“). Reportedly, Google Glasses will be available in a very limited/test market release later in 2012 with a price between $250 and $600 (US).

Augmented Reality (AR) is the concept of combining computer information with the real world. A Head Mounted Display (HMD) is any display device that is in some way attached to the head (including the eye, such as a contact lens). You can have AR on, say, a cell phone with a camera where computer information is put on top of the video you see, without an HMD. Similarly, you can have an HMD that is only a display device without any AR capability. But often AR and HMD are combined, and this series of articles is mostly going to be talking about the combined use of AR and HMD.

Some History

Augmented reality/HMDs have found their way into many films as a plot element, and this has to some degree already primed the public’s interest. It turns out it is much easier to make it work in the movies than in real life. Attempts at augmented reality go back at least as far as the 1960’s. Below is a montage of just a few of the over 100 attempts at making a head mounted display, which range from lab experiments to many failed products in the market (they failed so badly that most people don’t even know they existed).

The Airplane Test

So far HMDs have failed what I call the “I don’t see them on airplanes” test. If there is anyplace you should see HMDs today, it would be on people sitting on airplanes, but have you ever seen someone using one on an airplane? Why I consider this a “metric” is that the people who regularly fly are typically middle to upper middle class, are more into small electronic gadgets (just look at what they sell in the on-board catalogs), and the environment sitting on an airplane is one that you would think would be ideal for an HMD.

Back when the iPad came out, you could tell that they were taking off just by the number of iPads you saw people using on airplanes (mostly to watch movies). Interestingly, I have seen HMDs sold in Best Buy vending machines at airports, but I have never seen one “in the wild” on an airplane. The other place I would have expected to see HMDs is on the Tokyo subways and trains, but I have not seen them there either. One has to conclude that the “use model” for the iPad works in a way that it does not for an HMD.

Augmented Reality (AR) Topics

There are so many topics/issues with Augmented Reality Glasses (or whichever name you prefer) that there is too much to cover in just one blog post. In terms of implementation there are the technical issues with the display devices and optics, the physical human factor issues like size and weight (and whether it causes nausea), and the user interface or use-model issues and feature set (including wireless connectivity). Then there are a whole number of social/political/legal issues such as privacy, safety (distractive driving/walking), user tracking, advertisements, etc. AR is a very BIG topic.

Making a practical Augmented Reality HMD is deceptively difficult. It is a daunting task to make an HMD device that fits, is light enough to wear, small enough to go with you, produces an acceptable image, and doesn’t cost too much. And making a display that works and is cost effective is really only the starting point; the rest of the problem is making one that is useful for a variety of applications, from watching movies to real time head-up displays.

There are a number of user interface problems related to the fact that an HMD is in some way strapped to your head/eye that make it “not work right” (act unnaturally) in terms of human interfaces. These human interface issues are probably going to be a bigger problem than the physical design of the display devices themselves. Again, making an HMD that “works” and is cost effective is only the starting point.

Will Google Glasses succeed where others failed?

The answer is likely that they will not have a big success with their first device, even if it is a big improvement on past efforts. Even the rumors state it is a “test market” type device, meaning that Google is looking more to learn from the experience than to sell a lot of units. I’m sure Google has many smart people working on the device, but sometimes the problem is bigger than even the smartest people can solve.

The idea of a display device that can appear/disappear at will is compelling to many people, which is why it keeps being tried, both in movies and television as a plot element and by companies trying to build products. My sense is that we are still at least a few more turns of the technology screw away from the concept becoming an everyday device. In future articles I plan on discussing both the technical and user-interface challenges with HMDs.

QP Lightpad™ And Future Observations

QP Optoelectronic’s Lightpad appears to me to be an interesting “transitional product” in the evolution of the pico projector “use model.” In this post I am going to comment on what I think they got right and what will need to be improved to make pico projectors more useful.

The Lightpad combines a rear projection screen, a keyboard with touchpad, a WVGA (848×480) DLP pico projector, and a battery that folds up into a thin form factor. In effect it turns a smart-phone with HDMI output into a netbook (except, currently, for a non-jail-broken iPhone, which does not allow “mirroring” of the phone’s display). The phone acts as the computer with all the software on it, but you now have a larger, easier to read screen and a reasonable size keyboard for typing. The projector can also be flipped around into a front projection mode to give a larger image on, say, a wall or screen in dark environments.

The Lightpad addresses one big issue I have with the typical pico projector shoot-on-the-wall use model, namely that there is almost never a white wall in the right place with low enough ambient lighting to be useful. The “shoot on the wall” use model only seems to work in very contrived demos. The Lightpad addresses this issue by having a built-in rear projection screen.

The rear projection screen, as opposed to say a sheet of white paper, addresses a very important issue for pico projectors, namely giving sufficient contrast in typical room lighting. As I discussed previously about ambient light, even a dimly lit room has 1 to 2 lumens per square foot and a well-lit room has 30 to 60 lumens per square foot. It turns out that you want at least 10 to 1 contrast for reasonably easy to read text, so with a 10 lumen or even 30 lumen projector you can’t project a very big image with much contrast on a white screen. A rear projection screen is designed to only accept light from a certain range of angles behind it and reject most of the random room light coming from everywhere else. Because of this “ambient light rejection,” a rear projection screen will result in higher contrast in the final image. So the rear projection screen enables a lower lumen pico projector to be very usable in brighter room lighting conditions.

The keyboard on the Lightpad makes typing easy, and the touchpad in front of the keyboard seems to integrate well with the phone’s touch interface. Because the rear projection screen is lightweight plastic, it solves the weight and potential breakage issues of carrying around a large LCD screen. I could definitely see this type of product being useful for the professional that doesn’t want to carry a laptop with them. As big an advantage as any is that all the software and data on your smart-phone is available to you without needing to worry about sync’ing or buying a bunch of software.

While I like the concept, and QP got many things right in terms of functionality, there are short, medium, and long-term improvements that will hopefully be made over time by QP Optoelectronics and/or other companies.

Short Term (Easy) Improvements

The most obvious flaw, one which QP says they are working to improve, is the rear projection screen. In particular, it has a “hotspot”: if you look at the projector straight on, it is very bright in the center. The hotspot effect is shown in the picture at the left (but please note that a camera exaggerates the effect, so it is not as bad as the picture shows, though it is still present).

The next issue, somewhat evident in the picture at the top of this article, is the size and bulk of the cables. Some of those in the picture are associated with power and will not be there in portable use, but there are still some long, bulky cables and adapters between the smart phone and the Lightpad. In my experience, the cables can often take up more space than the pico projector itself. I would make the cables much shorter, smaller, and thinner, and they should easily store into the Lightpad.

Medium Term Improvements

With their announcement of the Lightpad, QP Optoelectronics also announced that they are working on a 720p (1280×720 pixels) version. This certainly would be welcome, as many of the newer, more advanced smart phones are supporting 720p and higher resolutions, and one of the big reasons to project a bigger image is to be able to see more. It really doesn’t make much sense to project a large image if it is low in resolution. Having higher resolution would enable more normal notebook-like use for applications such as editing and viewing documents, working on presentations, internet browsing, and spreadsheets.

Long Term Improvements

While the Lightpad is light and about the size of a pad of paper when closed, it still does not fit in your pocket, so you are left with something to carry around. The really big volume potential for pico projectors is in having something that fits in a normal pocket so it can be with you wherever you go. Improvements in LEDs and laser light sources should significantly reduce the size of the projector and its battery, but then there are still the issues of the rigid screen and the physical keyboard.

Today with 1 Watt, only about 7 to 10 lumens is possible with LEDs and LCOS or DLP, including the light source and light modulator (currently laser beam steering is far behind, needing about 3 Watts to give just 10 lumens). Realistically, with significant improvements in direct diode lasers and incremental improvements in the light modulators, in a few years it should be possible to produce about 30 lumens per Watt. If we want an image that covers about half a square foot, about the area of a small laptop LCD, that means we could get to about 60 lumens per square foot. If the ambient lighting is normal room lighting of 30 to 60 lumens per square foot, then we would only have 2:1 or 1:1 contrast and a very washed out image. So even with major improvements in pico projector technology we will need to look for a dark corner of the room or still want some form of light controlled screen.
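
A rough sketch of that projection, using the simple projected-light to ambient-light comparison from the paragraph above (the 30 lumens per Watt and half-square-foot image are the assumptions stated there):

```python
# A sketch of the "even with better light sources" argument above.

LUMENS_PER_WATT = 30.0        # assumed future efficiency from the text
POWER_WATTS = 1.0
IMAGE_AREA_SQFT = 0.5         # roughly a small laptop screen

projector_lumens = LUMENS_PER_WATT * POWER_WATTS                 # ~30 lumens
projected_lm_per_sqft = projector_lumens / IMAGE_AREA_SQFT       # ~60 lumens/sq-ft

for ambient_lm_per_sqft in (30, 60):      # normal room lighting range from the text
    print(projected_lm_per_sqft / ambient_lm_per_sqft)           # 2.0 and 1.0 -> washed out
```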

To make the screen easily portable it should roll up into something about the length of a pocket pen (about 6 inches long) and less than 1 inch around (about the size of a whiteboard marker). Making rollable rear screens with good light control and uniform light spreading (avoiding hot spots) is not that easy, as generally there needs to be something like a Fresnel lens, which wants to be rigid. 3M has developed Vikuity™ rear projection plastic films that don’t use Fresnel lenses, but these are still meant for rigid installation on a glass or Plexiglas surface. Perhaps something like the Vikuity materials could be made rollable.

While rear projection screens are the obvious approach, a perhaps better rollable screen approach would be to use a “wavelength selectable” (WS) front screen. With a wavelength selectable screen, only specific wavelengths of light are reflected and the others absorbed. Since normal room light or sunlight is “broad spectrum,” most of the ambient light is absorbed. The WS coating could be made on thin rollable plastic. Sony made a rigid form of WS screen called ChromaVue™ back around 2005. At the time Sony said that they could make a rollable version with the same technology, but it never came to market. ChromaVue screens were designed to work with fairly broad spectrum projectors using high pressure lamps with color filters. Unfortunately, manufacturing costs and low volumes of the ChromaVue screens appear to have caused Sony to stop making them several years ago. The task of making a WS screen for narrow band LEDs or lasers should be much easier, so I would think that we will see the re-emergence of WS screens in the future.

Virtual Keyboards and Other Input

Inexpensive camera input should enable the elimination of the physical keyboard, with the pico projector projecting the image of a keyboard. The use of cameras for input is becoming commonplace today with devices such as the Microsoft Kinect™. In fact, many people in the field expect that pico projectors and cameras will commonly be paired together in future applications.

In the case of rear screen projection, one technique is to use infrared cameras (CMOS cameras naturally detect infrared) to sense when and where the screen is touched, such as with the Microsoft Surface®. One advantage of the rear screen infrared approach is that it is relatively easy to detect when the screen has been touched.

There are more issues with a front projecting virtual keyboard. The first is that it becomes very desirable to project the keyboard at a shallow angle so that the projector does not have to be so far above the surface of a table. The shallow angle also means that the keyboard will not be blocked as much by shadows cast by one’s hands. The use of laser light in pico projectors will make short throw, shallow angle projection much easier to implement.

A bit of a technical challenge with front projection keyboards is knowing when a key has been pressed versus a finger merely hovering over a key, and there is a lot of work going on in this area. With structured light (for a Microsoft presentation on structured light click here) and/or multiple cameras, detecting finger presses versus hovering is possible. One can also expect quicker input methods like Swype to be employed.

Conclusion

My expectation is that we will see the pico projector evolve from today’s shoot-on-the-wall gimmick/toy to being a really useful product. I think the QP Lightpad makes a good first step in the right direction. It is much easier, and adoption is faster, to use already successful user interfaces and use models than to try and create new ones. At the same time, one needs to live within the physics of what is possible, such as how many lumens will be possible in the coming years for a pocket size device. The technology for virtual keyboards and multi-touch displays is becoming very advanced and should not be a limiting factor.