Archive for March 30, 2013

AR Display Device of the Future: Color Filter, Field Sequential, OLED, LBS, and Others?

I’m curious what people think will be the near eye microdisplay of the future.   Each technology has its own well-known drawbacks and advantages.   I thought I would start by summarizing the various options (a rough light-throughput sketch in code follows the list):

Color filter transmissive LCD – large pixels with 3 sub-pixels; lets through only 1% to 1.5% of the light (depending on pixel size and other factors).  Scaling down is limited by the colors bleeding together (LC effects) and by light throughput.  Low power to the panel but very inefficient use of the illumination light.

Color filter reflective (LCOS) – same as CF-transmissive, but the sub-pixels (color dots) can be smaller; scaling is still limited by the need for 3 sub-pixels and by color bleeding.  Light throughput on the order of 10%.  More complicated optics than transmissive (requires a beam splitter), but shares the low power to the panel.

Field Sequential Color (LCOS) – Color breakup from sequential fields (“rainbow effect”), but the pixels can be very small (less than 1/3rd the size of color filter pixels).   Light throughput on the order of 40% (assuming a 45% loss in polarization).  Higher power to the panel due to the changing fields.  Optical path similar to CF-LCOS, but taking advantage of the smaller size requires smaller but higher quality (high MTF) optics.   Potentially mates well with lasers for a very large depth of focus, so that the AR image is in focus regardless of where the user’s eyes are focused.

Field Sequential Color (DLP) – Color breakup from FSC, but can go to higher field rates than LCOS to reduce the effects.   The device and control are comparatively high powered and have a larger optical path.  The pixel size is bigger than FSC LCOS due to the physical movement of the DLP mirrors.   Light throughput on the order of 80% (no polarization losses) but falls as the pixel gets smaller (the gap between mirrors is bigger than LCOS).    I’m not sure this is a serious contender due to its cost, the power of the panel/controller, and the optical path size, and nobody I know of has used it for near eye, but I listed it for completeness.

OLED – Larger pixel due to 3 color sub-pixels.  It is not clear how small this technology will scale in the foreseeable future.  While OLED is improving, progress has been slow — it has been the “next great near eye technology” for 10 years.   It has a very simple optical path and potentially high light efficiency, which has made it seem to many like the technology with the best future, but it is not clear how it scales to very small sizes and higher resolutions (the smallest OLED pixel I have found is still about 8 times bigger than the smallest FSC LCOS pixel).    Also, it emits very diffuse light and therefore the depth of focus will be low.

Laser Beam Steering – While this one sounds good to the ill-informed, the need to precisely combine 3 separate laser beams tends to make it not very compact, and it is ridiculously expensive today due to the special (particularly green) lasers required.  Similar to field sequential color, there are breakup effects from having a raster scan (particularly with no persistence, like a CRT) on a moving platform (as in a head mount display).   While there are still optics involved to produce an image on the eye, it could have a large depth of focus.   There are a lot of technical and cost issues that keep this from being a serious alternative any time soon, but it is in this list for completeness.
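
To make the throughput numbers in the list above concrete, below is a rough back-of-the-envelope sketch in Python.  The individual loss factors (polarizers, color filters, aperture/fill factor) are my illustrative assumptions, not measured values; they are chosen only to show how figures like 1.5%, 10%, 40%, and 80% can arise.

```python
# Rough illustrative throughput estimates for the display types above.
# Each factor is a ballpark assumption for illustration only; real
# numbers vary widely by device and vendor.

def throughput(*factors):
    """Multiply a chain of transmission factors (each between 0 and 1)."""
    result = 1.0
    for f in factors:
        result *= f
    return result

# Color filter transmissive LCD: polarizer losses, color filters (each
# sub-pixel passes roughly 1/3 of white light), small pixel aperture.
cf_lcd = throughput(0.45, 0.33, 0.10)    # ~1.5%

# Color filter LCOS: the mirror hides the circuitry so the aperture is
# high, but polarization and color filter losses remain.
cf_lcos = throughput(0.45, 0.33, 0.75)   # ~11%

# Field sequential LCOS: no color filters; mostly polarization loss.
fsc_lcos = throughput(0.55, 0.80)        # ~44%

# Field sequential DLP: no polarizers or filters; mirror fill factor
# and diffraction dominate.
fsc_dlp = throughput(0.90, 0.90)         # ~80%

for name, t in [("CF transmissive LCD", cf_lcd), ("CF LCOS", cf_lcos),
                ("FSC LCOS", fsc_lcos), ("FSC DLP", fsc_dlp)]:
    print(f"{name}: ~{t:.1%}")
```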

I found it particularly interesting that Google’s early prototype used a color filter LCOS and then they switched to field sequential LCOS.    This seems to suggest that they chose size over the issues with field sequential color breakup.    With the technologies I know of today, this is the trade-off for any given resolution; field sequential LCOS pixels are less than 1/3rd the linear size (typically closer to 1/9th the area) of any of the existing 3-color devices (color filter LCD/LCOS or OLED).

Olympus MEG4.0 – Display Device Over Ear

It should also be noted that in HMDs, an extreme “premium” is put on size and weight in front of the eye (weight in front of the eye creates a series of ergonomic and design issues).    This can be mitigated by using light guides to bring the image to the eye, locating the larger/heavier display device and its associated optics in a less critical location (such as near the ear), as Olympus has done with their MEG4.0 prototype (note, Olympus has been working at this for many years).  But doing this has trade-offs with the optics and cost.

Most of this comparison boils down to size versus field sequential color versus color sub-pixels.    I would be curious to hear what you think.

Kopin Displays and Near Eye (Follow-up to Seeking Alpha Article)

Kopin’s smallest transmissive color filter pixel is bigger than nine of the smallest field sequential color LCOS pixels

After posting my discovery of a Himax LCOS panel in a Google Glass prototype, I received a number of inquiries about Kopin, including a request from Mark Gomes of Seeking Alpha to give my thoughts about Kopin, which were published in “Will Kopin Benefit From the Glass Wars?”  In this post I am adding more information to supplement what I wrote for the Seeking Alpha article.

First, a little background on their “CyberDisplay®” technology would be helpful.   Back in the 1990’s Kopin developed a unique “lift-off” process to transfer transistors and other circuitry from a semiconductor I.C. onto a glass plate to make a transmissive panel, which they call the CyberDisplay®.  Kopin’s “lift-off” technology was amazing for that era. It allowed Kopin to put very small (for its day) transistors on glass to enable small transmissive devices that were used predominantly in video and still camera viewfinders. The transmissive panel has 3 color dots (red, green, blue) that produce a single color pixel, similar to a large LCD screen only much smaller. In the late 1990’s Kopin could offer a simple optical design with the transmissive color panel that was smaller than existing black and white displays using small CRTs.  This product was very successful for them, but it has become a commoditized (cheap) device these many years later.

CyberDisplay pixel is large and blocks 98.5% of the light

While the CyberDisplay let Kopin address the market for what are now considered low resolution displays cost effectively, the Achilles’ heel of the technology is that it does not scale well to higher resolutions because the pixels are so large relative to other microdisplay technologies.  For example, Kopin’s typical transmissive panel pixel is 15 by 15 microns and is made up of three 5 by 15 micron color “dots” (as Kopin calls them).    What makes matters worse, even these very large pixel devices have an extremely poor light throughput of 1.5% (blocking 98.5% of the light), and scaling the pixel down will block even more light!

While not listed on their website (but included in a news release), Kopin has an 8.7 x 8.7 micron color filter pixel (that I suspect is used in their Golden-i head mount display), but it blocks even more light than the 15×15 pixel, since throughput falls as the pixel gets smaller.    Also, to be fair, there are CyberDisplay pixels that block “only” 93.5% of the light, but they give up contrast and color purity in exchange for light throughput, which is not usually desirable.

There are many reasons why the transmissive color filter panel’s light throughput is so poor.  To begin with, the color filters themselves block more than 2/3rds of the light (each filter blocks the other two primary colors, plus there are other losses).    Because the panel is transmissive, the circuitry and the transistor that control each pixel also block light, which becomes significant as the pixel gets small.

But perhaps the biggest factor (and the most complex to understand, so I will only touch on it here) is that the electric field controlling the liquid crystal for a given color dot extends into the neighboring color dots, causing the colors to bleed together and lose all color saturation/control.  To reduce this problem they can use liquid crystal materials with lower light throughput that are less susceptible to the neighboring electric fields, and use black masks (which block light) surrounding each color dot to hide the area where the colors bleed together.

Field Sequential Color – Small Pixels and 80+% light throughput

With reflective LCOS, all the wires and circuitry are hidden behind the pixel mirror so that none of the transistors and other circuitry block the light.  Furthermore, the liquid crystal layer is usually less than half as thick, which limits the electric field spreading and allows pixels to be closer together without significantly affecting each other.  And of course there are no color filters wasting more than 2/3rds of the light.    The downside to field sequential color is color field breakup: when the display moves quickly relative to the eye, the colors may not line up for a split second.   The color breakup effects can be reduced by going to higher field rates.

Kopin’s pixels are huge when compared to those of field sequential LCOS devices (from companies such as Himax, Syndiant, Compound Photonics, and Citizen Finetech Miyota), which today can easily have 5 by 5 micron pixels, with some smaller than 3 by 3 microns.   Therefore FSC LCOS can have about 9 times the pixel resolution for roughly the same size device!  And the light throughput of the LCOS devices is typically more than 80%, which becomes particularly important for outdoor use.
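
To spell out the factor of 9, the pixel-area arithmetic (using the pitches quoted above) works out as:

```latex
\frac{A_{\text{CyberDisplay}}}{A_{\text{FSC LCOS}}}
  = \frac{15\,\mu\mathrm{m} \times 15\,\mu\mathrm{m}}{5\,\mu\mathrm{m} \times 5\,\mu\mathrm{m}}
  = \frac{225\,\mu\mathrm{m}^2}{25\,\mu\mathrm{m}^2} = 9
```

In other words, roughly nine 5 by 5 micron FSC pixels fit in the footprint of one 15 by 15 micron color filter pixel, and with 3 by 3 micron pixels the ratio grows to about 25.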

So while a low resolution Kopin CyberDisplay might be able to produce an image in a headset as small as Google Glass, it would limit the device to low resolution in the future, which is not a good long-term plan.  I’m guessing that the ability to scale to higher resolutions was at least one reason why Google went with a field sequential color device rather than starting with a transmissive panel that would have, at least initially, been easier to design with.  Another important factor weighing in favor of LCOS over a transmissive panel is the light throughput, so that the display is bright enough for outdoor use.

I don’t want to be accused of ignoring Kopin’s 2011 acquisition of Forth Dimension Displays (FDD), which makes a form of LCOS.  This is clearly a move by Kopin into reflective FSC LCOS.   It so happens that back in 1998 and 1999 I did some cooperative work with CRL Opto (which later became FDD), and they even used a design I worked on for the silicon backplane in their first product.  But the FSC LCOS that FDD makes is considerably different, in both device design and manufacturing process, from what a high volume product requires.

Through FDD’s many years of history (and several name changes), FDD has drifted to a high end specialized display technology with large 8+ micron pixels.   For the low volume niche applications FDD is servicing, there was no need to develop more advanced silicon to support a very small device and drive electronics.  Other companies aiming more at consumer products (such as Syndiant, where I was CTO) have put years of effort into building “smarter” silicon that enabled not only minimizing the size of the display, but also reducing the number of connection wires going between the display and the controller, and shrinking the controller to one small ASIC.

Manufacturing Challenge for Kopin

Cost effectively assembling small pixel LCOS devices requires manufacturing equipment and methods that are almost totally different from what Kopin does with their CyberDisplay or FDD does with their large pixel LCOS.   Almost every step in the process is done with an eye to high volume manufacturing cost.   And it is not like they can just buy the equipment and be up and running; it usually takes over a year from the time the equipment is installed to get the yields up to an acceptable level.  Companies such as Himax have reportedly spent around $300M developing their LCOS devices, and I know of multiple other companies having spent over $100M and many years of effort in the past.

Conclusion

For at least the reasons given above, I don’t see Kopin as currently well positioned to build competitive high volume head mounted displays that meet the future needs of the market, as I think all roads lead to higher resolution, yet smaller, devices.  It would seem to me that they would need a lot of time, effort, and money to field a long-term competitive product.

Laser Illumination Could Cause LCOS to Win Out Over OLED in Near Eye AR

Steve Mann (adapted from IEEE Spectrum)

The conventional wisdom is that eventually OLEDs will become inexpensive and will push out all other technologies in near eye displays because they will be smaller and lighter with a simple optical path.   But in reading “Steve Mann: My ‘Augmediated’ Life” in IEEE Spectrum, I was struck by his comment, “It requires a laser light source and a spatial light modulator” (spatial light modulators are devices like LCOS, transmissive panels, and DLP).     The reason he gives for needing a laser light source is to support a very high depth of focus.   For those that don’t believe LCOS and lasers give a high depth of focus, you might want to look at my blog from last year (and the included link to a video demonstration).

Steve Mann has “lived the dream” of Augmented Reality for 35 years and (with due affection) is a geek’s geek when it comes to wearing AR technology.  He makes what I think are valid points about what he finds wrong with Google Glass, including the need to have the camera’s view concentric with the eye’s view, and the eye muscle strain caused by the Google Glass image being in the upper corner of your field of view.

But the part of Steve Mann’s article that really caught my attention is the need for laser illumination to give a high depth of focus to reduce eye strain: you need the images you see to be in focus at the same depth as what you see in the real world.     Google Glass and other LED illuminated AR devices generally set the focus so that the display focuses at what would be a person’s far vision.   Steve Mann is saying that the focus of the display has to match that of the real world or there will be problems, and the only known way to do this is to use laser illumination.

This issue of laser light having a large depth of focus when used with a panel is an important “gem” that could have a big impact on the technology used in near eye AR in the future.   LEDs, and that includes OLEDs, produce light with rays that are scattered and hard to focus.   Whereas lasers produce high f-number light that is easy to focus (and requires smaller optics as well).  As I said at the top of this post, the conventional wisdom is that cost is the only factor keeping OLEDs out of near eye AR, but if Steve Mann is correct, they are also prevented from being good for AR by the physics of light.   And the best technology I know of for near eye AR to mate up with laser light is LCOS.
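
For a rough feel of why the f-number matters, the classical diffraction-limited depth of focus grows with the square of the f-number N; this is a textbook approximation I am adding here, not a formula from Mann’s article:

```latex
% Diffraction-limited depth of focus (textbook approximation)
% lambda = wavelength, N = f-number of the illumination
\delta \approx \pm\, 2\,\lambda\, N^{2}
```

With green light (λ ≈ 520 nm), illumination behaving like N ≈ 2 versus N ≈ 20 changes δ by a factor of 100, which is the intuition behind the very large depth of focus of well-collimated laser light.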

Google Glass Is Using Field Sequential Color (FSC) LCOS (Likely Himax)

Sequential Red, Green, and Blue Fields Captured From Google YouTube Video DVF [through Glass]

I’m going to have to eat some crow because up until Saturday night, I honestly thought Google was using a transmissive panel based on the shape of the newer Google Glass headset.  I hadn’t seen anything that showed it used Field Sequential Color (FSC), and I had looked for it before in several videos that didn’t appear to show it.  With FSC, the various color fields (red, green, blue, and perhaps other colors) are presented to the eye in sequence rather than all at the same time, and this can show up in videos (usually) and sometimes in still pictures.

But on Saturday (March 9th) I watched the Google-produced video DVF [through Glass] from way back in September 2012.  A careful frame by frame analysis (see above for the images from 3 frames) of the video proves that the newer Google Glass design uses a Field Sequential Color (FSC) display.  Note in the picture above, captured at 3 separate times, there are red, green, and blue images in the Google Glass, which is indicative of FSC.   Based on the size and shape and some other technical factors (too much to go into here), it has to be a reflective Liquid Crystal on Silicon (LCOS) device, most likely made by Himax.

BTW, further visual evidence of it being an FSC device (there are a couple more examples in the video, but this one is to me the clearest) is given later in the video at 3:30 when Google Co-Founder (and part-time actor?) Sergey Brin, wearing Google Glass, stands up to applaud, and there is a classic FSC color breakup, as captured in the picture below, recognizable to anyone that has looked into an FSC projector.  Seeing separate color fields when the display moves is a classic FSC effect.

Sergey Brin Stands Up Rapidly and Reveals Color Sequential Breakup

This (new) evidence largely confirms Seeking Alpha blogger Mark Gomes’ conclusion that Himax is in both the old and the newer Google Glass designs (see also his instablog response to my comments).   Last week I was not convinced and commented that I still thought it was a transmissive panel, and Mr. Gomes and I had some cordial back-and-forth public discussion in each other’s blogs on Seeking Alpha and this blog.   But with the proof that it is using field sequential color, there is only one conclusion: it is a reflective field sequential color LCOS device.   This also adds up as to why the earlier prototype was using a Himax color filter LCOS device when it would have been simpler and smaller to have used a transmissive panel at that time.  Apparently the color filter LCOS was a “stand-in” waiting for the smaller field sequential color device and/or optics.

Additionally, I had dismissed the Digitimes Himax and Google Glass article as confirmation because it appeared a couple of days after Mark Gomes’ article, so I thought it was just an “echo” of what he and I had written.   But in public comments Mr. Gomes pointed out that it added some more details.

So why do I now agree with Mr. Gomes that Google Glass most likely uses a Himax panel?  The evidence is overwhelming that it is field sequential color, and Himax seems the obvious candidate since my first blog on the subject, which appeared Feb 28, 2013, clearly identified Himax as supplying the earlier Google Glass prototype, and they have had FSC LCOS devices for about 6 years.    This is further reinforced by what Mark Gomes has posted as well as the Digitimes article.   Both the technical and the financial/business analysis agree.

There are a few other, but IMO much less likely, candidates.  My old company Syndiant has digital FSC LCOS technology that, last I knew, was technically superior to Himax’s analog LCOS technology, but I don’t think Syndiant would be ready for a Google sized order yet (and the announced JVC-Kenwood deal happened too recently).  Citizen Finetech Miyota (CFM) recently bought FSC LCOS technology from Micron, but I can’t see why Micron would have sold the technology to CFM if a deal with Google was in the works.   Omnivision bought the FSC LCOS technology of Aurora Systems, but it was not very good technology IMO, and so far I only know of them continuing to make the old Aurora devices, which are aimed at front projectors.   Then there is Compound Photonics, who bought the FSC assets of the now defunct Brillian, but they have stated that they are working on laser pico projectors.

Also, please don’t give me the conspiracy and collusion theories.   The video I watched on March 9th was the first one I had seen that proved Google Glass was field sequential color.  Additionally, I never corresponded with or even knew of Mark Gomes before the Seeking Alpha article came out mentioning my blog, and I was legitimately concerned that he may have ignored some of my original article and only considered the parts that supported his position, so I wanted to correct the record.  Mark Gomes, for his part, was very respectful, yet emphatic in his position based on his research, which now appears to me to have been largely correct (although I still say the Himax web site looks abandoned, and Himax did give the appearance of having given up on FSC LCOS back around 2010).   Frankly, I was as surprised as anyone at the wild swings in Himax stock and didn’t buy any before my first article.

Full Disclosure:  I never traded in Himax stock before today (or any other stock discussed on this blog, other than being a well known holder of the private company Syndiant’s stock as a former Founder, CTO, and Investor).  But seeing how the Google Glass news last week affected the stock, and based on Mr. Gomes’ articles combined with this new evidence, I decided to put some money where my mouth is and just bought some Himax (HIMX) to see what happens.

Appendix (For Those That Want to Duplicate My Findings)

The use of FSC would have been instantly recognizable to anyone that got to use the newer Google Glass device, but I didn’t have one to play with and was working from the available on-line videos and pictures.   The crafted Google videos that give the appearance of looking through Google Glass didn’t show this because they simulate the display.  And in most of the videos the image in the Google Glass was not visible and/or the camera exposure and other settings didn’t pick up the FSC effects.  Perhaps ironically, it appears that the camera in Google Glass tends to pick up the FSC effect more than other cameras used to shoot pictures of people wearing it.

Some video cameras, more so than others, will tend to pick up the signature color breakup of FSC.   Also, the camera angle has to be right so you can see the image when videoing someone wearing Google Glass.   And perhaps most importantly, the exposure of the camera, which is usually based on the overall scene, has to be such that the sequential colors from the small spot of light in the viewfinder (I haven’t ever seen a close up of the viewfinder) do not over-expose and wash out the colors (in that case you may notice a more white flicker).

All I did was play the video DVF [through Glass] on my PC and keep pausing and un-pausing it.  It is tricky to catch the frames that show FSC.  One reason is that the video has many frames per second and the YouTube player does not support “shuttle/jog” frame by frame control.   One could download the video and play it frame by frame, but it is not necessary.   I just kept going over the time around 0:38 to 0:44 a few times to capture the images.   Similarly, I went through the video at about 3:30 to get the FSC breakup with Sergey Brin.
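
If you would rather not pause and un-pause by hand, the same hunt can be scripted.  Below is a minimal sketch (my own illustration, not something anyone published) using Python and OpenCV against a downloaded copy of the video; the file name and the region-of-interest coordinates are hypothetical placeholders you would adjust by inspection.

```python
# Step through a downloaded copy of the video between ~0:38 and ~0:44 and
# report which color channel dominates a region of interest around the
# Glass eyepiece. A strongly dominant channel in a frame suggests a single
# color field was captured; bright, near-equal channels suggest the camera
# sensor saturated to white. Requires OpenCV: pip install opencv-python
import cv2

cap = cv2.VideoCapture("glass_dvf.mp4")      # hypothetical local file name
cap.set(cv2.CAP_PROP_POS_MSEC, 38_000)       # jump to ~0:38

while True:
    ok, frame = cap.read()                   # frame is a BGR image array
    if not ok or cap.get(cv2.CAP_PROP_POS_MSEC) > 44_000:
        break
    # Placeholder crop around the eyepiece; find real coordinates by eye.
    roi = frame[100:200, 400:500]
    b, g, r = (float(roi[:, :, i].mean()) for i in range(3))
    t = cap.get(cv2.CAP_PROP_POS_MSEC) / 1000.0
    print(f"{t:6.2f}s  R={r:5.1f}  G={g:5.1f}  B={b:5.1f}")

cap.release()
```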

Note that you will not always see a red, green, or blue color when you capture a frame.   When colors get too bright in the image, they will saturate the camera sensor and result in white.     I don’t believe there is a “white field” in the Google Glass; rather, the camera is not picking up the colors due to over saturation.

I should also add that FSC effects show up differently on different cameras and in different lighting and camera exposures.   I have previously looked at other Google Glass stills and videos trying to find the FSC effect and did not find it.    Unless the camera angle and the exposure are right, you just aren’t going to see the colors.    Even in this whole video, I only found a few seconds that demonstrated FSC.

Google Glass and Himax Whirlwind

Himax stock price, early March 2013

I usually just talk technology on this blog, but the recent events involving my blog post on Google Glass using Himax’s panel caused some fascinating movement in Himax’s stock price.   There have been several moves of over one hundred million dollars in market cap, at least in part attributed to my blog and/or my comments.   One other thing: I have never had any position in Himax stock (not that I haven’t wished I had bought and sold based on my own blog and comments).

In this blog on Thursday, Feb 28, 2013, I reported that I had figured out that there was a Himax color filter LCOS panel in the early Google Glass prototypes based on pictures of the internal components.  To me it was a technical puzzle to solve, first posed by Picopros.com on Feb. 22nd.   I was careful to also include in my blog post on Feb 28th, and then added to it on March 2nd, that it was doubtful that the newer Google Glass devices used an LCOS panel because it would not fit.  I was trying to be frank and fair so that people would not overreact (for what little good that did).    This got my blog, which had been dormant for about 6 months, going again.

It turns out that Mark Gomes was following my blog and combined my information (which he cited) with information he and his associates collected into an article he wrote on Seeking Alpha late Monday March 4th / early March 5th (depending on the time zone) stating that Himax was in Google Glass.   Then all heck broke loose.   Himax stock jumped about 38% that morning and added about $223M to Himax’s market cap by about 12:40 PM EST on March 5th.

About the time Himax stock hit its high, I found out about the Seeking Alpha article and posted a comment reiterating what my blog post had stated: that I didn’t think the Himax panel was used in the newest prototypes (there were also comments from others stating the same).      Himax stock then dropped back about 16%, or about $135M in market cap.   (Gad, I should have shorted before I wrote that comment 🙂 ).

I was frankly concerned that perhaps Mark Gomes had taken my information out of context, which is why I commented back.   In subsequent public comments, both on my blog and his instablogs, Mr. Gomes has insisted that my information was only one source that he had combined with other information, including direct contact with Himax.

Himax stock then settled down on March 6th (up over $100M in market cap from its March 4th close), but that night Digitimes wrote an article stating that Himax was in Google Glass.   This article, in my opinion, looks to be an “echo” of the information on Seeking Alpha and doesn’t give any other sources.   Still, Himax stock jumped about $105M on what looks to me to be just an echo before settling back some, but still much higher than before as of this writing.

Anyway, what a wild ride, and I didn’t have a penny to gain (or lose); in fact, it has cost me a lot of time answering questions from analysts that have contacted me.   Also, for the conspiracy minded, ALL the correspondence I have had with Mark Gomes to date has been in public, either in my comments on Seeking Alpha or in his comments on this blog.    You can follow our back and forth on this blog, in the original Seeking Alpha article, or in the (as of this writing) follow-up articles on Mark Gomes’ instablog.

One last teaser for the next article I am planning (some of this is included in my back and forth public comments to Mark Gomes): I don’t think Google Glass is going to be a big market, at least any time soon.    So it really doesn’t matter whose panel is in the prototype, at least in my opinion (buy or sell or do nothing at your own risk 🙂  ).  If there wasn’t so much money flying around, it would be sort of funny; then again, maybe it is.

Karl

New Google Glass Design Likely Uses a Transmissive Panel

Kopin Panel, Microvision Optical Engine, Google Prototype CF LCOS, and Transmissive Optical Engine (to approx. the same scale)

As a follow-up to my last post, I thought I would show why Google Glass is most likely using a transmissive panel.  It all comes down to size and shape.

Shown to the left is a Kopin transmissive panel more than capable of the resolution shown in the Google Glass videos.  The picture I found on-line happened to have a dime in it, and I used that dime to scale it to the same size as the Microvision laser beam steering (LBS) engine with its “dime picture” in the second image.  I roughly scaled a Google prototype with a color filter LCOS panel to the same scale in the third image.  The Kopin panel is only about 2mm thick, but it does require optics, so I approximately scaled Figure 8 from Patent 6,747,611, filed by IBM in the year 2000, which shows a near eye transmissive optical engine and gives a Kopin panel as an example.

The Microvision engine is for a projector and does not include the “wave guide” that relays the image out to the eye.  You are also looking at it from the top down, but it is about 6mm thick, which is similar to the others in thickness.   Part of what makes the Microvision engine so big is the need to combine and aim the 3 independent lasers at a single mirror as shown (in an older post I showed the combining path).   The engine is on the order of 5 times too big to fit into the space available in the Google Glass, and that does not include the electronics for Microvision’s LBS (which take about as much volume as the optics).

Next we have the color filter LCOS, which is much more compact than Microvision’s LBS but has an awkward “T”/“L” shape caused by the orientation of the beam splitter, with the panel on one side and the LED on the other.  As I wrote in my prior post, this would not fit in the barrel shape of the newer Google Glass design.

Lastly, the IBM patent has a figure that shows a transmissive panel optical engine that looks remarkably similar to the Google Glass units that have been seen.  The optical path is straight through and comparatively compact.  There is an adjustment knob (2600) that enables the apparent focus point (according to the patent) to be adjusted from about 18 inches to infinity.   The Google Glass is said to be set for far vision (near “infinity”) and therefore dispenses with this adjustment.
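
For reference, that adjustment range expressed in diopters (my arithmetic, not the patent’s), with 18 inches being about 0.46 m, is:

```latex
\frac{1}{0.46\,\mathrm{m}} \approx 2.2\,\mathrm{D}
\qquad\text{down to}\qquad
\frac{1}{\infty} = 0\,\mathrm{D}
```

So the knob spans roughly a 0 to 2.2 diopter focus adjustment, and Google Glass is in effect fixed near the 0 D end.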

Another thing to note is that there are only an LED, a panel, and a single lens to generate the image, plus the beam splitter (performing the function of the thinner wave guide used by Google).   This is a relatively inexpensive device, as the LED, a low resolution transmissive panel, and the lens combined cost on the order of $10 (and probably less in high volume).

IBM Patent from 2000 to Google Glass Comparison

In the category of “everything old is new again,” look how closely Fig. 8 (copied at the left) from the IBM patent filed in 2000 resembles the Google Glass of about 13 years later (left below).  The main difference is that the “computer based device” (today a cell phone) is now wirelessly connected.   A feature shown in the IBM patent is a sliding light shield to support viewing images without the distraction of the background.  Google Glass would require looking at a black background to clearly see the image in the transmissive wave guide.

Google’s design “cops out” and requires a nose bridge, which others, including the IBM patent and Golden-i, avoid.   The nose is very sensitive to any weight on it, particularly over time, and a nose bridge interferes with glasses.  Google has said that the device can be attached to a person’s glasses frames, but this is very problematic with the variety of frames on the market and the added off-balance weight.

The point I would like to make (again) here is that the display technology to make Google Glass has been available for over a decade and, as my prior post on virtual reality displays pointed out, the limiting factor is the use model (how you use it), which is heavily constrained by how you control it.   I don’t see it as practical to have people talking to their devices and looking shifty-eyed and blinking, not to mention looking like somebody who escaped from a lab.

Maybe someday they will add gesture recognition so you can type on a virtual keyboard, but I don’t know of anyone that has perfected this technology yet.  Also, the images that Google has shown to date are pretty low resolution (on the order of only 320 by 240 pixels) and only fill a small part of one’s vision.   I don’t see people doing a lot of internet browsing with the current Google Glass.   Then we have the privacy issues, as in: when someone looks at you shifty eyed through their Google Glass, are they signalling the computer to look up your information?

One last thing: believe it or not, I’m not trying to be negative about Google Glass; I’m just trying to relate my experience and knowledge of near eye displays.   I think even some people associated with Google Glass are playing it down a bit, trying to get people to understand that they are still looking for how people will use it.  Maybe someday there will be a high-resolution color display that fits in a contact lens, selectively blocks out the real world, and picks up brain waves to control it, but it looks to me that that day is a ways off in the future.