I’m curious what people think will be the near eye microdisplay of the future. Each technology has its own drawbacks and advantages that are well known. I thought I would start by summarizing the various options:
Color filter transmissive LCD – large pixels with 3 sub-pixels that let through only 1% to 1.5% of the light (depending on pixel size and other factors). Scaling down is limited by the colors bleeding together (LC effects) and by light throughput. Low power to the panel but very inefficient use of the illumination light.
Color filter reflective (LCOS) – same as CF-transmissive except the sub-pixels (color dots) can be smaller; scaling is still limited by the need for 3 sub-pixels and by color bleeding. Light throughput on the order of 10%. More complicated optics than transmissive (requires a beam splitter), but shares the low power to the panel.
Field Sequential Color (LCOS) – Color breakup from sequential fields (“rainbow effect”), but the pixels can be very small (less than 1/3rd that of color filter). Light throughput on the order of 40% (assuming a 45% loss in polarization). Higher power to the panel due to changing fields. Optical path similar to CF-LCOS, but taking advantage of the smaller size requires smaller but higher quality (high-MTF) optics. Potentially mates well with lasers for a very large depth of focus, so that the AR image is in focus regardless of where the user’s eyes are focused.
Field Sequential Color (DLP) – Color breakup from FSC, but it can go to higher field rates than LCOS to reduce the effects. The device and its control are comparatively high powered and have a larger optical path. The pixel size is bigger than FSC LCOS due to the physical movement of the DLP mirrors. Light throughput on the order of 80% (it does not have the polarization losses) but falls as the pixel gets smaller (the gap between mirrors is bigger than on LCOS). I am not sure this is a serious contender due to cost, power of the panel/controller, and optical path size, and nobody I know of has used it for near eye, but I listed it for completeness.
OLED – Larger pixels due to the 3 color sub-pixels. It is not clear how small this technology will scale in the foreseeable future. While OLED is improving, progress has been slow; it has been the “next great near eye technology” for 10 years. It has a very simple optical path and potentially high light efficiency, which has made it seem to many like the technology with the best future, but it is not clear how it scales to very small sizes and higher resolutions (the smallest OLED pixel I have found is still about 8 times bigger than the smallest FSC LCOS pixel). Also, the light is very diffuse, and therefore the depth of focus will be low.
Laser Beam Steering – While this one sounds good to the ill-informed, the need to precision-combine 3 separate laser beams tends to make it not very compact, and it is ridiculously expensive today due to the special lasers (particularly green) required. Similar to field sequential color, there are breakup effects from having a raster scan (with low persistence, as on a CRT) on a moving platform (as in a head mount display). While there are still optics involved to produce an image on the eye, it could have a large depth of focus. There are a lot of technical and cost issues that keep this from being a serious alternative any time soon, but it is on this list for completeness.
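To put the rough light-throughput figures from the list above side by side, here is a small illustrative sketch. The percentages are the approximate numbers quoted above, not measurements, and the relative-illumination comparison is a simplified back-of-the-envelope calculation that ignores every other difference between the optical paths:

```python
# Approximate light-throughput figures quoted above (illustrative only,
# not measured specs for any particular panel).
throughput = {
    "color filter transmissive LCD": 0.015,  # ~1% to 1.5%
    "color filter LCOS":             0.10,
    "field sequential LCOS":         0.40,
    "field sequential DLP":          0.80,
}

# Relative illumination power needed for equal image brightness,
# normalized to the most efficient option listed (FSC DLP).
best = max(throughput.values())
for name, t in sorted(throughput.items(), key=lambda kv: -kv[1]):
    print(f"{name:30s} {t:5.1%}  needs ~{best / t:4.1f}x the light of DLP")
```

By this crude measure, a color filter transmissive LCD needs on the order of 50 times the illumination of an FSC DLP for the same brightness, which is why throughput matters so much for a battery-powered head mount.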
I found it particularly interesting that Google’s early prototype used a color filter LCOS and that they then switched to field sequential LCOS. This seems to suggest that they chose size over the issues with field sequential color breakup. With the technologies I know of today, this is the trade-off for any given resolution; field sequential LCOS pixels are less than 1/3rd the size (and typically closer to 1/9th the size) of any of the existing 3-color devices (color filter LCD/LCOS or OLED).
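One way to read the 1/3rd versus 1/9th figures above, assuming “1/3rd” refers to linear pixel pitch and “closer to 1/9th” to the resulting panel area, is the simple square-law arithmetic below. The pitch values are hypothetical numbers chosen only to illustrate the ratio, not the specs of any real panel:

```python
# Hypothetical pitches to illustrate pitch-vs-area scaling only;
# not real device specs.
fsc_pitch_um = 5.0               # assumed field sequential LCOS pixel pitch
cf_pitch_um = 3 * fsc_pitch_um   # 3 color sub-pixels side by side

linear_ratio = fsc_pitch_um / cf_pitch_um  # 1/3 in each linear dimension
area_ratio = linear_ratio ** 2             # ~1/9 of the panel area

print(f"linear ratio: {linear_ratio:.3f}")  # 0.333
print(f"area ratio:   {area_ratio:.3f}")    # 0.111
```

Since panel cost and the size of the optics in front of the eye both track area rather than linear pitch, the squared ratio is the one that matters for the size “premium” discussed below.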
It should also be noted that in HMDs, an extreme “premium” is put on size and weight in front of the eye (weight in front of the eye creates a series of ergonomic and design issues). This can be mitigated by using light guides to bring the image to the eye and locating a larger/heavier display device and its associated optics in a less critical location (such as near the ear), as Olympus has done with their Meg4.0 prototype (note, Olympus has been working at this for many years). But doing this has trade-offs with the optics and cost.
Most of this comparison boils down to a trade-off between size, field sequential color breakup, and color sub-pixels. I would be curious what you think.