Archive for LCOS

Magic Leap: Focus Planes (Too) Are a Dead End

What Magic Leap Appears to be Doing

For this article I would like to dive into the most likely display and optics Magic Leap (ML) is developing for their Product Equivalent (PEQ). The PEQ was discussed in the “The Information” story “The Reality Behind Magic Leap.” As I explained in my November 20, 2016 article Separating Magic and Reality (before the Dec 8th “The Information” story), the ML patent application US 2016/0327789 best fits the available evidence, and if anything the “The Information” article reinforces that conclusion. Recapping the evidence:

  1. ML uses a “spatial light modulator” as stated in “The Information”
  2. Most likely an LCOS spatial light modulator; the Oct. 27th, 2016 Business Insider article, citing “KGI Securities analyst Ming-Chi Kuo, who has a reputation for being tapped into the Asian consumer electronics supply chain,” claims ML is using a Himax LCOS device.
  3. Focus planes to support vergence/accommodation per many ML presentations and their patent applications
  4. Uses waveguides which fit the description and pictures of what ML calls a “Photonics Chip”
  5. Does not have a separate focus mechanism as reported in the “The Information” article.
  6. Could fit the form factor as suggested in “The Information”
  7. It’s the only patent that shows serious optical design that also uses what could be considered a “Photonics chip.”

I can’t say with certainty that the optical path is that of application 2016/0327789. It is just the only optical path in the ML patent applications that fits all the available evidence and has a chance of working.

Field of View (FOV)

Rony Abovitz, ML CEO, is claiming a larger FOV. I would think ML would not want to have lower angular resolution than Hololens. Keeping the same 1.7 arc minutes per pixel angular resolution as Hololens and ODG’s Horizon, a 1080p display would give a horizontal FOV of about 54.4 degrees.
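As a quick sanity check on that number, here is the arithmetic (a minimal sketch; the 1920-pixel width of a 1080p device is the only input not stated explicitly above):

```python
# Horizontal FOV when each of a 1080p device's 1920 pixels
# subtends 1.7 arc-minutes (there are 60 arc-minutes per degree).
pixels_wide = 1920
arcmin_per_pixel = 1.7

fov_degrees = pixels_wide * arcmin_per_pixel / 60
print(f"{fov_degrees:.1f} degrees")  # -> 54.4 degrees
```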

Note, there are rumors that Hololens is going to be moving to a 1080p device next year, so ML may still not have an advantage by the time they actually have a product. There is a chance that ML will just use a 720p device, at least at first, and accept a lower angular resolution of say 2.5 arc minutes or greater to get into the 54+ degree FOV range. Supporting a larger FOV is no small trick with waveguides and is one thing that ML might have over Hololens; but then again, Hololens is not standing still.

Sequential Focus Planes Domino Effect

The support of vergence/accommodation appears to be a paramount issue for ML. Light fields are woefully impractical at any reasonable resolution, so ML in their patent applications and some of their demo videos show the concept of “focus planes.” But for every focus plane, an image has to be generated and displayed.

Having more than one display per eye, including the optics to combine the multiple displays, would be both very costly and physically large. So the only rational way ML could support focus planes is to use a single display device and sequentially display the focus planes. But as I will outline below, using sequential focus planes to address vergence/accommodation comes at the cost of hurting other visual comfort issues.

Expect Field Sequential Color Breakup If Magic Leap Supports “Focus Planes”

Both high resolution LCOS and DLP displays use “field sequential color” where they have a single set of mirrors that displays a single color plane at a time. To get the colors to fuse together in the eye, they repeat the same colors multiple times per frame of an image. Where I have serious problems with ML using Himax LCOS is that instead of repeating colors to reduce the color breakup, they will instead be showing different images to support sequential focus planes. Even with just two focus planes, as suggested in “The Information,” the rate of repeating colors to help them fuse in the eye is cut in half.
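To make the arithmetic concrete, here is a sketch of the color field math; the 60Hz frame rate and the number of color repeats are illustrative assumptions, not ML’s actual numbers:

```python
# Field sequential color: each frame is broken into R, G, B fields that
# are repeated several times so the colors fuse in the eye.
frame_rate = 60   # frames/sec (assumed for illustration)
colors = 3        # R, G, B
repeats = 2       # color repeats per frame (assumed for illustration)

color_fields_per_sec = frame_rate * colors * repeats       # 360 fields/sec

# With two sequential focus planes sharing one display, each plane only
# gets half the display time, so the color repeat rate per plane is halved.
focus_planes = 2
fields_per_plane_per_sec = color_fields_per_sec / focus_planes  # 180
print(color_fields_per_sec, fields_per_plane_per_sec)
```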

With Hololens, which also uses a field sequential color LCOS, one can already detect color breakup. Cutting the color update rate by 2 or more will make this problem significantly worse.

Another interesting factor is that field sequential color breakup tends to be more noticeable in people’s peripheral vision, which is more motion/change sensitive. This means the problem will tend to get worse as the FOV increases.

I have worked many years with field sequential display devices, specifically LCOS. Based on this experience, I expect that the human vision system will do a poor job of “fusing” the colors at such slow color field update rates, and I would expect people to see a lot of field sequential color breakup, particularly when objects move.

In short, I expect a lot of color breakup to be noticeable if ML supports focus planes with a field sequential color device (LCOS or DLP).

Focus Planes Hurt Latency/Lag and Will Cause Double Images

An important factor in human comfort is the latency/lag between any head movement and the display reacting; too much lag causes user discomfort. A web search will turn up thousands of references about this problem.

To support focus planes, ML must use a display fast enough to support at least 120 frames per second. But to support just two focus planes, it will take them 1/60th of a second to sequentially display both focus planes. Thus they have increased the total latency/lag from the time they sense movement until the display is updated by ~8.333 milliseconds, and this is on top of any other processing latency. So focus planes really trade off one discomfort issue, vergence/accommodation, for another, latency/lag.
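The latency arithmetic is simple enough to write down (a minimal sketch of the numbers in the paragraph above):

```python
# A 120 frames/sec display spends 1/120th of a second per image.
# Two sequential focus planes take 2/120 = 1/60th of a second, adding
# one extra 1/120th-second slot of latency versus a single plane.
display_rate = 120.0  # frames per second
focus_planes = 2

ms_per_plane = 1000.0 / display_rate                # ~8.333 ms
total_ms = focus_planes * ms_per_plane              # ~16.667 ms (1/60th sec)
added_latency_ms = total_ms - ms_per_plane
print(f"added latency ~{added_latency_ms:.3f} ms")  # ~8.333 ms
```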

Another issue that concerns me is how well sequential focus planes are going to fuse in the eye. With fast movement, the eye/brain visual system takes its own asynchronous “snapshots” and tries to assemble the information and line it up. But as with field sequential color, it can put together time sequential information wrong, particularly if some objects in the image move and others don’t. The result will be double images, and double images with sequential focus planes would be unavoidable with fast movement, either in the virtual world or when a person moves their eyes. These problems will be compounded by field sequential color breakup.

Focus Planes Are a Dead End – Might Magic Leap Have Given Up On Them?

I don’t know all the behind-the-scenes issues with what ML told investors, and maybe ML has been hemmed in by their own words and demos to investors. But as an engineer with most of my 37 years in the industry working with image generation and display, it looks to me that focus planes cause bigger problems than they solve.

What gets me is that they should have figured out that focus planes were hopeless in the first few months (much less time if someone who knew what they were doing was there). Maybe they were ego driven and/or they built too much around the impression they made with their “Beast” demo system (a big system using DLPs). Then maybe they hand-waved away the problems sequential focus planes cause, thinking they could fix them somehow, or hoped that people wouldn’t notice the problems. It would certainly not be the first time that a company committed to a direction and then felt that it had gone too far to change course. Then there is always the hope that “dumb consumers” won’t see the problems (in this case I think they will).

It is clear to me that, like Fiber Scanning Displays (FSD), focus planes are a dead end, period, full stop. Vergence/accommodation is a real issue, but only for objects that get reasonably close to the user. I think a much more rational way to address the issue is to use sensors to track the eyes/pupils and adjust the image accordingly; as the eye’s focus changes relatively slowly, it should be possible to keep up. In short, move the problem from the physical display and optics domain (which will remain costly and problematic) to the sensor and processing domain (which will more rapidly come down in cost).

If I’m at Hololens, ODG, or any other company working on AR/MR systems and accept that vergence/accommodation is a problem that needs to be solved, I’m going to solve it with eye/pupil sensing and processing, not by screwing up everything else by doing it with optics and displays. ML’s competitors have had enough warning to already be well into developing solutions, if they weren’t prior to ML making such a big deal about the already well known issue.

The question I’m left with is if and when Magic Leap figured this out, and whether they were too committed by ego or by what they told investors to change course at that point. I have not found evidence so far in their patent applications that they tried to change course, but these patent applications will be about 18 months or more behind what they decided to do. But if they don’t use focus planes, they would have to admit that they are much closer to Hololens and other competitors than they would like the market to think.

Magic Leap – Fiber Scanning Display Follow Up

Some Newer Information On Fiber Scanning

Through some discussions and further searching I found some more information about Fiber Scanning Displays (FSD) that I wanted to share. If anything, this material further supports the contention that Magic Leap (ML) is not going to have a high resolution FSD anytime soon.

Most of the images available are of fiber scanning used as an endoscope camera and not as a display device. The images are of things like body parts that really don’t show the resolution or the amount of distortion in the image. Furthermore, most of those images are from 2008 or older, which leaves quite a bit of time for improvement. I have found some information generated in the 2014 to 2015 time frame that I would like to share.

Ivan Yeoh’s 2015 PhD dissertation

[Image: laser projected test pattern from Yeoh’s 2015 dissertation]

In terms of more recent fiber scanning technology, Ivan Yeoh’s name seems to be a common link. Shown at left is a laser projected image and the source test pattern from Ivan Yeoh’s 2015 PhD dissertation “Online Self-Calibrating Precision Scanning Fiber Technology with Piezoelectric Self-Sensing” at the University of Washington. It is the best quality image of a test pattern or known image from an FSD that I have found anywhere. The dissertation is about how to use feedback to control the piezoelectric drive of the fiber. While his paper is about endoscope calibration, he nicely included this laser projected image.

The drive resulted in 180 spirals which would nominally be 360 pixels across at the equator of the image with a 50Hz frame rate. But based on the resolution chart, the effective resolution is about 1/8th of that or only ~40 pixels, but about half of this “loss” is due to resampling a rectilinear image onto the spiral. You should also note that there is considerably more distortion in the center of the image where the fiber will be moving more slowly.

Yeoh also included some good images, at right, showing how he had previously used a calibration setup to manually calibrate the endoscope before use, as it would go out of calibration with various factors including temperature. These are camera images, and based on the test charts they are able to resolve about 130 pixels across, which is pretty close to the Nyquist sampling limit of a 360-samples-across spiral. As expected, the center of the image, where the fiber is moving the slowest, is the most distorted.

While a 360-sample camera is still very low resolution by today’s standards, it is still 4 to 8 times better than the resolution of the laser projected image. Unfortunately, Yeoh was concerned with distortion and does not really address resolution issues in his dissertation. My resolution comments are based on measurements I could make from the images he published, copied above.
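For those who want to check the numbers, here is the simple spiral scan arithmetic used above (a sketch based on my measurements):

```python
# A spiral crosses the image's equator twice per turn, so N spiral turns
# give nominally 2*N samples ("pixels") across the widest part of the image.
spiral_turns = 180
nominal_pixels_across = 2 * spiral_turns          # 360

# Nyquist: 360 samples across can at best resolve 180 line pairs, so the
# ~130 pixels resolved in the camera images is close to that limit.
nyquist_line_pairs = nominal_pixels_across // 2   # 180

# The laser projected image only resolved about 1/8th of nominal:
projected_effective = nominal_pixels_across / 8   # ~45, i.e. roughly 40 pixels
print(nominal_pixels_across, nyquist_line_pairs, projected_effective)
```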

Washington Patent Application Filed in 2014

Yeoh is also the lead inventor on the University of Washington patent application US 2016/0324403, filed in 2014 and published in 2016. At left is Fig. 26 from that application. It is supposed to be of a checkerboard pattern, which you may be able to make out. The figure is described as using a “spiral in and spiral out” process where, rather than having a retrace time, they just reverse the process. This application appears to be related to Yeoh’s dissertation work. Yeoh is shown as living in Fort Lauderdale, FL on the application, near Magic Leap headquarters. Yeoh is also listed as an inventor on the Magic Leap application US 2016/0328884 “VIRTUAL/AUGMENTED REALITY SYSTEM HAVING DYNAMIC REGION RESOLUTION” that I discussed in my last article. It would appear that Yeoh is working or has worked for Magic Leap.

2008 YouTube Video

[Image: ideal versus actual spiral scan]

Additionally, I would like to include some images from a 2008 YouTube video that kmanmx from the Reddit Magic Leap subreddit alerted me to. While this is old, it has a nice picture of the fiber scanning process, both as a whole and with a close-up image near the start of the spiral process.

For reference, on the closeup image I have added the size of a “pixel” for a 250 spiral / 500 pixel image (red square) and what a 1080p pixel (green square) would be if you cropped the circle to a 16:9 aspect ratio. As you hopefully can see, the spacing and jitter variations/errors in the scan process are several 1080p pixels in size. While this information is from 2008, the more recent evidence above does not show a tremendous improvement in resolution.

Other Issues

So far I have mostly concentrated on the issue of resolution, but there are other serious issues that have to be overcome. What is interesting in the Magic Leap and University of Washington patent literature is the lack of patent activity to address the other issues associated with generating a fiber scanned image. If Magic Leap were serious and had solved these issues with FSD, one would expect to see patent activity in making FSD work at high resolution.

One major issue that may not be apparent to the casual observer is controlling/driving the lasers over an extremely large dynamic range. In addition to supporting the typical 256 levels (8 bits) per color and supporting overall brightness adjustment based on the ambient light, the speed of the scan varies by a large amount, and they must compensate for this or end up with a very bright center where the scan is moving more slowly. When you combine it all together, they would seem to need to control the lasers over a greater than 2000:1 dynamic range, from a dim pixel at the center to the brightest pixel at the periphery.
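To put a rough number on that dynamic range, here is a sketch; the ~8x center-to-edge scan speed ratio is my assumption for illustration, not a measured value:

```python
# Laser drive dynamic range for a spiral scan display (illustrative).
gray_levels = 256       # 8 bits per color
scan_speed_ratio = 8    # assumed: spiral edge moves ~8x faster than center,
                        # so edge pixels need ~8x the instantaneous power

dynamic_range = gray_levels * scan_speed_ratio   # 2048, i.e. >2000:1
print(f"~{dynamic_range}:1 from dimmest center pixel to brightest edge pixel")
```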

Conclusion

Looking at all the evidence, there is just nothing to convince me that Magic Leap is anywhere close to having perfected an FSD to the point that it could be competitive with a conventional display device like LCOS, DLP, or Micro-OLED, much less reach the 50 megapixel resolutions they talk about. Overall, there is reason to doubt that an electromechanical scanning process is going to compete in the long run with an all-electronic method.

It very well could be that Magic Leap had hoped that FSD would work and/or it was just a good way to convince investors that they had a technology that would lead to super high resolution in the future. But there is zero evidence that they have seriously improved on what the University of Washington has done. They may still be pursuing it as an R&D effort, but there is no reason to believe that they will have it in a product anytime soon.

All roads point to ML using either LCOS (per Business Insider of October 2016) or DLP, which I have heard is in some prototypes. This would mean they will likely have either a 720p or 1080p resolution display, the same as others such as Hololens (which will likely have a 1080p version soon).

The whole FSD effort is about trying to break through the physical pixel barrier of conventional technologies. There are various physics issues (diffraction is becoming a serious one) and material issues that will likely make it tough to make physical pixels much smaller than 3 microns.

Even if there was a display resolution breakthrough (which I doubt based on the evidence), there are issues as to whether this resolution could make it through the optics. As the resolution improves the optics have to also improve or else they will limit the resolution. This is a factor that particularly concerns me with the waveguide technologies I have seen to date that appear to be at the heart of Magic Leap optics.

Magic Leap – The Display Technology Used in their Videos

So, what display technology is Magic Leap (ML) using, at least in their posted videos? I believe the videos rule out a number of the possible display devices, and by a process of elimination that leaves only one likely technology. Hint: it is NOT the laser fiber scanning prominently shown in a number of ML patents and articles about ML.

Qualifiers

Magic Leap could be posting deliberately misleading videos that show a different technology and/or deliberately degraded videos to throw off people analyzing them, but I doubt it. It is certainly possible that the display technology shown in the videos is a prototype that uses different technology from what they are going to use in their products. I am hearing that ML has a number of different levels of systems, so what is being shown in the videos may or may not be what they go to production with.

A “Smoking Gun Frame” 

So with all the qualifiers out of the way, below is a frame capture from Magic Leap’s “A New Morning” taken while they are panning the headset and camera. The panning causes temporal (time based) frame/shutter artifacts in the form of partial ghost images, a result of the camera and the display running asynchronously and/or at different frame rates. This one frame, along with other artifacts you don’t see when playing the video, tells a lot about the display technology used to generate the image.

If you look at the left red oval you will see, at the green arrow, a double/ghost image starting and continuing below that point. This is where the camera caught the display in its update process. Also, if you look at the right side of the image, you will notice that the lower 3 circular icons (in the red oval) have double images where the top one does not (the 2nd from the top has a faint ghost as it is at the top of the field transition). By comparison, there is no double image of the real world’s lamp arm (see center red oval), verifying that the roll bar is from the ML image generation.

Update 2016-11-10: I have uploaded the whole frame for those that would want to look at it. Click on the thumbnail at left to see the whole 1920×1080 frame capture (I left in the highlighting ovals that I overlaid).

Update 2016-11-14: I found a better “smoking gun” frame, below, at 1:23 in the video. In this frame you can see the transition from one frame to the next. In playing the video, the frame transition slowly moves up from frame to frame, indicating that the camera and display are asynchronous but at almost the same frame rate (or an integer multiple thereof, like 1/60th or 1/30th of a second).

[Image: second “smoking gun” frame capture at 1:23]

In addition to the “Smoking Gun Frame” above, I have looked at the “A New Morning” video as well as the “ILMxLAB and ‘Lost Droids’ Mixed Reality Test” and the early “Magic Leap Demo,” all of which are stated to be “Shot directly through Magic Leap technology . . . without use of special effects or compositing.” I was looking for any other artifacts that would be indicative of the various possible technologies.

Display Technologies it Can’t Be

Based on the image above and other video evidence, I think it is safe to rule out the following display technologies:

  1. Laser Fiber Scanning Display – either a single or an array of fiber scanning displays as shown in Magic Leap’s patents and articles (and which their CTO famously worked on prior to joining ML). A fiber scan display scans in a spiral (or, if arrayed, an array of spirals) with a “retrace/blanking” time to get back to the starting point. This blanking would show up as diagonal black line(s) and/or flicker in the video (sort of like an old CRT would show a horizontal black retrace line). Also, if it were laser fiber scanning, I would expect to see evidence of laser speckle, which is not there; laser speckle will come through even if the image is out of focus. There is nothing in this image and its video to suggest there is a scanning process with blanking or that lasers are being used at all. Through my study of Laser Beam Scanning (and I am old enough to have photographed CRTs), there is nothing in the still frame nor the videos that is indicative of a scanning process that has a retrace.
  2. Field Sequential DLP or LCOS – There is absolutely no field sequential color rolling, flashing, or flickering in the video or in any still captures I have made. Field sequential displays show only one color at a time, very rapidly. When these rapid color field changes beat against the camera’s scanning/shutter process, they show up as color variances and/or flicker, not as a simple double image. This is particularly important because it has been reported that Himax, which makes field sequential LCOS devices, is making projector engines for Magic Leap. So either they are not using Himax or they are changing technology for the actual product. I have seen many years of DLP and LCOS displays, both live and through many types of video and still cameras, and I see nothing that suggests field sequential color is being used.
  3. Laser Beam Scanning with a mirror – As with CRTs and fiber scanning, there has to be a blanking/retrace period between frames, which will show up in the videos as a roll bar (dark and/or light) that would roll/move over time. I’m including this just to be complete, as this was never suggested anywhere with respect to ML.
UPDATE Nov 17, 2016

Based on other evidence that has recently come in, even though I have not found video evidence of field sequential color artifacts in any of the Magic Leap videos, I’m more open to thinking that it could be LCOS or (less likely) DLP, and maybe the camera sensor is doing more to average out the color fields than other cameras I have used in the past.

Display Technologies That it Could Be 

Below is a list of possible technologies that could generate video images consistent with what has been shown by Magic Leap to date, including the still frame above:

  1. Micro-OLED (about 10 known companies) – Very small OLEDs on silicon or similar substrates. A list of some of the known makers is given here at OLED-info (Epson has recently joined this list and I would bet that Samsung and others are working on them internally). Micro-OLEDs both A) are small enough to inject an image into a waveguide for a small headset and B) have display characteristics that behave the way the image in the video is behaving.
  2. Transmissive Color Filter HTPS (Epson) – While Epson was making transmissive color filter HTPS devices, their most recent headset has switched to a Micro-OLED panel, suggesting they themselves are moving away. Additionally, while Meta’s first generation used Epson’s HTPS, they moved away to a large OLED (with a very large spherical reflective combiner). This technology is challenged in going to high resolution and small size.
  3. Transmissive Color Filter LCOS (Kopin) – Kopin is the only company making color filter transmissive LCOS, but they have not been that active of late as a component supplier, and they have serious issues with a roadmap to higher resolution and smaller size.
  4. Color Filter reflective LCOS– I’m putting this in here more for completeness as it is less likely.  While in theory it could produce the images, it generally has lower contrast (which would translate into lack of transparency and a milkiness to the image) and color saturation.   This would fit with Himax as a supplier as they have color filter LCOS devices.
  5. Large Panel LCD or OLED – This would suggest a large headset that is doing something similar to the Meta 2.   Would tend to rule this out because it would go against everything else Magic Leap shows in their patents and what they have said publicly.   It’s just that it could have generated the image in the video.
And the “Winner” is I believe . . . Micro-OLED (see update above) 

By a process of elimination, including getting rid of the “possible but unlikely” ones from above, it strongly points to a Micro-OLED display device. Let me say, I have no personal reason to favor it being Micro-OLED; one could argue, based on my experience, that it might be to my advantage for it to be LCOS, if anything.

Before I started any serious analysis, I didn’t have an opinion. I started out doubtful that it was field sequential or any scanning (fiber/beam) device due to the lack of any indicative artifacts in the video, but it was the “smoking gun frame” that convinced me: if the camera was catching one kind of temporal artifact, it should have been catching the others as well.

I’m basing this conclusion on the facts as I see them. Period, full stop. I would be happy to discuss this conclusion (if asked rationally) in the comments section.

Disclosure . . . I Just Bought Some Stock Based on My Conclusion and My Reasoning for Doing So

The last time I played this game of “what’s inside,” I was the first to identify that a Himax LCOS panel was inside Google Glass, which resulted in their market cap going up almost $100M in a couple of hours. I had zero shares of Himax when this happened; my technical conclusion now, as it was then, is based on what I saw.

Unlike my call on Himax in Google Glass, I have no idea which company makes the device Magic Leap appears to be using, nor whether Magic Leap will change technologies for their production device. I have zero inside information and am basing this entirely on the information I have given above (you have been warned). Not only is the information public, but it is based on videos that are many months old.

I looked at the companies on the OLED Microdisplay List by www.oled-info.com (who have followed OLED for a long time). It turned out all the companies were either part of a very large company or were private, except for one, namely eMagin.

I have known of eMagin since 1998, and they have been around since 1993. They essentially mirror Microvision, which does Laser Beam Scanning and was also founded in 1993, a time when you could go public without revenue. eMagin has spent/lost a lot of shareholder money and is worth about 1/100th of their peak in March 2000.

I have NOT done any serious technical, due diligence, or other stock analysis of eMagin and I am not a stock expert. 

I’m NOT saying that eMagin is in Magic Leap. I’m NOT saying that Micro-OLED is necessarily better than any other technology. All I am saying is that I think someone’s Micro-OLED technology is being used in the Magic Leap prototype, and that Magic Leap is such a hotly followed company that it might (or might not) affect the stock price of companies making Micro-OLEDs.

So, unlike the Google Glass and Himax case above, I decided to place a small “stock bet” (for me) on my ability to identify the technology (but not the company) by buying some eMagin stock on the open market at $2.40 this morning, 2016-11-09 (symbol EMAN). I’m just putting my money where my mouth is, so to speak (and NOT, once again, giving stock advice) and playing a hunch. I’m just making a full disclosure in letting you know what I have done.

My Plans for Next Time

I have some other significant conclusions I have drawn from looking at Magic Leap’s video about the waveguide/display technology that I plan to show and discuss next time.

Near Eye AR/VR and HUD Metrics For Resolution, FOV, Brightness, and Eyebox/Pupil

I’m planning on following up on my earlier articles about AR/VR Head Mounted Displays (HMDs), which also relate to Heads Up Displays (HUDs), with some more articles, but first I would like to get some basic technical concepts out of the way. It turns out that the metrics we care about for projectors, while related, don’t work for measuring HMDs and HUDs.

I’m going to try and give some “working man’s” definitions rather than precise technical definitions. I’ll give a few real world examples and calculations to show you some of the challenges.

Pixels versus Angular Resolution

Pixels are pretty well understood, at least with today’s displays that have physical pixels like LCDs, OLEDs, DLP, and LCOS. Scanning displays like CRTs and laser beam scanning generally have additional resolution losses due to imperfections in the scanning process, and as my other articles have pointed out, they have much lower resolution than the physical pixel devices.

When we get to HUDs and HMDs, we really want to consider the angular resolution, typically measured in “arc-minutes,” which are 1/60th of a degree; simply put, this is the angular size that a pixel covers from the viewing position. Consumers in general haven’t understood arc-minutes, and so many companies have in the past talked in terms of a certain size and resolution display viewed from a given distance; for example, a 60-inch diagonal 1080p display viewed at 6 feet. But since the size of the display, the resolution, and the viewing distance are all variables, it is hard to compare displays or even say what this means for a near eye device.

A common “standard” for good resolution is 300 pixels per inch viewed at 12-inches (considered reading distance) which translates to about one-arc-minute per pixel.  People with very good vision can actually distinguish about twice this resolution or down to about 1/2 an arc-minute in their central vision, but for most purposes one-arc-minute is a reasonable goal.

One thing nice about the one-arc-minute per pixel goal is that the math is very simple.  Simply multiply the degrees in the FOV horizontally (or vertically) by 60 and you have the number of pixels required to meet the goal.  If you stray much below the goal, then you are into 1970’s era “chunky pixels”.
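Here is that rule as a pair of trivial Python helpers (a sketch; they just encode the definitions above and will be reused in the examples below):

```python
def arcmin_per_pixel(fov_degrees: float, pixels: int) -> float:
    """Angular size of one pixel, in arc-minutes, for a given FOV."""
    return fov_degrees * 60 / pixels

def pixels_for_one_arcmin(fov_degrees: float) -> int:
    """Pixels needed across a FOV to hit the one-arc-minute goal."""
    return round(fov_degrees * 60)

# Example: a 40 degree FOV (the eye's generally good discriminating region,
# discussed below) needs 40 * 60 = 2,400 pixels across to hit the goal.
print(pixels_for_one_arcmin(40))  # -> 2400
```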

Field of View (FOV) and Resolution – Why 9,000 by 8,100 pixels per eye are needed for a 150 degree horizontal FOV

As you probably know, the human eye’s retina has variable resolution. The human eye has a roughly elliptical FOV of about 150 to 170 degrees horizontally by 135 to 150 degrees vertically, but the generally good discriminating FOV is only about 40 degrees (+/-20 degrees) wide; reasonably sharp vision, the macula, covers about 17-20 degrees; and the fovea, with the very best resolution, covers only about 3 degrees of the eye’s visual field. The eye/brain processing is very complex, however, and the eye moves to aim the higher resolving part of the retina at a subject of interest; one would want something on the order of the one-arc-minute goal in the central part of the display (and since building a variable resolution display would be a very complex matter, it ends up being the goal for the whole display).

And going back to our 60-inch 1080p display viewed from 6 feet: the pixel size in this example is ~1.16 arc-minutes, and the horizontal field of view will be about 37 degrees, just about covering the generally good resolution part of the eye’s retina.

[Image of the Oculus Rift, from Extreme Tech]

Now let’s consider the latest Oculus Rift VR display. It specs 1200 x 1080 pixels with about a 94 degree horizontal by 93 degree vertical FOV per eye, or a very chunky ~4.7 arc-minutes per pixel; in terms of angular resolution, it is roughly like looking at an iPhone 6 or 7 from 5 feet away (or conversely, like your iPhone’s pixels being 5X as big). To get to the 1 arc-minute per pixel goal, say to virtually simulate viewing today’s iPhones at reading distance, they would need a 5,640 by 5,580 display per eye, or a single OLED panel with about 12,000 by 7,000 pixels (allowing for a gap between the eyes for the optics)! If they wanted to cover the 150 by 135 degree FOV, we are talking 9,000 by 8,100 pixels per eye, or about a 20,000 by 9,000 flat panel requirement.
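Plugging the Rift’s numbers into the one-arc-minute rule above reproduces this paragraph’s figures:

```python
# Oculus Rift: 1200 x 1080 pixels over about 94 x 93 degrees per eye.
print(94 * 60 / 1200)       # ~4.7 arc-minutes per pixel

# Pixels needed at the same FOV for the 1 arc-minute goal:
print(94 * 60, 93 * 60)     # 5640 x 5580 per eye

# To cover the eye's full ~150 x 135 degree FOV:
print(150 * 60, 135 * 60)   # 9000 x 8100 per eye
```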

Not as apparent but equally important is that the optical quality needed to support these types of resolutions would be, if possible at all, exceedingly expensive. You need extremely high precision optics to bring the image into focus from such short range. You can forget about the lower cost and weight Fresnel optics (with their “God ray” issues) used in the Oculus Rift.

We are into what I call “silly number territory” that will not be affordable for well beyond 10 years. There are even questions whether any known technology could achieve these resolutions in a size that could fit on a person’s head, as there are a number of physical limits on the pixel size.

People in gaming are apparently living with this appallingly low (1970’s era TV game) angular resolution for games and videos (although the God rays can be very annoying depending on the content), but clearly it is not a replacement for a good high resolution display.

Now let’s consider Microsoft’s Hololens. Its most criticized issue is its smaller FOV (relative to VR headsets such as Oculus) of about 30 by 17.5 degrees. It has a 1268 by 720 pixel display per eye, which translates into about 1.41 arc-minutes per pixel; while not horrible, that is short of the goal above. If they had used the 1920x1080 (full HD) microdisplay devices which are becoming available, they would have been very near the 1 arc-minute goal at this FOV.
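The same arithmetic applied to Hololens (again just restating the paragraph’s numbers):

```python
# Hololens: ~30 degree horizontal FOV over 1268 pixels per eye.
print(30 * 60 / 1268)   # ~1.42 arc-minutes per pixel
print(30 * 60 / 1920)   # ~0.94 -- near the goal with a full-HD microdisplay
```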

Let’s understand here that it is not as simple as changing out the display; they would also have to upgrade the “light guide” that they use as a combiner to support the higher resolution. Still, this is all reasonably possible within the next few years. Microsoft might even choose to grow the FOV to around 40 degrees horizontally and keep the lower angular resolution with a 1080p display. Most people will not seriously notice a 1.4X difference in angular resolution (but they will at about 2X).

Commentary on FOV

I know people want everything, but I really don’t understand the criticism of the FOV of Hololens. What we see here is a bit of “choose your poison.” With existing affordable (or even not so affordable) technology, you can’t simultaneously support a wide field of view and good angular resolution; it is simply not realistic. One can imagine optics that would let you zoom between a wide FOV with lower angular resolution and a smaller FOV with higher angular resolution. The control of this zooming function could perhaps be driven by the content or by feedback from the user’s eyes and/or brain activity.

Lumens versus Candelas/Meter2 (cd/m2 or nits)

With an HMD or HUD, what we care about is the light that reaches the eye. In a typical front projector system, only an extremely small percentage of the light that goes out of the projector reflects off the screen and makes it back to any person’s eye; the vast majority of the light goes to illuminating the room. With an HMD or HUD, all we care about is the light that makes it into the eye.

Projector lumens, or luminous flux, simply put, are a measure of the total light output, usually measured for a projector when outputting a solid white image. To get the light that makes it to the eye, we have to account for the light that hits the screen and is then absorbed, scattered, and reflected back at an angle that will reach the eye. Only an exceedingly small percentage (a small fraction of 1%) of the projected light makes it into the eye in a typical front projector setup.

With HMDs and HUDs we talk about brightness in terms of candelas per meter squared (cd/m2), also referred to as “nits” (while considered an obsolete term, it is still often used because it is easier to write and say). Cd/m2 (or luminance) is a measure of brightness in a given direction, which tells us how bright the light appears to the eye looking in a particular direction. For a good quick explanation of lumens and cd/m2, I recommend the Compuphase article.


Hololens appears to be “luminosity challenged” (lacking in cd/m2) and has resorted to putting sunglasses-like tinting on the outer shield, even for indoor use. The light blocking shield is clearly a crutch to make up for a lack of brightness in the display. Even with the shield, it can’t compete with bright light outdoors, which is 10 to 50 times brighter than a well lit indoor room.

This of course is not an issue for the VR headsets typified by the Oculus Rift, which totally block the outside light; but it is a serious issue for AR type headsets, as people don’t normally wear sunglasses indoors.

Now let’s consider a HUD display. A common automotive spec for a HUD in sunlight is 15,000 cd/m2, whereas a typical smartphone is between 500 and 600 cd/m2, or about 1/30th the luminosity of what is needed. When you are driving a car down the road, you may be driving in the direction of the sun, so you need a very bright display in order to see it.

The way HUDs work, you have a “combiner” (which may be the car’s windshield) that combines the image being generated with the light from the real world. A combiner typically reflects only about 20% to 30% of the light, which means that the display before the combiner needs on the order of 50,000 to 75,000 cd/m2 to support the 15,000 cd/m2 seen in the combiner. When you consider that your smartphone or computer monitor only has about 400 to 600 cd/m2, it gives you some idea of the optical tricks that must be played to get a display image that is bright enough.
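A few lines of arithmetic show the scale of the problem (a sketch using the numbers above):

```python
# Automotive HUD: 15,000 cd/m2 target as seen through the combiner.
target_nits = 15_000
for reflectance in (0.30, 0.20):   # combiner reflects ~20% to 30%
    print(f"{target_nits / reflectance:,.0f} cd/m2 needed before the combiner")
# -> 50,000 and 75,000 cd/m2, versus ~400-600 cd/m2 for a phone or monitor
```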

You will see many “smartphone HUDs” that simply have a holder for a smartphone and a combiner (semi-mirror), such as the one pictured at right on Amazon or on crowdfunding sites, but rest assured they will NOT work in bright sunlight and will be only marginal in typical daylight conditions. Even with combiners that block more than 50% of the daylight (not really much of a see-through display at that point), they don’t work in daylight. There is a reason why companies are making purpose built HUDs.

The cd/m2 requirement is also a big issue for outdoor head mounted display use. Depending on the application, they may need 10,000 cd/m2 or more, and this can become very challenging with some types of displays while keeping within the power and cooling budgets.

At the other extreme, at night or in a dark room, you might want the display to have less than 100 cd/m2 to avoid blinding the user to their surroundings. Note the SMPTE spec for movie theaters is only about 50 cd/m2, so even at 100 cd/m2 you would be about 2X the brightness of a movie theater. If the device must go from bright sunlight to night use, you could be talking over a 1,500 to 1 dynamic range, which turns out to be a non-trivial challenge to do well with today’s LEDs or lasers.

Eye-Box and Exit Pupil

Since AR HMDs and HUDs generate images for a user’s eye in a particular place, yet need to compete with the ambient light, the optical system is designed to concentrate light in the direction of the eye. As a consequence, the image will only be visible within a given solid angle “eye-box” (with HUDs) or “exit pupil” (with near eye displays). There is also a trade-off between the size of the eye-box or pupil and ease of use: the bigger the eye-box or pupil, the easier the device will be to use.

With HUD systems there can be a pretty simple trade-off between eye-box size, cd/m2, and the lumens that must be generated. Using some optical tricks can help keep from needing an extremely bright and power hungry light source. Conceptually, a HUD is in some ways like a head mounted display but with very long eye relief. With such large eye relief and the ability of the person to move their whole head, the eye-box for a HUD is significantly larger than the exit pupil of near eye optics. Because the eye-box is so much larger, a HUD is going to need much more light to work with.

For near eye optical design, getting a large exit pupil is a more complex issue as it comes with trade-offs in cost, brightness, optical complexity, size, weight, and eye-relief (how far the optics are from the viewer’s eye).

With too small a pupil and/or more eye-relief, a near eye device is difficult to use, as any small movement of the device causes you to not be able to see the whole image. Most people’s first encounter with an exit pupil is with binoculars or a telescope, where the image cuts off unless the optics are centered well on the user’s eye.

Conclusions

While I can see that people are excited about the possibilities of AR and VR technologies, I still have a hard time seeing how the numbers add up, so to speak, for what I would consider a mass market product. I see people being critical of Hololens’ lower FOV without being realistic about how it could go higher without drastically sacrificing angular resolution.

Clearly there are product niches where these devices could serve, but I think people have unrealistic expectations for how fast the field of view can grow for a product like Hololens. For “real work,” I think the lower field of view and higher angular resolution approach (as with Hololens) makes more sense for more applications. Maybe game players in the VR space are more willing to accept 1970’s type angular resolution, but I wonder for how long.

I don’t see any technology that will be practical in high volume (or even at very expensive low volume) that is going to simultaneously deliver the angular resolution and FOV that some people want. AR displays are also often brightness challenged, particularly for outdoor use. Layered on top of these issues are size, weight, cost, and power consumption, which we will have to save for another day.

 

Wrist Projector Scams – Ritot, Cicret, the new eyeHand

Wrist projectors are the crowdfunding scams that keep on giving, with new ones cropping up every 6 months to a year. When I say scam, I mean that there is zero chance that they will ever deliver anything even remotely close to what they are promising. They have obviously “Photoshopped”/fake pictures to “show” projected images that are not even close to possible in the real world and violate the laws of physics (they are forever impossible). While I have pointed out in this blog where I believe that Microvision has lied to and misled investors and shown very fake images with their laser beam scanning technology, even they are not total scammers like Ritot, Cicret, and eyeHand.

According to Ritot’s Indiegogo campaign, they have taken in $1,401,510 from 8,917 suckers (they call them “backers”). Cicret, according to their website, has a haul of $625,000 from 10,618 gullible people.

Just when you might think that Ritot and Cicret had found all the suckers for wrist projectors, CrowdFunder reports that eyeHand has raised $585,000 from individuals and claims to have raised another $2,500,000 in equity from “investors” (if they are real, then they are fools; if not, then it is just part of the scam). A million here, $500K there, and pretty soon you are talking real money.

Apparently Dell’s marketing is believing these scams (I would hope their technical people know better) and has shown video ads with similar impossible projectors. One thing I will give them is that they did a more convincing “simulation” (no projecting of “black”), and they say in the ads that these are “concepts” and not real products. See for example the following stills from Dell’s videos (click to see larger image). It looks to me like they combined a real projected image (with the projector off camera and perpendicular to the arm/hand) and then added fake projector rays to try and suggest it came from the dummy device on the arm.

[Image: stills from Dell’s concept videos]

Ritot was the first of these scams I was alerted to, and I helped contribute some technical content to the DropKicker article http://drop-kicker.com/2014/08/ritot-projection-watch/. I am the “Reader K” that they thanked in the author’s note at the beginning of the article. A number of others have called out Ritot and Cicret as being scams, but that did not keep them from continuing to raise money, nor has it stopped the new copycat eyeHand scam.

Some of the key problems with the wrist projectors:

  1. Very shallow angle of projection. Projectors normally project onto a surface that is perpendicular to the direction of projection, but the wrist projectors have to project onto a surface that is nearly parallel to the direction of projection. Their concepts show a projector that is only a few (2 to 4) millimeters above the surface. When these scammers later show “prototypes,” they radically change the projection distance and projection angle.
  2. Extremely short projection distance. The near side of the projection is only a few millimeters away while the far side of the image could be 10X or 50X further away. There are no optics or laser scanning technology on earth that can do this; there is no way to get such a wide image at such a short distance from the projector. As light falls off with the square of distance, this results in an impossible illumination problem, with the far side being over 100X dimmer than the near side (see the falloff sketch after this list).
  3. Projecting in ambient light. All three of the scammers show concept images where the projected image is darker than the surrounding skin. This is absolutely impossible and violates the laws of physics. The “black” of the image is set by the ambient light and the skin; the projector can only add light, and it is impossible to remove light with a projector. This shows ignorance and/or a callous disregard for the truth by the scammers.
  4. The blocking of the image by hairs, veins, and muscles.  At such a shallow angle (per #1 above) everything is in the way.
  5. There is no projector small enough. The existing projector engines, with their electronics, are more than 20X bigger in volume than what would be required to fit.
  6. The size of the orifice through which the light emerges is too small to support the size of the image that they want to project.
  7. The battery required to make them daylight readable would be bigger than the whole projector that they show. These scammers would have you believe that a projector could work off a trivially small battery.
  8. Cicret and eyeHand show “touch interfaces” that won’t work due to the shallow angle. The shadows cast by fingers working the touch interface would block the light to the rest of the image and make “multi-touch” impossible. This also goes back to the shallow angle issue in #1 above.

The issues above hold true whether the projection technology uses DLP, LCOS, or Laser Beam Scanning.
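To see how badly the inverse square law alone treats these concepts, consider issue #2 in numbers (a sketch; the distance ratios come from the list above, and this ignores the shallow angle, which makes things even worse):

```python
# Light falls off with the square of distance, so if the far edge of the
# image is R times farther from the lens than the near edge, it receives
# only 1/R^2 of the illumination.
for ratio in (10, 50):
    print(f"{ratio}x farther -> {ratio**2:,}x dimmer")
# -> 10x farther = 100x dimmer; 50x farther = 2,500x dimmer
```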

Cicret and Ritot have both made “progress reports” showing stills and videos using projectors more than 20 times bigger, and mounted much higher and farther away (to reduce the projection angle), than the sleek wristwatch models in their 3-D CAD renderings. Even then, they keep off-camera much/most of the electronics and the battery/power-supply needed to drive the optics that they show.

The image below is from a Cicret “prototype” video from February 2015 where they simply strapped a Microvision ShowWX+ HDMI projector upside down to a person’s wrist (I wonder how many thousands of dollars they spent engineering this prototype). They goofed in the video and showed enough of the projector that I could identify (red oval) the underside of the Microvision projector (the video also shows the distinctive diagonal roll bar of a Microvision LBS projector). In the image below I have shown, roughly to scale, the rest of the projector that they cropped off when shooting the video. What you can’t tell in this video is that the projector is also a couple of inches above the surface of the arm in order to project a reasonable image.

[Image: Cicret “prototype” frame with the rest of the Microvision projector drawn to scale]

So you might think Cicret was going to use laser beam scanning, but no, their October 2016 “prototype” shows a panel-based (DLP or LCOS) projector. Basically, it looks like they are just clamping whatever projector they can find to a person’s wrist; there is no technology they are developing. In this latest case, it looks like what they have done is take the guts out of a small production projector and put them in a 3-D printed case. Note that the top of the case is going to be approximately 2 inches above a person’s wrist, and note how far away the image is from the projector.

[Image: Cicret October 2016 “prototype”]

Ritot has also made updates to keep their suckers on the hook. Apparently Indiegogo’s only rule is that you must keep lying to your “backers” (for more on the subject of how Indiegogo condones fraud, click here). These updates at best show how little these scammers understood projection technology. I guess one could argue that they were too incompetent to know they were lying.

On the left is a “demo” Ritot showed in 2014 after raising over $1M. It is simply an off-the-shelf development system projector, and note that there is no power supply. Note also that they are showing it straight on/perpendicular to the wrist from several inches away.

By 2015 Ritot had their own development system and some basic optics. Notice how big the electronics board is relative to the optics, and that even this does not show the power source.

By April 2016 they showed an optical engine (ONLY) strapped to a person’s wrist. Cut off in the picture are all the video drive electronics (see the flex cable in the red oval) that are off camera, likely a driver board similar to the one in the 2015 update, plus the power supplies/battery.

In the April 2016 picture you should notice how the person’s wrist is bent to make it more perpendicular to the direction of the projected image. Also note that the image is distorted and about the size of an Apple Watch’s image. I will also guarantee that you will not have a decent viewable image when used outdoors in daylight.

The eyeHand scam has not shown anything like a prototype, just a poorly faked (projecting “black”) image. From the low angle they show in their fake image, the projection would be blocked by the base of the thumb even if the person held their hand flat. To make it work at all, they would have to move the projector well up the person’s arm and then bend the wrist, but then the person could not view it very well unless they held their arm at an uncomfortable angle. Then you have the problem of keeping the person from moving/relaxing their wrist and losing the projection surface. And of course it would not be viewable outdoors in daylight.

It is not as if others haven’t tried to point out that these projectors are scams. Google search “Ritot scam” or “Cicret scam” and you will find a number of references. As best I can find, this blog is the first to call out the eyeHand scam:

  • The most technically in depth article was by Drop-Kicker on the Ritot scam
  • Captain Delusional has a comic take on the Cicret scam on YouTube – He has some good insights on the issue of touch control but also makes some technical mistakes, such as his comments on laser beam scanning (you can’t remove the laser scanning roll-bar by syncing the camera; also, laser scanning has the same fall-off in brightness due to the scanning process).
  • Geek Forever had an article on the Ritot Scam 
  • A video about the Ritot Scam on Youtube
  • KickScammed about Ritot from 2014

The problem with scam startups is that they tarnish all the other startups trying to find a way to get started. Unfortunately, the best liars/swindlers often do the best with crowdfunding: the more they are willing to lie/exaggerate, the better it makes their product sound.

Indiegogo has proven time and again to have extremely low standards (basically, if the company keeps posting lies, they are good to go; MANY people tried to tell Indiegogo about the Ritot scam, but to no avail, before Ritot got the funds). Kickstarter has some standards, but the bar is not that high; at least I have not seen a wrist projector on Kickstarter yet. Since the crowdfunding sites get a cut of the action whether the project delivers or not, their financial incentives are on the side of the companies rather than the people funding them. There is no bar at all for companies that go with direct websites; it is purely caveat emptor.

I suspect that since the wrist projector scam has worked at least three (3) times so far, we will see others using it. At least with eyeHand you have a good idea of what it will look like in two years (hint: like Ritot and Cicret).

Laser Beam Scanning Versus Laser-LCOS Resolution Comparison


Side By Side Center Patterns (click on image for full size picture)

I apologize for being away for so long. The pictures above and below were taken over a year ago; I meant to format and publish them back then, but some other business and life events got in the way.

The purpose of this article is to compare the resolution of the Celluon PicoPro Laser Beam Scanning (LBS) projector and the UO Smart Beam laser-illuminated LCOS projector. This is not meant to be a full review of either product, although I will make a few comments here and there; rather, it is to compare the resolution of the two products. Both projectors claim to have 720P resolution, but only one of them actually has that “native/real” resolution.

This is in a way a continuation of the series I have written about the PicoPro, with optics developed by Sony and the beam scanning mirror and control by Microvision, in particular the articles http://wp.me/p20SKR-gY and http://wp.me/p20SKR-hf. With this article I am now including some comparison pictures I took of the UO Smart Beam projector (https://www.amazon.com/UO-Smart-Beam-Laser-Projector-KDCUSA/dp/B014QZ4FLO).

As per my prior articles, the Celluon PicoPro has nowhere close to its stated 1920×720 (non-standard) resolution, nor even 1280×720 (720P). The UO projector, while not perfect, does demonstrate 720P resolution reasonably well, but it does suffer from chroma aberrations (color separation) at the top of the image due to its 100% optical offset (this is to be expected to some extent).

Let me be up front: I worked on the LCOS panel used in the UO projector when I was at Syndiant, but I had nothing to do with the UO projector itself. Take that as bias if you want, but I think the pictures tell the story. I did not have any contact with either UO (nor Celluon for that matter) in preparing this article.

I also want to be clear that both the UO projector and the Celluon PicoPro tested are now over 1 year old, and there may have been improvements since then. I saw serious problems with both products, in particular with the color balance: the Celluon is too red (“white” is pink) and the UO very red deficient (“white” is significantly blue-green). The color is so far off on the Celluon that it would be a show stopper for me ever wanting to buy one as a consumer (hopefully UO has fixed or will fix this). Frankly, I think both projectors have serious flaws (if you want to know more, ask and I will write a follow-up article).

The UO Smart Beam has the big advantage of “100% offset,” which means that when placed on a table top it will project upward without hitting the table and without any keystone distortion. The PicoPro has zero offset and shoots straight out: if you put it flat on a table, the lower half of the image will shoot into the tabletop. Celluon includes a cheap and rather silly monopod that you can use to have the projector “float” above the table surface and then tilt it up, which gives a keystoned image. To take the picture, I had to mount the PicoPro on a much taller tripod and then shoot over the projector so the image would not be keystoned.

I understand that the next generation of the Celluon and the similar Sony MPCL1 projector (which has a “kickstand”) have “digital keystone correction,” which is not as good a solution as 100% offset because it reduces the resolution of the image; this is the “cheap/poor” way out, and they really should have 100% offset like the UO projector (interestingly, the earlier, lower resolution Microvision ShowWX projector had 100% offset).

For the record, I like the Celluon PicoPro’s flatter form factor better; I’m not a fan of the UO cube, as it hurts the ability to put the projector in one’s pocket or a typical carrying bag.

Both the PicoPro with laser scanning and the Smart Beam with lasers illuminating an LCOS microdisplay have no focus knob and have a wide focus range (from about 50cm/1.5 feet to infinity), although both are less sharp at the closer range. The PicoPro with LBS is a Class 3R laser product, whereas the Smart Beam with laser “illumination” of LCOS is only Class 1. The measured brightness of the PicoPro was about 32 lumens as rated when cold but dropped under 30 when heated up. The UO, while rated at 60 lumens, was about 48 lumens when cold and about 45 when warmed up, significantly below its “spec.”

Now onto the main discussion of resolution.  The picture at the top of this article shows the center crop from a 720P test pattern generated by both projectors, with the Smart Beam image on the left and the PicoPro on the right.  There is also an inset of the Smart Beam’s 1 pixel wide test pattern near the PicoPro’s 1 pixel wide pattern for comparison.  The test pattern shows a series of 1 pixel, 2 pixel, and 3 pixel wide horizontal and vertical lines.

What you should hopefully notice is that the UO clearly resolves even the 1 pixel wide lines and its black lines are black, whereas on the PicoPro the 1 pixel wide lines are at best blurry and even the 2 and 3 pixel wide lines do not get to a very good black level (as in, the contrast is very poor).  Also note that the center is the very best case for the Celluon LBS, whereas for the UO with its 100% offset it is a medium case (the best case is the lower center).

The worst case for both projectors is one of the upper corners, and below is a similar comparison of their upper right corners.  As before, I have included an inset of the UO’s single pixel image.

[Image: ur-img_9783-celluon-with-uo-overlay]

Side By Side Center Patterns (click on image for full size picture)

What you should notice is that while there are still distinct 1 pixel wide lines in both directions in the UO projector, the 1 pixel wide lines from the Celluon LBS are a blurry mess.  It clearly can’t resolve 1 pixel wide lines at 720P.

Because of the 100% offset optics, the best case for the UO projector is at the bottom of the image (this is true of almost any 100% offset optics), and this case is not much different from the center case for the Celluon projector (see below):

[Image: lcen-celluon-with-uo-overlay]

Below is a side by side picture I took (click on it for a full size image). The camera’s “white point” was set to an average between the two projectors (the Celluon is too red/blue-green deficient and the UO is red deficient). The image below is NOT what I used for the cropped test patterns above, as the 1 pixel features were too near the resolution limit of the Canon 70D camera (5472 by 3648 pixels).  Instead, I used individual shots from each projector so the camera would double “sample” the projected images.

[Image: side-by-side-img_0339-celluon-uo]

For the Celluon PicoPro image I used the picture below (originally taken in RAW but digital lens corrected, cropped, and later converted to JPG for posting – click on image for full size):

[Image: img_9783-celluon-with-uo-overlay]

For the UO Smart Beam image, I used the following image (also taken in RAW, digital lens corrected, straightened slightly, cropped, and later converted to JPG for posting):

[Image: img_0231-uo-test-chart]

As is my usual practice, I am including the test pattern (in lossless PNG format) below for anyone who wants to verify and/or challenge my results:

[Test pattern: interlace res-chart-720P G100A]

I promise I will publish any pictures by anyone that can show better results with the PicoPro or any other LBS projector (or the UO projector for that matter) with the test pattern (or similar) above.  I went to considerable effort to take the best possible PicoPro image that I could with a Canon 70D camera.

Desperately Seeking the Next Big Thing – Head Mounted Displays (HMDs) — Part 1

With Microsoft’s big announcement of HoloLens and spending a reported $150 million just for HMD IP from the small Osterhout Design Group, reports of Facebook spending about $2 billion for Oculus Rift, and the mega publicity surrounding Google Glass and the hundreds of millions they have spent, Head Mounted Displays (HMDs) are certainly making big news these days.

Most of the articles I have seen pretty much just parrot the company press releases and hype these up as the next big thing.  Many of the articles have, to say the least, dubious technical content and at worst give misinformation.  My goal is to analyze the technology, and much of what I am seeing and hearing does not add up.

The question is whether these are lab experiments with big budgets, with companies jumping the gun and chasing each other, or whether HMDs really are going to be big in terms of everyone using them.  Or are the companies just running scared that they might miss the next big thing after cell phones and tablets?  Will they reach numbers rivaling cell phones (or at least a significant fraction)?  Or perhaps is there a “consolation prize market,” which for HMDs would be taking a significant share of the game market?

Let me get this out of the way: yes, I know there is a lot of big money and many smart people working on the problem.  The question is whether the problem is bigger than what is solvable.  I know I will hear from all the people with 20/20 hindsight citing all the successful analogies (often Apple), but for every success there are many more that failed to catch on in a big way, or that had minor success and then dived.  As examples, consider the investment in artificial intelligence (AI) and related computing in the 1980s, or the Intel iAPX 432 (once upon a time Intel was betting the farm on the 432 to be the replacement for the 8086, until the IBM PC took off).  More recently and more directly related, 3-D TV has largely failed.  My point here is that big companies and lots of smart people make the wrong call on future markets all the time; sometimes the problem is bigger than all the smart people and money can solve.

Let me be clear, I am not talking about HMDs used in niche/dedicated markets.  I definitely see uses for HMDs in applications where hands-free use is a must.  A classic example is military applications, where a soldier has to keep his hands free, is already wearing a helmet that messes up his hair, doesn’t care what he looks like, and spends many hours in training.  There are also uses for HMDs in the medical field for doctors as a visual aid and for helping people with impaired vision.  What I am talking about is whether we are on the verge of mass adoption.

Pardon me for being a bit skeptical, but on the technical side I still see some tremendous obstacles for HMDs.  As I pointed out on this blog soon after Google Glass was announced (http://www.kguttag.com/2012/03/03/augmented-reality-head-mounted-displays-part-1-real-or-not/), HMDs have a very long history of not living up to expectations.

I personally started working on an HMD in 1998 and learned about many of the issues and problems associated with them.  There are the obvious measurable issues like size, weight, fit/comfort, whether you can wear them with your glasses, display resolution, brightness, ruggedness, storage, and battery life.  Then there are what I call the “social issues,” like how geeky they look, whether they mess up a person’s hair, and taking video (a particularly hot topic with Google Glass).  But perhaps the most insidious problems are what I lump into the “user interface” category, which includes input/control, distraction/safety, nausea/disorientation, and what I loosely refer to as “it just doesn’t work right.”  These issues only just touch on what I sometimes jokingly refer to as “the 101 problems with HMDs.”

A lot is made of the display device itself, be it a transmissive LCD, liquid crystal on silicon (LCOS), OLED, or TI’s DLP.  I have about 16 years of history working on display devices, particularly LCOS, and I know the pros and cons of each one in some detail.  But as it turns out, the display device and its performance are among the least of the issues with HMDs; I had a very good LCOS device way back in 1998.  As with icebergs, the biggest problems are the ones below the surface.

This first article is just to set up the series.  My plan is to go into the various aspects of and issues with HMDs, trying to be as objective as I can with a bit of technical analysis.  My next article will be on the subject of “One eye, two eyes, transparent or not.”

Whatever happened to pico projectors embedding in phones?

Back around 2007 when I was at Syndiant we started looking at the pico projector market.  We talked to many of the major cell phone companies as well as a number of PC companies, and almost everyone had at least an R&D program working on pico projectors.  Additionally, there were market forecasts for rapid growth of embedded pico projectors in 2009 and beyond.  This convinced us to develop small liquid crystal on silicon (LCOS) microdisplays for embedded pico projectors.  With so many companies saying they needed pico projectors, it seemed like a good idea at the time.  How could so many people be wrong?

Here we are 6 years later and there are almost no pico projectors embedded in cell phones, or much else for that matter.  So what happened?  Well, just about the same time we started working on pico projectors, Apple introduced their first iPhone.  The iPhone overnight roughly tripled the screen size of a smartphone such as a Blackberry.  Furthermore, Apple introduced ways to control the screen (pinch/zoom, double clicking to zoom in on a column, etc.) to make better use of what was still a pretty small display.  Then, to make matters much worse, Apple introduced the iPad, and the tablet market took off almost instantaneously.  Today we have larger phones, so-called “phablets,” and small tablets filling in just about every size in between.

Additionally, as I have written before, the use model of a cell phone pico projector shooting on a wall doesn’t work.  There is rarely, if ever, a dark enough place with something that will work well as a screen in a location that is convenient.

I found that to use a pico projector I had to carry a screen with me (at least a white piece of paper mounted on a stiff board in a plastic sleeve to keep it clean and flat).  Then you have the issue of holding the screen up so you can project on it, and of finding a dark enough place that the image looks good.  By the time you carry a pico projector and a screen with you, a thin iPad/tablet works better: you can carry it around the room with ease and you don’t need a very dark environment.

The above is the subjective analysis, and the rest of this article will give some more quantitative numbers.

The fundamental problem with a front projector is that it has to compete with ambient light, whereas flat panels have screens that generally absorb 91% to 96% of the ambient light (thus they look dark when off).  While display makers market contrast numbers, these very high contrast numbers assume a totally dark environment; in the real world what counts is the net contrast, that is, the contrast factoring in ambient light.

Displaymate has an excellent set of articles (including SmartPhone Brightness Shootout, Mobile Brightness Shootout 2, and Smartphone Shootout 2) on the subject of what they call “Contrast Rating for High Ambient Light” (CRHAL), which they define as the display brightness per unit area (in candelas per meter squared, also known as “nits”) divided by the percentage of ambient light reflected by the display.

Displaymate’s CRHAL is not a “contrast ratio,” but it gives a good way to compare displays in reasonable ambient light.  Also important is that for a front projector it does not take much ambient light to dominate the contrast.  For a front projector, even dim room light is “high ambient light.”

The total light output of a projector is given in lumens, so to compare it to a cell phone or tablet we have to know how big the projected image will be and the type of screen.  We can then compute the reflected light in nits using the formula: nits (candelas/m²) = Gn × (lumens/m²) / π, where Gn is the gain of the screen and π ≈ 3.1416.  If we assume a piece of white paper with a gain of 1 (about right for good printer paper), then all we have to do is calculate the screen area in square meters, divide the lumens by that area, and then divide by π.

A pico projector projecting a 16:9 (HDTV aspect ratio) image on a white sheet of notebook paper (with a gain of about 1) gives an 8.8-inch by 5-inch image with an area of 0.028 m² (about the same area as an iPad2, which I will use for comparison).  Plugging a 20 lumen projector into the equation above with a 0.028 m² screen of gain 1.0, we get 227 nits.  The problem is that the same screen/paper also reflects (diffusely) about 100% of the ambient light.  Using Displaymate’s CRHAL we get 227/100 = 2.27.

Now compare the pico projector numbers to an iPad2 of the same display area, which according to Displaymate has 410 nits and reflects only 8.7% of the ambient light.  The CRHAL for the iPad2 is 410/8.7 = 47.  What really crushes the pico projector, by about 20 to 1 on the CRHAL metric, is that the flat panel reflects less than a 10th of the ambient light, whereas the pico projector’s image has to fight 100% of it.
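
To make the arithmetic above easy to check (or to rerun with other numbers), here is a minimal Python sketch of the nits and CRHAL calculations; the function names are my own, purely for illustration:

    import math

    def projected_nits(lumens, area_m2, gain=1.0):
        # Reflected luminance of a front-projected image:
        # nits = gain * (lumens per square meter) / pi
        return gain * lumens / (area_m2 * math.pi)

    def crhal(nits, reflectivity_percent):
        # Displaymate's Contrast Rating for High Ambient Light:
        # brightness in nits divided by percent ambient reflectivity
        return nits / reflectivity_percent

    area = 0.028  # 8.8-inch by 5-inch (16:9) image on paper, in square meters

    pico = projected_nits(20, area)   # ~227 nits for a 20 lumen projector
    print(crhal(pico, 100.0))         # ~2.27 (paper reflects ~100% of ambient)
    print(crhal(410, 8.7))            # ~47 for the iPad2 (410 nits, 8.7%)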

In terms of contrast, to get a barely “readable” B&W image you need at least 1.5:1 contrast (the “white” needs to be 1.5 times brighter than the black) and preferably more than 2:1.  To have moderately good (but not great) color you need 10:1 contrast.

A well-lit room has about 100 to 500 lux (see Table 1 at the bottom of this article) and a bright “task area” up to 1,500 lux.  If we take 350 lux as a “typical” room, then the sheet-of-paper screen receives about 10 lumens of ambient light over the 0.028 m² image used above.  Thus our 20 lumen projector on top of the 10 lumens of ambient has a contrast ratio of 30/10, or about 3 to 1, which means the colors will be pretty washed out but black-on-white text will be readable.  To get reasonably good (but not great) color with a contrast ratio of 10:1 we would need about 80 lumens.  By the same measure, the iPad2 in the same lighting would have a contrast ratio of about 40:1, or over 10x the contrast of a 20 lumen pico projector.  And the brighter the lighting environment, the worse the pico projector compares.  Even if we double or triple the lumens, the pico projector can’t compete.
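
For anyone who wants to rerun this contrast estimate with their own numbers (per the next paragraph), here is the same style of minimal sketch; again, the helper name is my own:

    def projector_contrast(proj_lumens, ambient_lux, area_m2):
        # Ambient light falling on the image area, in lumens
        ambient_lumens = ambient_lux * area_m2
        # Contrast ratio: (projector light + ambient) versus ambient alone
        return (proj_lumens + ambient_lumens) / ambient_lumens

    area = 0.028                                # same image area as above
    print(projector_contrast(20, 350, area))    # ~3:1, washed-out color
    print(projector_contrast(80, 350, area))    # ~9:1, passable color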

With the information above you can plug in whatever numbers you want for brightness and screen size, and no matter what reasonable numbers you use, you will find that a pico projector can’t compete with a tablet even in moderate lighting conditions.

And all this is before considering the power consumption and space a pico projector would take.  After working on the problem for a number of years, it became clear that rather than adding a pico projector with its added battery, phone makers would be better off just making the display bigger (a la the Galaxy S3 and S4 or even the Note).  The microdisplay devices created would have to look to other markets such as near eye (for example, Google Glass) and automotive heads-up displays (HUD).

Table 1.  Typical Ambient Lighting Levels (from Displaymate)

Brightness Range / Description

0 – 100 lux: Pitch black to dim interior lighting
100 – 500 lux: Residential indoor lighting
500 – 1,500 lux: Bright indoor lighting: kitchens, offices, stores
1,000 – 5,000 lux: Outdoor lighting in shade or an overcast sky
3,000 – 10,000 lux: Shadow cast by a person in direct sunlight
10,000 – 25,000 lux: Full daylight not in direct sunlight
20,000 – 50,000 lux: Indoor sunlight falling on a desk near a window
50,000 – 75,000 lux: Indoor direct sunlight through a window
100,000 – 120,000 lux: Outdoor direct sunlight

Himax FSC LCOS in Google Glass — Seeking Alpha Article

[Image: Catwig to Himax Comparison]

This blog was the first to identify that there was a Himax panel in an early Google Glass prototype and the first to identify that there was a field sequential color LCOS panel inside Google Glass.  Given the connection, it was a reasonable speculation, but there was no proof that Himax was in Google Glass.

Then when Catwig published a teardown of Google Glass last week (and my inbox lit up with people telling me about the article), there were no Himax logos to be seen, which started people wondering whether there was indeed a Himax display inside.  As a result of my prior exclusive finds on Himax, LCOS, and Google Glass, I was asked to contribute to Seeking Alpha, and I just published an article that details my proof that there is a Himax LCOS display inside the current Google Glass.  In that article, I also discounted some recent speculation that Google Glass was going to use a Samsung OLED microdisplay anytime soon.


Extended Temperature Range with LC Based Microdisplays


Extreme Car Temperatures

A reader, Doug Atkinson, asked a question about meeting extended temperature ranges with LC-based microdisplays, particularly with respect to Kopin.  He asked the classic “car dash in the desert and the trunk in Alaska” question.  I thought the answer would have broader interest, so I decided to answer it here.

Kopin wrote a good paper on the subject in 2006 titled “A Normally Black, High Contrast, Wide Symmetrical Viewing Angle AMLCD for Military Head Mounted Displays (HMDs) and Other Viewer Applications.”  This paper is the most detailed one readily available describing how Kopin’s transmissive panels meet military temperature and shock requirements.  It is not clear that Kopin uses this same technology for their consumer products, as the paper specifically addresses what Kopin did for military products.

With respect to LC microdisplays in general, it should be realized that in most cases there is not a huge difference in the technical specs between the liquid crystals most small-panel microdisplays use and those in large flat panels.  They often just use different “blends” of very similar materials.  There are some major LC differences, including TN (twisted nematic), VAN (vertically aligned nematic), and others.  Field sequential color devices are biased toward faster-switching LC blends.

In general, anywhere a large flat panel LC can go, a microdisplay LC can go.  The issue is designing the seals and other materials/structures to withstand the temperature cycling and mechanical shock, which requires testing, experimentation, and development.

The liquid crystals themselves generally go through different phases, from freezing (which is generally fatal) to heating up to the “clearing point” where the display stops working (but is generally recoverable).  There is also a different spec for “storage temperature range” versus “operating temperature range.”  Generally it is assumed the device only has to work in a temperature range in which a human could survive.

At low temperature the LC gets “sluggish” and does not operate well, but this can be cured by various heater mechanisms, including heaters designed into the panel itself.  The liquid crystal blends are often designed/picked to work best at a higher temperature range because it is easier to heat than to cool.

Field sequential color LCOS is more affected by temperature change because temperature affects not only the LC characteristics but also the switching speed.  Once again, this can be dealt with by designing for the higher temperature range and then heating if necessary.

As far as Kopin’s “brightness” goes (another of Doug’s questions), a big factor is how powerful/bright the backlight has to be.  The Kopin panel blocks something like 98.5% of the light by their own specs.  What you can get away with in a military headset is different from what you may accept in a consumer product in terms of size, weight, and power consumption.  Brightness in daylight is a well-known (inside the industry) issue for Kopin’s transmissive panels and one reason near-eye display makers have sought out LCOS.

[As an aside for completeness about FLC]  Displaytech (which was sold to Micron and then to Citizen Finetech Miyota) and Forth Dimension Display (FDD, which Kopin bought) both use ferroelectric LC (FLC/FLCOS), which does have a dramatically different temperature profile: it comes very near “freezing” (going into a solid state) a little below 0°C, which would destroy the device.  Displaytech claimed (I don’t know about FDD) that they had extended the low temperature range, but I don’t know by how much.  The point is that the temperature range of FLC is so different that meeting military specs is much more difficult.