Archive for Laser Projection

Microvision Laser Beam Scanning: Everything Old Is New Again

Reintroducing a 5 Year Old Design?

Microvision, the 23 year old "startup" in Laser Beam Scanning (LBS), has been a fun topic on this blog since 2011. They are a classic example of a company that tries to make big news out of what other companies would not consider newsworthy.

Microvision has been through a lot of "business models" in their 23 years. They have sold "engines," built whole products (the ShowWX), tried a licensing model with Sony selling the engines, and now, with their latest announcement, "MicroVision Begins Shipping Samples to Customers of Its Small Form Factor Display Engine," they are back to selling "engines."

The funny thing is this "new" engine doesn't look very different from the "old" engine it was peddling about 5 years ago. Below I have shown 3 Microvision laser engines from 2017, 2012, and 2013 at roughly the same scale, and they all look remarkably similar. The 2012 and 2017 engines are from Microvision, and the 2013 engine was inside the 2013 Pioneer aftermarket HUD. The Pioneer HUD appears to use a nearly identical engine that is within 3mm of the length of the "new" engine.


The "new" engine is smaller than the 2014 Sony engine, shown at left, that used 5 lasers (two red, two green, and one blue) to support higher brightness and higher power with lower laser speckle. It appears that the "new" Microvision engine is at best a slightly modified 2012 model, with maybe some minor modifications and newer laser diodes.

What is missing from Microvision's announcement is any measurable/quantifiable performance information, such as brightness (lumens) and power consumption (Watts). In my past studies of Microvision engines, they have proven to have much worse lumens per Watt than other (DLP and LCOS) technologies. I have also found their measurable resolution to be considerably less (about half, both horizontally and vertically) than their claimed resolution.

While Microvision says, "The sleek form factor and thinness of the engine make it an ideal choice for products such as smartphones," one needs to understand that the size of the optical engine with its drive electronics is about equal to the entire contents of a typical smartphone. And the projector generally consumes more power than the rest of the phone, which makes it both a battery size and a heat issue.

Magic Leap – Fiber Scanning Display Follow UP

Some Newer Information On Fiber Scanning

Through some discussions and further searching, I found some more information about Fiber Scanning Displays (FSD) that I wanted to share. If anything, this material further supports the contention that Magic Leap (ML) is not going to have a high resolution FSD anytime soon.

Most of the available images are about fiber scanning used as an endoscope camera and not as a display device. The images are of things like body parts, so they really don't show resolution or the amount of distortion in the image. Furthermore, most of the images are from 2008 or older, which leaves quite a bit of time for improvement. I have, however, found some information generated in the 2014 to 2015 time frame that I would like to share.

Ivan Yeoh’s 2015 PhD dissertation

In terms of more recent fiber scanning technology, Ivan Yeoh's name seems to be a common link. Shown at left is a laser projected image and the source test pattern from Ivan Yeoh's 2015 PhD dissertation "Online Self-Calibrating Precision Scanning Fiber Technology with Piezoelectric Self-Sensing" at the University of Washington. It is the best quality image of a test pattern or known image that I have found of a FSD anywhere. The dissertation is about how to use feedback to control the piezoelectric drive of the fiber. While his paper is about endoscope calibration, he nicely included this laser projected image.

The drive resulted in 180 spirals, which would nominally be 360 pixels across at the equator of the image, with a 50Hz frame rate. But based on the resolution chart, the effective resolution is about 1/8th of that, or only ~40 pixels, though about half of this "loss" is due to resampling a rectilinear image onto the spiral. You should also note that there is considerably more distortion in the center of the image, where the fiber is moving more slowly.
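
To make these numbers concrete, here is the arithmetic as a small Python sketch (my own back-of-the-envelope calculation, not from the dissertation):

    # Back-of-the-envelope numbers for a 180-spiral scan
    # (illustrative only; this calculation is mine, not Yeoh's).
    spirals_per_frame = 180      # spiral orbits per frame
    frame_rate_hz = 50           # frames per second

    # Each orbit crosses the equator of the image twice, so the
    # nominal sample count across the widest part is:
    nominal_pixels_across = spirals_per_frame * 2    # = 360

    # Nyquist: ~2 samples are needed per line pair, so the best
    # resolvable detail is about half the sample count:
    nyquist_pixels_across = nominal_pixels_across / 2    # = 180

    # Measured from the published projected test pattern (my estimate):
    measured_pixels_across = 40
    print(nominal_pixels_across, nyquist_pixels_across, measured_pixels_across)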

Yeoh also included some good images, at right, showing how he had previously used a calibration setup to manually calibrate the endoscope before use, as it would go out of calibration due to various factors including temperature. These are camera images, and based on the test charts they are able to resolve about 130 pixels across, which is pretty close to the Nyquist limit for a spiral with 360 samples across. As expected, the center of the image, where the fiber is moving the slowest, is the most distorted.

While a 360 sample camera is still very low resolution by today's standards, it is still 4 to 8 times better than the resolution of the laser projected image. Unfortunately, Yeoh was concerned with distortion and does not really address resolution issues in his dissertation. My resolution comments are based on measurements I could make from the images he published, copied above.

Washington Patent Application Filed in 2014

Yeoh is also the lead inventor on the University of Washington patent application US 2016/0324403, filed in 2014 and published in June 2016. At left is Fig. 26 from that application. It is supposed to be of a checkerboard pattern, which you may be able to make out. The figure is described as using a "spiral in and spiral out" process where, rather than having a retrace time, they just reverse the process. This application appears to be related to Yeoh's dissertation work. Yeoh is shown as living in Fort Lauderdale, FL on the application, near Magic Leap headquarters. Yeoh is also listed as an inventor on the Magic Leap application US 2016/0328884 "VIRTUAL/AUGMENTED REALITY SYSTEM HAVING DYNAMIC REGION RESOLUTION" that I discussed in my last article. It would appear that Yeoh is working, or has worked, for Magic Leap.

2008 YouTube Video

Ideal versus actual spiral scan (from the 2008 video)

Additionally, I would like to include some images from a 2008 YouTube video that kmanmx from the Reddit Magic Leap subreddit alerted me to. While this is old, it has a nice picture of the fiber scanning process, both as a whole and with a close-up image near the start of the spiral process.

For reference, on the closeup image I have added the size of a "pixel" for a 250 spiral / 500 pixel image (red square) and what a 1080p pixel (green square) would be if you cropped the circle to a 16:9 aspect ratio. As you hopefully can see, the spacing and jitter errors in the scan process are several 1080p pixels in size. While this information is from 2008, the more recent evidence above does not show a tremendous improvement in resolution.
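
For anyone who wants to check my squares, below is a rough sketch of the geometry, assuming an idealized circular scan and the largest inscribed 16:9 rectangle (my own arithmetic, in normalized units):

    import math

    # Hypothetical scan circle, normalized to diameter 1.0
    D = 1.0

    # "Pixel" size for a 250 spiral / 500 pixel image across the diameter
    px_500 = D / 500

    # Largest 16:9 rectangle inscribed in the circle:
    # width = D * 16 / sqrt(16^2 + 9^2)
    w_169 = D * 16 / math.hypot(16, 9)
    px_1080p = w_169 / 1920      # 1080p pixel size within the crop

    print(px_500 / px_1080p)     # a "500 pixel" spiral pixel is ~4.4x
                                 # the size of a 1080p pixel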

Other Issues

So far I have mostly concentrated on the issue of resolution, but there are other serious issues that have to be overcome. What is interesting in the Magic Leap and University of Washington patent literature is the lack of patent activity addressing the other issues associated with generating a fiber scanned image. If Magic Leap were serious and had solved these issues with FSD, one would expect to see patent activity on making FSD work at high resolution.

One major issue that may not be apparent to the casual observer is controlling/driving the lasers over an extremely large dynamic range. In addition to supporting the typical 256 levels (8 bits) per color and supporting overall brightness adjustment based on the ambient light, the speed of the scan varies by a large amount, and they must compensate for this or end up with a very bright center where the scan is moving more slowly. When you combine it all together, they would seem to need to control the lasers over a greater than 2000:1 dynamic range, from a dim pixel at the center to the brightest pixel at the periphery.
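
A minimal sketch of that dynamic range arithmetic; since no such numbers are published, the scan speed ratio below is my assumption:

    # Rough laser dynamic-range estimate for a spiral scan
    # (assumed numbers for illustration; nothing here is published).
    gray_levels = 256        # 8 bits per color
    scan_speed_ratio = 8     # assumed ratio of edge tip speed to
                             # the near-center tip speed

    # Dimmest displayable pixel: 1/256 of full scale at the slow
    # (center) end. Brightest: full scale at the fast (edge) end.
    dynamic_range = gray_levels * scan_speed_ratio
    print(dynamic_range)     # 2048:1, before any ambient-light scaling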

Conclusion

Looking at all the evidence, there is just nothing to convince me that Magic Leap is anywhere close to having perfected a FSD to the point that it could be competitive with a conventional display device like LCOS, DLP or Micro-OLED, much less reach the 50 megapixel resolutions they talk about. Overall, there are reasons to doubt that an electromechanical scanning process is, in the long run, going to compete with an all-electronic method.

It very well could be that Magic Leap had hoped that FSD would work and/or it was just a good way to convince investors that they had a technology that would lead to super high resolution in the future. But there is zero evidence that they have seriously improved on what the University of Washington has done. They may still be pursuing it as an R&D effort, but there is no reason to believe that they will have it in a product anytime soon.

All roads point to ML using either LCOS (per Business Insider of October 2016) or DLP, which I have heard is in some prototypes. This would mean they will likely have either a 720p or 1080p resolution display, the same as others such as Hololens (which will likely have a 1080p version soon).

The whole FSD effort is about trying to break through the physical pixel barrier of conventional technologies. There are various physics issues (diffraction is becoming serious) and material issues that will likely make it tough to make physical pixels much smaller than 3 microns.

Even if there were a display resolution breakthrough (which I doubt based on the evidence), there are issues as to whether this resolution could make it through the optics. As the resolution improves, the optics have to improve as well, or else they will limit the resolution. This is a factor that particularly concerns me with the waveguide technologies I have seen to date, which appear to be at the heart of Magic Leap's optics.

Magic Leap – No Fiber Scan Display (FSD)

Sorry, No Fiber Scan Displays

For those that only want my conclusion, I will cut to the chase. Anyone that believes Magic Leap (ML) is going to have a Laser Fiber Scanned Display (FSD) anytime soon (as in the next decade) is going to be sorely disappointed. FSD is one of those concepts that sounds like it would work until you look at it carefully. The technology was developed at the University of Washington in the mid to late 2000's; they were able to generate some very poor quality images in 2009 and, as best I can find, nothing better since.

The fundamental problem with this technology is that a wiggling fiber is very hard to control accurately enough to make a quality display. This is particularly true when the scanning fiber has to come to near rest in the center of the image. It is next to impossible (and impossible at a rational cost) to have the wiggling fiber tip, with finite mass and its own resonant frequency, follow a highly accurate and totally repeatable path.

Magic Leap has patent applications related to FSDs showing two different ways to try and increase the resolution, provided they could ever make a decent low resolution display in the first place. Effectively, they have patents that double down on FSD. One is the "array of FSDs," which I discussed in the Appendix of my last article; it would be insanely expensive and would not work optically in a near eye system. The other doubles down on a single FSD with what ML calls "Dynamic Region Resolution" (DRR), which I will discuss below after covering the FSD basics.

The ML patent applications on the subject of FSD read more like technical fairy tales of what they wished they could do, with a bit of technical detail and drawings scattered in to make it sound plausible. But the really tough problems of making it work are never even discussed, much less solutions proposed.

Fiber Scanning Display (FSD) Basics

The concept of the Fiber Scanning Display (FSD) is simple enough: two piezoelectric vibrators connected to one side of an optical fiber cause the fiber tip to follow a spiral path starting from the center and working its way out. The amplitude of the vibration starts at zero in the center and then gradually increases, causing the fiber to both speed up and follow a spiral path. As the fiber tip accelerates, it moves outward radially. The spacing of each orbit is a function of the increase in speed.
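
Below is a minimal sketch of the drive concept, assuming the fiber is driven at its resonant frequency with a linearly ramping amplitude (all parameter values are mine, purely for illustration):

    import math

    # Toy spiral-scan drive: two sine drives in quadrature with a
    # linearly increasing amplitude envelope (illustrative values).
    frame_rate = 50              # frames/second (per Yeoh's dissertation)
    spirals = 180                # orbits per frame
    orbit_rate = frame_rate * spirals   # 9,000 orbits/second (~resonance)

    points = []
    steps = 200_000
    t_frame = 1.0 / frame_rate
    for i in range(steps):
        t = t_frame * i / steps
        a = t / t_frame          # amplitude ramps from 0 to 1 over the frame
        phase = 2 * math.pi * orbit_rate * t
        points.append((a * math.cos(phase), a * math.sin(phase)))
    # "points" traces 180 spiral orbits outward; the tip's linear speed
    # grows roughly in proportion to its radius "a".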

Red, Green, and Blue (RGB) lasers are combined and coupled into the fiber at the stationary end. As the fiber moves, the lasers turn on and off to create "pixels" that come out the spiraling end of the fiber. At the end of a scan, the lasers are turned off and the drive is gradually reduced to bring the fiber tip back to the starting point under control (if they just stopped the vibration, it would wiggle uncontrollably). This retrace period, while faster than the scan, takes a significant amount of time since it is a mechanical process.

An obvious issue is how well they can control a wiggling optical fiber. As the documents point out, the fiber will want to oscillate at its resonant frequency, which can be stimulated by the piezoelectric vibrators. Still, one would expect that the motion will not be perfectly stable, particularly at the beginning when it is moving slowly and has no momentum. Then there is the issue of how well it will follow exactly the same path from frame to frame when the image is supposed to be still.

One major complication I did not see covered in any of the ML or University of Washington (which originated the concept) documents or applications is what it takes to control the lasers accurately enough. The fiber speeds up from near zero speed at the center of the spiral to maximum speed at the end of the scan. If you turned a laser on for the same amount of time and at the same brightness everywhere, pixels would be many times closer together and brighter at the center than at the periphery. The ML applications even recognize that increasing the resolution of a single electromechanical FSD is impossible for all practical purposes.

Remember that they are electromechanically vibrating one end of the fiber to cause the tip to move in a spiral that covers the area of a circle. There is a limit to how fast they can move the fiber and how well they can control it, and since they want to fill a wide rectangular area, a lot of the circular scan area will be cut off.

Looking through everything I could find that was published on the FSD, including Schowengerdt (ML co-founder and Chief Scientist) et al's SID 2009 paper "1-mm Diameter, Full-color Scanning Fiber Pico Projector" and SID 2010 paper "Near-to-Eye Display using Scanning Fiber Display Engine," only low resolution still images are available and no videos. Below are two images from the SID 2009 paper along with the "Lenna" standard image reproduced in one of them; perhaps sadly, these are the best FSD images I could find anywhere. What's more, there has never been a public demonstration of it producing video, which I believe would show additional temporal and motion problems.

What you can see in both of the actual FSD images is that the center is much brighter than the periphery. From the Lenna FSD image you can see how distorted the image is, particularly in the center (look at Lenna's eye in the center and the brim of the hat, for example). Even the outer parts of the image are pretty distorted. They don't even have decent brightness control of the pixels, and they didn't even attempt to show color reproduction (which requires extremely precise laser control). Yes, the images are old, but there is a series of extremely hard problems outlined above that are likely not solvable, which is probably why we have not seen any better pictures of an FSD from ANYONE (ML or others) in the last 7 years.

While ML may have improved upon the earlier University of Washington work, there is obviously nothing they are proud enough to publish, much less a video of it working. It is obvious that none of the released ML videos use a FSD.

Maybe ML has improved it enough to show some promise and get investors to believe it was possible (just speculating). But even if they could perfect the basic FSD, by their own admission in the patent applications, the resolution would be too low to support a high resolution near eye display. They would need to come up with a plausible way to further increase the effective resolution to meet the Magic Leap hype of "50 Mega Pixels."

Dynamic Region Resolution (DRR) – 50 Mega Pixels ???

Magic Leap has on more than one occasion talked about needing 50 megapixels to support the field of view (FOV) they want at the angular resolution of 1 arc-minute per pixel that they say is desirable. Suspending disbelief that they could even make a good low resolution FSD in the first place, they doubled down with what they call "Dynamic Region Resolution" (DRR).

US 2016/0328884 (‘884) “VIRTUAL/AUGMENTED REALITY SYSTEM HAVING DYNAMIC REGION RESOLUTION” shows the concept. This would appear to answer the question of how ML convinced investors that having a 50 megapixel equivalent display could be plausible (but not possible).

The application shows what could be considered a "foveated display," where various areas of the display vary in pixel density based on where they will be projected onto the human retina. The idea is to have high pixel density where the image will project onto the highest resolution part of the eye, the fovea, and to avoid "wasting" resolution on the parts of the eye that can't resolve it.

The concept is simple enough, as shown in '884's figures 17a and 17b (left). The idea is to track the pupil to see where the eye is looking (indicated by the red "X" in the figures) and then adjust the scan speed, line density, and sequential pixel density based on where the eye is looking. Fig. 17a shows the pattern for when the eye is looking at the center of the image, where they would accelerate more slowly in the center of the scan. In Fig. 17b they show the scanning density being higher where the eye is looking at some point in the middle of the image. They increase the line density in a ring that covers where the eye is looking.

Starting at the center, the fiber tip is always accelerating. For denser lines they just accelerate less; for less dense areas they accelerate at a higher rate, so this sounds plausible. The devil is in the details of how the fiber tip behaves as its acceleration rate changes. A toy sketch of the idea is shown below.
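
Here is that toy sketch, with a made-up density profile (mine, not from the '884 application), just to show the kind of orbit budget the idea implies:

    # Toy "Dynamic Region Resolution" density profile (my sketch, not
    # from the '884 application): reduce the radial growth per orbit
    # near the gaze radius to pack the spiral lines more densely there.
    def radial_step(r, gaze_r, base=0.010, dense=0.0025, band=0.1):
        """Radius increase per orbit at radius r (normalized 0..1)."""
        return dense if abs(r - gaze_r) < band else base

    r, orbits = 0.0, 0
    while r < 1.0:
        r += radial_step(r, gaze_r=0.5)   # eye tracked at mid-radius
        orbits += 1
    print(orbits)   # ~160 orbits vs. 100 for a uniform scan; with a
                    # fixed orbit budget, density elsewhere must drop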

Tracking the pupil accurately enough seems very possible with today's technology. The patent application discusses how wide the band of high resolution needs to be to cover a reasonable range of eye movement from frame to frame, which makes it sound plausible. Some of the obvious fallacies with this approach include:

  1. Controlling a wiggling fiber with enough precision to meet the high resolution and doing it repeatedly from scan to scan. They can't even do it at low resolution with constant acceleration.
  2. Stability/tracking of the fiber as it increases and decreases its acceleration.
  3. Controlling the laser brightness accurately at both the highest and lowest resolution regions. This will be particularly tricky as the fiber increases or decreases its acceleration rate.
  4. The rest of the optics, including any lenses and waveguides, must support the highest resolution possible for the user to be able to see it. This means that the other optics need to be extremely high precision (and expensive).
What about Focus Planes?

Beyond the above is the need to support ML's whole focus plane ("poor person's light field") concept. To support focus planes they need 2 to 6 or more images per eye per frame time (say 1/60th of a second). The fiber scanning process is so slow that even producing a single low resolution and highly distorted image in 1/60th of a second is barely possible, much less multiple images per 1/60th of a second to support the focus plane concept. So to support focus planes they would need a FSD per focus plane, with all its associated lasers and control circuitry; the size and cost to produce would become astronomical.
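
The timing arithmetic is simple enough to sketch (illustrative numbers; the 50Hz scan figure is from the University of Washington work discussed earlier):

    # Focus-plane timing budget (illustrative arithmetic).
    frame_rate_hz = 60
    focus_planes = 6                 # ML describes 2 to 6+ planes
    scans_needed = frame_rate_hz * focus_planes    # 360 full spirals/second

    fsd_scan_rate_hz = 50            # roughly what the published work achieves
    print(scans_needed / fsd_scan_rate_hz)   # ~7x faster than demonstrated,
                                             # hence one FSD per focus plane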

Conclusion – A Way to Convince the Gullible

The whole FSD appears to me to be a dead end, other than to convince the gullible that it is plausible. Even getting a FSD to produce a single decent low resolution image would take more than one miracle. The idea of DRR just doubles down on a concept that cannot produce a decent low resolution image in the first place.

The overall impression I get from the ML patent applications is that they were written to impress people (investors?) that didn't look at the details too carefully. I can see how one could get sucked into the whole DRR concept, as the applications give numbers and graphs that try to show it is plausible; but they ignore the huge issues that they have not figured out.

Magic Leap – The Display Technology Used in their Videos

So, what display technology is Magic Leap (ML) using, at least in their posted videos? I believe the videos rule out a number of the possible display devices, and by a process of elimination it leaves only one likely technology. Hint: it is NOT the laser fiber scanning prominently shown in a number of ML patents and articles about ML.

Qualifiers

Magic Leap could be posting deliberately misleading and/or deliberately bad videos to throw off people analyzing them, but I doubt it. It is certainly possible that the display technology shown in the videos is a prototype that uses different technology from what they are going to use in their products. I am hearing that ML has a number of different levels of systems. So what is being shown in the videos may or may not be what they go to production with.

A “Smoking Gun Frame” 

So with all the qualifiers out of the way, below is a frame capture from Magic Leap's "A New Morning," taken while they are panning the headset and camera. The panning action causes temporal (time based) frame/shutter artifacts in the form of partial ghost images, a result of the camera and the display running asynchronously and/or at different frame rates. This one frame, along with other artifacts you don't see when playing the video, tells a lot about the display technology used to generate the image.

If you look at the left red oval you will see, at the green arrow, a double/ghost image starting and continuing below that point. This is where the camera caught the display in its update process. Also, if you look at the right side of the image, you will notice that the lower 3 circular icons (in the red oval) have double images where the top one does not (the 2nd from the top has a faint ghost, as it is at the top of the field transition). By comparison, there is not a double image of the real world's lamp arm (see center red oval), verifying that the roll bar is from the ML image generation.

Update 2016-11-10: I have uploaded the frame for those that would want to look at it. Click on the thumbnail at left to see the whole 1920×1080 frame capture (I left in the highlighting ovals that I overlaid).

Update 2016-11-14: I found a better "smoking gun" frame, below, at 1:23 in the video. In this frame you can see the transition from one frame to the next. In playing the video, the frame transition slowly moves up from frame to frame, indicating that the display and camera are asynchronous but running at almost the same frame rate (or an integer multiple thereof, like 1/60th or 1/30th of a second).
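
For the curious, the slow crawl of the transition line gives a rough way to bound how close the two rates are. Below is a toy model with assumed frame rates (neither actual rate is known):

    # If the display and camera run at slightly different rates, the
    # frame-transition line drifts at the beat rate between them
    # (both numbers below are assumptions for illustration).
    f_camera = 30.00     # assumed camera frame rate, Hz
    f_display = 60.05    # assumed display refresh, Hz (~2x camera)

    beat_hz = abs(f_display - 2 * f_camera)   # 0.05 Hz
    # Time for the transition line to crawl through the full frame:
    print(1 / beat_hz)   # 20 seconds per full traversal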

In addition to the "Smoking Gun Frame" above, I have looked at the "A New Morning" video as well as the "ILMxLAB and 'Lost Droids' Mixed Reality Test" and the early "Magic Leap Demo," which are stated to be "Shot directly through Magic Leap technology . . . without use of special effects or compositing." I was looking for any other artifacts that would be indicative of the various possible technologies.

Display Technologies it Can’t Be

Based on the image above and other video evidence, I think it is safe to rule out the following display technologies:

  1. Laser Fiber Scanning Display – either a single fiber scanning display or an array of them, as shown in Magic Leap's patents and articles (and which their CTO is famous for having worked on prior to joining ML). A fiber scan display scans in a spiral (or, if arrayed, an array of spirals) with a "retrace/blanking" time to get back to the starting point. This blanking would show up as diagonal black line(s) and/or flicker in the video (sort of like the horizontal black retrace line on an old CRT). Also, if it were laser fiber scanning, I would expect to see evidence of laser speckle, which is not there. Laser speckle will come through even if the image is out of focus. There is nothing in this image and its video to suggest that there is a scanning process with blanking or that lasers are being used at all. Through my study of Laser Beam Scanning (and I am old enough to have photographed CRTs), there is nothing in the still frame nor the videos that is indicative of a scanning process with a retrace.
  2. Field Sequential DLP or LCOS – There is absolutely no field sequential color rolling, flashing, or flickering in the video or in any still captures I have made. Field sequential displays show only one color at a time, very rapidly. When these rapid color field changes beat against the camera's scanning/shutter process, they show up as color variances and/or flicker, not as a simple double image. This is particularly important because it has been reported that Himax, which makes field sequential LCOS devices, is making projector engines for Magic Leap. So either they are not using Himax or they are changing technology for the actual product. I have seen many years of DLP and LCOS displays, both live and through many types of video and still cameras, and I see nothing that suggests field sequential color is being used.
  3. Laser Beam Scanning with a mirror – As with CRTs and fiber scanning, there has to be a blanking/retrace period between frames that would show up in the videos as a roll bar (dark and/or light) that would roll/move over time. I'm including this just to be complete, as this was never suggested anywhere with respect to ML.
UPDATE Nov 17, 2016

Based on other evidence that has recently come in, even though I have not found video evidence of Field Sequential Color artifacts in any of the Magic Leap videos, I'm more open to thinking that it could be LCOS or (less likely) DLP, and maybe the camera sensor is doing more to average out the color fields than other cameras I have used in the past.

Display Technologies That it Could Be 

Below are a list of possible technologies that could generate video images consistent with what has been shown by Magic Leap to date including the still frame above:

  1. Micro-OLED (about 10 known companies) – Very small OLEDs on silicon or similar substrates. A list of some of the known makers is given here at OLED-info (Epson has recently joined this list, and I would bet that Samsung and others are working on them internally). Micro-OLEDs both A) are small enough to inject an image into a waveguide for a small headset and B) have display characteristics that behave the way the image in the video is behaving.
  2. Transmissive Color Filter HTPS (Epson) – While Epson was making transmissive color filter HTPS devices, their most recent headset has switched to a Micro-OLED panel, suggesting they themselves are moving away. Additionally, while Meta's first generation used Epson's HTPS, they moved to a large OLED (with a very large spherical reflective combiner). This technology is challenged in going to high resolution and small size.
  3. Transmissive Color Filter LCOS (Kopin) – Kopin is the only company making Color Filter Transmissive LCOS, but they have not been that active of late as a component supplier, and they have serious issues with a roadmap to higher resolution and smaller size.
  4. Color Filter Reflective LCOS – I'm putting this in here more for completeness, as it is less likely. While in theory it could produce the images, it generally has lower contrast (which would translate into a lack of transparency and a milkiness to the image) and lower color saturation. This would fit with Himax as a supplier, as they have color filter LCOS devices.
  5. Large Panel LCD or OLED – This would suggest a large headset that is doing something similar to the Meta 2. I would tend to rule this out because it would go against everything else Magic Leap shows in their patents and what they have said publicly. It's just that it could have generated the image in the video.
And the “Winner” is I believe . . . Micro-OLED (see update above) 

By a process of elimination, including getting rid of the "possible but unlikely" ones from above, it strongly points to a Micro-OLED display device. Let me say, I have no personal reason to favor it being Micro-OLED; one could argue it might be to my advantage, based on my experience, for it to be LCOS if anything.

Before I started any serious analysis, I didn't have an opinion. I started out doubtful that it was a field sequential or scanning (fiber/beam) device due to the lack of any indicative artifacts in the video, but it was the "smoking gun frame" that convinced me: if the camera was catching temporal artifacts, it should have been catching the other artifacts as well.

I'm basing this conclusion on the facts as I see them. Period, full stop. I would be happy to discuss this conclusion (if asked rationally) in the comments section.

Disclosure . . . I Just Bought Some Stock Based on My Conclusion and My Reasoning for Doing So

The last time I played this game of "what's inside," I was the first to identify that a Himax LCOS panel was inside Google Glass, which resulted in their market cap going up almost $100M in a couple of hours. I had zero shares of Himax when this happened; my technical conclusion now, as it was then, is based on what I saw.

Unlike my call on Himax in Google Glass, I have no idea which company makes the device Magic Leap appears to be using, nor whether Magic Leap will change technologies for their production device. I have zero inside information and am basing this entirely on the information I have given above (you have been warned). Not only is the information public, but it is based on videos that are many months old.

I looked at the companies on the OLED Microdisplay List by www.oled-info.com (who has followed OLED for a long time). It turned out all the companies were either part of a very large company or were private companies, except for one, namely eMagin.

I have known of eMagin since 1998, and they have been around since 1993. They essentially mirror Microvision, which does Laser Beam Scanning and was also founded in 1993, a time when you could go public without revenue. eMagin has spent/lost a lot of shareholder money and is worth about 1/100th of their peak in March 2000.

I have NOT done any serious technical, due diligence, or other stock analysis of eMagin and I am not a stock expert. 

I'm NOT saying that eMagin is in Magic Leap. I'm NOT saying that Micro-OLED is necessarily better than any other technology. All I am saying is that I think someone's Micro-OLED technology is being used in the Magic Leap prototype, and Magic Leap is such a hotly followed company that it might (or might not) affect the stock price of companies making Micro-OLEDs.

So, unlike the Google Glass and Himax case above, I decided to place a small "stock bet" (for me) on my ability to identify the technology (but not the company) by buying some eMagin stock on the open market at $2.40 this morning, 2016-11-09 (symbol EMAN). I'm just putting my money where my mouth is, so to speak (and NOT, once again, giving stock advice), and playing a hunch. I'm making a full disclosure in letting you know what I have done.

My Plans for Next Time

I have some other significant conclusions I have drawn from looking at Magic Leap’s video about the waveguide/display technology that I plan to show and discuss next time.

Near Eye AR/VR and HUD Metrics For Resolution, FOV, Brightness, and Eyebox/Pupil

I'm planning on following up on my earlier articles about AR/VR Head Mounted Displays (HMD), which also relate to Heads Up Displays (HUD), with some more articles, but first I would like to get some basic technical concepts out of the way. It turns out that the metrics we care about for projectors, while related, don't work for measuring HMDs and HUDs.

I'm going to try and give some "working man's" definitions rather than precise technical definitions. I will be giving a few real world examples and calculations to show you some of the challenges.

Pixels versus Angular Resolution

Pixels are pretty well understood, at least with today's displays that have physical pixels, like LCDs, OLEDs, DLP, and LCOS. Scanning displays like CRTs and laser beam scanning generally have additional resolution losses due to imperfections in the scanning process and, as my other articles have pointed out, have much lower resolution than the physical pixel devices.

When we get to HUDs and HMDs, we really want to consider the angular resolution, typically measured in "arc-minutes," which are 1/60th of a degree; simply put, this is the angular size that a pixel covers from the viewing position. Consumers in general haven't understood arc-minutes, and so many companies have in the past talked in terms of a certain size and resolution display viewed from a given distance; for example, a 60-inch diagonal 1080P display viewed at 6 feet. But since the size of the display, the resolution, and the viewing distance are all variables, it is hard to compare displays or know what this even means for a near eye device.

A common "standard" for good resolution is 300 pixels per inch viewed at 12 inches (considered reading distance), which translates to about one arc-minute per pixel. People with very good vision can actually distinguish about twice this resolution, down to about 1/2 an arc-minute in their central vision, but for most purposes one arc-minute is a reasonable goal.

One nice thing about the one arc-minute per pixel goal is that the math is very simple. Simply multiply the degrees in the FOV horizontally (or vertically) by 60 and you have the number of pixels required to meet the goal. If you stray much below the goal, then you are into 1970's era "chunky pixels."

Field of View (FOV) and Resolution – Why 9,000 by 8,100 pixels per eye are needed for a 150 degree horizontal FOV

As you probably know, the human eye's retina has variable resolution. The human eye has a roughly elliptical FOV of about 150 to 170 degrees horizontally by 135 to 150 degrees vertically, but the generally good discriminating FOV is only about 40 degrees (+/-20 degrees) wide. The reasonably sharp vision of the macula covers about 17-20 degrees, and the fovea, with the very best resolution, covers only about 3 degrees of the eye's visual field. The eye/brain processing is very complex, however, and the eye moves to aim the higher resolving part of the retina at a subject of interest; one would want something on the order of the one arc-minute goal in the central part of the display (and since a variable resolution display would be a very complex matter, it ends up being the goal for the whole display).

Going back to our 60-inch 1080p display viewed from 6 feet, the pixel size in this example is ~1.16 arc-minutes, and the horizontal field of view will be about 37 degrees, just about covering the generally good resolution part of the eye's retina.

Image from Extreme Tech

Now let's consider the latest Oculus Rift VR display. It specs 1200 x 1080 pixels with about a 94 degree horizontal by 93 degree vertical FOV per eye, or a very chunky ~4.7 arc-minutes per pixel; in terms of angular resolution this is roughly like looking at an iPhone 6 or 7 from 5 feet away (or, conversely, like your iPhone pixels being 5X as big). To get to the 1 arc-minute per pixel goal of, say, viewing today's iPhones at reading distance (say you want to virtually simulate your iPhone), they would need a 5,640 by 5,580 display per eye, or a single OLED display with about 12,000 by 7,000 pixels (allowing for a gap between the eyes for the optics)!!! If they wanted to cover the 150 by 135 degree FOV, we are talking 9,000 by 8,100 per eye, or about a 20,000 by 9,000 flat panel requirement.
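
For anyone who wants to check these numbers, here is the arithmetic as a small Python sketch (the FOVs and pixel counts are the ones quoted above):

    import math

    def arcmin_per_pixel(fov_deg, pixels):
        return fov_deg * 60 / pixels

    def pixels_for_goal(fov_deg, goal_arcmin=1.0):
        return fov_deg * 60 / goal_arcmin

    # 300 ppi viewed at 12 inches:
    print(math.degrees(math.atan((1 / 300) / 12)) * 60)  # ~0.95 arc-minutes

    print(arcmin_per_pixel(94, 1200))     # Oculus Rift: ~4.7 arc-minutes/pixel
    print(pixels_for_goal(94), pixels_for_goal(93))     # 5,640 x 5,580 per eye
    print(pixels_for_goal(150), pixels_for_goal(135))   # 9,000 x 8,100 per eye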

Not as apparent but equally important is that the optical quality needed to support these kinds of resolutions would be exceedingly expensive, if it is possible at all. You need extremely high precision optics to bring the image into focus from such a short range. You can forget about the lower cost and weight Fresnel optics (with their "God ray" issues) used in the Oculus Rift.

We are into what I call "silly number territory" that will not be affordable for well beyond 10 years. There are even questions whether any known technology could achieve these resolutions in a size that could fit on a person's head, as there are a number of physical limits on pixel size.

People in gaming are apparently living with this appallingly low (1970's era TV game) angular resolution for games and videos (although the God rays can be very annoying depending on the content), but clearly it is not a replacement for a good high resolution display.

Now let's consider Microsoft's Hololens. Its most criticized issue is its smaller FOV (relative to VR headsets such as Oculus) of about 30 by 17.5 degrees. It has a 1268 by 720 pixel display per eye, which translates into about 1.41 arc-minutes per pixel, which while not horrible falls short of the goal above. If they had used 1920x1080 (full HD) microdisplay devices, which are becoming available, then they would have been very near the 1 arc-minute goal at this FOV.

Let's understand here that it is not as simple as changing out the display; they would also have to upgrade the "light guide" that they use as a combiner to support the higher resolution. Still, this is all reasonably possible within the next few years. Microsoft might even choose to grow the FOV to around 40 degrees horizontally and keep the lower angular resolution with a 1080p display. Most people will not seriously notice a 1.4X angular resolution difference (but they would at about 2X).

Commentary on FOV

I know people want everything, but I really don't understand the criticism of the FOV of Hololens. What we see here is a bit of "choose your poison." With existing affordable (or even not so affordable) technology, you can't support a wide field of view and simultaneously good angular resolution; it is simply not realistic. One can imagine optics that would let you zoom between a wide FOV with lower angular resolution and a smaller FOV with higher angular resolution. This zooming function could perhaps be controlled by the content or by feedback from the user's eyes and/or brain activity.

Lumens versus Candelas/Meter2 (cd/m2 or nits)

With an HMD or HUD, what we care about is the light that reaches the eye. In a typical front projector system, only an extremely small percentage of the light that goes out of the projector reflects off the screen and makes it back to any person's eye; the vast majority of the light goes to illuminating the room. With an HMD or HUD, all we care about is the light that makes it into the eye.

Projector lumens, or luminous flux, simply put, are a measure of the total light output, usually measured with the projector outputting a solid white image. To get the light that makes it to the eye, we have to account for the light that hits the screen and is then absorbed, scattered, and reflected back at an angle that will reach the eye. Only an exceedingly small percentage (a small fraction of 1%) of the projected light will make it into the eye in a typical front projector setup.

With HMDs and HUDs we talk about brightness in terms of candelas-per-meter-squared (cd/m2), also referred to as "nits" (while considered an obsolete term, it is still often used because it is easier to write and say). Cd/m2 (or luminance) is a measure of brightness in a given direction, which tells us how bright the light appears to the eye looking in a particular direction. For a good quick explanation of lumens and cd/m2, I would recommend a Compuphase article.

Hololens appears to be "luminosity challenged" (lacking in cd/m2) and has resorted to a sunglasses-like outer shield, even for indoor use. The light blocking shield is clearly a crutch to make up for a lack of brightness in the display. Even with the shield, it can't compete with bright light outdoors, which is 10 to 50 times brighter than a well lit indoor room.

This of course is not an issue for VR headsets, typified by the Oculus Rift, which totally block the outside light. But it is a serious issue for AR type headsets; people don't normally wear sunglasses indoors.

Now let's consider a HUD display. A common automotive spec for a HUD in sunlight is 15,000 cd/m2, whereas a typical smartphone is between 500 and 600 cd/m2, or about 1/30th the luminance needed. When you are driving a car down the road, you may be driving in the direction of the sun, so you need a very bright display in order to see it.

The way HUDs work, you have a "combiner" (which may be the car's windshield) that combines the image being generated with the light from the real world. A combiner typically reflects only about 20% to 30% of the light, which means that the display before the combiner needs on the order of 30,000 to 50,000 cd/m2 to support the 15,000 cd/m2 seen in the combiner. When you consider that your smartphone or computer monitor has only about 400 to 600 cd/m2, it gives you some idea of the optical tricks that must be played to get a display image that is bright enough.
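
The combiner math as a minimal sketch (the spec and reflectivity values are the rough figures quoted above):

    # Display luminance needed behind a HUD combiner (simple model).
    target_nits = 15_000              # common automotive daylight spec, cd/m2
    combiner_reflectivity = 0.30      # assumed; combiners reflect ~20-30%

    required_display_nits = target_nits / combiner_reflectivity
    print(required_display_nits)      # 50,000 cd/m2 for a 30% combiner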

You will see many "smartphone HUDs" that simply have a holder for a smartphone and a combiner (semi-mirror), such as the one pictured at right from Amazon, and similar ones on crowdfunding sites, but rest assured they will NOT work in bright sunlight and are only marginal in typical daylight conditions. Even with combiners that block more than 50% of the daylight (not really much of a see-through display at this point), they don't work in daylight. There is a reason why companies are making purpose built HUDs.

The cd/m2 is also a big issue for outdoor head mounted display use. Depending on the application, they may need 10,000 cd/m2 or more, and this can become very challenging with some types of displays while keeping within the power and cooling budgets.

At the other extreme, at night or in dark indoor settings, you might want the display to have less than 100 cd/m2 to avoid blinding the user to their surroundings. Note that the SMPTE spec for movie theaters is only about 50 cd/m2, so even at 100 cd/m2 you would be about 2X the brightness of a movie theater. If the device must go from bright sunlight to night use, you could be talking over a 1,500 to 1 dynamic range, which turns out to be a non-trivial challenge to do well with today's LEDs or lasers.

Eye-Box and Exit Pupil

Since AR HMDs and HUDs generate images for a user's eye in a particular place, yet need to compete with the ambient light, the optical system is designed to concentrate light in the direction of the eye. As a consequence, the image will only be visible in a given solid angle, the "eye-box" (with HUDs) or "pupil" (with near eye displays). There is a trade-off in making the eye-box or pupil bigger: the bigger it is, the easier the device is to use, but the more light is required.

With HUD systems there can be a pretty simple trade-off between eye-box size, cd/m2, and the lumens that must be generated. Using some optical tricks can help keep from needing an extremely bright and power hungry light source. Conceptually, a HUD is in some ways like a head mounted display but with very long eye relief. With such large eye relief and the ability of the person to move their whole head, the eye-box for a HUD is significantly larger than the exit pupil of near eye optics. Because the eye-box is so much larger, a HUD is going to need much more light to work with.

For near eye optical design, getting a large exit pupil is a more complex issue as it comes with trade-offs in cost, brightness, optical complexity, size, weight, and eye-relief (how far the optics are from the viewer’s eye).

With too small a pupil and/or more eye relief, a near eye device is difficult to use, as any small movement of the device keeps you from seeing the whole image. Most people's first encounter with an exit pupil is with binoculars or a telescope, where the image cuts off unless the optics are centered well on the user's eye.

Conclusions

While I can see that people are excited about the possibilities of AR and VR technologies, I still have a hard time seeing how the numbers add up, so to speak, for what I would consider to be a mass market product. I see people being critical of Hololens' lower FOV without being realistic about how it could go higher without drastically sacrificing angular resolution.

Clearly there can be product niches where the device could serve, but I think people have unrealistic expectations for how fast the field of view can grow for a product like Hololens. For "real work," I think the lower field of view and high angular resolution approach (as with Hololens) makes more sense for more applications. Maybe game players in the VR space are more willing to accept 1970's type angular resolution, but I wonder for how long.

I don't see any technology that will be practical in high volume (or even very expensive at low volume) that is going to simultaneously solve the angular resolution and FOV that some people want. AR displays are also often brightness challenged, particularly for outdoor use. Layered on top of these issues are size, weight, cost, and power consumption, which we will have to save for another day.

Laser Beam Scanning Versus Laser-LCOS Resolution Comparison

Side By Side Center Patterns (click on image for full size picture)

I apologize for being away for so long. The pictures above and below were taken over a year ago; I meant to format and publish them back then, but some other business and life events got in the way.

The purpose of this article is to compare the resolution of the Celluon PicoPro Laser Beam Scanning (LBS) projector and the UO Smart Beam Laser LCOS projector. This is not meant to be a full review of both products, although I will make a few comments here and there; rather, it is to compare the resolution of the two products. Both projectors claim to have 720P resolution, but only one of them actually has that "native/real" resolution.

This is in a way a continuation of the series I have written about the PicoPro, with optics developed by Sony and the beam scanning mirror and control by Microvision, in particular the articles http://wp.me/p20SKR-gY and http://wp.me/p20SKR-hf. With this article I am now including some comparison pictures I took of the UO Smart Beam projector (https://www.amazon.com/UO-Smart-Beam-Laser-Projector-KDCUSA/dp/B014QZ4FLO).

As per my prior articles, the Celluon PicoPro has nowhere close to its stated 1920×720 (non-standard) resolution, nor even 1280×720 (720P). The UO projector, while not perfect, does demonstrate 720P resolution reasonably well, but it does suffer from chroma aberrations (color separation) at the top of the image due to its 100% optical offset (this is to be expected to some extent).

Let me be up front: I worked on the LCOS panel used in the UO projector when I was at Syndiant, but I had nothing to do with the UO projector itself. Take that as bias if you want, but I think the pictures tell the story. I did not have any contact with either UO (nor Celluon for that matter) in preparing this article.

I also want to be clear that both the UO projector and the Celluon PicoPro tested are now over 1 year old, and there may have been improvements since then. I saw serious problems with both products, in particular with the color balance: the Celluon is too red ("white" is pink) and the UO is very red deficient ("white" is significantly blue-green). The color is so far off on the Celluon that it would be a show stopper for me ever wanting to buy one as a consumer (hopefully UO has or will fix this). Frankly, I think both projectors have serious flaws (if you want to know more, ask and I will write a follow-up article).

The UO Smart Beam has the big advantage of "100% offset," which means that when placed on a table-top, it will project upward without hitting the table and without any keystone distortion. The PicoPro has zero offset and shoots straight out; if you put it flat on a table, the lower half of the image will shoot into the tabletop. Celluon includes a cheap and rather silly monopod that you can use to have the projector "float" above the table surface and then tilt it up to get a keystoned image. To take the picture, I had to mount the PicoPro on a much taller tripod and shoot over the projector so the image would not be keystoned.

I understand that the next generation of the Celluon and the similar Sony MPCL1 projector (which has a "kickstand") have "digital keystone correction," which is not as good a solution as 100% offset since it reduces the resolution of the image; this is the "cheap/poor" way out, and they really should have 100% offset like the UO projector (interestingly, the earlier, lower resolution Microvision ShowWX projector did have 100% offset).

For the record, I like the Celluon PicoPro's flatter form factor better; I'm not a fan of the UO cube, as it hurts the ability to put the projector in one's pocket or a typical carrying bag.

Both the PicoPro with laser scanning and the Smart Beam with lasers illuminating an LCOS microdisplay have no focus knob and have a wide focus range (from about 50cm/1.5 feet to infinity), although they are both less sharp at the closer range. The PicoPro with LBS is a Class 3R laser product, whereas the Smart Beam with laser "illumination" of LCOS is only Class 1. The measured brightness of the PicoPro was about 32 lumens, as rated, when cold, but dropped under 30 when heated up. The UO, while rated at 60 lumens, was about 48 lumens when cold and about 45 when warmed up, significantly below its spec.

Now, on to the main discussion of resolution. The picture at the top of this article shows the center crop from a 720P test pattern generated by both projectors, with the Smart Beam image on the left and the PicoPro on the right. There is also an inset of the Smart Beam's 1 pixel wide pattern near the PicoPro's 1 pixel wide pattern for comparison. This test pattern has a series of 1 pixel, 2 pixel, and 3 pixel wide horizontal and vertical lines.

What you should hopefully notice is that the UO clearly resolves even the 1 pixel wide lines, with the black lines staying black, whereas the PicoPro's 1 pixel wide lines are at best blurry, and even its 2 and 3 pixel wide lines don't get to a very good black level (in other words, the contrast is very poor). And the center is the very best case for the Celluon LBS, whereas for the UO, with its 100% offset, it is a medium case (the best case is lower center).
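
For those who would rather quantify "blurry" than eyeball it, Michelson contrast over a crop of the 1 pixel line pattern is one simple measure. Below is a rough sketch of how I would compute it using NumPy and Pillow (the file names are hypothetical crops, not files I have published):

    import numpy as np
    from PIL import Image

    def line_contrast(path):
        """Michelson contrast of a crop containing the 1-pixel line pattern."""
        img = np.asarray(Image.open(path).convert("L"), dtype=float)
        bright = np.percentile(img, 95)   # typical "white" line level
        dark = np.percentile(img, 5)      # typical "black" line level
        return (bright - dark) / (bright + dark)

    # Hypothetical crops of the same test-pattern region from each photo:
    print(line_contrast("uo_center_1px_crop.png"))       # near 1.0 = resolved
    print(line_contrast("celluon_center_1px_crop.png"))  # low = blurry gray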

The worst case for both projectors is one of the upper corners, and below is a similar comparison of their upper right corners. As before, I have included an inset of the UO's single pixel image.

Side By Side Upper Right Corner Patterns (click on image for full size picture)

What you should notice is that while there are still distinct 1 pixel wide lines in both directions in the UO projector's image, the 1 pixel wide lines from the Celluon LBS are a blurry mess. Clearly it can't resolve 1 pixel wide lines at 720P.

Because of the 100% offset optics, the best case for the UO projector is at the bottom of the image (this is true for almost any 100% offset optics), and this case is not much different from the center case for the Celluon projector (see below):

UO Lower Center versus Celluon Center Patterns (click on image for full size picture)

Below is a side by side picture I took (click on it for a full size image). The camera's "white point" was set to an average between the two projectors (the Celluon is too red/deficient in blue and green, and the UO is red deficient). The image below is NOT what I used for the cropped test patterns above, as the 1 pixel features were too near the resolution limit of the Canon 70D camera (5472 by 3648 pixels). So for those, I used individual shots of each projector so the camera would double "sample" the projected images.

For the Celluon PicoPro image I used the picture below (originally taken in RAW, then digital lens corrected, cropped, and later converted to JPG for posting – click on image for full size):

For the UO Smart Beam image, I used the following image (also taken in RAW, then digital lens corrected, straightened slightly, cropped, and later converted to JPG for posting):

As is my usual practice, I am including the test pattern (in lossless PNG format) below for anyone who wants to verify and/or challenge my results:

interlace res-chart-720P G100A

I promise I will publish any pictures by anyone that can show better results with the PicoPro or any other LBS projector (or the UO projector, for that matter) using the test pattern (or similar) above (I went to considerable effort to take the best possible PicoPro image that I could with a Canon 70D camera).

Celluon/Sony/Microvision Optical Path

Celluon Light Path Labeled KGOnTech

Today I'm going to give a bit of a guided tour through the Celluon optical path. This optical engine was developed by Sony, probably based on Microvision's earlier work and using Microvision's scanning mirror. I'm going to give a "tour" of the optics and then give some comments on what I see in terms of efficiency (light loss) and cost.

Referring to the picture above and starting with the lasers at the bottom, there are 5 of them (two each of red and green and one blue) in a metal chassis (and not visible in the picture). Each laser goes to its own beam spreading and alignment lens set. These lenses enlarge the diameter of each laser beam and are glued in place after alignment. Note that the beams at this point are spread wider than the size of the scanning mirror and will be converged/focused back later in the optics.

Side Note: One reason for spreading the laser beams bigger than the scanning mirror is to reduce the precision required of the optical components (making very small high precision optics with no/extremely-small defects becomes exponentially expensive). But a better explanation is that it supports the despeckling process. With the wider beam they can pass the light through more different paths before focusing it back. There is a downside to this, as seen in the Celluon output: the beam is still too big when exiting the projector, and thus the images are out of focus at short projection distances.

After the beam spreading lenses there is a glass plate at a 45 degree angle that splits off part of the light from the lasers down to a light sensor for each laser. The light sensors are used to give feedback on the output of each laser and to adjust them based on how they change with temperature and aging.

Side Note: Laser heating and the resulting change in laser output is a big issue with laser scanning. The lasers change in temperature/output very quickly. In tests I have done, you can see the effect of bright objects on one side of the screen affecting the color on the other side of the screen, in spite of the optical feedback.

Most of the light from the sensor deflector continues to a complex structure of about 15 different pieces of optically coated solid glass elements glued together into a many faceted structure. There are about 3 times as many surfaces/components as would be required for simply combining 3 laser beams. This structure is being used to combine the various colors into a single beam and has some speckle reducing structures. As will be discussed later, having the light go through so many elements, each with its optical losses (and cost), results in losing over half the light.

For reference, compare this to the optical structure shown in the Lenovo video for their prototype laser projector in a smartphone, at left (which uses an STMicro engine). There are just 3 lenses, 1 mirror (for red), two dichroic plate combiners to combine the green and blue, and a flat window. The Celluon/Sony/Microvision engine by comparison uses many more elements, and instead of simple plate combiners it uses prisms which, while having better optical performance, are considerably more expensive. The Lenovo/STM engine does not show/have the speckle reduction elements nor the distortion correction elements (its two mirror scanning process inherently has less distortion) of the Celluon/Sony design.

Starting with the far-left red laser light path, it goes to a “Half Mirror and 2nd Mirror” pair.  This two-mirror assembly is most likely there for speckle reduction.  Speckle is caused by light interfering with itself, and having the light follow different path lengths (the light off the 2nd mirror follows a slightly longer path) reduces the speckle.  The next element is a red-pass/green-reflect dichroic mirror that combines the left red and green lasers, followed by a red&green-pass/blue-reflect dichroic combiner.

Then, working from the right, there is another speckle-reduction half-mirror/2nd-mirror pair for the right-hand green laser, followed by a green-pass/red-reflect dichroic mirror to combine the right-side green and red lasers.  A polarizing combiner is (almost certainly) used to combine the 3 lasers on the left with the two lasers on the right into a single beam.

After the polarizing combiner there is a mirror that directs the combined light through a filter encased between two glass plates.  Most likely this filter either depolarizes or circularly polarizes the light, because on exiting this section into the open air the previously polarized laser light has little if any linear polarization.  Next the light goes through a 3rd set of despeckling mirror pairs.  The light then reflects off another mirror and exits into a short air gap.

Following the air gap there is a “Turning Block” that is likely part of the despeckling.  The material in the block probably has some light-scattering properties to vary the light path length slightly and thus reduce speckle, which would explain the size/thickness of the block.  There is a curved light-entry surface that will have a lens effect.
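
All of the despeckling elements above (the half-mirror pairs and the scattering Turning Block) work the same way: they sum light that has traveled different path lengths so the speckle patterns average out.  A standard rule of thumb from speckle theory (Goodman) is that averaging N independent speckle patterns reduces speckle contrast by 1/sqrt(N).  Below is a minimal Python sketch of that relationship; the pattern counts are illustrative assumptions on my part, not anything measured from the Celluon engine.

```python
import numpy as np

# Rule of thumb from speckle theory (Goodman): averaging N independent,
# equal-intensity speckle patterns reduces speckle contrast to 1/sqrt(N).
# Fully developed speckle from a single coherent path has contrast 1.0.
def speckle_contrast(n_patterns: int) -> float:
    return 1.0 / np.sqrt(n_patterns)

# Each half-mirror/2nd-mirror pair roughly doubles the number of distinct
# path lengths; three pairs plus the scattering Turning Block might give
# on the order of 8 to 16 partially independent patterns (my loose,
# purely illustrative assumption).
for n in (1, 2, 4, 8, 16):
    print(f"N = {n:2d} patterns -> speckle contrast ~ {speckle_contrast(n):.2f}")
```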

Light exiting the Turning Block goes through a lens that focuses the spread light back into a smaller beam that will reflect off the beam scanning mirror.  This lens sets the way the beam diverges after it exits the projector.

After the converging lens, the light reflects off a mirror that sends it into the beam scanning mirror assembly.  The beam scanning mirror assembly, designed by Microvision, is its own complex structure and, among other things, has some strong magnets in it (supporting the magnetic mirror deflection).

Side Note: The STM/bTendo design in the Lenovo projector uses two simpler mirrors that each move in only one axis rather than a single complex mirror that has to move in two axes.  The STM mirrors likely both use a simpler electrostatic-only design, whereas Microvision’s dual-axis mirror uses electrostatic drive for one direction and electromagnetic drive for the other.

Finally, the light exits the projector via a Scanning Correction Lens that is made of plastic. It appears to be the only plastic optical element among all the elements that could be easily inspected.  Yes, even though this is a laser scanning projector, it still has a correction lens, in this case to correct the otherwise “bow-tie”-distorted scanning process.

Cost Issues

In addition to the obvious cost of the lasers (needing 5 of them rather than just 3) and the Scanning Mirror Assembly, there are a large number of optically coated glass elements.  Additionally, instead of using lower-cost plate elements, the Celluon/Sony/Microvision engine uses much more expensive solid prisms for the combiner and despeckling elements.  Each of these has to be precisely made, coated, and glued together. The cost of each element is a function of its quality/optical efficiency, which can vary significantly, but I would think there is at least $20 to $30 of raw cost in just the glass elements even at moderately high volumes (and it could be considerably more).

Then there is a lot to assemble, with precise alignment of all the various optics.  Finally, all of the lasers must be individually aligned after the rest of the unit has been assembled.

Optical Efficiency (>50% of the laser light is lost)

The light in the optical engine passes through and/or reflects off a large number of optical interfaces, and there are light losses at each of them.  It is “death by a thousand cuts”: while each element might have only a 1% to 10% (or more) loss, the effects are multiplicative.  The use of solid rather than plate optics reduces the losses, but at added cost.  In the picture you can see spots of colored light on the walls of the chassis that have “escaped” the optical path and are lost.  You can also see light glowing off optical elements, including the lens; all of this is lost light.  The light that goes to the light sensors is also lost.

Celluon laser lable IMG_9715

Laser Warning Label From Celluon Case

Some percentage of the light that was spread will not be converged back onto the mirror.  Additionally, there are scattering losses in the Correction Lens and Turning Block and in the rest of the optics.

When it is multiplied out, more than 50% of the laser light is lost in the optics.
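
To make the “multiplicative” point concrete, here is a small Python sketch of how a chain of individually reasonable-looking element efficiencies compounds to more than a 50% loss.  The per-element numbers are rough, illustrative guesses on my part, not measured values for this engine.

```python
# Illustrative only: the per-element efficiencies below are rough guesses,
# not measured values for the Celluon/Sony/Microvision engine.
def chain_efficiency(efficiencies):
    total = 1.0
    for e in efficiencies:
        total *= e   # losses compound multiplicatively
    return total

# ~15 coated glass elements at ~97% each, a ~5% sensor pick-off tap,
# ~10% despeckle/Turning Block scattering, ~10% beam spill past the mirror.
elements = [0.97] * 15 + [0.95, 0.90, 0.90]
eff = chain_efficiency(elements)
print(f"Net optical efficiency ~{eff:.0%}; light lost ~{1 - eff:.0%}")
# -> roughly half the laser light lost, consistent with the label below
```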

This 50% light-loss percentage agrees with the package labeling (see picture at left), which says the laser light output for green is 50mW even though they are using two green lasers, each of which likely outputs 50mW or more.

Next Time: Power Consumption

The Celluon system consumes ~2.6 Watts to put up a “black” image and ~6.1 Watts to put up a 32-lumen white image.  The delta between white and black is about 3.5 Watts, or about 9 lumens per delta-Watt from black to white.  For reference, newer DLP projectors using LEDs can produce about double the delta lumens per Watt.  Next time, I plan on drilling down into the power consumption numbers.
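
The arithmetic behind the 9 lumens per delta-Watt figure is simple enough to show directly (the DLP number is just the round “about double” comparison, not a specific measured unit):

```python
# Numbers measured in this article; the DLP figure is a rough comparison.
p_black = 2.6     # Watts for a full-black image
p_white = 6.1     # Watts for a full-white, 32-lumen image
lumens = 32.0

delta_w = p_white - p_black          # ~3.5 W attributable to making light
lm_per_delta_w = lumens / delta_w    # ~9.1 lumens per delta-Watt
print(f"Delta power: {delta_w:.1f} W -> {lm_per_delta_w:.1f} lm per delta-Watt")
print(f"Newer LED DLP at ~2x: ~{2 * lm_per_delta_w:.0f} lm per delta-Watt")
```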

Lenovo’s STMicro Based Prototype Laser Projector (part 1)

Lenovo Tech World Projector 001

At their Tech World event on May 27, 2015, Lenovo showed a Laser Beam Scanning (LBS) projector integrated into a cell phone prototype (to be clear, a prototype and not a product).  While there has been no announcement of the maker of the LBS projector, there is no doubt that it is made by STM, as I will show below (to give credit where it is due, this was first shown on a blog by Paul Anderson focused on Microvision).

ST-720p- to Lenove comparison 2

The comparison at left is based on a video by Lenovo that included exploded views of the projector and on pictures of STM’s 720p projector from a Jan. 18, 2013 article on Picoprojector-info.com.  I have drawn lines comparing various elements, such as the size and placement of connectors and other components, the size and placement of the 3 major I.C.s, and even the silk-screened “STM” in the same place in both the Lenovo video and the STM article’s photo (circled in yellow).

While there are some minor differences, there are so many direct matches that there can be no doubt that Lenovo is using STM.

The next interesting thing to consider is how this design compares to the LBS design of Microvision and Sony in the Celluon projector.  The Lenovo video shows the projector as being about 34mm by 26mm by 5mm thick.  To check this, I took a photo from the Picoprojector-info.com article and was able to fit the light engine and electronics into a 34mm by 26mm rectangle arranged as they are in the Lenovo video (yet one more verification that it is STM).  I then took a picture I had taken of the Celluon board to the same scale and drew the same 34x26mm rectangle on it.  The STM optics plus electronics are 1/4 the area and 1/5th the volume (STM is 5mm thick versus Microvision/Sony’s 7mm).

STM to Celluon TO SCALE 003
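
Putting the size comparison into numbers (the Celluon/Sony footprint being about 4X the STM area is my approximate measurement from the to-scale photos, so treat these as round figures):

```python
# STM dimensions are from the Lenovo video; the Celluon/Sony footprint is
# my approximate measurement from the to-scale photos (about 4x the area).
stm_area_mm2 = 34 * 26
stm_vol_mm3 = stm_area_mm2 * 5        # STM module is 5 mm thick

sony_area_mm2 = 4 * stm_area_mm2      # approximate footprint ratio
sony_vol_mm3 = sony_area_mm2 * 7      # Microvision/Sony is 7 mm thick

print(f"Area ratio:   {sony_area_mm2 / stm_area_mm2:.1f}x")  # ~4x
print(f"Volume ratio: {sony_vol_mm3 / stm_vol_mm3:.1f}x")    # ~5.6x, i.e. ~1/5th
```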

The Microvision/Sony engine probably has about double the lumens/brightness of the STM module due to having two green and two red lasers (I have not had a chance to compare the image quality).  Taking out the extra two lasers would make the Microvision/Sony engine’s optics/heat-sinking about 25% smaller and would have a smaller impact on the board space, but this would still leave it over 3X bigger than STM.  The obvious next question is why.

One reason is that STM either has a simpler electronics design or a more integrated one, or some combination thereof.  In particular, the Microvision/Sony design requires an external DRAM (the large rectangular chip in the Microvision/Sony photo).  STM probably still needs DRAM, but it is likely integrated into one of their chips.

There are not a lot of details on the STM optics (developed by bTendo of Israel before being acquired by STM).  But what we do know is that STM uses separate, simpler, and smaller horizontal and vertical mirrors versus Microvision’s significantly larger and more complex single-mirror assembly.  Comparing the photos above, the Microvision mirror assembly alone is almost as big as STM’s entire optical engine with lasers.  The Microvision mirror assembly has a lot of parts other than the MEMS mirror, including some very strong magnets.  Generally, the optical path of the Microvision engine requires a lot of space for the light to enter and exit the Microvision mirror from the “right” directions.

btendo optics

On the right I have captured two frames from the Lenovo video showing the optics from two directions.  What you should notice is that the mirror assembly is perpendicular to the incoming laser light.  There appears to be a block of optics (pointed to by the red arrows in the two pictures) that redirects the light down to the first mirror and then returns it to the second mirror.  The horizontal scanning mirror is clearly shown in the video, but the location of the vertical scanning mirror is not clear (so I took an educated guess).

Also shown at right is bTendo patent 8,228,579, showing the path of light for their two-scanning-mirror design.  It does not show the more complex block of optics required to direct the light down to the vertical mirror, redirect it back down to the horizontal mirror, and then out, as would be required in the Lenovo design.  You might also notice that there is a flat, clear glass/plastic output cover shown at the 21-second point in the video; this is very different from the Microvision/Celluon/Sony design shown below.

Microvision mirror with measurements

Microvision Mirror Assembly and Exit Lens

Shown at left is the Microvision/Celluon beam scanning mirror and the “Exit” Lens.  First, notice the size and complexity of the scanning mirror assembly with its magnets and coils.  You can see the single round mirror with its horizontal hinge (green arrow) and the vertical hinge (yellow arrow) on the larger oval yoke.  The single mirror/pivot point causes an inherently bow-tied image.  You can see how distorted the mirror looks through the Exit Lens (see red arrow); this is caused by the Exit Lens correcting for the bow-tie effect.  This significant corrective lens is also a likely source of chroma aberrations in the final image.
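
To see why a single two-axis mirror produces a bow-tied raster, here is a toy ray trace in Python: it reflects a fixed incoming beam off a mirror tilted about two axes and intersects the reflected ray with a flat screen.  The geometry (beam along one axis, mirror nominally at 45 degrees, my choice of scan angles) is a simplified assumption and not Microvision’s actual design, but it shows the effect:

```python
import numpy as np

def rot_x(a):  # rotation about the x-axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):  # rotation about the y-axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def screen_hit(theta_h, theta_v, screen_dist=1000.0):
    """Reflect a fixed beam off a biaxial mirror tilted by (theta_h, theta_v)
    radians and intersect the reflected ray with a screen at z = screen_dist mm."""
    v = np.array([1.0, 0.0, 0.0])                  # incoming beam along +x
    n0 = np.array([-1.0, 0.0, 1.0]) / np.sqrt(2)   # nominal 45-degree mirror normal
    n = rot_y(theta_h) @ rot_x(theta_v) @ n0       # tilted mirror normal
    r = v - 2 * np.dot(v, n) * n                   # law of reflection
    return screen_dist * r[0] / r[2], screen_dist * r[1] / r[2]

# Sample a corner of the scan and the two adjacent edge midpoints:
for th, tv, label in [(0.175, 0.13, "corner"),
                      (0.0,   0.13, "top/bottom edge center"),
                      (0.175, 0.0,  "left/right edge center")]:
    x, y = screen_hit(th, tv)
    print(f"{label:22s} x = {x:6.1f} mm, y = {y:6.1f} mm")
# The corner's |y| (~113 mm) falls well short of the edge center's (~131 mm),
# so the raw raster is not rectangular -- the bow-tie the exit lens corrects.
```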

Conclusions

All the above does not mean that the Lenovo/STM projector is going to be a successful product.  I have not had a chance to evaluate the Lenovo projector, and I still have serious reservations about any embedded projector succeeding in a cell phone (I outlined my reasons in an August 2013 article, and I think they still hold true).  Being less than 1/5th the volume of the Microvision/Sony design is necessary, but I don’t think it is sufficient.

This comparison only shows that the STM design is much smaller than Microvision’s, that Microvision has made only relatively small incremental progress in size since the ShowWX (announced in 2009), and that Sony has not improved on it much, at least so far.

Celluon LBS Analysis Part 2B – “Never In-Focus Technology” Revisit

Celluon alignment IMG_9775

Alignment target after re-alignment (click for bigger image)

I received concerns that the chroma aberrations (color fringes) seen in the photos in Part 2B were caused by poor alignment of the lasers.  I had aligned the lasers per Celluon’s instructions before running the tests, but I decided to repeat the alignment to see if there would be a difference.

After my first redo of the alignment, I noticed that the horizontal resolution got slightly better in places but the vertical resolution got worse.  The problem I identified is that the alignment procedure does not make aligning the pairs of red and green lasers easy.  The alignment routine turns all 5 lasers on at once, which makes it very difficult to see the pairs of lasers of the same color.

To improve on the procedure, I put a red color filter in front of the projector output to eliminate the blue and two green lasers and then aligned the two red lasers to each other.  Then, using a green color filter, I aligned the two green lasers.  I did this both horizontally and vertically.  On the first pass I didn’t worry about the other colors.  On the next pass I moved the red pair together, always by the same amount horizontally and vertically, and similarly for the green pair.  I went around this loop a few times trying for the best possible alignment (see the picture of the alignment image above).

After the re-alignment I did notice some slightly better horizontal resolution in the vertical lines (but not that much, and not everywhere) and some very slight improvement in the vertical resolution.  There were still large chroma aberrations, particularly on the left side of the image (much less so on the right side), which some had claimed were “proof” that the lasers were horribly aligned (they were not, even before).  The likely cause of the chroma aberrations is the output lens and/or angle error in the mechanical alignment of the lasers.

Below is the before-and-after comparison on the 72-inch diagonal image.

laser alignment comparison 2

Note the overall effect (and the key point of the earlier article) of the projected image going further out of focus at smaller image sizes.  Even at 72 inches diagonal, the image is far from what should be considered sharp/in-focus, even after the re-calibration.

Below are the left and right sides of the 72-in diagonal image.  The green arrows show that there is minimal chroma aberration on the right side but a significant issue on the left side.  Additionally, you may note that the sets of parallel horizontal lines have lost all definition on the left and right sides and that the 1-pixel-wide targets are not resolved (compare to the center target above).  This loss of resolution on the sides of the image is inherent in Microvision’s scanning process.

Celluon 72-in diag left-right targets

Center left and center right of 72-in diag. after re-alignment (click on thumbnail for full resolution image)

While the re-alignment did make some parts of the image a little better defined, the nature of the laser scanning process could not fully resolve other areas.  In a future article I hope to get into this some more.

One other small correction from the earlier article: the images labeled “24-inch diagonal” are actually closer to 22 inches in diagonal.

Below are the high-resolution (20-megapixel) images for the 72-in, 22-in, and 12-in images after calibration.  I used a slightly different test pattern, which is also below (click on the various images for the high-resolution versions).

Celluon 72-in diag  recalibrated IMG_9783

Celluon 72-in diag re-calibrated (click for full size image)

Celluon 22-in diag  recalibrated IMG_9864

Celluon 22-in diag re-calibrated (click for full size image)

Celluon 12-in diag recalibrated IMG_9807

Celluon 12-in diag re-calibrated (click for full size image)


interlace res-chart-720P G100A

Test Chart for 1280×720 resolution (click for full resolution)

Just to verify that my camera/lens combination was in no way limiting the visible resolution of the projected image, I also took some pictures of about 1/3 of the image (to roughly triple the resolution) with an 85mm F1.8 “prime” (non-zoom) lens shot at F6.3 so it would show extremely fine detail (including the texture of the white wall the image was projected onto).

Below are the images showing the Center-Left, Center, and Center-Right resolution targets of the test chart above.  Among other things, notice how the resolution of the projected image drops from the center to the left and right, and also how the chroma/color aberrations/fringes are most pronounced in the center-left image.


Celluon 72-in diag 85mm Center-Left 9821

85mm Prime Lens Center Left Target and Lines (click for full size image)

Celluon 72-in diag 85mm lens center  9817

85mm Prime Lens Center Target and Lines (click for full size image)

Celluon 72-in diag 85mm center-right 9813

85mm Prime Lens Center-Right Target and Lines (click for full size image)

Karl

Celluon Laser Beam Scanning Projector Technical Analysis – Part 1

Celluon Light Path w800 IMG_8087

The Celluon PicoPro projector has been out for a few months now at about $359.  I have read a number of so-called “reviews” that were very superficial and did little more than turn on the projector, run a few pictures, and maybe make a video.  But I have not seen any serious technical analysis or review that really showed the resolution or measured anything beyond the lumens.  So I am going to be doing a multi-part technical analysis on this blog (there is just too much to cover in one article).

For the photo at the top, I took a picture with the lasers on to more clearly show the various light paths.  A surprise to many is that they used 5 lasers and not just three, which adds to the cost and complexity of the design.  They use two red and two green lasers to get to the spec’ed (and measured) brightness of 32 lumens.  In future articles, I will get into more details on the optical path and what is going on (there are a few “tricks” they are using).

It is no secret by now that the Celluon engine uses a beam scanning mirror from Microvision and that the optical engine and electronics are from Sony (the engine looks identical to the one Sony announced February 20, 2014).  Below I have taken the cover off the electrical part so you can see some of the chips. If you look carefully at the red arrows in the picture below, you can see the 3 clearly identified Sony ASICs used on the driver board (the 4th large chip is a Samsung SDRAM, and the smaller device is a Texas Instruments power supply chip; there are more power supply chips on the backside of the board).

Sony Devices IMG_9737

I have used test charts to measure the resolution, check the color control, and measure the power consumption.  I have also taken a look inside to see how it is made (per the pictures above).  I have collected data and many images, so the biggest problem for me is to boil this down into a manageable form for presentation on this blog.  I decided to start with just a bit about the resolution and a summary of some other issues.

Celluon claims the resolution is “1920 x 720” pixels, and no, that is not a typo on my part: they really claim “1920” horizontal resolution, as claimed by Sony in a press release on the engine.  It is easily provable that the horizontal resolution is much less than 1920 or even 1280 pixels, and the vertical resolution does not fully resolve 720 lines.  In fact, the effective/measurable resolution of the Celluon engine is closer to 640 by 360 pixels than it is to 1280×720.

PC Magazine’s April 22, 2015 article on the Celluon PicoPro made the oxymoronic statement, “the image has a slight soft-focus effect.”  To me, “soft-focus” means blurry, and indeed the image is in fact both blurry and lower in resolution.  The article also stated, “I also saw some reddish tinges in dark gray areas in some images, a problem that also showed up in a black-and-white movie clip.”  The image is definitely “off to the red” (white point at about 4000K), and it has very poor color control in the darker areas of the gray scale.

Resolution is a big topic and I have a lot of photos, but to get things started, below I have taken a center crop of a 1280×720 HDMI input displayed by the Celluon projector.  Below this image I have included the same crop of the source test pattern, zoomed in by 2X for comparison.  In the photo you will see a yellow measuring tape that was flush against the projection screen; this both shows the size of the projected image AND proves that the camera was focused well and had enough resolution to show pixels in the projected image.

Celluon test pattern comparison

720P Celluon Projected Image with Source Below It, with key comparison points indicated by the red ovals

You might want to look at the areas indicated by the red ovals, which correspond to the same areas of the projected image and the test pattern.  What you can see is that there is effectively no modulation/resolution in the sets of 1-pixel-wide vertical lines, so the horizontal resolution is below 1280 (more like about half of 1280).

There is some modulation of the horizontal lines in the center of the image, but not as much as you should get if this were truly 720p, and it fades out toward the left and right sides of the projected image (I will get into this more in a future article).
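
For anyone who wants to put a number on “effectively no modulation,” a simple metric is the Michelson contrast measured across a crop of the photographed line pattern.  The sketch below is hypothetical: the filename and crop box are placeholders to be replaced with your own photo and the location of one of the 1-pixel line patches.

```python
import numpy as np
from PIL import Image

# Hypothetical measurement sketch: the filename and crop coordinates are
# placeholders; crop a box around one of the 1-pixel-wide line patches.
img = Image.open("celluon_720p_photo.png").convert("L")    # load as grayscale
patch = np.asarray(img.crop((1000, 800, 1200, 900)), dtype=float)

profile = patch.mean(axis=0)   # average down the rows of the vertical lines
i_max, i_min = profile.max(), profile.min()
contrast = (i_max - i_min) / (i_max + i_min)
print(f"Michelson contrast: {contrast:.2f}")
# ~1.0 means fully resolved black/white lines; near 0 means the lines have
# blurred into a uniform gray (no modulation).
```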

You may also notice that the overall Celluon image is blurry.  Yes, I know lasers are supposed to “always be in focus,” but the image is definitely out of focus.  It turns out that at the size of this image (12 inches vertical or 24 inches diagonal, which is moderately big), the scanned laser beams are wider than a pixel and thus overlap.

The image is even blurrier if the image is, say, about 7 inches high, projected on a standard letter-size sheet of paper (the image is very blurry).  The blurriness goes down as the image gets bigger, but it is NEVER really sharp, even with a 72-inch diagonal image.  In a future article I will post the same test pattern at different image sizes to show the effect of image size on blurriness/focus.  I have started to call this “never in-focus technology.”
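
The geometry behind this is easy to check: the pixel pitch of a 16:9, 720-line image grows with the image diagonal, while the scanned beam width does not shrink to match.  The ~1mm beam width in the sketch below is an illustrative guess on my part, not a measurement:

```python
import math

BEAM_WIDTH_MM = 1.0   # assumed scanned-beam width; an illustrative guess

def pixel_pitch_mm(diag_inches, lines=720, aspect=16 / 9):
    """Pixel pitch of a 16:9, 720-line image at a given diagonal (inches)."""
    height_mm = 25.4 * diag_inches / math.sqrt(1 + aspect ** 2)
    return height_mm / lines

for diag in (12, 22, 24, 72):
    pitch = pixel_pitch_mm(diag)
    print(f"{diag:2d}-in diagonal: pixel pitch {pitch:.2f} mm, "
          f"beam covers ~{BEAM_WIDTH_MM / pitch:.1f} pixels")
# Small images: the beam smears across several pixels (blurry).
# Only around 72 inches does the beam approach one pixel in width.
```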

Some summary observations (more to come on these subjects):

  1. Laser Speckle – much improved over the previous Microvision ShowWX projectors.  It is still far from perfect and is most annoying where there are large flat areas and where there is text on a bright background.
  2. The Celluon eliminated the “bowtie” effect of the earlier Microvision ShowWX product, so the image is rectangular.
  3. They lost the 100% offset of the ShowWX, meaning that this projector requires a “stand”; otherwise the image will either be keystoned or the projector will be between the viewer’s eyes and the image.  This is bad/wrong for a short-throw projector.  There is no keystone correction supported by the product.
  4. Low effective resolution – absolutely nowhere close to 720p (see above, more on this in future articles).
  5. Blurry image – not the same, per se, as resolution.  The size of the laser beam appears to be bigger than a pixel until the image is very large (see the pixel-pitch sketch above).  Additionally, there are issues with aligning the 5 lasers into a single “beam” and issues with the interlaced bi-directional scan process (see http://www.kguttag.com/2012/01/09/cynics-guild-to-ces-measuring-resolution/ for more on the scan process and how it hurts resolution).
  6. Class 3R laser product – This is a very serious problem, as it is not safe for use with children (in fact, laser safety glasses are recommended), but this is not well marked.  The labels on the product are ridiculously tiny (particularly the one on the projector itself).  The EU is reported to be in the process of banning consumer products that emit 3R laser light (http://www.laserpointersafety.com/news/news/other-news_files/tag-european-union.php and http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32014D0059)
  7. Flicker – this is a serious problem with this product, and I will discuss it more in a later article (a rough timing sketch follows this list).  About 1 in 7 people I showed the projector to said it gave them headaches or other problems (multiple people told me to turn it off as it was painful to even be in the room with it).  The scan process is 60-hertz “interlaced” with no persistence (as with an old CRT).
  8. The power consumption is high, taking about 2.6W to show a totally black image and 6.1W for a totally white 32-lumen image, with the power consumption in between roughly proportional to the image content. Don’t let the lack of fans fool you: they are using heat spreading over the entire package to dissipate the heat from the projector.  The device will quickly overheat if left flat on a tabletop (as opposed to being held in the open air), as much of the heat is spread over the bottom of the package.  It will also overheat if a bright image is left on the screen for too long, even if the device is floating in air.
  9. The color/gray-scale control is pretty poor, particularly in the darker parts of a gray ramp.  At the dark end of the gray scale, the “gray” turns red.  Additionally, there is “crosstalk” caused by the lasers heating or cooling based on the brightness of one part of the screen, which affects the color/brightness on the other side of the screen.  In other words, the content of the image in one area will affect the color in another area (particularly horizontally).
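
On the flicker point (item 7 above), a back-of-envelope model of a 60-hertz interlaced, 720-line raster shows why it bothers people: each scan line is repainted only about 30 times a second, and each pixel is lit for only tens of nanoseconds per pass with nothing glowing in between.  This is my rough model of the scan process (ignoring retrace and blanking time), not Sony/Microvision data:

```python
# Back-of-envelope timing for a 60 Hz interlaced, 720-line raster.
# A rough model that ignores retrace/blanking; not Sony/Microvision data.
fields_per_sec = 60
lines_total = 720
pixels_per_line = 1280
lines_per_field = lines_total // 2     # interlace: odd/even line fields

line_refresh_hz = fields_per_sec / 2   # a given line is in every other field
line_time_us = 1e6 / (fields_per_sec * lines_per_field)
pixel_dwell_ns = 1e3 * line_time_us / pixels_per_line

print(f"Each line repainted at ~{line_refresh_hz:.0f} Hz")
print(f"Time per scan line: ~{line_time_us:.1f} microseconds")
print(f"Dwell time per pixel: ~{pixel_dwell_ns:.0f} nanoseconds")
# ~30 Hz per line, delivered as nanosecond flashes with no persistence,
# is well below a comfortable flicker-fusion rate for many viewers.
```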

I have seen Microvision laser-scanned projectors since the Microvision ShowWX came out in 2010, 5 years ago, and the Celluon unit has many of the same issues that I found with the ShowWX.  While the Celluon is much improved in terms of brightness and speckle, has better resolution (though not near what is claimed), and delivers about 3X the brightness for about the same power (much of this due to laser improvements over the last 5 years), the progress is very modest considering that 5 years have passed.

Frankly, I still consider this technology far from ready for “prime time” high volume, and it still has some major and in many ways fatal flaws.  Being laser safety Class 3R at only 32 lumens is chief among them.  The flicker I also consider to be a fatal problem for a consumer product, but this perhaps could be solved by going to a higher refresh rate (which would require a much faster scanning mirror).  The power consumption is far too high for embedding into small portable products.

And then we come back to the issues with the “use model” that still exist with pico projectors (see my discussion from way back in 2011 about this).

On a final note, I know that Laser Beam Scanning has a very dedicated following, with some people who vigorously defend it.  I will be providing test patterns and other information so people can duplicate my experiments and verify my results.  I am more than happy to discuss the technology and respond to dissenting opinions, but I won’t tolerate rude comments or personal attacks in the discussion.

Addendum — Test Patterns

Below are some test patterns stored in lossless PNG format to try out on the Celluon or any other 720p projector to see for yourself.

Right-click on a given pattern to download the original full-size pattern. Note: they should be viewed at “100%” if not on a 720p monitor and should totally fill the screen on a 720p projector.

The first one below is a resolution test with 9 “zone patterns” as well as sets of 1-pixel-wide black and white horizontal and vertical lines.

interlace res-chart-720P G100A


Simple horizontal gray ramp.  This is totally neutral gray from 0 to 255.

Horz 0 to 255 gray ramp

The pattern below may look dark gray or even black, but it is a totally flat R=G=B=16 everywhere (a flat gray of 16/255).  See how it looks on the Celluon.

gray 16
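
For anyone who wants to generate their own charts to reproduce these experiments, below is a minimal Python (NumPy + Pillow) sketch that creates 720p patterns in the same spirit: 1-pixel line patches, the 0-to-255 horizontal gray ramp, and the flat gray-16 field.  These are recreations under my own assumptions about the layout, not the exact files above; as noted, display them at 100% and full screen on a 720p output.

```python
import numpy as np
from PIL import Image

W, H = 1280, 720   # 720p

# --- 1-pixel black/white line patches on a mid-gray field ---
pat = np.full((H, W), 128, dtype=np.uint8)
v = (255 * (np.arange(200) % 2)).astype(np.uint8)   # alternating 0/255 values
pat[H//2 - 50:H//2 + 50, 100:300] = v               # vertical 1-px lines
pat[100:300, W//2 - 50:W//2 + 50] = v[:, None]      # horizontal 1-px lines
Image.fromarray(pat, "L").save("lines_720p.png")    # PNG keeps it lossless

# --- Horizontal gray ramp, totally neutral, 0 to 255 ---
ramp = np.tile(np.linspace(0, 255, W).astype(np.uint8), (H, 1))
Image.fromarray(ramp, "L").save("gray_ramp_720p.png")

# --- Flat R=G=B=16 field ---
flat16 = np.full((H, W, 3), 16, dtype=np.uint8)
Image.fromarray(flat16, "RGB").save("gray16_720p.png")
```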