
Celluon Laser Beam Scanning Power Consumption (Over 6 Watts at 32 Lumens)

On the left are a series of power measurements I made on the Celluon PicoPro projector, with an optical engine designed by Sony using a Microvision scanning mirror.  The power was calculated from the voltage and current coming from the battery while driving the projector over its HDMI input.

The first 6 measurements were with a solid image of the black/white/color indicated.  For the last 3 measurements I used an image that was half black on the left and half white on the right, an image with the top half black, and a screen of 1-pixel-wide vertical stripes.    The reason for the various colors/patterns was to gain some additional insight into the power consumption (this will be covered in a future article).  In addition to the power (in Watts), I added a column with the delta power relative to the black image.


Picture of Celluon PicoPro Battery

The Celluon PicoPro consumes 2.57 Watts for a fully black image (there are color lines at the bottom, presumably for laser brightness calibration) and 6.14W for a 32-lumen full-white image.   When you consider that a smartphone running GPS only consumes about 2.5W and a smartphone LCD on full brightness consumes about 1W to 1.5W, over 6W is a lot of power (Displaymate has an excellent article on smartphone displays that includes their power consumption).   The Celluon has a 3,260mAh / 12.3Wh battery, which is bigger than what goes into even large smartphones (and fills most of the left side of the case).

So why does the Celluon unit not need a fan?  The answer is that (A) it only outputs 32 lumens and (B) it uses a lot of thermal management built into the case to spread the heat from the projector.  In the picture below I have shown some of the key aspects of the thermal management.  I have flipped over the projector and indicated with dashed rectangles where the thermal pads (a light blue color) attach to the projector unit.  In addition to the cast aluminum body that holds the lasers and the optics and acts as a heat sink to spread the heat, there is gray flexible heat-spreading material lining the entire top and bottom of the case, plus, more hidden, a heat sink chamber essentially dedicated to the lasers, as well as aluminum fins around the sides of the case.

2015-07-22_Case Heat Sinking 003

The heat-spreading material on the left (as viewed) top of the case is pretty much dedicated to the battery, but all the rest of the heat spreading, particularly along the bottom of the case, goes to the projector.

The most interesting feature is that there is a dedicated heat path from the area where the lasers are held in the cast body to a heat sink in a “hidden chamber,” or what I have nicknamed “the thermal corset.”   You should notice that there are three (3) light blue heat pads on the right side of the case top and that the middle one is isolated from the other two.  This middle one is also thicker and goes through a hole in the main case body to a chamber that is filled with a heat sink material and then covered with an outer case.   This also explains why the Celluon unit looks like it is in two parts from the outside.

Don’t get me wrong, having a fanless projector is desirable, but it is not due to the “magic” of using lasers.  Quite to the contrary, the Celluon unit has comparatively poor lumens per Watt, taking about double the power a similar DLP projector would take for the same lumens.

You may want to notice in the table that if you add up the “delta” red, green, and blue, it totals a lot more than the delta white.  The reason for this is that the Celluon unit never puts out “pure,” fully saturated primary colors.  It always mixes in a significant amount of the other two colors (I have verified this with several methods, including using color filters over the output and using a spectrometer).    This has to be done (and is done with LED projectors as well) so that the colors called for by standard movies and pictures are not over-saturated (if you don’t do this, green grass, for example, will look like it is glowing).

Another interesting result is that the device consumes more power if I put up a pattern where the left half is black and the right half is white rather than having the top half black and the bottom half white.   This probably has something to do with laser heating and the lasers not getting a chance to cool down between lines.

I also put up a pattern with alternating 1-pixel-wide vertical lines, and it should be noted that its power is between that of the left/right half-screen image and the full white image.

So what does this mean in actual use?   With “typical” movie content, the image averages about 25% to 33% (depending on the movie) of full white, so the projector will be consuming about 4 Watts, which with a 12.3Wh battery gives about 3 hours of use.   But if you are web browsing, the content is often more like 90% of full white, so it will be consuming over 6W, or 4 to 6 times what a typical smartphone display consumes.    Note this is before you add in the power consumed in getting and processing the data (say, from the internet).
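As a rough sanity check on these run-time numbers, here is a minimal sketch in Python, assuming (my assumption, not a measurement) that power scales linearly with average picture level between the measured black and white figures:

```python
# Rough battery-life estimate for the Celluon PicoPro, assuming power
# scales roughly linearly with average picture level (APL) between the
# measured black (2.57W) and full-white (6.14W) figures.

P_BLACK = 2.57     # Watts, measured with a full black image
P_WHITE = 6.14     # Watts, measured with a full white (32-lumen) image
BATTERY_WH = 12.3  # Watt-hours (3,260mAh pack)

def power_at_apl(apl: float) -> float:
    """Estimated power draw at a given average picture level (0.0-1.0)."""
    return P_BLACK + apl * (P_WHITE - P_BLACK)

for label, apl in [("movie (~30% APL)", 0.30), ("web browsing (~90% APL)", 0.90)]:
    p = power_at_apl(apl)
    print(f"{label}: ~{p:.1f}W -> ~{BATTERY_WH / p:.1f} hours on battery")
```

With these assumptions the movie case comes out to roughly 3.6W and a bit over 3 hours, in line with the estimates above.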

Conclusion

The Celluon projector may be fanless, but not because it is efficient.  From a product perspective, it does do a good job with its “thermal corset” of hiding/managing the heat.

This study works from the “top down” by measuring the power and seeing where the heat goes in the case.  Next time, I plan to work up some “bottoms-up” numbers to help show what causes the high power consumption and how it might change in the future.

Celluon/Sony/Microvision Optical Path

Celluon Light Path Labeled (KGOnTech)

Today I’m going to give a bit of a guided tour through the Celluon optical path.  This optical engine was developed by Sony, probably based on Microvision’s earlier work, and uses Microvision’s scanning mirror.   I’m going to give a “tour” of the optics and then give some comments on what I see in terms of efficiency (light loss) and cost.

Referring to the picture above and starting with the lasers at the bottom, there are 5 of them (two each of red and green and one blue) in a metal chassis (not visible in the picture).   Each laser goes to its own beam-spreading and alignment lens set.  These lenses enlarge the diameter of each laser beam, and they are glued in place after alignment.  Note that the beams at this point are spread wider than the size of the scanning mirror and will be converged/focused back later in the optics.

Side Note: One reason for spreading the laser beams wider than the scanning mirror is to reduce the precision required of the optical components (making very small, high-precision optics with no or extremely small defects becomes exponentially expensive).  But a better explanation is that it supports the despeckling process.  With the wider beam they can pass the light through more different paths before focusing it back.  There is a downside to this, as seen in the Celluon output: the beam is still too big when exiting the projector, and thus the images are out of focus at short projection distances.

After the beam-spreading lenses there is a glass plate at a 45-degree angle that splits part of the light from the lasers down to a light sensor for each laser.   The light sensors are used to give feedback on the output of each laser and to adjust them based on how they change with temperature and aging.

Side Note:  Laser heating and the resulting change in laser output is a big issue with laser scanning.  The lasers change in temperature/output very quickly.  In tests I have done, you can see bright objects on one side of the screen affecting the color on the other side of the screen in spite of the optical feedback.

Most of the light from the sensor deflector continues to a complex structure of about 15 different pieces of optically coated solid glass elements glued together into a many-faceted structure.  There are about 3 times as many surfaces/components as would be required for simply combining 3 laser beams.   This structure is being used to combine the various colors into a single beam and has some speckle-reducing structures.  As will be discussed later, having the light go through so many elements, each with its optical losses (and cost), results in losing over half the light.

For reference, compare this to the optical structure shown in the Lenovo video for their prototype laser projector in a smartphone at left (which uses an STMicro engine).  There are just 3 lenses, 1 mirror (for red), two dichroic plate combiners to combine the green and blue, and a flat window.  The Celluon/Sony/Microvision engine by comparison is using many more elements, and instead of simple plate combiners they are using prisms which, while having better optical performance, are considerably more expensive.  The Lenovo/STM engine does not show/have the speckle reduction elements nor the distortion correction elements (its two-mirror scanning process inherently has less distortion) of the Celluon/Sony design.

Starting with the far left red laser light path, it goes to a “Half Mirror and 2nd Mirror” pair.   This two-mirror assembly is likely there for speckle reduction.  Speckle is caused by light interfering with itself, and by having the light follow different path lengths (the light off the 2nd mirror follows a slightly longer path) it reduces the speckle.  The next element is a red-pass/green-reflect dichroic mirror that combines the left red and green lasers, followed by a red&green-pass/blue-reflect dichroic combiner.

Then working from the right, there is another speckle reduction half-mirror/2nd-mirror pair for the right hand green laser followed by a green-pass/red-reflect dichroic mirror to combine the right side green and red lasers.  A polarizing combiner is (almost certainly) used to combine the 3 lasers on the left with the two lasers on the right into a single beam.

After the polarizing combiner there is a mirror that directs the combined light through a filter encased between two glass plates.  Most likely this filter either depolarizes or circularly polarizes the light because on exiting this section into the open air the previously polarized laser light has little if any linear polarization.   Next the light goes through a 3rd set of despeckling mirror pairs.   The light reflects off another mirror and exits into a short air gap.

Following the air gap there is a “Turning Block” that is likely part of the despeckling.   The material in the block probably has some light-scattering properties to slightly vary the light path length and thus reduce speckle, which would explain the size/thickness of the block.   There is a curved light entry surface that will have a lens effect.

Light exiting the Turning Block goes through a lens that focuses the spread light back to a smaller beam that will reflect off the beam scanning mirror.  This lens sets the way the beam diverges after it exits the projector.

After the converging lens the light reflects off a mirror that sends the light into the beam scanning mirror assembly.  The beam scanning mirror assembly, designed by Microvision, is its own complex structure and among other things has some strong magnets in it (supporting the magnetic mirror deflection).

Side Note: The STM/bTendo design in the Lenovo projector uses two simpler mirrors that each move in only one axis rather than a single complex mirror that has to move in two axes.  The STM mirrors likely both use a simple electrostatic-only design, whereas Microvision’s dual-axis mirror uses electrostatic drive for one direction and electromagnetic drive for the other.

Finally, the light exits the projector via a Scanning Correction Lens that is made of plastic.  It appears to be the only plastic optical element among all the elements that could be easily accessed.   Yes, even though this is a laser scanning projector, it still has a correction lens, in this case to correct the otherwise “bow-tie” distorted scanning process.

Cost Issues

In addition to the obvious cost of the lasers (and needing 5 of them rather than just 3) and the Scanning Mirror Assembly, there are a large number of optically coated glass elements.  Additionally, instead of using lower-cost plate elements, the Celluon/Sony/Microvision engine uses much more expensive solid prisms for the combiner and despeckling elements.   Each of these has to be precisely made, coated, and glued together.  The cost of each element is a function of the quality/optical efficiency, which can vary significantly, but I would think there would be at least $20 to $30 of raw cost in just the glass elements even at moderately high volumes (and it could be considerably more).

Then there is a lot to assemble with precise alignment of all the various optics.  Finally, all of the lasers must be individually aligned after the unit, with all the other elements, has been assembled.

Optical Efficiency (>50% of the laser light is lost)

The light in the optical engine passes through and/or reflects off a large number of optical interfaces, and there are light losses at each of these interfaces.  It is “death by a thousand cuts”: while each element might have a 1% to 10% or greater loss, the effects are multiplicative.   The use of solid rather than plate optics reduces the losses, but at added cost.  In the picture you can see spots of colored light on the walls of the chassis that have “escaped” the optical path and are lost.  You can also see light glowing off optical elements, including the lens; all of this is lost light.  The light that goes to the light sensors is also lost.


Laser Warning Label From Celluon Case

Some percentage of the light that is spread will not be converged back onto the mirror.  Additionally, there are scattering losses in the Correction Lens and Turning block and in the rest of the optics.

When it is multiplied out, more than 50% of the laser light is lost in the optics.

This 50% light loss percentage agrees with the package labeling (see picture at left) that says the laser light output for green is 50mW, even though they are using two green lasers, each of which likely outputs 50mW or more.
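To illustrate how these multiplicative losses add up, here is a minimal sketch; the element count and per-element loss figures are my illustrative assumptions, not measured values:

```python
# Cumulative transmission through a chain of optical elements.
# Per-element losses multiply, so many small losses compound into a
# large one. The element count and loss figures below are illustrative
# assumptions, not measured values for the Celluon engine.

def chain_transmission(losses):
    """Return overall transmission given per-element fractional losses."""
    t = 1.0
    for loss in losses:
        t *= (1.0 - loss)
    return t

# Say ~15 elements averaging 4% loss each, plus a few percent tapped
# off to the laser-feedback sensors:
elements = [0.04] * 15
sensor_tap = 0.03
total = chain_transmission(elements) * (1.0 - sensor_tap)
print(f"Overall transmission: {total:.0%}")  # ~53%, i.e. roughly half the light lost
```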

Next Time: Power Consumption

The Celluon system consumes ~2.6 Watts to put up a “black” image and ~6.1 Watts to put up a 32-lumen white image.  The delta between white and black is about 3.5 Watts, or about 9 lumens per delta-Watt from black to white.  For reference, newer DLP projectors using LEDs can produce about double the delta lumens per Watt.  Next time, I plan on drilling down into the power consumption numbers.
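The delta-efficiency arithmetic, as a quick check:

```python
# Quick check of the delta-lumens-per-Watt figure: attribute only the
# white-minus-black power to actually generating light.
P_BLACK = 2.6    # Watts, black image
P_WHITE = 6.1    # Watts, 32-lumen white image
LUMENS = 32

delta_watts = P_WHITE - P_BLACK                              # ~3.5W
print(f"~{LUMENS / delta_watts:.1f} lumens per delta-Watt")  # ~9.1
```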

Celluon LBS Analysis Part 2B – “Never In-Focus Technology” Revisit


Alignment target after re-alignment (click for bigger image)

I received concerns that the chroma aberrations (color fringes) seen in the photos in Part 2 were caused by poor alignment of the lasers.   I had aligned the lasers per Celluon’s instructions before running the tests, but I decided to repeat the alignment to see if there would be a difference.

After my first redo of the alignment I noticed that the horizontal resolution got slightly better in places but the vertical resolution got worse.   The problem I identified is that the alignment procedure does not make aligning the pairs of red and green lasers easy.  The alignment routine turns all 5 lasers on at once, which makes it very difficult to see pairs of lasers of the same color.

To improve on the procedure, I put a red color filter in front of the projector output to eliminate the blue and two green lasers and then aligned the two red lasers to each other.  Then, using a green color filter, I aligned the two green lasers.  I did this both horizontally and vertically.   On this first pass I didn’t worry about the other colors.  On the next pass I moved the red pair, always by the same amount horizontally and vertically, and similarly for the green pair.  I went around this loop a few times trying for the best possible alignment (see picture of the alignment image above).

After the re-alignment I did notice some slightly better horizontal resolution in the vertical lines (but not that much, and not everywhere) and some very slight improvement in the vertical resolution.   There were still large chroma aberrations, particularly on the left side of the image (much less so on the right side), that some had claimed were “proof” that the lasers were horribly aligned (they were not, even before).   The likely cause of the chroma aberrations is the output lens and/or angle error in the mechanical alignment of the lasers.

Below is the before-and-after comparison on the 72-inch diagonal image.

Note the overall effect (and the key point of the earlier article) of the projected image going further out of focus at smaller image sizes.   Even at 72-inch diagonal the image is far from what should be considered sharp/in-focus, even after the re-calibration.

Below are the left and right sides of the 72-in diagonal image.  The green arrows show that there is minimal chroma aberration on the right side but a significant issue on the left side.   Additionally, you may note the sets of parallel horizontal lines have lost all definition on the left and right sides, and the 1-pixel-wide targets are not resolved (compare to the center target above).   This loss of resolution on the sides of the image is inherent in Microvision’s scanning process.


Center left and center right of 72-in diag. after re-alignment (click on thumbnail for full resolution image)

While the re-alignment did make some parts of the image a little more defined, the nature of the laser scanning process could not fully resolve other areas.   In a future article I hope to get into this some more.

One other small correction from the earlier article: the images labeled “24-inch diagonal” are actually closer to 22 inches in diagonal.

Below are the high-resolution (20-megapixel) images for the 72-in, 22-in, and 12-in images after calibration.  I used a slightly different test pattern, which is also below (click on the various images for the high-resolution version).


Celluon 72-in diag re-calibrated (click for full size image)


Celluon 22-in diag re-calibrated (click for full size image)


Celluon 12-in diag re-calibrated (click for full size image)



Test Chart for 1280×720 resolution (click for full resolution)

Just to verify that my camera/lens combination was in no way limiting the visible resolution of the projected image, I also took some pictures of about 1/3 of the image (to roughly triple the sampling resolution) with an 85mm F1.8 “prime” (non-zoom) lens shot at F6.3 so it would show extremely fine detail (including the texture of the white wall the image was projected onto).

Below are the images showing the Center-Left, Center, and Center-Right resolution targets of the test chart above.   Among other things, notice how the resolution of the projected image drops from the center to the left and right, and also how the chroma/color aberrations/fringes are most pronounced in the center-left image.


85mm Prime Lens Center Left Target and Lines (click for full size image)


85mm Prime Lens Center Target and Lines (click for full size image)


85mm Prime Lens Center-Right Target and Lines (click for full size image)

Karl

Celluon Laser Beam Steering Analysis Part 2 – “Never In-Focus Technology”

June 6th 2015 – Note, I am in the process of updating this analysis with new photos.  The results are not dramatically different but I was able to improve the horizontal resolution slightly and now have some better pictures.    

One of the first things I noticed when projecting test pattern images with the Celluon PicoPro was that the images were very blurry.   I later found out that the smaller the image, the blurrier it became.

To the left are high-resolution center crops of images taken at a 12-inch diagonal (about as big as you can get on a letter-size sheet of paper), a 24-inch diagonal (about as big as fits on a standard “B” size sheet of paper), and a 72-inch diagonal projected on a wall.   For reference I have also included the same portion of the source, magnified 3x.

As you should notice, the 12-in diagonal image is completely blurry even at 1/2 the stated resolution.  With the 24-inch diagonal you can start to see some “modulation” of the single-pixel-size lines horizontally but not vertically.  With the 72-inch diagonal the horizontal lines are pretty clear, but the vertical lines are still pretty much a blur (on close visual inspection there is a little modulation of the single-pixel-wide lines).

What is happening is that the size of the laser beams is larger than the pixel size for small images.  The beam diverges, but at a slower rate than the image grows, so eventually the laser beam becomes smaller than a “pixel” and you start to see separation between horizontal 1-pixel-wide lines.

As for the horizontal resolution, whatever is driving the lasers in their horizontal sweep is not able to fully modulate them at single pixel resolution.
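To make the beam-size-versus-pixel-size geometry concrete, here is a minimal sketch; the exit beam diameter, divergence, and throw ratio are illustrative assumptions, not measurements of the PicoPro:

```python
# Compare laser spot size to pixel size as the projected image grows.
# The exit beam diameter, divergence, and throw ratio below are
# illustrative assumptions, not measured values for the PicoPro.
import math

BEAM_EXIT_MM = 1.5     # assumed beam diameter leaving the projector (mm)
DIVERGENCE_MRAD = 1.0  # assumed full-angle beam divergence (milliradians)
H_PIXELS = 1280        # claimed horizontal resolution
ASPECT = 16 / 9

def spot_vs_pixel(diag_inches, throw_ratio=1.0):
    """Return (spot_mm, pixel_mm) at a given image diagonal."""
    width_mm = diag_inches * 25.4 * ASPECT / math.hypot(ASPECT, 1)
    distance_mm = width_mm * throw_ratio
    spot_mm = BEAM_EXIT_MM + distance_mm * DIVERGENCE_MRAD / 1000.0
    pixel_mm = width_mm / H_PIXELS
    return spot_mm, pixel_mm

for diag in (12, 24, 72):
    spot, pixel = spot_vs_pixel(diag)
    print(f"{diag}-in diagonal: spot ~{spot:.2f}mm vs pixel ~{pixel:.2f}mm")
```

With these assumed numbers the spot is many pixels wide at 12 inches and only approaches the pixel size at 72 inches, which matches the trend in the photos: the image sharpens as it grows because the pixels grow faster than the beam does.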

For the next set of 3 images (plus a 2x-magnified source) I have scaled the images down so you can see more area.  Note you need to click on an image to see it at its intended size and to see the detail.  In these pictures you can see the ruler, which both indicates the size of the image and shows that the camera was in focus and could see the detail if it were in the projected image.

On the 24-inch and 72-in diagonal images I have drawn 3 ovals.  The left oval is around a set of 4 line pairs (see source image) of horizontal and vertical lines.   The middle and right ovals are each around 4 line pairs of vertical lines and two sets of 4 pairs of horizontal lines; where the horizontal and vertical lines cross there is a set of 9 white pixels (never visible in any of the projected images).

Looking at the 72-inch image you may notice that you can barely make out the horizontal line pairs in the center oval but that they become blurry in the right oval.  This is due to the interlaced Lissajous scanning being done (for more detail on the Microvision interlaced scanning process see: http://www.kguttag.com/2012/01/09/cynics-guild-to-ces-measuring-resolution/).  The net effect of this scanning process is that vertical resolution is reduced from the center to the left and right sides.

Image Size Comparison

The 5-year-old Microvision ShowWX had this same blurring issue with small images.  In looking inside at the optics with the lasers on, I noticed that the laser spot sizes were larger than expected.  I’m left wondering if the larger laser spot sizes were at least in part caused by efforts to reduce speckle or by something else.

Next time, I plan on giving a little “tour” of the optics.

Addendum – How the pictures were taken, full resolution images, and source pattern used

All the pictures were taken with a Canon 70D (5472 by 3648 pixel) DSLR.  By framing the pictures so that the projected image filled roughly 90% of the width, there were roughly 4 camera-pixel “samples” per pixel in the output image.   The ruler in the picture was both to keep track of the size of the image and to make sure the camera was in focus and could resolve single pixels (if they were there).
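The oversampling arithmetic, as a quick check:

```python
# Quick check of the camera oversampling: a 5472-pixel-wide sensor,
# framed so the 1280-pixel-wide projected image fills ~90% of the width.
CAMERA_WIDTH_PX = 5472
IMAGE_WIDTH_PX = 1280
FRAME_FILL = 0.90

samples = CAMERA_WIDTH_PX * FRAME_FILL / IMAGE_WIDTH_PX
print(f"~{samples:.1f} camera samples per projected pixel")  # ~3.8
```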

I did selectively zoom in with the camera on smaller regions to see if it made any measurable difference in resolving features in the images and it did not.  I have included the test pattern I used and would welcome anyone using it to verify what I have shown.

By clicking on the thumbnails below you will bring up the full size image (depending on your browser it may not display full size until after you click on the magnifying glass).  You can then right click to download the images.   Each image is about 8 to 9 Megabytes and is stored in a high quality (low compression) JPG format.   The source test pattern is stored in loss-less PNG.


12-in Diagonal Celluon Image (20 megapixels-click to see full size image)


24-in Diagonal Celluon Image (20 megapixels click to see full size image)


72-in Diagonal Celluon Image (20 megapixels click to see full size image)


Test Pattern Source (1280×720 pixels PNG format, click for full size image)

Celluon Laser Beam Scanning Projector Technical Analysis – Part 1

The Celluon PicoPro projector has been out for a few months now for about $359.   I have read a number of so-called “reviews” that were very superficial and did little more than turn on the projector, run a few pictures, and maybe make a video.   But I have not seen any serious technical analysis or review that really showed the resolution or measured anything beyond the lumens.   So I am going to be doing a multi-part technical analysis on this blog (there is just too much to cover in one article).

In the photo at the top, I took a picture with the lasers on to more clearly see the various light paths.  A surprise to many is that they used 5 lasers and not just three, which adds to the cost and complexity of the design.   They use two red and two green lasers to get to the spec’ed (and measured) brightness of 32 lumens.   In future articles, I will get into more details on the optical path and what is going on (there are a few “tricks” they are using).

It is no secret by now that the Celluon engine uses a beam scanning mirror from Microvision and that the optical engine and electronics are from Sony (the engine looks identical to the one Sony announced February 20, 2014).  Below I have taken the cover off the electrical part so you can see some of the chips.  If you look carefully at the red arrows in the picture below, you can see the 3 clearly identified Sony ASICs used on the driver board (the 4th large chip is a Samsung SDRAM and the smaller device is a Texas Instruments power supply chip — there are more power supply chips on the backside of the board).

Sony Devices IMG_9737

I have used test charts to measure the resolution, check the color control, and measure the power consumption.   I have also taken a look inside to see how it is made (per the pictures above).    I have collected data and many images, so the biggest problem for me is to boil this down into a manageable form for presentation on this blog.   I decided to start with just a bit about the resolution and a summary of some other issues.

Celluon claims the resolution is “1920 x 720” pixels, and no, that is not a typo on my part; they really claim “1920” horizontal resolution, as claimed by Sony in a press release on the engine.  It is easily provable that the horizontal resolution is much less than 1920 or even 1280 pixels, and the vertical resolution is not up to fully resolving 720 lines.   In fact, the effective/measurable resolution of the Celluon engine is closer to 640 by 360 pixels than it is to 1280×720.

PC Magazine’s April 22, 2015 article on the Celluon PicoPro made the oxymoronic statement that “the image has a slight soft-focus effect.”  To me “soft-focus” means blurry, and indeed the image is in fact both blurry and lower in resolution.   The article also stated, “I also saw some reddish tinges in dark gray areas in some images, a problem that also showed up in a black-and-white movie clip.”   The image is definitely “off to the red” (white point at about 4000K) and it has very poor color control in the darker areas of the gray scale.

Resolution is a big topic and I have a lot of photos, but to get things started, below I have taken a center crop of a 1280×720 HDMI input into the Celluon projector.   Below this image I have included the same crop of the test pattern input, zoomed in by 2X for comparison.   In the photo you will see a yellow measuring tape that was flush against the projection screen; this both shows the size of the projected image AND proves that the camera was focused well and had enough resolution to show pixels in the projected image.


720P Celluon Projected Image with Source Below It, with key comparison points indicated by the red ovals

You might want to look at the various areas indicated by the red ovals corresponding to the same areas of the projected image and the test pattern.  What you can see is that there is effectively no modulation/resolution of the sets of 1-pixel-wide vertical lines, so the horizontal resolution is below 1280 (more like about half of 1280).

There is some modulation, but not as much as you should get if this were truly 720p, of the horizontal lines in the center of the image, but this fades out towards the left and right sides of the projected image (I will get into this more in a future article).

You may also notice that the overall Celluon image is blurry.  Yes, I know lasers are supposed to “always be in focus,” but the image is definitely out of focus.   It turns out that at the size of this image (12 inches vertical or 24 inches diagonal, which is moderately big), the scanned laser beams are wider than a pixel and thus overlap.

The image is even more blurry if it is, say, about 7 inches high, projected on a standard letter-size sheet of paper.  The blurriness goes down as the image gets bigger, but it is NEVER really sharp, even with a 72-inch diagonal image.   In a future article I will post the same test pattern at different image sizes to show the effect of image size on blurriness/focus.  I have started to call this “never in-focus technology.”

Some summary observations (more to come on these subjects):

  1. Laser Speckle – much improved over the previous Microvision ShowWX projectors.   It is still far from perfect and most annoying where there are large flat areas and text on a bright background.
  2. The Celluon eliminated the “bowtie” effect of the earlier Microvision ShowWX product, so the image is rectangular.
  3. They lost the 100% offset of the ShowWX, meaning this product requires a “stand,” and the image will either be keystoned or the projector will be between the viewer’s eyes and the image.  This is bad/wrong for a short-throw projector.  There is no keystone correction supported by the product.
  4. Low effective resolution – absolutely nowhere close to 720p (see above, more on this in future articles).
  5. Blurry image – not per se the same as resolution.  The size of the laser beam appears to be bigger than a pixel until the image is very large.  Additionally there are issues with aligning the 5 lasers into a single “beam” and issues with the interlaced bi-directional scan process (see http://www.kguttag.com/2012/01/09/cynics-guild-to-ces-measuring-resolution/ for more on the scan process and how it hurts resolution).
  6. Class 3R laser product – This is a very serious problem, as it is not safe for use with children (in fact laser safety glasses are recommended), but this is not well marked.  The labels on the product are ridiculously tiny (particularly the one on the projector itself).  The EU is reported to be in the process of banning consumer products that emit Class 3R laser light (http://www.laserpointersafety.com/news/news/other-news_files/tag-european-union.php and http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32014D0059)
  7. Flicker – this is a serious problem with this product, and I will discuss more about it in a later article.  About 1 in 7 people I showed the projector to said it gave them headaches or other problems (I had multiple people tell me to turn it off as it was painful to even be in the room with it).  The scan process is 60-hertz “interlaced” with no persistence (as with an old CRT).
  8. The power consumption is high, taking about 2.6W to show a totally black image and 6.1W for a totally white 32-lumen image, with the power consumption in between roughly proportional to the image content.  Don’t let the lack of fans fool you; they are using heat spreading over the entire package to dissipate the heat from the projector.  The device will quickly overheat if left on a tabletop (which blocks the bottom of the case), as much of the heat is spread over the bottom of the package.  It will also overheat if a bright image is left on the screen for too long, even if the device is floating in air.
  9. The color/gray-scale control is pretty poor, particularly in the darker parts of a gray ramp.  At the dark end of the gray scale the “gray” turns red.  Additionally there is “crosstalk,” caused by the lasers heating or cooling based on the brightness of one part of the screen, that affects the color/brightness on the other side of the screen.   In other words, the content of the image in one area will affect the color in another area (particularly horizontally).

I have seen Microvision laser-scanned projectors since the Microvision ShowWX came out in 2010, five years ago, and the Celluon unit has many of the same issues I found with the ShowWX.  The Celluon is much improved in terms of brightness and speckle, has better resolution (though not near what is claimed), and delivers about 3X the brightness for about the same power (much of this due to laser improvements over the last 5 years), but the progress is very modest considering that 5 years have passed.

Frankly, I still consider this technology far from ready for “prime time” high volume, and it still has some major and in many ways fatal flaws.  Being laser safety Class 3R at only 32 lumens is chief among them.  The flicker I also consider to be a fatal problem for a consumer product, though this perhaps could be solved by going to a higher refresh rate (which would require a much faster scanning mirror).   The power consumption is far too high for embedding into small portable products.

And then we come back to the issues with the “use model” that still exists with Pico Projectors (see my discussion from way back in 2011 about this).

On a final note, I know that Laser Beam Scanning has a very dedicated following, with some people that vigorously defend it.   I will be providing test patterns and other information so people can duplicate my experiments and verify my results.   I am more than happy to discuss the technology and respond to dissenting opinions, but I won’t tolerate rude comments or personal attacks in the discussion.

Addendum — Test Patterns

Below are some test patterns stored in lossless PNG format to try out on the Celluon or other 720p projector to see for yourself.

Right-click on a given pattern to download the original full-size pattern.  Note, they should be viewed at “100%” if not on a 720p monitor and should totally fill the screen on a 720p projector.

The first one below is a resolution test with 9 “zone patterns” as well as sets of 1-pixel-wide black and white horizontal and vertical lines.

interlace res-chart-720P G100A


Simple horizontal gray ramp.  This is a totally neutral gray from 0 to 255.

The image below may look dark gray or even black, but it is a totally flat R=G=B=16 everywhere (a flat gray of 16/255).   See how it looks on the Celluon.

gray 16


Magic Leap CSI: Display Device Fingerprints

Introduction

I have gotten a lot of questions as to how I could be so sure that Magic Leap (ML) was using Micro-OLEDs in all their “Through Magic Leap Technology” videos and not, say, a scanning fiber display as so many had thought. I was in a hurry to get people to the conclusion. For this post, I am going to step back and show how I knew. When video and still pictures of display devices are taken with a camera, every display type has its own identifiable “fingerprint,” but you have to know where to look.

Sometimes in video it might only be a few frames that give the clue as to the device being used. In this article I am going to use cropped images from videos for most of the technologies, showing their distinctive artifacts as seen by the camera; but for laser scanning the distinctive artifacts are best seen in the whole image, so I am going to use thumbnail-size images.

This article should not be new information to this blog’s readers, but rather it details how I knew what technology was in the ML “through the technology” videos. For the plot twist at the end, you have to know to parse ML’s words: “the technology” is not what they are planning on using in their actual product. The ML “through the technology” videos use a totally different technology than what they plan to use in the product.

Most Small Cameras Today Use a Rolling Shutter

First it is important to understand that cameras capture images much differently than the human eye. Most small cameras today, particularly those in cell phones, have a “rolling shutter.” Photography.net has a good article describing a rolling shutter and some of its effects. A rolling shutter captures a horizontal band of pixels (the width of this band varies from camera to camera) as it scans down vertically. With “real world analog” movement this causes moving objects to be distorted, famously with airplane propellers (above right). The various display technologies will each reveal different effects.

OLEDs (And color filter LCDs)

When an object moves on a display device, the same object in the digital image will jump in its location between the two frames displayed. If the rolling shutter is open when the image is changing, the camera will capture a double image.  This is shown classically with the Micro-OLED device from an ODG Horizon prototype. The icons and text in the image were moving vertically and the camera captured content from two frames. Larger flat-panel OLEDs work pretty much the same way, as can be seen in the cropped image from a Meta 2 headset at right.

From a video image artifact point of view, it is hard to distinguish with a rolling shutter camera between the artifacts of OLEDs and color filter (the most common) LCDs. Unlike old CRTs and scanning systems, OLEDs and LCDs don’t have any “blanking” where there is no image. They simply change, quickly and row by row, the RGB (and sometimes white) sub-pixels of the image from one frame to the next (this video taken with a high-speed camera demonstrates how it works).
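To make the double-image mechanism concrete, here is a minimal rolling-shutter simulation; the resolution, object motion, and frame-swap point are all illustrative assumptions:

```python
# Minimal rolling-shutter simulation: the camera reads out rows top to
# bottom while the display swaps from frame N to frame N+1, so rows read
# before the swap capture the old object position and rows read after it
# capture the new one -- a double image. All numbers are illustrative.
import numpy as np

H, W = 100, 100
frame1 = np.zeros((H, W), dtype=np.uint8)
frame2 = np.zeros((H, W), dtype=np.uint8)
frame1[40:60, 20:40] = 255   # object in frame N
frame2[40:60, 60:80] = 255   # same object, moved, in frame N+1

swap_row = 50                # display switches frames mid-readout
captured = np.vstack([frame1[:swap_row], frame2[swap_row:]])

# The captured image now contains the top half of the old object AND the
# bottom half of the new one: two partial copies of a single object.
print("old-position pixels:", int((captured[:, 20:40] > 0).sum()))  # 200
print("new-position pixels:", int((captured[:, 60:80] > 0).sum()))  # 200
```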

Color Field Sequential DLP and LCOS

DLP and LCOS devices used in near-eye displays use what is known as “field sequential color” (FSC). They have one set of “mirrors” and, in rapid sequence, display only the red sub-image and flash a red light source (LED or laser), and then repeat this for green and blue. Generally they sequence these very rapidly and usually repeat the red, green, and blue sub-images multiple times so the eye will fuse the colors together even if there is motion. If the colors are not sequenced fast enough (and for many other reasons that would take too long to explain), a person’s eye will not fuse the image and they will see fringing of colors in what is known as “field sequential color breakup,” also known pejoratively as “the rainbow effect.” Due to the way DLP and LCOS work, LCOS does not have to sequence quite as rapidly to get the images to fuse in the human eye, which is a good thing because LCOS can’t sequence as fast as DLP.

In the case of field sequential color, when there is motion the camera can capture the various sub-images individually, as seen above-left with the Hololens, which uses FSC LCOS. It looks sort of like print where the various colors are shifted. If you study the image you can even tell the color sequence.

Vuzix uses FSC DLP and has similar artifacts, but they are harder to spot. Generally DLPs sequence their colors faster than LCOS (by about 2x), so it can be significantly harder to capture them (that is a clue as to whether it is DLP or LCOS). On the right, I have captured two icons when still and when moving, and you can see how the colors separate. You will notice that you don’t see all the colors, because the DLP is sequencing more rapidly than the Hololens LCOS.

DLP and LCOS also have “blanking” between colors, where the LEDs (and maybe lasers in the future) are turned off while the color sub-images are changing. The blanking is extremely fast and will only be seen using high-speed cameras and/or by setting a very fast shutter time on a DSLR.

DLP and LCOS for Use with ML “Focus Planes”

If you have a high-speed camera or other sensing equipment you can tell even more about the differences between the way DLP and LCOS generate field sequential color. But a very important aspect for Magic Leap’s time-sequential focus planes is that DLP can sequence fields much faster than LCOS and can thus support more focus planes.

I will be getting more into this in a future article, but to do focus planes with DLP or LCOS, Magic Leap will have to trade repeating the same single-color sub-images for different images corresponding to different focus planes. The obvious problem, for those who understand FSC, is that the color field rates will become so low that color breakup (the rainbow effect) would seem inevitable.
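A quick sketch of the arithmetic behind that concern; the base field rate is an illustrative assumption, not a spec for any particular device:

```python
# Per-color field rate when trading color repeats for focus planes.
# Assume the modulator can display a fixed number of single-color
# sub-images per second (illustrative figure below), which must be
# divided among 3 colors and N focus planes.
FIELDS_PER_SECOND = 1440   # assumed max single-color sub-images/sec
COLORS = 3

for planes in (1, 2, 3, 6):
    per_color_rate = FIELDS_PER_SECOND / (COLORS * planes)
    print(f"{planes} focus plane(s): each color repeats "
          f"~{per_color_rate:.0f} times/sec")
```

With these assumed numbers, going from 1 plane to 6 drops each color from ~480 repeats per second to ~80, which is the regime where color breakup becomes hard to avoid.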

Laser Beam Scanning

Laser scanning systems are a bit like old CRTs: they scan from top to bottom and then have a blanking time while the scanning mirror retraces quickly to the top corner. The top image on the left was taken with a DSLR at a 1/60th of a second shutter speed, which reveals the blanking roll bar (called a roll bar because it will be in a different place if the camera and video source are not running at exactly the same speed).

The next two images were taken with a rolling-shutter camera of the exact same projector. The middle image shows a dark, wide roll bar (it moves) and the bottom image shows a thin white roll bar. These variations from the same projector and camera are due to the frame rates generated by the image and/or the camera’s shutter rate.

Fiber Scanning Display (FSD) Expected Artifacts

FSD displays/projectors are so rare that nobody has published a video of them. Their scan rates are generally low and they have “zero persistence” (similar to laser scanning), so they would look horrible in a video, which I suspect is why no one has published one.

If they were videoed, I would expect a blanking effect similar to that of laser beam scanning, but circular. Rather than rolling vertically, it would “roll” from the center to the outside or vice versa. I have put a couple of very crudely simulated whole-frame images at left.

So What Did the Magic Leap “Through The Technology” Videos Use?

There is an obvious artifact match between the artifacts in all the Magic Leap “Through the Technology” videos and OLEDs (or color filter LCDs, which are much less common in near-eye displays). You see the distinctive double image with no color breakup.

Nowhere in any frame can field sequential color artifacts be found. So this rules out FSC DLP and LCOS.

In looking at the whole-frame videos you don’t see any roll-bar effects of any kind. So this totally rules out both laser beam scanning and fiber scanning displays.

We have a winner. The ML “through the technology” videos could only have been done with OLEDs (or color filter LCDs).

But OLEDs Don’t Work With Thin Waveguides!!!

Like most compelling detective mysteries there is a plot twist. OLEDs, unlike LCOS, DLP, and laser scanning, output wide-spectrum colors, and these don’t work with thin waveguides like the “Photonics Chip” that Rony Abovitz, ML CEO, likes to show.

This is how it became obvious that the “Through The Magic Leap Technology” videos were NOT using the same “Magic Leap Technology” that Magic Leap is planning to use in their production product. And this agrees with the much-publicized ML article from “The Information.”

Appendix – Micro HTPS LCD (Highly Unlikely)

I need to add, just to be complete, that theoretically they could use color filter HTPS LCDs illuminated by either LEDs or lasers to get narrow-spectrum and fairly collimated light that might work with the waveguide.  They would have artifacts similar to those seen in the ML videos. EPSON has made such a device, illuminated by LEDs, that was used in their earlier headsets, but even EPSON is moving to Micro-OLEDs for their next generation. I’m also not sure HTPS could support frame rates high enough to support focus planes.  I therefore think that using color filter HTPS panels, while theoretically possible, is highly unlikely.

Magic Leap – Separating Magic and Reality

The Goal – Explain What Magic Leap Is Doing

Magic Leap has a way of talking about what they hope to do someday and not necessarily what they can do anytime soon.  Their patent applications are full of things that are totally impossible or impractical to implement.  I’ve been reading well over a thousand pages of Magic Leap (ML) patents/applications, various articles about the company, watching ML’s “through the optics” videos frame by frame, and then applying my own knowledge of display devices and the technology business to develop a picture of what Magic Leap might produce.

Some warnings in advance

If you want all happiness and butterflies, as well as elephants in your hand and whales jumping in auditoriums, or some tall tale of 50 megapixel displays and of how great it will be someday, you have come to the wrong place.  I’m putting the puzzle together based on the evidence and filling in with what is likely to be possible in both the next few years and for the next decade.

Separating Fact From Fiction

There have been other well-meaning evaluations such as “Demystifying Magic Leap: What Is It and How Does It Work?“,  “GPU of the Brain“, and the videos by “Vance Vids,” but these tend to start from the point of believing the promotion/marketing surrounding ML and finding support in the patent applications rather than critically evaluating them. Wired Magazine has a series of articles, and Forbes and others have covered ML as well, but these have been personality and business pieces that make no attempt to seriously understand or evaluate the technology.

Among the biggest fantasies surrounding Magic Leap are the arrayed Fiber Scanning Displays (FSD); many people think this is real. ML Co-founder and Chief Scientist Brian Schowengerdt developed this display concept at the University of Washington based on an innovative endoscope technology, and it features prominently in a number of ML-assigned patent applications.  There are giant issues in scaling up FSD technology to high resolution and in what it would require.

In order to get on with what ML is most likely doing, I have moved to the Appendix the discussion of why FSDs, light fields, and very complex waveguides are not what Magic Leap is doing. Once you get rid of all the “noise” of the impossible things in the ML patents, you are left with a much better picture of what they actually could be doing.

What is left is enough to make impressive demos, and it may be possible to produce at a price that at least some people could afford in the next two years. But ML still has to live by what is possible to manufacture.

Magic Leap’s Optical “Magic” – Focus Planes

From: Journal of Vision 2009

At the heart of all of ML’s optical-related patents is the concept of eye vergence-accommodation, where the focus of the various parts of a 3-D image should agree with their apparent distances or it will cause eye/brain discomfort. For more details about this subject, see this information about Stanford’s work in this area and their approach of using quantized (only 2-level) time-sequential light fields.

There are some key similarities between Stanford’s and Magic Leap’s approaches.  They both quantize to a few levels to make the approach possible to implement, they both present their images time-sequentially, and they rely on the eye/brain both to fill in between the quantized levels and to integrate a series of time-sequential images. Stanford’s approach is decidedly not “see through,” with an Oculus-like setup with two LCD flat-panel displays in series, whereas Magic Leap’s goal is to merge the 3-D images with the real world in Mixed Reality (MR).

Magic Leap uses the concept of “focus planes,” where they conceptually break up a 3-D image into quantized focus planes based on the distance of the virtual image.  While they show 6 virtual planes in Fig. 4 from the ML application above, that is probably what they would like to do; they are more likely doing fewer planes (2 to 4) due to practical concerns.

Magic Leap then renders the parts of an image into the various planes based on the virtual distance.  The ML optics make the planes appear to the eye as if they are focused at their corresponding virtual distances. These planes are optically stacked on top of each other to give the final image, and they rely on the person’s eye/brain to fill in for the quantization.

Frame Sequential Focus Planes With SLMs

Magic Leap’s patents/applications show various ways to generate these focus planes. The most fully formed concepts use a single display per eye and present the focus planes time-sequentially in rapid succession, what ML refers to as “frame-sequential,” where there is one focus plane per “frame.”

Due to both the cost and size of multiple displays per eye and their associated optics, including those to align and overlay them, the only possible way ML could build a product for even a modest-volume market is by using frame-sequential methods with a high-speed spatial light modulator (SLM) such as a DLP, LCOS, or OLED microdisplay.

Waveguides and Focus Planes

Light rays coming from a faraway point that make it into the eye are essentially parallel (collimated), while light rays from a near point arrive over a wider set of angles.  This difference in angles is what makes them focus differently, but it also creates problems for existing waveguide optics, such as what Hololens is using.

The very flat and thin optical structures called “waveguides” only work with collimated light entering them, because of the way light totally internally reflects to stay in the guide and the way the diffraction works to make the light exit.  So a simple waveguide would not work for ML.

Some of ML’s concepts use one or more beam-splitting mirror-type optics rather than waveguides for this reason. Various ML patent applications show using a single large beam splitter or multiple smaller ones (such as at left), but these will be substantially thicker than a typical waveguide.

What Magic Leap calls a “Photonics Chip” looks to be at least one layer of diffractive waveguide. There is no evidence of mirror structures, and because it bends the wood in the background (if it were just a simple plate of glass, the wood in the background would not be bent), it appears to be a diffractive optical structure.

Because ML is doing focus planes, they need not one but a stack of waveguides, one per focus plane. The waveguides in ML’s patent applications show collimated light entering each waveguide in the stack like a normal waveguide, but the exit diffraction grating both causes the light to exit and imparts the appropriate focus-plane angle to the light.

To be complete, Magic Leap has shown in several patent applications some very thick “freeform optics” concepts, but none of these would look anything like the “Photonics Chip” that ML shows.  ML’s patent applications show many different optical configurations, and they have demoed a variety of different designs. What we don’t know is whether the Photonics Chip they are showing is what they hope to use in the future or whether it will be in their first products.

Magic Leap’s Fully Formed Designs In Their Recent Patent Applications

Most of Magic Leap’s patent applications showing optics contain what are more like fragments of ideas.  There are lots of loose ends and incomplete concepts.

More recently (one published just last week) there are patent applications assigned to Magic Leap with more “fully formed designs” that look much more like they actually tried to design and/or build them.  Interestingly, these applications don’t include as inventors the founders Rony Abovitz, the CEO, nor even Brian T. Schowengerdt, Chief Scientist, though they may use ideas from those prior “founders’ patent applications.”

While the earlier ML applications mention Spatial Light Modulators (SLMs) using DLP, LCOS, and OLED microdisplays and talk about Variable Focus Elements (VFEs) for time-sequentially generating focus planes, they don’t really show how to put them together to make anything (a lot is left to the reader).

Patent Applications 2016/0011419 (left) and 2015/0346495 (below) show straightforward ways to achieve field-sequential focus planes using a Spatial Light Modulator (SLM) such as a DLP, LCOS, or OLED microdisplay.

A focus plane is created by setting a variable focus element (VFE) to one focus point and then generating the image with the SLM. The VFE focus is then changed and a second focus plane is displayed by the SLM.  This process can be repeated to generate more focus planes, limited by how fast the SLM can generate images and by the level of motion artifacts that can be tolerated.
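In rough pseudocode form, the frame-sequential loop described above might look like the following sketch; the function names and plane distances are hypothetical stand-ins of my own, not anything from the patents:

```python
# Illustrative sketch of frame-sequential focus planes using a VFE + SLM.
# The hardware interfaces below are hypothetical stubs, not a real API.
FOCUS_PLANES = [float("inf"), 2.0, 0.5]   # assumed virtual distances (meters)
COLORS = ("red", "green", "blue")

def set_vfe_focus(distance):
    """Hypothetical stand-in for setting the variable focus element."""
    print(f"  VFE focus -> {distance}m")

def display_subimage(plane, color):
    """Hypothetical stand-in for flashing one color sub-image on the SLM."""
    print(f"    SLM shows {color} sub-image for plane at {plane}m")

def render_frame():
    # One displayed "frame" = every focus plane, each with all 3 colors.
    # With N planes, each color field is shown 1/N as often as in a
    # conventional field-sequential design, which is where the color
    # breakup concern discussed earlier comes from.
    for plane in FOCUS_PLANES:
        set_vfe_focus(plane)
        for color in COLORS:
            display_subimage(plane, color)

render_frame()
```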

These are clearly among the simplest ways to generate focus planes. All that is added over a “conventional” design is the VFE.  When I first heard about Magic Leap many months ago, I heard they were using DLPs with multiple focus depths, but a more recent Business Insider article reports that ML is using Himax LCOS.  Either approach could easily be adapted to support OLED microdisplays.

The big issues I have with the straightforward optical approaches are the optical artifacts I have seen in the videos and the big deal ML makes out of their Photonics Chip (waveguide).  Certainly their first generation might use a more straightforward optical design and save the Photonics Chip for the next generation.

Magic Leap’s Videos Show Evidence of Waveguide Optics

As I wrote last time, there is a lot of evidence from the videos ML has put out that they are using a waveguide, at least for the video demos.  The problem when you bend light in a short distance using diffraction gratings or holograms is that some of the light does not get bent correctly, and this shows up as colors not lining up (chroma aberrations) as well as what I have come to call the “waveguide glow.”  If you look at R2D2 below (you may have to click on the image to see it clearly) you should see a blue/white glow around it.  I have seen this kind of glow in every diffractive and holographic waveguide I have seen.  I have heard that the glow might be eliminated someday with laser/very-narrow-bandwidth colors and holographic optics.

The point here is that there is a lot of artifact evidence that ML was using some kind of waveguide, at least in their videos.  This makes it more likely that their final product will also use waveguides and at the same time may have some or all of the same artifacts.

Best Fit Magic Leap Application with Waveguides

If you drew a Venn diagram of all the existing information, the one patent application that best fits it all is the very recent US 2016/0327789. This is no guarantee that it is what they are doing, but it fits the current evidence best. It combines a focus-plane-sequential LCOS SLM (the application shows it could also support DLP, but not OLED) with waveguide optics.

The way this works is that for every focus plane there are 3 waveguides (red, green, and blue) and a spatially separate set of LEDs. Because the LEDs are spatially separate, they illuminate the LCOS device at different angles, and after the light goes through the beam splitter, the waveguide “injection optics” aim the light from the different spatially separated LEDs at different waveguides of the same color. Not shown in the figure below is an exit grating that both causes the light to exit the waveguide and imparts an angle to the light based on the focus associated with that focus plane.  I have colored in the “a” and “b” spatially separated red paths below (there are similar pairs for blue and green).

With this optical configuration, the LCOS SLM is driven with the image data for a given color for a given focus plane, and then the associated color LED for that plane is illuminated.  This process then continues with a different color and/or focus plane until all 6 waveguides, for the 3 colors by 2 planes, have been illuminated.

The obvious drawbacks with this approach:

  1. There are a lot of layers of waveguide with exit diffraction gratings that the user will be looking through, and the number of layers grows by 3 with each added focus plane.  That is a lot of stuff to be looking through, and it is bound to degrade the forward view.
  2. There are a lot of optical devices that all the light is passing through, and even small errors and light leaks build up.  This can’t be good for the overall optical quality.  These errors show up as resolution loss/blurring, chroma aberrations, and glowing/halo effects.
  3. Switching through all the colors and focus planes has to be fast enough to avoid motion artifacts where the colors and/or the focus planes break up (see the field-rate sketch after this list).  Note this issue exists with any approach that is both field and focus-plane sequential.  Obviously this issue becomes worse with more focus planes.
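To put rough numbers on point 3, here is a back-of-envelope field-rate calculation; the 60 Hz frame rate is my assumption for keeping flicker and breakup tolerable:

    # Fields per second for color- plus focus-plane-sequential operation.
    FRAME_RATE = 60  # Hz, assumed minimum to limit flicker and breakup
    for planes in (2, 3, 4, 6):
        fields_per_sec = FRAME_RATE * 3 * planes  # 3 color fields per plane
        print(f"{planes} focus planes -> {fields_per_sec} fields/s "
              f"({1000 / fields_per_sec:.2f} ms per field)")

At 6 focus planes you are already at 1,080 fields per second, under a millisecond per field, which is well beyond typical color-field-sequential LCOS rates and pushes even DLP.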

The ‘789 patent shows an alternative implementation using a DLP SLM. Interestingly, this arrangement would not work for OLED microdisplays: since they generate their own illumination, there would be no way to get the spatially separated illumination.

So what are they doing?  

Magic Leap is almost certainly using some form of spatial light modulator with field-sequential focus planes (I know I will get push-back from the ML fans who want to believe in the FSD; see the Appendix below), as this is the only way I could see them going to production in the next few years.  Based on the Business Insider information, it could very well be an LCOS device in the production unit.

The 2015/0346495 design with the simple beam splitter is what I would have chosen for a first design, provided an appropriate variable focus element (VFE) is available.  It is by far the simplest design and would seem to have the lowest risk. The downside is that the large angled beam splitter will make it thicker, but I doubt by that much.   Not only is it lower risk (if the VFE works), but the image quality will likely be better using a simple beam splitter and spherical mirror/combiner than many layers of diffractive waveguide.

The 2016/0327789 application touches all the bases based on the available information.  The downside is that they need 3 waveguides per focus plane.  So if they are going to support, say, just 3 focus planes (infinity, medium, and short focus), they are going to have 9 (3×3) waveguide layers to manufacture and pay for, and 9 layers to look through to see the real world.  Even if each layer is of extremely good quality, the errors will build up across so many layers of optics.  I have heard that the waveguide in Hololens has been a major yield/cost item, and what ML would have to build would seem to be much more complex.
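A toy calculation shows how quickly see-through quality erodes with layer count; the per-layer transmission figures below are illustrative assumptions, not measurements:

    # Real-world light surviving N stacked waveguide layers, assuming each
    # layer passes the stated fraction (scatter/haze compounds similarly).
    def see_through(per_layer, layers=9):
        return per_layer ** layers

    for t in (0.99, 0.98, 0.95):
        print(f"{t:.0%} per layer x 9 layers -> {see_through(t):.1%} survives")

Even at 99% per layer, only about 91% of the real-world light gets through 9 layers, and at 95% per layer it drops to about 63%; the same multiplicative logic applies to scatter and image artifacts.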

Magic Leap certainly could have something totally different, but they can’t be pushing on all fronts at once.  They pretty much have to go with a working SLM technology and generate their focus planes time-sequentially to build an affordable product.

I’m fond of repeating the 90/90 rule that “it takes 90% of the effort to get 90% of the way there, and then it takes the other 90% to do the last 10%”; someone once quipped back that it can also be 90/90/90. The point is that you can have something that looks pretty good and impresses people, but solving the niggling problems and making it manufacturable and cost effective almost always takes more time, effort, and money than people want to think. These problems tend to become multiplicative if you take on too many challenges at the same time.

Comments on Display Technologies

As far as display technologies go, each of the spatial light modulator technologies has its pros and cons.

  1. LCOS seems to be finding the widest acceptance due to cost.  It is generally lower power in near-eye displays than DLP.   The downside is that it has a more modest field rate, which could limit the number of focus planes.  It could be used in any of the 3 prime candidate optical systems.  Because the LEDs are separate from the display, they can support essentially any level of brightness.
  2. DLP has the fastest potential field rate, which will support more focus planes.  With DLPs they could trade color depth for focus planes (see the sketch after this list).  DLPs will also tend to have higher contrast.  Like LCOS, brightness will not be an issue, as the LEDs can provide more than enough light.  DLP tends to be higher in cost and power, and due to the off-axis illumination, tends to have a slightly bigger optical system than LCOS in near-eye applications.
  3. OLED – It has a lot of advantages in that it does not have to sequentially change the color fields, but current devices still have a slower frame rate than DLP and LCOS can support.  What I don’t know is how much the field rate is limited by the OLED designs to date versus what they could support if pressed.   There is also the lack of control of the angle of illumination, such as is used in the ‘789 application.  OLEDs put out rather diffuse light with little angle control, and this could limit their usefulness with respect to focus planes, where you need to control the angles of light.
  4. FSD – Per my other comments and the Appendix below, don’t hold your breath waiting for FSDs.
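Here is a rough sketch of the DLP color-depth versus focus-plane trade from item 2. A DLP builds gray scale out of fast binary mirror patterns, so the sustainable patterns-per-second is a budget shared among colors, bits, and planes; all numbers here are illustrative assumptions, not TI specifications:

    # Binary patterns per second required for a plane count and bit depth.
    FRAME_RATE = 60  # Hz
    COLORS = 3       # field-sequential R, G, B

    def patterns_needed(planes, bits_per_color):
        # roughly one binary mirror pattern per bit, per color, per plane
        return FRAME_RATE * COLORS * planes * bits_per_color

    for planes in (1, 2, 3, 6):
        print(f"{planes} plane(s) at 8 bits/color: "
              f"{patterns_needed(planes, 8):,} patterns/s")

One plane at 8 bits needs 1,440 binary patterns per second, while 6 planes need 8,640; once the mirror can’t keep up, the only lever left is cutting bits per color.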
Image Quality Concerns

I would be very concerned about Magic Leap’s image quality and resolution beyond gaming applications. Forget all those magazine writers and bloggers getting all geeked out over a demo with a new toy; at some point reality must set in.

Looking at what Magic Leap is doing and what I have seen in the videos, the effective resolution and image quality are going to be low compared to what you get even on a larger cell phone.  They are taking a display device that could produce a good image (either 720p or maybe 1080p) with normal/simple optics and putting it through a torture test of optical waveguides and whatever optics are used to generate their focus planes at a rational cost; something has to give.

I fully expect to see a significant resolution loss no matter what they do, plus chroma aberrations and waveguide halos, provided they use waveguides.  Another big issue for me will be the “real world view” through whatever it takes to create the focus planes, and how it will affect, say, seeing your TV or computer monitor through the combiner/waveguide optics.

I would also be concerned about field sequential artifacts and focus plane sequential artifacts.  Perhaps these are why there are so many double images in the videos.

Not to be all doom and gloom: based on casual comments from people who have seen it, and the fact that some really smart people invested in Magic Leap, it must provide an interesting experience, and image quality is not everything for many applications. It certainly could be fun to play with, at least for a while. After all, the Oculus Rift has a big following, and its angular resolution is so bad that they cover it up with blurring, and it has optical problems like “god rays.”

I’m more trying to level out the expectations.   I expect it to be a long way from replacing your computer monitor, as one reporter suggested, or even your cell phone, at least for a very long time. Remember that this has so much stuff in it that, in addition to the head-worn optics and display, you are going to have a cable down to the processor and battery pack (a subject I have only barely touched on above).

Yes, yes, I know Magic Leap has a lot of smart people and a lot of money (and you could say the same for Hololens), but sometimes the problem is bigger than what all the smart people and money can solve.

Appendix: 

The Big Things Magic Leap is NOT Going To Make in Production Anytime Soon

The first step in understanding Magic Leap is to remove all the clutter/noise that ML has generated.  As my father often used to say, “there are two ways to hide information: you can remove it from view or you can bury it.” Below is a list of the big things that are discussed by ML themselves and/or in their patents that are either infeasible or impossible anytime soon.

It would take a long article on each of these to give all the reasons why they are not happening, but hopefully the comments below will at least outline the why:

ml-array-pic

A) Laser Fiber Scanning Display (FSD) 

A number of people have picked up on this, particularly because the co-founder and Chief Scientist, Brian Schowengerdt, developed it at the University of Washington.  The FSD comes in two “flavors”: the low-resolution single FSD and the arrayed FSD.

1) First, you are pretty limited on the resolution of a single mechanically scanning fiber (even more so than mirror scanners). You can only make them spiral so fast, and they have their own inherent resonance. They make an imperfectly spaced circular spiral that you then have to map a rectangular grid of pixels onto. You can only move the fiber so fast; you can trade frame rate for resolution a bit, but you can’t just make the fiber move faster with good control and scale up the resolution. So maybe you get 600 spirals, but that only yields maybe 300 x 300 effective pixels in a square.

2) When you array them, you then have to overlap the spirals quite a bit. According to ML patent US 9,389,424, it will take about 72 fiber scanners to make a 2560×2048 array (about 284×284 effective pixels per fiber scanner) at 72 Hz.

3) Let’s say we only want 1920×1080, which is where the better microdisplays are today; that is about 1/2.5th the pixels of the 72-fiber-scanner array, or about 28 scanners. This means we need 28 x 3 (red, green, blue) = 84 lasers. A near-eye display typically outputs between 0.2 and 1 lumen of light, which you then divide by 28. So you need a very large number of really tiny lasers that nobody I know of makes (or may even know how to make). You also have to have individual, very-fast-switching lasers so you can control them totally independently and at very high speed (on-off in the time of a “spiral pixel”).
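The arithmetic in points 2) and 3) is easy to check:

    # Fiber-scanner and laser counts implied by ML patent US 9,389,424.
    per_scanner = (2560 * 2048) / 72        # effective pixels per scanner
    scanners = (1920 * 1080) / per_scanner  # scanners for a 1080p array
    lasers = round(scanners) * 3            # R, G, and B laser per fiber
    print(f"~{per_scanner:,.0f} pixels/scanner, "
          f"~{scanners:.0f} scanners, {lasers} lasers")

    # A near-eye display outputs roughly 0.2 to 1 lumen in total, so each
    # fiber's three lasers together contribute only:
    print(f"{0.2 / 28:.4f} to {1.0 / 28:.4f} lumens per fiber")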

4) So now you need to convince somebody to spend hundreds of millions of dollars in R&D to develop very small and very inexpensive direct green (particularly) lasers (those cheap green lasers you find in laser pointers won’t work because they switch WAY too slowly and are very unstable). Then, after they spend all that R&D money, they have to sell them to you very cheaply.

5) Laser combining into each fiber. You then have the other nasty problem of getting the light from 3 lasers into a single fiber; it can be done with dichroic mirrors and the like, but it has to be VERY precise or you miss the fiber. To give you some idea of the “combining” process, you might want to look at my article on how Sony combined 5 lasers (2 red, 2 green, and 1 blue for brightness) for a laser mirror scanning projector http://www.kguttag.com/2015/07/13/celluonsonymicrovision-optical-path/. Only now you don’t do this just once but 28 times. This problem is not impossible, but it requires precision, and precision costs money. Maybe if you put enough R&D money into it you can make it on a single substrate.  BTW, in the photo you see of the Magic Leap prototype (https://www.wired.com/wp-content/uploads/2016/04/ff_magic_leap-eric_browy-929×697.jpg), it looks like they didn’t bother combining the lasers into single fibers.

6) Next, to get the light injected into a waveguide, you need to collimate the array of cone-shaped light rays. I don’t know of any way, even with holographic optics, that you can collimate this light, because you have overlapping rays of light going in different directions.  You can’t collimate the individual cones of light rays, or there is no way to get them to overlap to make a single image without gaps in it. I have been looking through the ML patent applications, and they never seem to say how they will get this array of FSDs injected into a waveguide. You might be able to build one in a lab by diffusing the light first, but it would be horribly inefficient.

7) Now you have the issue of how you are going to support multiple focus planes. 72Hz is not fast enough to do it field sequentially, so you have to put in parallel arrays and multiply everything above by the number of focus planes. The question at this point is how much more than a Tesla Model S (starting at $66K) it will cost in production.

I think this is a big ask when you can buy an LCOS engine at 720p (and probably soon 1080p) for about $35 per eye. The theoretical FSD advantage is that it might be scaled up to higher resolutions, but you are several miracles away from that today.

ml-wavefront

B) Light Fields, Light Waves, etc.

There is no way to support any decent resolution with light fields that is going to fit on anyone’s head.  It takes about 50 to 100 times the simultaneous image information to support the same resolution with a light field.  Not only can’t you afford to display all the information to support good resolution, it would take an insane level of computer processing. What ML is doing is a “shortcut” of multiple focus planes, which is at least possible.  The “light wave display” is insane-squared; it requires the array of fibers to be in perfect sync, among other issues.
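To give a feel for the scale, here is the 50-to-100x factor applied to a modest 720p, 60 Hz, 24-bit stream (the baseline numbers are illustrative):

    # Raw data rate for a light field versus a conventional 720p stream.
    base = 1280 * 720 * 24 * 60  # bits/s, ~1.3 Gb/s for plain 720p video
    for factor in (50, 100):
        total = base * factor
        print(f"{factor}x light field: ~{total / 1e9:.0f} Gb/s "
              f"(~{total / 8e9:.1f} GB/s)")

That is roughly 66 to 133 Gb/s of image information before any processing, versus about 2.7 Gb/s for a two-focus-plane “shortcut” at the same base resolution.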

ml-multi-display

C) Multiple Displays Driving the Waveguides

ML patents show passive waveguides with multiple displays (fiber scanning or conventional) driving them. It quickly becomes cost prohibitive to support multiple displays (2 to 6 as the patents show) all with the resolution required.

ml-vfe-compensation

D) Variable Focus Optics on either side of the Waveguides

Several of their figures show electrically controlled variable focus element (VFE) optics on either side of the waveguides, with one set changing the focus of a frame-sequential image plane while a second set of VFEs compensates so the “real world” view remains in focus. There is zero probability of this working without horribly distorting the real-world view.

What Magic Leap Is Highly Unlikely to Produce

multiplane-waveguide

Active Switching Waveguides – ML patent applications show many variations that have drawn attention from other articles. The complexity of making them and the resultant cost is one big issue.  There would likely be serious degradation of the view of the real world through all the layers and optical structures.  Then you have the cost, both in terms of displays and optics, of getting images routed to the various planes of the waveguide.  ML’s patent applications don’t really say how the switching would work, other than saying they might use liquid crystal or lithium niobate, but there is nothing to show they have really thought it through.   I put this in the “unlikely” category because companies such as DigiLens have built switchable Bragg gratings.

Laser Beam Scanning Versus Laser-LCOS Resolution Comparison

cen-img_9783-celluon-with-uo

Side By Side Center Patterns (click on image for full size picture)

I apologize for being away for so long.  The pictures above and below were taken over a year ago and I meant to format and publish them back then but some other business and life events got in the way.

The purpose of this article is to compare the resolution of the Celluon PicoPro Laser Beam Scanning (LBS) projector and the UO Smart Beam Laser LCOS projector.   This is not meant to be a full review of both products, although I will make a few comments here and there, but rather, it is to compare the resolution between the two products.  Both projectors claim to have 720P resolution but only one of them actually has that “native/real” resolution.

This is in a way a continuation of the series I have written about the PicoPro, with optics developed by Sony and the beam scanning mirror and control by Microvision, in particular the articles http://wp.me/p20SKR-gY and http://wp.me/p20SKR-hf.  With this article I am now including some comparison pictures I took of the UO Smart Beam projector (https://www.amazon.com/UO-Smart-Beam-Laser-Projector-KDCUSA/dp/B014QZ4FLO).

As per my prior articles, the Celluon PicoPro has nowhere close to its stated 1920×720 (non-standard) resolution, nor even 1280×720 (720P).  The UO projector, while not perfect, does demonstrate 720P resolution reasonably well, but it does suffer from chroma aberrations (color separation) at the top of the image due to the optical 100% offset (this is to be expected to some extent).

Let me be up front: I worked on the LCOS panel used in the UO projector when I was at Syndiant, but I had nothing to do with the UO projector itself.   Take that as bias if you want, but I think the pictures tell the story.  I did not have any contact with either UO (nor Celluon for that matter) in preparing this article.

I also want to be clear that both the UO projector and the Celluon PicoPro tested are now over 1 year old and there may have been improvements since then.  I saw serious problems with both products, in particular with the color balance: the Celluon is too red (“white” is pink) and the UO is very red deficient (“white” is significantly blue-green).   The color is so far off on the Celluon that it would be a show stopper for me ever wanting to buy one as a consumer (hopefully UO has fixed or will fix this).   Frankly, I think both projectors have serious flaws (if you want to know more, ask and I will write a follow-up article).

The UO Smart Beam has the big advantage of “100% offset,” which means that when placed on a tabletop, it will project upward without hitting the table and without any keystoning.   The PicoPro has zero offset and shoots straight out.  If you put it flat on a table, the lower half of the image will shoot into the tabletop. Celluon includes a cheap and rather silly monopod that you can use to have the projector “float” above the table surface, and then you can tilt it up and get a keystoned image.  To take the picture, I had to mount the PicoPro on a much taller tripod and then shoot over the projector so the image would not be keystoned.

I understand that the next generation of the Celluon and the similar Sony MPCL1 projector (which has a “kickstand”) have “digital keystone correction,” which is not as good a solution as 100% offset since it reduces the resolution of the image; this is the “cheap/poor” way out, and they really should have 100% offset like the UO projector (interestingly, the earlier, lower-resolution Microvision ShowWX projector had 100% offset).

For the record – I like the Celluon PicoPro’s flatter form factor better; I’m not a fan of the UO cube, as it hurts the ability to put the projector in one’s pocket or a typical carrying bag.

Both the PicoPro with laser scanning and the Smart Beam with lasers illuminating an LCOS microdisplay have no focus knob and have a wide focus range (from about 50cm/1.5 feet to infinity), although they are both less sharp at the closer range.  The PicoPro with LBS is a Class 3R laser product, whereas the Smart Beam with laser “illumination” of LCOS is only Class 1.   The measured brightness of the PicoPro was about 32 lumens (as rated) when cold but dropped under 30 when heated up.  The UO, while rated at 60 lumens, was about 48 lumens when cold and about 45 when warmed up, significantly below its “spec.”

Now onto the main discussion of resolution.  The picture at the top of this article shows the center crop from a 720P test pattern generated by both projectors, with the Smart Beam image on the left and the PicoPro on the right.   There is also an inset of the Smart Beam’s 1-pixel-wide test pattern near the PicoPro’s 1-pixel-wide pattern for comparison. This test pattern shows a series of 1-pixel-, 2-pixel-, and 3-pixel-wide horizontal and vertical lines.

What you should hopefully notice is that the UO clearly resolves even the 1-pixel-wide lines and the black lines are black, whereas on the PicoPro the 1-pixel-wide lines are at best blurry, and even the 2- and 3-pixel-wide lines do not get to a very good black level (as in, the contrast is very poor).  And the center is the very best case for the Celluon LBS, whereas for the UO with its 100% offset it is a medium case (the best case is the lower center).

The worst case for both projectors is one of the upper corners, and below is a similar comparison of their upper right corners.  As before, I have included an inset of the UO’s single-pixel image.

ur-img_9783-celluon-with-uo-overlay

Side By Side Upper Right Corner Patterns (click on image for full size picture)

What you should notice is that while there are still distinct 1-pixel-wide lines in both directions in the UO projector, the 1-pixel-wide lines in the case of the Celluon LBS are a blurry mess.  Clearly it can’t resolve 1-pixel-wide lines at 720P.

Because of the 100% offset optics, the best case for the UO projector is at the bottom of the image (this is true of almost any 100% offset optics), and this case is not much different than the center case for the Celluon projector (see below):

lcen-celluon-with-uo-overlay

Below is a side by side picture I took (click on it for a full size image). The camera’s “white point” was an average between the two projectors (the Celluon is too red/blue&green deficient and the UO is red deficient). The image below is NOT what I used for the cropped test patterns above, as the 1-pixel features were too near the resolution limit of the Canon 70D camera (5472 by 3648 pixels).  So I used individual shots from each projector to double the camera’s “sampling” of the projected images.
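The sampling arithmetic behind that choice is below; it assumes the 1280-pixel-wide test image filled most of the camera frame width, which is approximately how the shots were framed:

    # Camera pixels available per projected pixel (Canon 70D, 5472 px wide).
    camera_h, projector_h = 5472, 1280
    print(f"side-by-side: ~{(camera_h / 2) / projector_h:.1f} "
          f"camera pixels per projector pixel")
    print(f"individual:   ~{camera_h / projector_h:.1f} "
          f"camera pixels per projector pixel")

About 2.1 samples per projected pixel (side by side) is right at the sampling limit for 1-pixel features, while about 4.3 samples per pixel (individual shots) resolves them much more cleanly.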

side-by-side-img_0339-celluon-uo

For the Celluon PicoPro image I used the picture below (originally taken in RAW but digital lens corrected, cropped, and later converted to JPG for posting – click on image for full size):

img_9783-celluon-with-uo-overlay

For the UO Smart Beam image, I use the following image (also taken in RAW, digital lens corrected, straighten slightly, cropped and later converted to JPG for posting):

img_0231-uo-test-chart

As is my usual practice, I am including the test pattern (in lossless PNG format) below for anyone who wants to verify and/or challenge my results:

interlace res-chart-720P G100A

I promise I will publish any pictures by anyone that can show better results with the PicoPro or any other LBS projector (or the UO projector for that matter) with the test pattern (or similar) above (I went to considerable effort to take the best possible PicoPro image that I could with a Canon 70D camera).

Lenovo’s STMicro Based Prototype Laser Projector (part 1)

Lenovo Tech World Projector 001

Lenovo, at their Tech World on May 27th, 2015, showed a Laser Beam Scanning (LBS) projector integrated into a cell phone prototype (to be clear, a prototype and not a product).   While there has been no announcement of the maker of the LBS projector, there is no doubt that it is made by STM, as I will show below (to give credit where it is due, this was first shown on a blog by Paul Anderson focused on Microvision).

ST-720p- to Lenove comparison 2

The comparison at left is based on a video by Lenovo that included exploded views of the projector, and on pictures of STM’s 720p projector from a Picoprojector-info.com article of Jan 18, 2013.   I have drawn lines comparing various elements, such as the size and placement of connectors and other components, the size and placement of the 3 major ICs, and even the silk-screened “STM” in the same place on both the Lenovo video and the STM article’s photo (circled in yellow).

While there are some minor differences, there are so many direct matches that there can be no doubt that Lenovo is using STM.

The next interesting thing to consider is how this design compares to the LBS design of Microvision and Sony in the Celluon projector.   The Lenovo video shows the projector as being about 34mm by 26mm by 5mm thick.  To check this, I took the photo from the Picoprojector-info.com article and was able to fit the light engine and electronics into a 34mm by 26mm rectangle arranged as they are in the Lenovo video (yet one more verification that it is STM).   I then took a picture I had taken of the Celluon board to the same scale and drew the same 34x26mm rectangle on it (see the to-scale comparison image below).   The STM optics plus electronics are about 1/4 the area and 1/5th the volume (STM is 5mm thick versus Microvision/Sony’s 7mm).

STM to Celluon TO SCALE 003

The Microvision/Sony probably has about double the lumens/brightness of the STM module due to having two green and two red lasers, and I have not had a chance to compare the image quality.   Taking out the extra two lasers would make the Microvision/Sony engine’s optics/heat-sinking smaller by about 25% and have a smaller impact on the board space, but this would still leave it over 3X bigger than STM.   The obvious next question is why.
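The size comparison works out roughly as follows; the Microvision/Sony footprint is my approximation from the scaled photos, not a measured value:

    # Module size comparison: STM (per the Lenovo video) vs Microvision/Sony.
    stm_area = 34 * 26        # mm^2
    stm_vol = stm_area * 5    # 5 mm thick
    mvis_area = stm_area * 4  # ~4x footprint per the photo overlay (estimate)
    mvis_vol = mvis_area * 7  # 7 mm thick
    print(f"STM: {stm_area} mm^2, {stm_vol} mm^3")
    print(f"Microvision/Sony: ~{mvis_area} mm^2, ~{mvis_vol} mm^3 "
          f"(~{mvis_vol / stm_vol:.1f}x the volume)")

The ~5.6x raw ratio is consistent with the “about 1/4 the area and 1/5th the volume” figures above, allowing for the roughness of the footprint estimate.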

One reason is that STM either has a simpler electronics design or is more integrated, or some combination thereof.  In particular, the Microvision/Sony design requires an external DRAM (the large rectangular chip in the Microvision/Sony photo).    STM probably still needs DRAM, but it is likely integrated into one of their chips.

There are not a lot of details on the STM optics (developed by bTendo of Israel before being acquired by STM).   But what we do know is that STM uses separate, simpler, and smaller horizontal and vertical mirrors versus Microvision’s significantly larger and more complex single mirror assembly.  Comparing the photos above, the Microvision mirror assembly alone is almost as big as STM’s entire optical engine with lasers.   The Microvision mirror assembly has a lot of parts other than the MEMS mirror, including some very strong magnets.  Generally, the optical path of the Microvision engine requires a lot of space for the light to enter and exit the Microvision mirror from the “right” directions.

btendo optics

On the right I have captured two frames from the Lenovo video showing the optics from two directions.  What you should notice is that the mirror assembly is perpendicular to the incoming laser light.  There appears to be a block of optics (pointed to by the red arrow in the two pictures) that redirects the light down to the first mirror and then returns it to the second mirror.  The horizontal scanning mirror is clearly shown in the video, but it is not clear (so I took an educated guess) as to the location of the vertical scanning mirror.

Also shown at the right is bTendo patent 8,228,579, showing the path of light for their two-scanning-mirror design.   It does not show the more complex block of optics required to direct the light down to the vertical mirror, then redirect it back down to the horizontal mirror and then out, as would be required in the Lenovo design.    You might also notice that there is a flat, clear glass/plastic output cover shown at the 21-second point in the video; this is very different from the Microvision/Celluon/Sony design shown below.
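As a side note on why separate mirrors are attractive, here is a small sketch of the scan trajectory a two-mirror design can produce: a resonant sine on the fast horizontal axis and an independent linear ramp on the slow vertical axis, giving a rectangular raster rather than the bow-tie a single gimbaled mirror produces (see below). The frequencies are illustrative guesses, not STM’s actual numbers:

    import math

    H_FREQ = 18_000  # Hz, resonant horizontal mirror (assumed)
    V_FREQ = 60      # Hz, linear vertical sweep, one cycle per frame

    def beam_position(t):
        x = math.sin(2 * math.pi * H_FREQ * t)  # fast resonant axis, -1..1
        y = 2.0 * ((t * V_FREQ) % 1.0) - 1.0    # slow linear ramp, -1..1
        return x, y

    # Sample one 60 Hz frame of the trajectory at 1 us steps:
    frame = [beam_position(i * 1e-6) for i in range(16_667)]
    print(frame[:3])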

Microvision mirror with measurements

Microvision Mirror Assembly and Exit Lens

Shown at left is the Microvision/Celluon beam scanning mirror and the “Exit” lens.   First, notice the size and complexity of the scanning mirror assembly with its magnets and coils.  You can see the single round mirror with its horizontal hinge (green arrow) and the vertical hinge (yellow arrow) on the larger oval yoke.   The single mirror/pivot point causes an inherently bow-tied image.  You can see how distorted the mirror looks through the Exit Lens (see red arrow); this is caused by the exit lens correcting for the bow-tie effect.  This significant corrective lens is also a likely source of chroma aberrations in the final image.

Conclusions

All the above does not mean that the Lenovo/STM is going to be a successful product.   I have not had a chance to evaluate the Lenovo projector, and I still have serious reservations about any embedded projector succeeding in a cell phone (I outlined my reasons in an August 2013 article and I think they still hold true).    Being less than 1/5th the volume of the Microvision/Sony design is necessary but, I don’t think, sufficient.

This comparison only shows that the STM design is much smaller than Microvision’s.  Microvision has made only relatively small incremental progress in size since the ShowWX was announced in 2009, and Sony so far has not improved on it much.