Archive for Pico Projection

Magic Leap Shout Out?: Grumpy Mouse Tech Blogger Here

I have been very busy the last few days and just realized that it looks like I got a shout-out tweet from Rony Abovitz, the CEO of Magic Leap. On the evening of Dec. 8th, 2016, he wrote, "To a few of the grumpy mouse tech blogger writers: you too will get to play the real thing when we ship." As far as I am aware, I'm the only "tech blogger" who has been critical of what Magic Leap is doing, and on the off chance that Mr. Abovitz did not know of my blog before, mine was the only critical tech blog cited in "The Information" article by Reed Albergotti that appeared on the 8th.

Mr. Albergotti is a writer for a legitimate news source, not a blogger. Maybe Mr. Abovitz was trying to put him down as a "mere blogger," or maybe it was his petulant way of trying to put down both of us.

In any event, is this the right way for a CEO who has raised $1.4B to strike back at writers he disagrees with? Why can't he be specific about whom and what he disagrees with? The best he could muster is an ad hominem attack and a bunch of unverifiable whistling-in-the-dark tweets.

I've been laying out my proof in this blog. I was only trying to answer the question, "What is Magic Leap doing?" because I knew that almost all the existing writing about what Magic Leap was doing was wrong, and I thought it would be fun to be the first to solve the puzzle. If I had figured out they were doing something great, I would have reported it. But as I studied the patents, the technical material, and Magic Leap's released videos, and combined them with my technical experience in the field, their whole technical story related to the display started to unravel.

Magic Leap: “The Information” Article

The Information: The Reality Behind Magic Leap

The online news magazine "The Information" released the article "The Reality Behind Magic Leap" by Reed Albergotti on Dec. 8th, 2016, and the story gave a link to this blog, so you may be a new reader. The article appears to be well researched, and I understand that "The Information" has a reputation as a reliable news source. The article also dovetails nicely on the business side with what I have been discussing on the technical side of this blog. The magazine is a paid publication, but there is a summary on The Verge along with their added commentary, and a lot of the text from the article has shown up in discussion forums about Magic Leap (ML).

For this blog post, I am going to try to put 2+2 together between what I have figured out on the technical side and what Mr. Albergotti reported on the business side. Note, I have not seen what he has seen, so I am reading between the lines somewhat, but hopefully it will give a more complete picture.

The Magic Leap Prototypes

The article states, "Magic Leap CEO Rony Abovitz acknowledged that the prototypes used different technology." This blog has identified the early prototypes as:

  1. A DLP-based prototype that uses a variable focus lens to produce "focus planes" by generating different images for different distances and changing the focus between images; it supported maybe 3 to 6 focus planes. This is probably their earliest prototype and is what the article calls "The Beast," described as the "size of a refrigerator."
  2. One or more OLED-based variations, once again using an electrically controlled focus element, where ML made a smaller helmet version. The article discussed only one version, dubbed "WD3," but I suspect that they had variations of this one with different capabilities (as in maybe a WD1, WD2, WD3, and maybe more). Based on the video evidence, I believe a version that could only change focus was used for their Oct. 14, 2015 "through the technology" video. Their later "A New Morning" and "Lost Droids" videos appear to use Micro-OLED-based optics that supported at least two focus planes by running the OLED at 120Hz to generate two sequential 60Hz "focus plane" images and changing the focus between them.
  3. The LCOS version that uses their "Photonics Chip" and supports about 2 focus planes with no moving focusing optics (according to the article); this is what the article dubbed the "PEQ" prototype.

If you want to get more into the gory technical details of how the above work, I would suggest one of my earlier articles titled "Magic Leap – Separating Magic and Reality." And if you really want to get dirty, read the ML patent applications they reference, but be prepared for a long read as they cover a lot of totally different concepts.

As this blog has been reporting (and for which I have gotten criticism on some of the online discussion forums), the much-discussed "fiber scanning display" (FSD) has not been perfected, and with it goes any chance of making the "light field display" ML has talked so much about. Quoting the article, "Magic Leap relegated the fiber scanning display to a long-term research project in hopes that it might one day work, and significantly pared back on its light field display idea."

Possible Scenario – A Little Story

Based on my startup and big-company experiences, I think I understand roughly how it went down. Please take the rest of this section as reasonable speculation and reading between the lines of known information. I am going to play Columbo (old TV series reference) below and give my theory of events.

Startups have sometimes been described as "jumping out of a plane and sewing a parachute on the way down." This appears to be the case with Magic Leap. They had a rough idea of what they wanted to do, were able to build an impressive demo system, and with some good hand waving convinced investors they could reduce it to a consumer headset.

They got Brian Schowengerdt, who had worked on the fiber scanning display (FSD) technology and the issue of vergence and accommodation at the University of Washington, to join as co-founder and Chief Scientist. Mr. Schowengerdt is clearly a smart person who added a lot of credibility to Rony Abovitz's dreams. The problem with "university types" is that they often don't appreciate what it takes to go from R&D to a real high-volume product.

The "new optical people" built "The Beast" prototype using DLPs and electrically controlled focusing lenses to support multiple focus planes, to address the vergence and accommodation issue. They then used the "Jedi Hand Wave Mind Trick" (ordinary hand waving may not be enough) to show the DLP engine, the crude low-resolution FSD display from the U of W, some non-functional waveguides, and a mock-up of how wonderful it would be someday with a simple application of money and people (if you can dream it you can build it, right?).

This got them their "big fish," Google, which was attuned to the future of near-eye displays with its investment in Google Glass and all the big noise around Oculus Rift. There is phenomenal FOMO (Fear of Missing Out) going on with AR/VR/MR. The fact that they got a lot of money from a big-name company became its own publicity and fundraising engine. ML then got showered with money that they hoped could cover the bet. Having Google invest publicly also became its own shield against any question of whether it would work.

All the money gave them a lot of altitude to try and build the proverbial parachute on the way down. But sometimes the problem is harder than all the smart people and money can solve. As I have pointed out on this blog, making the fiber scanning display work at high resolution is no small task, if not impossible. At some point, probably early on, they came to realize that the FSD was not going to happen in a meaningful time frame.

So "plan B" became to use an existing, working display technology to give a similar visual effect, even if much reduced in resolution. The Beast was way too big and expensive to cost-reduce, and they needed more demo systems that were easier to make.

So then they made the WDx based on OLEDs. But there is a fatal flaw with using OLEDs (and it tripped me up at first when looking at the videos). While OLEDs make the design much easier and smaller, the nature of the light they put out means they don't work with the wonderfully flat waveguides (what ML calls their "Photonics Chip") that ML has convinced investors are part of their secret sauce.

So if they couldn't use the Photonics Chip with OLEDs and the FSD was a no-go, what do you tell investors, that both of your secret sauces are a bust? So in parallel they worked on plan "C," which was to use LCOS panels with LED light sources that would work with some type of waveguide, which they would dub the "Photonics Chip."

But then there is a fly in the ointment. Microsoft starts going public with its Hololens system, making Magic Leap look like they are way behind a giant that can spend even more money than ML can raise. They need to show something to stay relevant. They start with totally fake videos and get called on the carpet for being obviously fake. So they need a "Magic Leap Technology" demo (but not with the optics they are actually planning on using).

The "Beast System" with its DLPs and field sequential color will not video well. The camera would reveal to any knowledgeable expert what they are using. So for the videos they press into service the WDx OLED systems that will video better. By clever editing and only showing short clips, they can demonstrate some focus effects while not showing the limitations of the WDx prototypes. These videos then make ML seem more "real" and keep people from asking too many embarrassing questions.

A problem here is that LCOS is much slower than DLP and thus they may only be able to support about 2 focus planes. I also believe, from 16 years working with LCOS, that this is likely to look like crap to the eye due to color field breakup; but reapplying the Jedi Mind Trick, maybe two focus planes will work and people won't notice the color field breakup. And thus you have the PEQ, which still does not work well or they would be demoing with it rather than the helmet-sized WD3.

I suspect that Reed Albergotti from "The Information" had gotten the drop on ML by doing some good investigative journalism. He told them he was going to run with the story, and ML decided to see if they could do damage control and invited him in. But apparently he was prepared and still saw the holes in their story.

Epilogue: It sounds like Mr. Schowengerdt has been put off to the side, having served his usefulness in raising money. They used the money to hire other optical experts who knew how to design the optics they would actually be using. He may still be playing around with the FSD to keep alive the dream of a super high resolution display someday and maybe the next-to-impossible high-resolution light fields (I would suggest reading "The Horse Will Talk Fable" to gain insight into why they would keep doing this as an "R&D" program).

I'm probably a little off in the details, but it probably went down something like the above. If not, hopefully you found it an amusing story. BTW, if you want to make a book and/or movie out of this original story, please consider it my copyrighted work (c) 2016 (my father was, and two brothers are, Patent Lawyers, and I learned about copyright as a small child at my father's knee).

Lessons Learned

In my experience, startups that succeed in building their product have more than a vague idea of what they want to do and HOW they are going to do it. They realize that money and smart people can't cure all ills. Most importantly, they understand where they have risk and have at most A SINGLE serious risk. They then focus on making sure they cover that risk. In the case of Magic Leap, they had multiple major risks in many different areas. You can't focus on the key risk when there are so many, and that is a prescription for product failure no matter how much money is applied.

It's even possible the "smart money" that invested realized that ML was unlikely to totally succeed but thought that with money and smart people ML might spin out some valuable technology and/or patents. The "equation works" if they multiply a hoped-for $100B/year market by even a small chance of success. If a big name places what is for them a small bet, it is surprising how much money will follow along, assuming the big-name investor had done all the hard work of due diligence.

Even if they get past the basic technology risk and get the PEQ running, they will then have the problem of building a high-volume product; worse yet, they are building their own factory. And then we have the 90/90 rule, which states, "it takes 90% of the effort to get 90% of the way there and then another 90% to solve the last 10%." When you have a fully working prototype that behaves well (which by the reports ML has NOT achieved yet), you have just made it to the starting line; then you have to make it manufacturable at a reasonable cost and yield. Others have said it is really 90/90/90, where there is a third 90%. This is where many a Kickstarter company has spun its wheels.

Magic Leap: When Reality Hits the Fan

Largely A Summary With Some New Information

I have covered a lot of material and even then have only scratched the surface of what I have learned about Magic Leap (ML). By combining the information available (patent applications, articles, and my sources), I have a fairly accurate picture of what Magic Leap is actually doing, based on feedback I have received from multiple sources.

This blog has covered a lot of different topics, and some conclusions have changed slightly as I discovered more information and got feedback from some of my sources. Additionally, many people just want "the answer." So I thought it would be helpful to summarize some of the key results, including some more up-to-date information.

What Magic Leap Is Not Doing In The Product

Between what I have learned and feedback from sources I can say conclusively that ML is not doing the following:

  1. Light Fields – These would require a ridiculously large and expensive display system for even moderate resolution.
  2. Fiber Scan Displays – They have demonstrated low resolution versions of these and may have used them to convince investors that they had a way to break through the pixel-size limitations of Spatial Light Modulators (SLMs) like LCOS, DLP, and OLEDs. It's not clear how much they improved the technology over what the University of Washington had done, but they have given up on these being competitive in resolution and cost with SLMs anytime soon. It appears to have been channeled into a long-term R&D effort, and to keep the dream alive with investors.
  3. Laser Beam Scanning (LBS) by Microvision or anyone else – I only put this on the list because of an incredibly ill-informed news release by Technavio stating "Magic Leap is yet to release its product, and the product is likely to adopt MicroVision's VRD technology." Based on this, I would give the entire report they are marketing zero credibility; I think they are basing their reports on reading fan-person blogs about Microvision.
  4. OLED Microdisplays – They were using these in their demos and likely in the videos they made, but OLEDs are optically incompatible with their use of a diffractive waveguide (= ML's Photonics Chip).

Prototypes that Magic Leap Has Shown
  1. FSD – Very low resolution/crude green-only fiber scanned display. This is what Rachel Metz described (with my emphasis added) in her MIT Technology Review March/April 2015 article: "It includes a projector, built into a black wire, that's smaller than a grain of rice and channels light toward a single see-through lens. Peering through the lens, I spy a crude green version of the same four-armed monster that earlier seemed to stomp around on my palm."
  2. TI DLP with a conventional combiner and a "variable focus element" (VFE). They use the DLP to generate a series of focus planes time sequentially and change the VFE between the sequential focus planes. Based on what I have heard, this is their most impressive demo visually and they have been using it for over a year, but the system is huge.
  3. OLED with a conventional combiner (not a waveguide/"Photonics Chip"). This is likely the version they used to shoot the "Through Magic Leap Technology" videos that I analyzed in my Nov. 9th, 2016 blog post. In that article I thought that Micro-OLED might be used in the final product, but I have revised this opinion. OLEDs output very wide bandwidth light that is incompatible with waveguides, so they would not work with the Photonics Chip ML makes such a big deal about.

What is curious is that none of these prototypes, with the possible exception of #1, the single color low resolution FSD, are using a “waveguide.” Waveguides are largely incompatible with OLEDs and having a variable focus element is also problematical.  Also none of these are using LCOS, the most likely technology in the final product.

What Magic Leap Is Trying to Do In Their First “Product”

I’m going to piece together below what I believe based on the information available from both public information and some private conversations (but none of it is based on NDA’ed information as far as I am aware).

  1. LCOS Microdisplay – All the evidence, including Business Insider's October 27, 2016 article, points to ML using LCOS. They need a technology that will work well with waveguides using narrow-band (likely LED) light sources that they can make as bright as necessary and whose illumination angle they can control. LCOS is less expensive, more optically compact, and requires less power than DLP for near-eye systems. All these reasons are the same as why Hololens is using LCOS. Note, I'm not 100% sure they are using LCOS, but it is by far the most likely technology. They could also be using DLP, but I would put that at less than a 10% chance. I'm now ruling out Micro-OLED because it would not work with a waveguide.
  2. Two (2) sequential focus planes are supported – The LCOS microdisplay is likely only able to support about 120 full-color frames per second, which is only enough to support 2 sequential focus planes per 1/60th of a second of a moving image (see the sketch after this list). Supporting more planes at a slower rate would result in serious image breakup when things move. The other big issue is the amount of processing required. Having even two focus planes greatly increases the computation that has to be done. To make it work correctly, they will need to track the person's pupils and factor that into their processing and deal with things like occlusion. Also, with the limited number of focus planes, they will have to figure out how to "fake" or otherwise deal with a wider range of focus.
  3. Variable Focus – What I don’t know is how they are supporting the change in focus between the sequential focus planes. They could be using some form of electrically alterable lens but it is problematical to have non-collimated light entering a waveguide. It would therefore seem more consistent for them to be using the technique shown in their patent application US 2016/0327789 that I discussed before.
  4. Photonics Chip (= Diffractive Waveguide) – ML has made a big deal about their Photonics Chip, what everyone else would call a "waveguide." The Photonics Chip likely works similarly to the one Hololens uses (for more information on waveguides, see my Oct 27th, 2016 post). The reports are that Hololens has suffered low yields with its waveguides, and Magic Leap's will have more to do optically to support focus planes.
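
To put rough numbers on the focus-plane budget in item 2 above, here is a minimal back-of-the-envelope sketch. The frame rates are my own illustrative assumptions, not figures from Magic Leap.

```python
# Rough focus-plane budget: illustrative arithmetic only, not Magic Leap's numbers.
# A frame-sequential system must cycle through every focus plane within one motion
# frame (assumed here to be 1/60th of a second) or moving images will break up.

def max_focus_planes(full_color_fps: float, motion_rate_hz: float = 60.0) -> int:
    """How many sequential focus planes fit in one motion frame."""
    return int(full_color_fps // motion_rate_hz)

for name, fps in [("LCOS, ~120 full-color frames/s (assumed)", 120),
                  ("DLP, ~360 full-color frames/s (assumed)", 360)]:
    planes = max_focus_planes(fps)
    print(f"{name}: {planes} focus planes, "
          f"{1000.0 / (60 * planes):.1f} ms per focus plane")
```
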
Comments

Overall, I think it is very clear that what they will actually make is only a fraction of the vision they have portrayed to the press. They may have wanted to do 50-megapixel-equivalent foveated displays, use the FSD as their display device, have 6 focus planes, or even (from Fortune, July 12, 2016) "'light-field' technology essentially mimics the brain's visual-perception mechanisms to create objects and even people who look and behave just the way they would in the real world, and interact with that world seamlessly." But then they have to build something that actually works and that people can afford to buy. Reality then hits the fan.

 

Magic Leap – Separating Magic and Reality

The Goal – Explain What Magic Leap Is Doing

Magic Leap has a way of talking about what they hope to do someday and not necessarily what they can do anytime soon.  Their patent applications are full of things that are totally impossible or impractical to implement.  I’ve been reading well over a thousand pages of Magic Leap (ML) patents/applications, various articles about the company, watching ML’s “through the optics” videos frame by frame, and then applying my own knowledge of display devices and the technology business to develop a picture of what Magic Leap might produce.

Some warnings in advance

If you want all happiness and butterflies, as well as elephants in your hand and whales jumping in auditoriums, or some tall tale of 50 megapixel displays and of how great it will be someday, you have come to the wrong place.  I’m putting the puzzle together based on the evidence and filling in with what is likely to be possible in both the next few years and for the next decade.

Separating Fact From Fiction

There have been other well-meaning evaluations such as "Demystifying Magic Leap: What Is It and How Does It Work?", "GPU of the Brain", and the videos by "Vance Vids", but these tend to start from the point of believing the promotion/marketing surrounding ML and finding support in the patent applications rather than critically evaluating them. Wired Magazine has a series of articles, and Forbes and others have also covered ML, but these have been personality and business pieces that make no attempt to seriously understand or evaluate the technology.

Among the biggest fantasies surrounding Magic Leap is the arrayed Fiber Scanning Display (FSD); many people think this is real. ML co-founder and Chief Scientist Brian Schowengerdt developed this display concept at the University of Washington based on an innovative endoscope technology, and it features prominently in a number of ML-assigned patent applications. There are giant issues in scaling FSD technology up to high resolution and in what doing so would require.

In order to get on with what ML is most likely doing, I have moved to the Appendix the discussion of why FSDs, light fields, and very complex waveguides are not what Magic Leap is doing. Once you get rid of all the "noise" of the impossible things in the ML patents, you are left with a much better picture of what they actually could be doing.

What is left is enough to make impressive demos, and it may be possible to produce at a price that at least some people could afford in the next two years. But ML still has to live by what is possible to manufacture.

Magic Leap's Optical "Magic" – Focus Planes

(Figure from the Journal of Vision, 2009)

At the heart of all of ML's optics-related patents is the concept of eye vergence-accommodation, where the focus of the various parts of a 3-D image should agree with their apparent distances or it will cause eye/brain discomfort. For more details about this subject, see this information about Stanford's work in this area and their approach of using quantized (only 2-level) time-sequential light fields.

There are some key similarities between the Stanford and Magic Leap approaches. Both quantize to a few levels to make implementation possible, both present their images time sequentially, and both rely on the eye/brain to fill in between the quantized levels and integrate a series of time-sequential images. Stanford's approach is decidedly not "see-through"; it uses an Oculus-like setup with two LCD flat panel displays in series, whereas Magic Leap's goal is to merge the 3-D images with the real world for Mixed Reality (MR).

Magic Leap uses the concept of "focus planes," where they conceptually break up a 3-D image into quantized focus planes based on the distance of the virtual image. While they show 6 virtual planes in Fig. 4 from the ML application above, that is probably what they would like to do; they are doing fewer planes (2 to 4) due to practical concerns.

Magic Leap then renders the parts of an image into the various planes based on the virtual distance. The ML optics make the planes appear to the eye as if they are focused at their corresponding virtual distances. These planes are optically stacked on top of each other to give the final image, and they rely on the person's eye/brain to fill in for the quantization.
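
As a way to picture the quantization, below is a minimal sketch of splitting a rendered image with a depth map into a few focus-plane images, with each pixel assigned to the plane nearest to it in diopters (1/distance). This is my own illustration of the general idea, not Magic Leap's algorithm, and the plane distances are assumed.

```python
# Illustrative quantization of a depth map into focus planes (not ML's actual method).
# Each pixel of the rendered image is assigned to the focus plane whose dioptric
# distance (1/meters) is closest to the pixel's virtual distance.

PLANES_DIOPTERS = [0.0, 0.5, 2.0]   # assumed planes at infinity, 2 m, and 0.5 m

def assign_plane(depth_m: float) -> int:
    """Return the index of the nearest focus plane for a pixel at depth_m meters."""
    d = 0.0 if depth_m == float("inf") else 1.0 / depth_m
    return min(range(len(PLANES_DIOPTERS)), key=lambda i: abs(PLANES_DIOPTERS[i] - d))

def split_into_planes(pixels, depths):
    """Split a flat list of pixel values into one image (list) per focus plane."""
    planes = [[0] * len(pixels) for _ in PLANES_DIOPTERS]
    for i, (value, depth) in enumerate(zip(pixels, depths)):
        planes[assign_plane(depth)][i] = value
    return planes

# Tiny example: four pixels at various virtual distances.
print(split_into_planes([10, 20, 30, 40], [float("inf"), 3.0, 1.0, 0.4]))
```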

Frame Sequential Focus Planes With SLMs

Magic Leap's patents/applications show various ways to generate these focus planes; the most fully formed concepts use a single display per eye and present the focus planes time sequentially in rapid succession, what ML refers to as "frame-sequential," where there is one focus plane per "frame."

Due to both the cost and the size of multiple displays per eye and their associated optics, including those to align and overlay them, the only possible way ML could build a product for even a modest-volume market is by using frame-sequential methods with a high-speed spatial light modulator (SLM) such as a DLP, LCOS, or OLED microdisplay.

Waveguides and Focus Planes

Light rays coming from a faraway point that make it into the eye are essentially parallel (collimated), while light rays from a near point arrive over a wider set of angles. This difference in angles is what makes them focus differently, but at the same time it creates problems for existing waveguide optics, such as what Hololens is using.
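
For a rough sense of the geometry (illustrative numbers of my own, not from Magic Leap), the bundle of rays from a point at distance d that makes it through a pupil of diameter p spans an angle of roughly p/d radians:

```python
# Illustrative only: ray-bundle spread vs. point distance for an assumed 4 mm pupil.
# A faraway point gives ~0 spread (collimated); a near point gives a wider cone.
import math

PUPIL_M = 0.004  # assumed 4 mm pupil diameter

for d in [0.25, 0.5, 2.0, float("inf")]:
    spread_mrad = 0.0 if math.isinf(d) else 1000.0 * PUPIL_M / d
    diopters = 0.0 if math.isinf(d) else 1.0 / d     # how optics people state focus demand
    print(f"point at {d} m: ray spread ~{spread_mrad:.0f} mrad, focus demand {diopters:.1f} D")
```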

The very flat and thin optical structures called "waveguides" will only work with collimated light entering them, because of the way light totally internally reflects to stay in the light guide and the way the diffraction grating works to make the light exit. So a simple waveguide would not work for ML.

Some of ML's concepts use one or more beam-splitting-mirror type optics rather than waveguides for this reason. Various ML patent applications show using a single large beam splitter or multiple smaller ones (such as at left), but these will be substantially thicker than a typical waveguide.

What Magic Leap calls a "Photonics Chip" looks to be at least one layer of diffractive waveguide. There is no evidence of mirror structures, and because it bends the wood in the background (if it were just a simple plate of glass, the wood in the background would not be bent), it appears to be a diffractive optical structure.

Because ML is doing focus planes, they need to have not one but a stack of waveguides, one per focus plane. The waveguides in ML's patent applications show collimated light entering each waveguide in the stack like a normal waveguide, but then the exit diffraction grating both causes the light to exit and imparts the appropriate focus-plane angle to the light.

To be complete, Magic Leap has shown in several patent applications some very thick "freeform optics" concepts, but none of these would look anything like the "Photonics Chip" that ML shows. ML's patent applications show many different optical configurations, and they have demoed a variety of different designs. What we don't know is whether the Photonics Chip they are showing is what they hope to use in the future or whether it will be in their first products.

Magic Leap's Fully Formed Designs in Their Recent Patent Applications

Most of Magic Leap's patent applications showing optics contain more like fragments of ideas. There are lots of loose ends and incomplete concepts.

More recently (one published just last week), there are patent applications assigned to Magic Leap with more "fully formed designs" that look much more like they actually tried to design and/or build them. Interestingly, these applications don't include the founders as inventors, neither Rony Abovitz, the CEO, nor Brian T. Schowengerdt, the Chief Scientist, though they may use ideas from those prior "founder" patent applications.

While the earlier ML applications mention Spatial Light Modulators (SLMs) using DLP, LCOS, and OLED microdisplays and talk about Variable Focus Elements (VFEs) for time-sequentially generating focus planes, they don't really show how to put them together to make anything (a lot is left to the reader).

Patent applications 2016/0011419 (left) and 2015/0346495 (below) show straightforward ways to achieve field-sequential focus planes using a Spatial Light Modulator (SLM) such as a DLP, LCOS, or OLED microdisplay.

A focus plane is created by setting the variable focus element (VFE) to one focus point and then generating the image with the SLM. The VFE focus is then changed, and a second focus plane is displayed by the SLM. This process can be repeated to generate more focus planes, limited by how fast the SLM can generate images and by the level of motion artifacts that can be tolerated. A minimal sketch of this loop is shown below.
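
Here is a small pseudocode-style sketch of that frame-sequential loop. The set_vfe_focus and display_subframe functions are hypothetical stand-ins, and the plane distances and rates are my own assumptions, not anything taken from the ML filings.

```python
# Illustrative frame-sequential focus-plane loop (hypothetical stand-in functions;
# this is a sketch of the general idea, not the design in the '419/'495 applications).

FOCUS_PLANES_DIOPTERS = [0.0, 2.0]   # e.g. a "far" plane and a "near" plane (assumed)
MOTION_RATE_HZ = 60.0

def set_vfe_focus(diopters: float) -> None:
    print(f"  VFE set to {diopters:.1f} D")                 # stand-in for the variable focus element

def display_subframe(plane_index: int) -> None:
    print(f"  SLM shows focus-plane image {plane_index}")   # stand-in for the DLP/LCOS/OLED

def show_one_motion_frame(frame_number: int) -> None:
    """Within one 1/60 s motion frame, cycle the VFE and SLM through every plane."""
    subframe_ms = 1000.0 / (MOTION_RATE_HZ * len(FOCUS_PLANES_DIOPTERS))
    print(f"frame {frame_number} ({subframe_ms:.1f} ms per focus plane):")
    for i, diopters in enumerate(FOCUS_PLANES_DIOPTERS):
        set_vfe_focus(diopters)   # change the focus first...
        display_subframe(i)       # ...then flash that plane's image on the SLM

show_one_motion_frame(0)
```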

These are clearly among the simplest ways to generate focus planes. All that is added over a "conventional" design is the VFE. When I first heard about Magic Leap many months ago, I heard they were using DLPs with multiple focus depths, but a more recent Business Insider article reports ML is using Himax LCOS. Either of these designs could easily be adapted to support OLED microdisplays.

The big issues I have with the straightforward optical approaches are the optical artifacts I have seen in the videos and the big deal ML makes out of their Photonics Chip (waveguide). Certainly their first generation might use a more straightforward optical design and then save the Photonics Chip for the next generation.

Magic Leap's Videos Show Evidence of Waveguide Optics

As I wrote last time, there is a lot of evidence from the videos ML has put out that they are using a waveguide, at least for the video demos. The problem when you bend light in a short distance using diffraction gratings or holograms is that some of the light does not get bent correctly, and this shows up as colors not lining up (chroma aberrations) as well as what I have come to call the "waveguide glow." If you look at R2D2 below (you may have to click on the image to see it clearly), you should see a blue/white glow around R2D2. I have seen this kind of glow in every diffractive and holographic waveguide I have seen. I have heard that the glow might be eliminated someday with laser/very-narrow-bandwidth colors and holographic optics.

The point here is that there is a lot of artifact evidence that ML was at least using some kind of waveguide in their videos. This makes it more likely that their final product will also use waveguides and at the same time may have some or all of the same artifacts.

Best Fit Magic Leap Application with Waveguides

If you drew a Venn diagram of all the existing information, the one patent application that best fits it all is the very recent US 2016/0327789. This is no guarantee that it is what they are doing, but it fits the current evidence best. It combines a focus-plane-sequential LCOS SLM (it shows it could also support DLP, but not OLED) with waveguide optics.

The way this works is that for every focus plane there are 3 waveguides (red, green, and blue) and a spatially separate set of LEDs. Because they are spatially separate, they illuminate the LCOS device at a different angle, and after going through the beam splitter, the waveguide "injection optics" cause the light from the different spatially separated LEDs to be aimed at a different waveguide of the same color. Not shown in the figure below is an exit grating that both causes the light to exit the waveguide and imparts an angle to the light based on the focus associated with that given focus plane. I have colored in the "a" and "b" spatially separated red paths below (there are similar pairs for blue and green).

With this optical configuration, the LCOS SLM is driven with the image data for a given color for a given focus plane, and then the associated color LED for that plane is illuminated. This process then continues with a different color and/or focus plane until all 6 waveguides for the 3 colors by 2 planes have been illuminated.
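
To make the sequencing concrete, here is a small sketch of driving the LCOS and the spatially separated LEDs through all 6 color/plane fields within one 1/60 second frame. The field ordering and timing are my own assumptions about how such a system could be sequenced, not details from the '789 application.

```python
# Illustrative field sequence for a '789-style arrangement: 3 colors x 2 focus planes
# = 6 fields per 1/60 s frame, each field steered into its own waveguide layer by a
# spatially separated LED. Ordering and timing are assumed, not from Magic Leap.
from itertools import product

COLORS = ["red", "green", "blue"]
PLANES = ["far", "near"]

FIELD_MS = 1000.0 / (60 * len(COLORS) * len(PLANES))   # ~2.8 ms per field

for plane, color in product(PLANES, COLORS):
    # Load the LCOS with this color's image for this focus plane, then flash the LED
    # whose position aims the light at the matching waveguide layer.
    print(f"LCOS: {color} image for {plane} plane; "
          f"flash the {color}/{plane} LED for {FIELD_MS:.1f} ms")
```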

The obvious drawbacks with this approach:

  1. There are a lot of layers of waveguide with exit diffraction gratings that the user will be looking through, and the number of layers grows by 3 with each added focus plane. That is a lot of stuff to be looking through, and it is bound to degrade the forward view.
  2. There are a lot of optical devices that all the light is passing through, and even small errors and light leakage build up. This can't be good for the overall optical quality. These errors have their effect on resolution/blurring, chroma aberrations, and glowing/halo effects.
  3. They have to be able to switch through all the colors and focus planes fast enough to avoid motion artifacts where the colors and/or the focus planes break up. Note this issue exists with any approach that is both field sequential and focus-plane sequential. Obviously this issue becomes worse with more focus planes.

The '789 application shows an alternative implementation using a DLP SLM. Interestingly, this arrangement would not work for OLED microdisplays, as they generate their own illumination, so you would not be able to get the spatially separated illumination.

So what are they doing?  

Magic Leap is almost certainly using some form of spatial light modulator with field-sequential focus planes (I know I will get push-back from the ML fans who want to believe in the FSD; see the Appendix below), but this is the only way I could see them going to production in the next few years. Based on the Business Insider information, it could very well be an LCOS device in the production unit.

The 2015/0346495 design with the simple beam splitter is what I would have chosen for a first design, provided there is an appropriate variable focus element (VFE) available. It is by far the simplest design and would seem to have the lowest risk. The downside is that the large angled beam splitter will make it thicker, but I doubt by that much. Not only is it lower risk (if the VFE works), but the image quality will likely be better using a simple beam splitter and spherical mirror/combiner than many layers of diffractive waveguide.

The 2016/0327789 application touches all the bases based on the available information. The downside is that they need 3 waveguides per focus plane. So if they are going to support, say, just 3 focus planes (say infinity, medium, and short focus), they are going to have 9 (3×3) layers of waveguides to manufacture and pay for, and 9 layers to look through to see the real world. Even if each layer is extremely good quality, the errors will build up in so many layers of optics. I have heard that the waveguide in Hololens has been a major yield/cost item, and what ML would have to build would seem to be much more complex.

While Magic Leap certainly could have something totally different, they can't be pushing on all fronts at once. They pretty much have to go with a working SLM technology and generate their focus planes time sequentially to build an affordable product.

I'm fond of repeating the 90/90 rule that "it takes 90% of the effort to get 90% of the way there, then it takes the other 90% to do the last 10%," and someone quipped back that it can also be 90/90/90. The point is that you can have something that looks pretty good and impresses people, but solving the niggling problems and making it manufacturable and cost-effective almost always takes more time, effort, and money than people want to think. These problems tend to become multiplicative if you take on too many challenges at the same time.

Comments on Display Technologies

As far as display technologies go, each of the spatial light modulator technologies has its pros and cons.

  1. LCOS seems to be finding the widest acceptance due to cost. It is generally lower power in near-eye displays than DLP. The downside is that it has a more modest field rate, which could limit the number of focus planes. It could be used in any of the 3 prime candidate optical systems. Because the LEDs are separate from the display, it can support essentially any level of brightness.
  2. DLP has the fastest potential field rate, which would support more focus planes. With DLPs they could trade color depth for focus planes. DLPs also tend to have higher contrast. Like LCOS, brightness will not be an issue, as the LEDs can provide more than enough light. DLP tends to be higher in cost and power and, due to the off-axis illumination, tends to have a slightly bigger optical system than LCOS in near-eye applications.
  3. OLED – It has a lot of advantages in that it does not have to sequentially change the color fields, but the current devices still have a slower frame rate than DLP and LCOS can support. What I don't know is how much the frame rate is limited by the OLED designs to date versus what they could support if pressed. The other issue is the lack of control over the angle of illumination, such as is used in the '789 application. OLEDs put out rather diffuse light with little angle control, and this could limit their usefulness with respect to focus planes, where you need to control the angles of light.
  4. FSD – Per my other comments and the Appendix below, don't hold your breath waiting for FSDs.

Image Quality Concerns

I would be very concerned about Magic Leap's image quality and resolution beyond gaming applications. Forget all those magazine writers and bloggers getting all geeked out over a demo with a new toy; at some point reality must set in.

Looking at what Magic Leap is doing and what I have seen in the videos, the effective resolution and image quality are going to be low compared to what you get even on a larger cell phone. They are taking a display device that could produce a good image (either 720p or maybe 1080p) under normal/simple optics and putting it through a torture test of optical waveguides and whatever optics are used to generate their focus planes at a rational cost; something has to give.

I fully expect to see a significant resolution loss no matter what they do, plus chroma aberrations and waveguide halos, provided they use waveguides. Another big issue for me will be the "real world view" through whatever it takes to create the focus planes, and how it will affect, say, seeing your TV or computer monitor through the combiner/waveguide optics.

I would also be concerned about field sequential artifacts and focus plane sequential artifacts.  Perhaps these are why there are so many double images in the videos.

Not to be all doom and gloom. Based on casual comments from people who have seen it and the fact that some really smart people invested in Magic Leap, it must provide an interesting experience, and image quality is not everything for many applications. It certainly could be fun to play with, at least for a while. After all, Oculus Rift has a big following, and its angular resolution is so bad that they cover it up by blurring, and it has optical problems like "god rays."

I'm more trying to level out the expectations. I expect it to be a long way from replacing your computer monitor, as one reporter suggested, or even your cell phone, at least for a very long time. Remember that this has so much stuff in it that, in addition to the head-worn optics and display, you are going to have a cable down to the processor and battery pack (a subject I have only barely touched on above).

Yes, yes, I know Magic Leap has a lot of smart people and a lot of money (and you could say the same for Hololens), but sometimes the problem is bigger than all the smart people and money can solve.

Appendix: 

The Big Things Magic Leap is NOT Going To Make in Production Anytime Soon

The first step in understanding Magic Leap is to remove all the clutter/noise that ML has generated. As my father used to often say, "there are two ways to hide information: you can remove it from view or you can bury it." Below is a list of the big things discussed by ML themselves and/or in their patents that are either infeasible or impossible anytime soon.

It would take a long article on each of these to give all the reasons why they are not happening, but hopefully the comments below will at least outline the why:


A) Laser Fiber Scanning Display (FSD) 

A number of people have picked up on this, particularly because the co-founder and Chief Scientist, Brian Schowengerdt, developed it at the University of Washington. The FSD comes in two "flavors": the low-resolution single FSD and the arrayed FSD.

1) First, you are pretty limited on the resolution of a single mechanically scanning fiber (even more so than with mirror scanners). You can only make them spiral so fast, and they have their own inherent resonance. They make an imperfectly spaced circular spiral that you then have to map a rectangular grid of pixels onto. You can only move the fiber so fast, and you can trade frame rate for resolution a bit, but you can't just make the fiber move faster with good control and scale up the resolution. So maybe you get 600 spirals, but it only yields maybe 300 x 300 effective pixels in a square.

2) When you array them, you then have to overlap the spirals quite a bit. According to ML patent US 9,389,424, it will take about 72 fiber scanners to make a 2560×2048 array (about 284×284 effective pixels per fiber scanner) at 72 Hz.

3) Let's say we only want 1920×1080, which is where the better microdisplays are today, or about 1/2.5 of 72 fiber scanners, or about 28 of them. This means we need 28 × 3 (red, green, blue) = 84 lasers. A near-eye display typically outputs between 0.2 and 1 lumen of light, and you then divide this by 28 (a rough arithmetic sketch follows after this list). So you need a very large number of really tiny lasers that nobody I know of makes (or may even know how to make). You have to have individual, very fast switching lasers so you can control them totally independently and at very high speed (on-off in the time of a "spiral pixel").

4) So now you need to convince somebody to spend hundreds of millions of dollars in R&D to develop very small and very inexpensive direct green (particularly) lasers (those cheap green lasers you find in laser pointers won't work because they switch WAY too slowly and are very unstable). Then, after they spend all that R&D money, they have to sell them to you very cheap.

5) Laser combining into each fiber. You then have the other nasty problem of getting the light from 3 lasers into a single fiber; it can be done with dichroic mirrors and the like, but it has to be VERY precise or you miss the fiber. To give you some idea of the "combining" process, you might want to look at my article on how Sony combined 5 lasers (2 red, 2 green, and 1 blue for brightness) for a laser mirror scanning projector: http://www.kguttag.com/2015/07/13/celluonsonymicrovision-optical-path/. Only now you don't do this just once but 28 times. This problem is not impossible, but it requires precision, and precision costs money. Maybe if you put enough R&D money into it you can make it on a single substrate. BTW, in the photo you see of the Magic Leap prototype (https://www.wired.com/wp-content/uploads/2016/04/ff_magic_leap-eric_browy-929×697.jpg), it looks to me like they didn't bother combining the lasers into single fibers.

6) Next, to get the light injected into a waveguide, you need to collimate the arrays of cone-shaped light rays. I don't know of any way, even with holographic optics, that you can collimate this light, because you have overlapping rays of light going in different directions. You can't collimate the individual cones of light rays, or there is no way to get them to overlap to make a single image without gaps in it. I have been looking through the ML patent applications, and they never seem to say how they will get this array of FSDs injected into a waveguide. You might be able to build one in a lab by diffusing the light first, but it would be horribly inefficient.

7) Now you have the issue of how you are going to support multiple focus planes. 72Hz is not fast enough to do it field sequentially, so you have to put in parallel ones and multiply by the number of focus planes. The question at this point is how much more than a Tesla Model S (starting at $66K) it will cost in production.

I think this is a big ask when you can buy an LCOS engine at 720p (and probably soon 1080p) for about $35 per eye. The theoretical FSD advantage is that it might be able to be scaled up to higher resolutions, but you are several miracles away from that today.
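
Putting rough numbers on points 2) through 5) above, here is a quick arithmetic sketch. The 72-fiber/2560×2048 figure comes from the '424 patent as quoted above; the 9x8 tiling and everything else are my own assumptions for illustration.

```python
# Back-of-the-envelope arithmetic for an arrayed FSD, using the figures quoted above
# plus my own assumptions; none of these numbers come from Magic Leap directly.
full_res = 2560 * 2048
fibers_full = 72                      # per ML patent US 9,389,424
fibers_x, fibers_y = 9, 8             # one plausible 72-fiber tiling (assumed)
print(f"each fiber covers ~{2560 // fibers_x} x {2048 // fibers_y} pixels")   # ~284 x 256

target_res = 1920 * 1080              # roughly where the better microdisplays are today
fibers_1080p = round(fibers_full * target_res / full_res)     # ~28 fibers
lasers = fibers_1080p * 3             # one red, one green, and one blue laser per fiber
print(f"1080p would need ~{fibers_1080p} fibers and ~{lasers} fast-switching lasers")

# A near-eye display typically outputs 0.2 to 1 lumen, split across all the fibers.
for total_lm in (0.2, 1.0):
    print(f"{total_lm} lm total -> ~{1000 * total_lm / fibers_1080p:.1f} millilumens per fiber")
```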

B) Light Fields, Light Waves, etc.

There is no way to support any decent resolution with light fields that is going to fit on anyone's head. It takes about 50 to 100 times the simultaneous image information to support the same resolution with a light field. Not only can't you afford to display all the information needed to support good resolution, it would take an insane level of computer processing. What ML is doing is a "shortcut" of multiple focus planes, which is at least possible. The "light wave display" is insane squared; it requires the array of fibers to be in perfect sync, among other issues.
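
For a sense of scale, here is an illustrative data-rate comparison between a light field with many views and ML's focus-plane shortcut. The resolution, view count, and frame rate are my own assumptions; the text above only says roughly 50 to 100 times the information.

```python
# Rough, illustrative data-rate comparison (assumed numbers, not ML figures): a light
# field needs many views of the scene per frame, while the focus-plane shortcut needs
# only a few planes per frame.
base_pixels = 1920 * 1080          # one eye, one image, assumed 1080p
bytes_per_pixel = 3                # 24-bit color
frame_rate = 60

def gbytes_per_sec(images_per_frame: int) -> float:
    return base_pixels * bytes_per_pixel * frame_rate * images_per_frame / 1e9

print(f"2 focus planes:      {gbytes_per_sec(2):.1f} GB/s per eye")
print(f"64-view light field: {gbytes_per_sec(64):.1f} GB/s per eye (assuming an 8x8 view grid)")
```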

C) Multiple Displays Driving the Waveguides

ML patents show passive waveguides with multiple displays (fiber scanning or conventional) driving them. It quickly becomes cost prohibitive to support multiple displays (2 to 6 as the patents show) all with the resolution required.

D) Variable Focus Optics on Either Side of the Waveguides

Several of their figures show electrically controlled variable focus element (VFE) optics on either side of the waveguides, with one set changing the focus of a frame-sequential image plane while a second set of VFEs compensates so the "real world" view remains in focus. There is zero probability of this working without horribly distorting the real-world view.

What Magic Leap Is Highly Unlikely to Produce

Active Switching Waveguides – ML patent applications show many variations that have drawn attention in other articles. The complexity of making them and the resultant cost is one big issue. There would likely be serious degradation of the view of the real world through all the layers and optical structures. Then you have the cost, in terms of both displays and optics, to get images routed to the various planes of the waveguide. ML's patent applications don't really say how the switching would work, other than saying they might use liquid crystal or lithium niobate, but there is nothing to show they have really thought it through. I put this in the "unlikely" category because companies such as DigiLens have built switchable Bragg Gratings.

Magic Leap “A Riddle Wrapped in an Enigma”

So what is Magic Leap doing? That is the $1.4 billion question. I have been studying their patents as well as videos and articles about them, and frankly a lot of it does not add up. The "Hype Factor" is clearly off the chart, with major and high-tech news/video outlets covering them and a major marketing machine spending part of the $1.4B, yet no device has been shown publicly, only a few "through the Magic Leap" online videos (6 months ago and 1 year ago). Usually something this over-hyped ends up like the Segway (I'm not the first to make the Segway comparison to Magic Leap) or, more recently, Google Glass.

Magic Leap appears to be moving on many different technological fronts at once (high-resolution fiber scanning display technology, a multi-focus combiner/light fields, and mega-processing to support the image processing required), which is almost always a losing strategy even for a large company, no less a startup, albeit a well-funded one. What's more, and the primary subject of this article, they appear to be moving on many different fronts/technologies with respect to the multi-focus combiner.

The image above from Wired in April 2016 and other articles talk about a "photonic chip," a marketing name for their combiner that is not used in any of their patent applications that I could find. By definition, a photonic device would have some optical property that is altered electronically, but based on other comments made by Magic Leap and looking at the patents, the so-called "chip" is just as likely a totally passive device.

It is also well known that Magic Leap is working on piezo-scanned laser fiber displays, a display technology initially developed by Magic Leap's Chief Scientist while at the University of Washington (click left for a bigger image). Note that it projects a spiraling cone of light.

A single scanning display is relatively low resolution, so achieving Magic Leap's resolution goals will require arrays of these scanning fibers, as outlined in their US application 2015/0268415.

Magic Leap is moving in so many different directions at the same time. I plan on covering the scanning fiber display in much more detail in the near future.

Background – Nvidia and Stanford Light Fields

A key concept running through everything about Magic Leap is that their combiner supports multiple focus depths at the same time. The term "Light Fields" is often used in connection with Magic Leap, but what they are doing is not classic light fields such as Nvidia has demonstrated (a very good article and video is here), nor even what Stanford's Gordon Wetzstein describes in his work on compressive light field displays (example here) and several of his YouTube videos, in particular this one that discusses light fields and the compressive display. (More on this background at the end.)

A key thing to understand about "light fields" and Magic Leap's multi-focus planes is that both are based on controlling the angles of the rays of light, as the angle controls the focus distance. The rays of light that will make it through the eye's pupil from a point on a faraway object come in nearly parallel, whereas the rays from a nearby point have a wider range of angles.

Magic Leap Patents

Magic Leap's patents show a mix of related and very different types of waveguide combiners. Most in line with what Magic Leap talks about in the press and videos are the ones that include multi-plane waveguides and scanned laser fiber displays. These include US patent applications US20150241705 ('705) and the 490-page US20160026253 ('253). I have clipped out some of the key figures from each below (click on the images to see larger versions).

Fig. 8 from the '705 application uses a multi-layer, electrically switched diffraction grating waveguide (but they don't say what technology they expect to use to cause the switching). In addition to switching, each diffraction grating makes the image focus differently, as shown in Fig. 9. While this "fits" with the "photonic chip" language by Magic Leap, I'm less inclined to believe this is what Magic Leap is doing based on the evidence to date (although Digilens has developed switchable SBGs in their waveguides).

Fig. 6 likely comes closer to what Magic Leap seems to be working on, at least in the long term. In this case there are one or more laser scanning fiber displays for each layer of the diffraction grating (similar to Fig. 8 but passive/fixed). The grating layers in this setup are passive, and whichever display is "on" chooses the grating layer and thus the focus. Also note the "collimation element 6" between the scanning fibers 602a-e and the waveguide 122. It takes the cone of rays from each spiral scanning fiber and turns them into an array of parallel (collimated) rays. Below is a prototype from the June 2016 "Wired" article with two each of red, green, and blue fibers per eye (6 total), which would support two simultaneous focus points (in future articles I plan on going into more detail about the scanning fiber displays).

Above I have put together a series of figures from Magic Leap's US patent application 2015/0346495. Most of these are different approaches to accomplish essentially the same effect, namely to create 2 or more images in layers that appear to be in focus at different distances. In some approaches they generate the various focused images time sequentially and rely on the eye's persistence of vision to fuse them (the Stanford compressive display works sequentially). You may note that some of the combiner technologies shown above are not that flat, including what is known as "free form optics" (Fig. 22G above), which would be compatible with a panel (DLP, LCOS, or Micro-OLED) display.

And Now for something completely different


To the left is patent application 2015/0346495, which shows a very different optical arrangement with a totally different set of inventors from the prior patents. This device supports multiple focus effects via a Variable Focus Element (VFE). What they do is generate a series of images sequentially, change the focus between images, and use the persistence of the human visual system to fuse the various focused images.

This is a totally different approach to achieve the same effect. It does require a very fast image-generating device, which would tend to favor DLP and OLED over, say, LCOS as the display device. I have questions as to how well the time-sequential layers will work with a moving image and whether there would be a temporal breakup effect.

There are also a number of patents with totally different optical engines and totally different inventors (and not principals of Magic Leap) with free-form (very thick/non-flat) optics, 20160011419 and 20160154245, which would fit with using an LCOS (or DLP) panel instead of the laser fiber scanning display.

I have heard from more than one source that at least some early prototypes by Magic Leap used DLPs.  This would suggest some form of time sequential focusing.

Problems I See with the “Photonic Chip” Magic Leap Showed in the June 2016 Wired picture

"Edge injection" waveguide – There needs to be an area to inject the light. All the waveguide structures in Magic Leap's patents show "side/edge" injection of the image. Compare this to Microsoft's Hololens (at right), which injects the image light in the face of the waveguide (highlighted with the green dots). With an edge-injected waveguide, the waveguide would need to be thicker for even a single layer, no less the multiple layers with multiple focus distances that Magic Leap requires.

Lumus (at left) has a series of exit prisms similar to a single layer of the Magic Leap '495 application Figs. 5H, 6A, 8A, and 10. Lumus does edge injection, but at roughly a 45-degree angle (see circled edge), which gives more area to inject the image and gets the light started at an angle sufficient for Total Internal Reflection (TIR). There is nothing like this in the Magic Leap chip.

Looking at the Magic Leap "chip" (right), there is no obvious place for light to be "injected." One would expect to see some discernible structure, such as an angled edge or a structure like the one in the '705 application Fig. 8, for injecting the light. Beyond this, what about injecting multiple images for the various focus layers? There is a "tab" at the top which would seem to be either for mounting, or it could be a light injection area for surface injection like Hololens, but then I would expect to see some blurring/color or other evidence of a diffractive structure (like Hololens has) to cause the light to bend about 45 degrees for TIR in such a short distance.

Another concern is that you don’t see any structure other than some blurring/diffusion in the Magic Leap chip.  Notice in both the Lumus and Microsoft combiners you can see structures, a blurring/color change in the case of Hololens and the exit prisms in the case of Lumus.

Beyond this, if they are using their piezo-scanned laser fiber display, it generates a spiraling angular cone of light that has to be "collimated" (the light rays made parallel, which is shown in the patent applications) so they can make their focus effects work. There would need to be a structure for doing the collimation. If they are using a more conventional display such as DLP, LCOS, or Micro-OLED, they are going to need a larger light injection area.

My conclusion is that, at best, the Magic Leap chip shown is either part of their combiner (one layer) or just a mock-up of what they hope to make someday.   I haven't had a chance to look at or through it, and anyone who has is under NDA, but based on the evidence I have, it seems unlikely that what is shown is functional.

Pupil/Eyebox

I'm curious to see how small/critical the pupil/eyebox will be for their combiner.   On the one hand, they want light at the right angles to create the focusing effects; on the other hand, they will want diverse/diffused light to give a large enough pupil/eyebox, and these could be at cross purposes.  I'm wondering how critical it will be to position the eye in precisely the right place.   This is a question and not a criticism per se.

What, Himax LCOS? Business Insider OCT 27, 2016 (“Magic Leap Lite”?)

I had been studying the various patents and articles for some time, and then last week's Business Insider article (see: http://www.businessinsider.in/Magic-Leap-could-be-gearing-up-for-a-2017-launch/articleshow/55097808.cms) threw a big curve ball.  The article quotes KGI Securities analyst Ming-Chi Kuo as saying:

“the high cost of some of Magic Leap’s components, such as a micro projector from Himax that costs about $35 to $45 per unit.”

I have no idea whether this is true or not, but if true, it suggests something very different.   Using a Himax LCOS device is inconsistent with just about everything Magic Leap has filed patents on. Even the sequentially focusing display would at best be tough with the Himax LCOS, as it has a significantly lower field-sequential rate than DLP.

If true, it would suggest that Magic Leap is going to put out a “Magic Leap Very Lite” product based around some of their developments. Maybe this will be more of a software, user interface, and developer device. But I don't see how they get close to what they have talked about to date.  The highest-resolution Himax production device is 1366×768.

More Observations on Stanford’s Compressive Display and Magic Leap

Both are based on greatly reducing the image content from the general/brute-force case so that a feasible system might be possible.  The Stanford approach is different from what Magic Leap appears to be doing.  The Stanford system has a display panel and a “modulator” panel that selects the light rays (by controlling the angle of light that gets through) from the display panel.  In contrast, Magic Leap generates multiple layers of images, with a different focus associated with each layer, in an additive manner.   This should mean that the two approaches will have to handle things like “occlusion,” where parts of an image hide something behind them, differently (it would seem to be more easily dealt with in the Stanford approach, I would think).

A key point that Dr. Wetzstein makes is that brute-force light fields (à la Nvidia, which hugely sacrifice resolution) are impractical (too much to display and too much to process), so you have to find ways to drastically reduce the display information.  Dr. Wetzstein also comments (in passing in the video) that the problems are greatly reduced if you can track the eye.  Reducing the necessary image content has to be at the heart of what Magic Leap is doing as well.  All the incarnations in the patent art and Magic Leap's comments point to supporting two or more simultaneous focus points.   Eye tracking is another key point in Magic Leap's patents.

One might wonder whether, if you can track the eyes and tell their focus point, you could eliminate the need for the light field display altogether and generate an image that appears focused and blurred based on where the eye is focused.  Dr. Wetzstein points out that one of the big reasons for having light fields is to deal with the eyes' focus not agreeing with where the two eyes are aimed.

Conclusion

Summing it all up, I am skeptical that Magic Leap is going to live up to the hype, at least anytime soon.  $1.4B can buy a lot of marketing as well as technology development, but it looks to me that accomplishing what Magic Leap wants to do is not going to be feasible for a long time. Assuming they can make it work at all (I wonder about the fiber scanning display), there is then the issue of feasibility (the Concorde SST airplane was “possible,” but it was not “feasible,” for example).

If they do enter the market in 2017 as some have suggested, it is almost certainly going to be a small subset of what they plan to do. It could be like Apple's Newton, which arguably was too far ahead of its time to fulfill its vision, or it could be the next SST/Segway.

Next time I am planning on writing about Magic Leap’s scanning fiber display.

AR/MR Optics for Combining Light for a See-Through Display (Part 1)

In general, people find the combining of an image with the real world somewhat magical; we see this with heads-up displays (HUDs) as well as Augmented/Mixed Reality (AR/MR) headsets.   Unlike Star Wars' R2-D2 projecting into thin air, which was pure movie magic (i.e., fake/impossible), light rays need something to bounce off to redirect them from the image source into a person's eye.  We call the optical device that combines the computer image with the real world a “combiner.”

In effect, a combiner works like a partial mirror.  It reflects or redirects the display light to the eye while letting light through from the real world.  This is not, repeat not, a hologram, although several companies mistakenly call it that today.  Over 99% of what people think of or call “holograms” today are not holograms at all, but rather simple optical combining (also known as the Pepper's Ghost effect).

I'm only going to cover a few of the more popular/newer/more interesting combiner examples.  For a more complete and more technical survey, I would highly recommend a presentation by Kessler Optics. My goal here is not to make anyone an optics expert but rather to give insight into what companies are doing and why.

With headsets, the display device(s) is too near for the human eye to focus on, and there are other issues such as making a big enough “pupil/eyebox” so that the alignment of the display to the eye is not overly critical. With one exception (the Meta 2), there are separate optics that move the apparent focus point out (usually they try to put it in a person's “far” vision, as this is more comfortable when mixing with the real world).  In the case of Magic Leap, they appear to be taking the focus issue to a new level with “light fields,” which I plan to discuss in the next article.

With combiners, there is both the effect you want, i.e., redirecting the computer image into the person's eye, and the potentially undesirable effects the combiner will cause when seeing through it to the real world.  A partial list of the issues includes:

  1. Dimming
  2. Distortion
  3. Double/ghost images
  4. Diffraction effects of color separation and blurring
  5. Seeing the edge of the combiner

In addition to the optical issues, the combiner adds weight, cost, and size.  Then there are aesthetic issues, particularly how they make the user's eyes look and whether they affect how others see the user's eyes; humans are very sensitive to how other people's eyes look (see the Epson BT-300 below as an example).

FOV and Combiner Size

There is a lot of desire to support a wide Field Of View (FOV), and for combiners a wide FOV means the combiner has to be big.  The wider the FOV and the farther the combiner is from the eye, the bigger the combiner has to get (there is no way around this fact; it is a matter of physics).   One way companies “cheat” is to not support a person wearing their glasses at all (as Google Glass did).

The simple (not taking everything into account) equation (in Excel) to compute the minimum width of a combiner is =2*TAN(RADIANS(A1/2))*B1, where A1 is the FOV in degrees and B1 is the distance from the eye to the farthest part of the combiner.  Glasses are typically about 0.6 to 0.8 inches from the eye, and with the size of the glasses and frames you want about 1.2 inches or more of eye relief. For a 40-degree-wide FOV at 1.2 inches this translates to 0.9″, at 60 degrees 1.4″, and at 100 degrees 2.9″, which starts becoming impractical (typical lenses on glasses are about 2″ wide).

For very wide FOV displays (over 100 degrees), the combiner has to be so near your eye that supporting glasses becomes impossible. The formula above will let you try your own assumptions.
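
For anyone who would rather not fire up Excel, here is the same simplified formula as a small Python sketch (the 1.2-inch eye-relief figure is just the example assumption from above):

    import math

    def min_combiner_width(fov_deg, eye_to_combiner_in):
        # Minimum combiner width for a given FOV; same simplified equation as the Excel formula above.
        return 2 * math.tan(math.radians(fov_deg / 2)) * eye_to_combiner_in

    for fov in (40, 60, 100):
        print(f"{fov} deg FOV at 1.2 in -> {min_combiner_width(fov, 1.2):.1f} in wide")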

Popular/Recent Combiner Types (Part 1)

Below, I am going to go through the most common beam combiner options.  I'm going to start with the simpler/older combiner technologies and work my way to the “waveguide” beam splitters of some of the newest designs in Part 2.  I'm going to try and hit on the main types, but there are many big and small variations within each type.

Solid Beam Splitter (Google Glass and Epson BT-300)

These are often polarizing beam splitters when used with LCOS microdisplays, but they can also be simple mirrors.  They generally are small due to weight and cost issues, as with the Google Glass at left.  Due to their small size, the user will see the blurry edges of the beam splitter in their field of view, which is considered highly undesirable.  Also, as seen in the Epson BT-300 picture (at right), they can make a person's eyes look strange.  As seen with both the Google Glass and the Epson, they have been used with the projector engine(s) on the sides.

Google Glass has only about a 13-degree FOV (and did not support wearing a person's glasses) and about 1.21 arc-minutes/pixel angular resolution, which is on the small end compared to most other headset displays.    The BT-300 has about a 23-degree horizontal FOV (and has enough eye relief to support most glasses) and dual 1280×720-pixel displays, one per eye, giving it a 1.1 arc-minutes/pixel angular resolution.  Clearly these are on the low end of what people are expecting in terms of FOV, and the solid beam splitter quickly becomes too large, heavy, and expensive as the FOV grows.  Interestingly, they are both on the small end in apparent pixel size.

Spherical/Semi-Spherical Large Combiner (Meta 2)

While most of the AR/MR companies today are trying to make flatter combiners to support a wide FOV with small microdisplays for each eye, Meta has gone in the opposite direction with dual very large semi-spherical combiners with a single OLED flat panel to support an “almost 90 degree FOV”. Note in the picture of the Meta 2 device that there are essentially two hemispheres integrated together with a single large OLED flat panel above.

The Meta 2 uses a 2560 by 1440 pixel display that is split between the two eyes.  Allowing for some overlap, there will be about 1,200 pixels per eye to cover the 90-degree FOV, resulting in rather chunky/large (similar to Oculus Rift) 4.5 arc-minutes/pixel, which I find somewhat poor (a high-resolution display would be closer to 1 arc-minute/pixel).
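
The arc-minutes-per-pixel figures quoted throughout this article all come from the same simple ratio; here is a quick sketch of it using the Meta 2 and BT-300 numbers from above (the ~1,200 pixels per eye is the rough estimate, not a spec):

    def arcmin_per_pixel(fov_deg, pixels_across):
        # Angular resolution: arc-minutes of field of view covered by each pixel.
        return fov_deg * 60.0 / pixels_across

    print(round(arcmin_per_pixel(90, 1200), 1))   # Meta 2 estimate -> 4.5 arc-minutes/pixel
    print(round(arcmin_per_pixel(23, 1280), 1))   # Epson BT-300   -> ~1.1 arc-minutes/pixel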

The effect of the dual spherical combiners is to act as magnifying mirrors that also move the focus point out in space so the user can focus. The amount of magnification and the apparent focus point are functions of A) the distance from the display to the combiner, B) the distance from the eye to the combiner, and C) the curvature.   I'm pretty familiar with this optical arrangement since the optical design I did at Navdy had a similarly curved combiner, but because the distances from the display to the combiner and from the eye to the combiner were so much greater, the curvature was less (larger radius).

I wonder if their very low angular resolution was a result of their design choice of the large spherical combiner and the OLED displays available for them to use.   To get the “focus” correct, they would need a smaller (more curved) radius for the combiner, which also increases the magnification and thus makes for big, chunky pixels.  In theory they could swap out the display for something with higher resolution, but it would take more than doubling the horizontal resolution to have decent angular resolution.

I would also be curious how well this large a plastic combiner will keep its shape over time. It is a coated mirror, and thus any minor perturbations are doubled.  Additionally, any strain in the plastic (and there is always stress/strain in plastic) will cause polarization-effect issues, say when viewing an LCD monitor through it.   It is interesting because it is so different, although the basic idea has been around for a number of years, for example from a company called Link (see picture on the right).

Overall, Meta is bucking the trend toward smaller and lighter, and I find their angular resolution disappointing. The image quality based on some online see-through videos (see, for example, this video) is reasonably good, but you really can't tell the angular resolution from the video clips I have seen.  I do give them big props for showing real/true video through their optics.

It should be noted that their system, at $949 for a development kit, is about 1/3 the price of the Hololens and of the ODG R-7 (which has only 720p per eye), but higher than the BT-300 at $750.   So at least on a relative basis, they look to be much more cost-effective, if quite a bit larger.

Tilted Thin Flat or Slightly Curved (ODG)

With a wide-FOV tilted combiner, the microdisplay and optics are located above in a “brow,” with the plate tilted (about 45 degrees), as shown at left on an Osterhout Design Group (ODG) model R-7 with 1280 by 720 pixel microdisplays per eye.   The R-7 has about a 37-degree FOV and a comparatively OK 1.7 arc-minutes/pixel angular resolution.

Tilted plate combiners have the advantage of being the simplest and least expensive way to provide a large field of view while being relatively lightweight.

The biggest drawback of the plate combiner is that it takes up a lot of volume/distance in front of the eye since the plate is tilted at about 45 degrees from front to back.  As the FOV gets bigger, the volume/distance required also increases.
ODG is now talking about a next model called “Horizon” (early picture at left). Note in the picture how the combiner (see red dots) has become much larger. They claim to have a >50-degree FOV, and with a 1920×1080 display per eye this works out to an angular resolution of about 1.6 arc-minutes/pixel, which is comparatively good.

Their combiner is bigger than absolutely necessary for the ~50 degree FOV.  Likely this is to get the edges of the combiner farther into a person’s peripheral vision to make them less noticeable.

The combiner is still tilted, but it looks like it may have some curvature to it, which will tend to act as a last stage of magnification and move the focus point out a bit.   The combiner in this picture is also darker than the older R-7 combiner and may have additional coatings on it.

ODG has many years of experience and has done many different designs (for example, see this presentation on LinkedIn).  They certainly know about the various forms of flat optical waveguides, such as the one Microsoft's Hololens is using, that I am going to be talking about next time.  In fact, Microsoft licensed patents from ODG for about $150M US.

Today, flat or slightly curved thin combiners like ODG is using are probably the best all-around technology in terms of size, weight, cost, and, perhaps most importantly, image quality.   Plate combiners don't require the optical “gymnastics” and the level of technology and precision that the flat waveguides require.

Next time — High Tech Flat Waveguides

Flat waveguides using diffractive optical elements (DOE) and/or holographic optical elements (HOE) are what many think will be the future of combiners.  They certainly are the most technically sophisticated. They promise to make the optics thinner and lighter, but the question is whether they yet have the optical quality and yield/cost to compete with simpler methods like what ODG is using on the R-7 and Horizon.

Microsoft and Magic Leap are each spending literally over $1B US, and both are going with some form of flat, thin waveguide. This is a subject unto itself that I plan to cover next time.

 

Near Eye AR/VR and HUD Metrics For Resolution, FOV, Brightness, and Eyebox/Pupil

I'm planning on following up on my earlier articles about AR/VR Head Mounted Displays (HMDs), which also relate to Heads-Up Displays (HUDs), with some more articles, but first I would like to get some basic technical concepts out of the way.  It turns out that the metrics we care about for projectors, while related, don't work for measuring HMDs and HUDs.

I'm going to try and give some “working man's” definitions rather than precise technical definitions.  I'll be giving a few real-world examples and calculations to show you some of the challenges.

Pixels versus Angular Resolution

Pixels are pretty well understood, at least with today's displays that have physical pixels, like LCDs, OLEDs, DLP, and LCOS.  Scanning displays, like CRTs and laser beam scanning, generally have additional resolution losses due to imperfections in the scanning process, and as my other articles have pointed out, they have much lower resolution than the physical-pixel devices.

When we get to HUDs and HMDs, we really want to consider the angular resolution, typically measured in “arc-minutes,” which are 1/60th of a degree; simply put, this is the angular size that a pixel covers from the viewing position. Consumers in general haven't understood arc-minutes, so many companies have in the past talked in terms of a certain size and resolution of display viewed from a given distance, for example a 60-inch diagonal 1080P display viewed at 6 feet; but since the size of the display, the resolution, and the viewing distance are all variables, it is hard to compare displays or to say what this even means for a near-eye device.

A common “standard” for good resolution is 300 pixels per inch viewed at 12 inches (considered reading distance), which translates to about one arc-minute per pixel.  People with very good vision can actually distinguish about twice this resolution, or down to about 1/2 arc-minute, in their central vision, but for most purposes one arc-minute is a reasonable goal.
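
As a quick sanity check of that rule of thumb, the small sketch below (my own arithmetic) computes the angle subtended by one pixel of a 300-pixels-per-inch display viewed from 12 inches:

    import math

    pixel_size_in = 1 / 300          # 300 pixels per inch
    viewing_distance_in = 12         # "reading distance"
    arc_minutes = math.degrees(math.atan(pixel_size_in / viewing_distance_in)) * 60
    print(round(arc_minutes, 2))     # ~0.95, i.e., about one arc-minute per pixel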

One nice thing about the one-arc-minute-per-pixel goal is that the math is very simple: multiply the degrees of FOV horizontally (or vertically) by 60 and you have the number of pixels required to meet the goal.  If you stray much below the goal, then you are into 1970s-era “chunky pixels.”

Field of View (FOV) and Resolution – Why 9,000 by 8,100 pixels per eye are needed for a 150-degree horizontal FOV

As you probably know, the human eye's retina has variable resolution.  The human eye has a roughly elliptical FOV of about 150 to 170 degrees horizontally by 135 to 150 degrees vertically, but the generally good discriminating FOV is only about 40 degrees (+/-20 degrees) wide, with the region of reasonably sharp vision, the macula, being about 17-20 degrees, and the fovea, with the very best resolution, covering only about 3 degrees of the eye's visual field.   The eye/brain processing is very complex, however, and the eye moves to aim the higher-resolving part of the retina at a subject of interest; one would want something on the order of the one-arc-minute goal in the central part of the display (and since having a variable-resolution display would be a very complex matter, it ends up being the goal for the whole display).

Going back to our 60″ 1080p display viewed from 6 feet, the pixel size in this example is ~1.16 arc-minutes, and the horizontal field of view will be about 37 degrees, just about covering the generally good-resolution part of the eye's retina.


Image from Extreme Tech

Now let's consider the latest Oculus Rift VR display.  It specs 1200 x 1080 pixels with about a 94-degree horizontal by 93-degree vertical FOV per eye, or a very chunky ~4.7 arc-minutes per pixel; in terms of angular resolution, this is roughly like looking at an iPhone 6 or 7 from 5 feet away (or, conversely, like your iPhone's pixels being 5X as big).   To get to the 1 arc-minute-per-pixel goal of, say, viewing today's iPhones at reading distance (say you want to virtually simulate your iPhone), they would need a 5,640 by 5,580 display per eye, or a single OLED panel with about 12,000 by 7,000 pixels (allowing for a gap between the eyes for the optics)!!!  If they wanted to cover the 150 by 135 degree FOV, we are then talking 9,000 by 8,100 per eye, or about a 20,000 by 9,000 flat-panel requirement.
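
To make that arithmetic easy to reproduce, here is a small sketch using the one-arc-minute rule of thumb with the FOV figures quoted above:

    def pixels_for_one_arcmin(fov_deg):
        # Pixels needed across a given FOV to hit the one-arc-minute-per-pixel goal.
        return fov_deg * 60

    print(round(94 * 60.0 / 1200, 1))                               # Oculus Rift: ~4.7 arc-min/pixel
    print(pixels_for_one_arcmin(94), pixels_for_one_arcmin(93))     # -> 5640 x 5580 per eye
    print(pixels_for_one_arcmin(150), pixels_for_one_arcmin(135))   # -> 9000 x 8100 per eye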

Not as apparent but equally important is that the optics to support these kinds of resolutions, if possible at all, would be exceedingly expensive.   You need extremely high-precision optics to bring the image into focus from such short range.   You can forget about the lower-cost and lighter-weight Fresnel optics (with their “God ray” issues) used in the Oculus Rift.

We are into what I call “silly number territory” that will not be affordable for well beyond 10 years.  There are even questions whether any known technology could achieve these resolutions in a size that could fit on a person's head, as there are a number of physical limits to pixel size.

People in gaming are apparently living with this appallingly low (1970s-era TV game) angular resolution for games and videos (although the God rays can be very annoying depending on the content), but clearly it is not a replacement for a good high-resolution display.

Now let's consider Microsoft's Hololens. Its most criticized issue is its smaller FOV (relative to VR headsets such as the Oculus Rift) of about 30 by 17.5 degrees.  It has a 1268 by 720 pixel display per eye, which translates into about 1.41 arc-minutes per pixel; while not horrible, this falls short of the goal above.   If they had used the 1920×1080 (full HD) microdisplay devices that are becoming available, then they would have been very near the 1 arc-minute goal at this FOV.

Let's understand here that it is not as simple as changing out the display; they will also have to upgrade the “light guide” that they use as a combiner to support the higher resolution.   Still, this is all reasonably possible within the next few years.   Microsoft might even choose to grow the FOV to around 40 degrees horizontally and keep the lower angular resolution with a 1080p display.  Most people will not seriously notice a 1.4X angular resolution difference (but they will at about 2X).

Commentary on FOV

I know people want everything, but I really don't understand the criticism of the FOV of Hololens.  What we see here is a bit of “choose your poison.”  With existing affordable (or even not-so-affordable) technology you can't support a wide field of view and simultaneously good angular resolution; it is simply not realistic.   One can imagine optics that would let you zoom between a wide FOV with lower angular resolution and a smaller FOV with higher angular resolution.  This zooming function could perhaps be controlled by the content or by feedback from the user's eyes and/or brain activity.

Lumens versus Candelas per Meter Squared (cd/m2 or nits)

With an HMD or HUD, what we care about is the light that reaches the eye.   In a typical front-projector system, only an extremely small percentage of the light that goes out of the projector reflects off the screen and makes it back to any person's eye; the vast majority of the light goes to illuminating the room.   With an HMD or HUD, all we care about is the light that makes it into the eye.

Projector lumens, or luminous flux, simply put, are a measure of the total light output, for a projector usually measured when outputting a solid white image.   To get the light that makes it to the eye, we have to account for the light hitting a screen and then being absorbed, scattered, and reflected back at an angle that will reach the eye.  Only an exceedingly small percentage (a small fraction of 1%) of the projected light will make it into the eye in a typical front-projector setup.

With HMDs and HUDs we talk about brightness in terms of candelas per meter squared (cd/m2), also referred to as “nits” (while considered an obsolete term, it is still often used because it is easier to write and say).  Cd/m2 (or luminance) is a measure of brightness in a given direction, which tells us how bright the light appears to the eye looking in a particular direction.   For a good quick explanation of lumens and cd/m2, I would recommend a Compuphase article.


Hololens appears to be “luminosity challenged” (lacking in cd/m2), and Microsoft has resorted to putting a sunglasses-like outer shield on it even for indoor use.  The light-blocking shield is clearly a crutch to make up for a lack of brightness in the display.   Even with the shield, it can't compete with bright light outdoors, which is 10 to 50 times brighter than a well-lit indoor room.

This of course is not an issue for VR headsets, typified by the Oculus Rift, which totally block the outside light, but it is a serious issue for AR-type headsets; people don't normally wear sunglasses indoors.

Now let's consider a HUD display.  A common automotive spec for a HUD in sunlight is to have 15,000 cd/m2, whereas a typical smartphone is between 500 and 600 cd/m2, or about 1/30th the luminance of what is needed.  When you are driving a car down the road, you may be driving in the direction of the sun, so you need a very bright display in order to see it.

The way HUDs work, you have a “combiner” (which may be the car's windshield) that combines the image being generated with the light from the real world.  A combiner typically only reflects about 20% to 30% of the light, which means that the display before the combiner needs to have on the order of 50,000 to 75,000 cd/m2 to support the 15,000 cd/m2 seen off the combiner.  When you consider that your smartphone or computer monitor only has about 400 to 600 cd/m2, it gives you some idea of the optical tricks that must be played to get a display image that is bright enough.
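
Below is a minimal sketch of that divide-by-reflectivity relationship (it ignores any other optical losses, so treat it as a rough lower bound on what the image source must produce):

    def required_display_luminance(target_cd_m2, combiner_reflectivity):
        # Luminance the image source must produce so the combiner reflects the target toward the eye.
        return target_cd_m2 / combiner_reflectivity

    for reflectivity in (0.2, 0.3):
        print(f"{reflectivity:.0%} reflective combiner -> "
              f"{required_display_luminance(15000, reflectivity):,.0f} cd/m2 from the display")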

You will see many “smartphone HUDs” that simply have a holder for a smartphone and a combiner (semi-mirror), such as the one pictured at right from Amazon or crowdfunding sites, but rest assured they will NOT work in bright sunlight and are only marginal in typical daylight conditions. Even with combiners that block more than 50% of the daylight (not really much of a see-through display at this point), they don't work in daylight.   There is a reason why companies are making purpose-built HUDs.

Cd/m2 is also a big issue for outdoor head-mounted display use. Depending on the application, they may need 10,000 cd/m2 or more, and this can become very challenging with some types of displays while keeping within the power and cooling budgets.

At the other extreme, at night or in dark indoor settings, you might want the display to have less than 100 cd/m2 to avoid blinding the user to their surroundings.  Note that the SMPTE spec for movie theaters is only about 50 cd/m2, so even at 100 cd/m2 you would be about 2X the brightness of a movie theater.  If the device must go from bright sunlight to night use, you could be talking over a 1,500-to-1 dynamic range, which turns out to be a non-trivial challenge to do well with today's LEDs or lasers.

Eye-Box and Exit Pupil

Since AR HMDs and HUDs generate images for a user's eye in a particular place, yet need to compete with the ambient light, the optical system is designed to concentrate light in the direction of the eye.  As a consequence, the image will only be visible within a given solid angle, the “eye-box” (with HUDs) or “exit pupil” (with near-eye displays).   There is also a trade-off between the size of the eyebox or pupil and ease of use: the bigger the eye-box or pupil, the easier the device is to use.

With HUD systems there can be a pretty simple trade-off between eye-box size, cd/m2, and the lumens that must be generated.   Using some optical tricks can help keep from needing an extremely bright and power-hungry light source.   Conceptually, a HUD is in some ways like a head-mounted display but with very long eye relief. With such large eye relief and the ability of the person to move their whole head, the eyebox for a HUD has to be significantly larger than the exit pupil of near-eye optics.  Because the eyebox is so much larger, a HUD is going to need much more light to work with.

For near eye optical design, getting a large exit pupil is a more complex issue as it comes with trade-offs in cost, brightness, optical complexity, size, weight, and eye-relief (how far the optics are from the viewer’s eye).

With too small a pupil and/or too much eye relief, a near-eye device is difficult to use, as any small movement of the device causes you to not be able to see the whole image.  Most people's first encounter with an exit pupil is with binoculars or a telescope and the way the image cuts off unless the optics are centered well on the user's eye.

Conclusions

While I can see that people are excited about the possibilities of AR and VR technologies, I still have a hard time seeing how the numbers add up, so to speak, for what I would consider to be a mass-market product.  I see people being critical of Hololens' lower FOV without being realistic about how it could go higher without drastically sacrificing angular resolution.

Clearly there can be product niches where the device could serve, but I think people have unrealistic expectations for how fast the field of view can grow for products like Hololens.   For “real work,” I think the lower-field-of-view, higher-angular-resolution approach (as with Hololens) makes more sense for more applications.   Maybe game players in the VR space are more willing to accept 1970s-type angular resolution, but I wonder for how long.

I don't see any technology that will be practical in high volume (or even very expensive at low volume) that is going to simultaneously deliver both the angular resolution and the FOV that some people want. AR displays are often brightness challenged, particularly for outdoor use.  Layered on top of these issues are size, weight, cost, and power consumption, which we will have to save for another day.

 

Wrist Projector Scams – Ritot, Cicret, the new eyeHand

Wrist projectors are the crowdfunding scam that keeps on giving, with new ones cropping up every 6 months to a year. When I say scam, I mean that there is zero chance that they will ever deliver anything even remotely close to what they are promising. They have obviously Photoshopped/fake pictures to “show” projected images that are not even close to possible in the real world and violate the laws of physics (and are forever impossible). While I have pointed out in this blog where I believe that Microvision has lied to and misled investors and shown very fake images with its laser beam scanning technology, even they are not total scammers like Ritot, Cicret, and eyeHand.

According to Ritot's Indiegogo campaign, they have taken in $1,401,510 from 8,917 suckers (they call them “backers”).   Cicret, according to their website, has a haul of $625,000 from 10,618 gullible people.

Just when you think that Ritot and Cicret have found all the suckers for wrist projectors, CrowdFunder reports that eyeHand has raised $585,000 from individuals and claims to have raised another $2,500,000 in equity from “investors” (if they are real, then they are fools; if not, then it is just part of the scam). A million here, $500K there; pretty soon you are talking real money.

Apparently Dell's marketing is buying into these scams (I would hope their technical people know better) and has shown video ads with similarly impossible projectors.  One thing I will give them is that they did a more convincing “simulation” (no projecting of “black”), and they say in the ads that these are “concepts” and not real products. See, for example, the following stills from Dell's videos (click to see a larger image).  It looks to me like they combined a real projected image (with the projector off camera and perpendicular to the arm/hand) and then added fake projector rays to try and suggest it came from the dummy device on the arm: dell-ritots-three

Ritot was the first of these scams I was alerted to, and I helped contribute some technical content to the DropKicker article http://drop-kicker.com/2014/08/ritot-projection-watch/. I am the “Reader K” thanked in the author's note at the beginning of the article.  A number of others have called out Ritot and Cicret as scams, but that did not keep them from continuing to raise money, nor has it stopped the new copycat eyeHand scam.

Some of the key problems with wrist projectors:

  1. Very shallow angle of projection.  Projectors normally project on a surface that is perpendicular to the direction of projection, but the wrist projectors have to project onto a surface that is nearly parallel to the direction of projection.  Their concepts show a projector that is only a few (2 to 4) millimeters above the surface. When these scammers later show “prototypes” they radically change the projection distance and projection angle.
  2. Extremely short projection distance.  The near side of the projection is only a few millimeters away, while the far side of the image could be 10X to 50X farther away.  There are no optics or laser scanning technology on earth that can do this.  There is no way to get such a wide image at such a short distance from the projector.  And since illumination falls off with the square of distance, this results in an impossible illumination problem, with the far side being over 100X dimmer than the near side (see the fall-off sketch after this list).
  3. Projecting in ambient light – All three of the scammers show concept images where the projected image is darker than the surrounding skin.  This is absolutely impossible and violates the laws of physics.   The “black” of the image is set by the ambient light and the skin; the projector can only add light, and it is impossible to remove light with a projector.  This shows ignorance of and/or a callous disregard for the truth by the scammers.
  4. The blocking of the image by hairs, veins, and muscles.  At such a shallow angle (per #1 above) everything is in the way.
  5. There is no projector small enough.  The projector engines with their electronics that exist today are more than 20X bigger in volume than what would be required to fit.
  6. The size of the orifice through which the light emerges is too small to support the size of the image that they want to project.
  7.  The battery required to make them daylight readable would be bigger than the whole projector that they show.  These scammers would have you believe that a projector could work off a trivially small battery.
  8. Cicret and eyeHand show “touch interfaces” that won't work due to the shallow angle.  The shadows cast by fingers working the touch interface would block the light to the rest of the image and make “multi-touch” impossible.   This also goes back to the shallow-angle issue, #1 above.
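
To illustrate the inverse-square fall-off in item #2 above, here is a minimal sketch; the near and far distances are assumptions for illustration only, not measurements of any product:

    near_mm, far_mm = 4, 80                 # assumed distances from projector to near/far edge of image
    falloff = (far_mm / near_mm) ** 2       # illumination falls off with the square of distance
    print(f"Far edge receives about 1/{falloff:.0f} of the near edge's illumination")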

The issues above hold true whether the projection technology uses DLP, LCOS, or Laser Beam Scanning.

Cicret and Ritot have both made “progress reports” showing stills and videos using projectors more than 20 times bigger and mounted much higher and farther away (to reduce the projection angle) than the sleek wristwatch models they show in their 3-D CAD renderings.   Even then, they keep off-camera much/most of the electronics and the battery/power supply needed to drive the optics that they show.

The image below is from a Cicret “prototype” video from February 2015, where they simply strapped a Microvision ShowWX+ HDMI upside down to a person's wrist (I wonder how many thousands of dollars they spent engineering this prototype). They goofed in the video and showed enough of the projector that I could identify (red oval) the underside of the Microvision projector (the video also shows the distinctive diagonal roll bar of a Microvision LBS projector).  I have shown the rest of the projector, roughly to scale, in the image below; they cropped it off when shooting the video.  What you can't tell in this video is that the projector is also a couple of inches above the surface of the arm in order to project a reasonable image.

cicret-001b

So you might think Cicret was going to use laser beam scanning, but no, their October 2016 “prototype” shows a panel-based (DLP or LCOS) projector.  Basically it looks like they are just clamping whatever projector they find to a person's wrist; there is no technology they are developing.  In this latest case, it looks like what they have done is taken the guts out of a small production projector and put them in a 3-D printed case.  Note that the top of the case is going to be approximately 2 inches above a person's wrist, and note how far away the image is from the projector.

cicret-002e

Ritot also has made updates to keep their suckers on the hook.   Apparently Indiegogo's only rule is that you must keep lying to your “backers” (for more on the subject of how Indiegogo condones fraud, click here).  These updates at best show how little these scammers understood projection technology.   I guess one could argue that they were too incompetent to know they were lying.  ritot-demo-2014

On the left is a “demo” Ritot showed in 2014 after raising over $1M.  It is simply an off-the-shelf development-system projector, and note there is no power supply.  Also note they are showing it straight on/perpendicular to the wrist from several inches away.

By 2015, Ritot had their own development system and some basic optics.  Notice how big the electronics board is relative to the optics, and that even this does not show the power source.

By April 2016 they showed an optical engine (ONLY) strapped to a person's wrist.  Cut off in the picture are all the video drive electronics (see the flex cable in the red oval) that are off camera, likely a driver board similar to the one in the 2015 update, and the power supplies/battery.

In the April 2016 picture you should notice how the person's wrist is bent to make it more perpendicular to the direction of the projected image.  Also note that the image is distorted and about the size of an Apple Watch's display.   I will also guarantee that you will not have a decently viewable image when it is used outdoors in daylight.

The eyeHand scam has not shown anything like a prototype, just a poorly faked (projecting black) image.  From the low angle they show in their fake image, the projection would be blocked by the base of the thumb even if the person held their hand flat.  To make it work at all, they would have to move the projector well up the person's arm and then bend the wrist, but then the person could not view it very well unless they held their arm at an uncomfortable angle.  Then you have the problem of keeping the person from moving/relaxing their wrist and losing the projection surface.   And of course it would not be viewable outdoors in daylight.

It is not as if others have not been trying to point out that these projectors are scams.  Google “Ritot scam” or “Cicret scam” and you will find a number of references.  As best I can find, this blog is the first to call out the eyeHand scam:

  • The most technically in-depth article was by Drop-Kicker on the Ritot scam
  • Captain Delusional has a comic take on the Cicret scam on YouTube – He has some good insights on the issue of touch control but also makes some technical mistakes, such as his comments on laser beam scanning (you can't remove the laser scanning roll-bar by syncing the camera, and laser scanning has the same fall-off in brightness due to the scanning process).
  • Geek Forever had an article on the Ritot Scam 
  • A video about the Ritot Scam on Youtube
  • KickScammed about Ritot from 2014

The problem with scam startups is that they tarnish all the other startups trying to find a way to get started.  Unfortunately, the best liars/swindlers often do the best with crowdfunding.  The more they are willing to lie/exaggerate, the better it makes their product sound.

Indiegogo has proven time and again to have extremely low standards (basically, if the company keeps posting lies, they are good to go; MANY people tried to tell Indiegogo about the Ritot scam, but to no avail, before Ritot got the funds). Kickstarter has some standards, but the bar is not that high; at least I have not seen a wrist projector on Kickstarter yet. Since the crowdfunding sites get a cut of the action whether the project delivers or not, their financial incentives are on the side of the companies rather than the people funding them. There is no bar at all for companies that go with direct websites; it is purely caveat emptor.

I suspect that since the wrist projector scam has worked at least three (3) times so far, we will see others using it.   At least with eyeHand you have a good idea of what it will look like in two years (hint: like Ritot and Cicret).

Laser Beam Scanning Versus Laser-LCOS Resolution Comparison

cen-img_9783-celluon-with-uo

Side By Side Center Patterns (click on image for full size picture)

I apologize for being away for so long.  The pictures above and below were taken over a year ago and I meant to format and publish them back then but some other business and life events got in the way.

The purpose of this article is to compare the resolution of the Celluon PicoPro Laser Beam Scanning (LBS) projector and the UO Smart Beam Laser LCOS projector.   This is not meant to be a full review of both products, although I will make a few comments here and there, but rather, it is to compare the resolution between the two products.  Both projectors claim to have 720P resolution but only one of them actually has that “native/real” resolution.

This is in a way a continuation of the series I have written about the PicoPro, which has optics developed by Sony and a beam scanning mirror and control by Microvision, in particular the articles http://wp.me/p20SKR-gY and http://wp.me/p20SKR-hf.  With this article I am now including some comparison pictures I took of the UO Smart Beam projector (https://www.amazon.com/UO-Smart-Beam-Laser-Projector-KDCUSA/dp/B014QZ4FLO).

As per my prior articles, the Celluon PicoPro has nowhere close to its stated 1920×720 (non-standard) resolution, nor even 1280×720 (720P).  The UO projector, while not perfect, does demonstrate 720P resolution reasonably well, but it does suffer from chromatic aberrations (color separation) at the top of the image due to the optical 100% offset (this is to be expected to some extent).

Let me be up front: I worked on the LCOS panel used in the UO projector when I was at Syndiant, but I had nothing to do with the UO projector itself.   Take that as bias if you want, but the pictures, I think, tell the story.  I did not have any contact with either UO (nor Celluon, for that matter) in preparing this article.

I also want to be clear that both the UO projector and the Celluon PicoPro tested are now over 1 year old, and there may have been improvements since then.  I saw serious problems with both products, in particular with the color balance: the Celluon is too red (“white” is pink) and the UO is very red deficient (“white” is significantly blue-green).   The color is so far off on the Celluon that it would be a show-stopper for me ever wanting to buy one as a consumer (hopefully UO has fixed or will fix its color issue as well).   Frankly, I think both projectors have serious flaws (if you want to know more, ask and I will write a follow-up article).

The UO Smart Beam has the big advantage of “100% offset,” which means that when placed on a tabletop, it will project upward without hitting the table and without any keystoning.   The PicoPro has zero offset and shoots straight out.  If you put it flat on a table, the lower half of the image will shoot into the tabletop. Celluon includes a cheap and rather silly monopod that you can use to have the projector “float” above the table surface, and then you can tilt it up and get a keystoned image.  To take the picture, I had to mount the PicoPro on a much taller tripod and then shoot over the projector so the image would not be keystoned.

I understand that the next generation of the Celluon and the similar Sony MP-CL1 projector (which has a “kickstand”) have “digital keystone correction,” which is not as good a solution as 100% offset, as it reduces the resolution of the image; this is the “cheap/poor” way out, and they really should have 100% offset like the UO projector (interestingly, the earlier, lower-resolution Microvision ShowWX projector had 100% offset).

For the record, I like the Celluon PicoPro's flatter form factor better; I'm not a fan of the UO cube, as it hurts the ability to put the projector in one's pocket or a typical carrying bag.

Both the PicoPro with laser scanning and the Smart Beam with lasers illuminating an LCOS microdisplay have no focus knob and have a wide focus range (from about 50 cm/1.5 feet to infinity), although they are both less sharp at the closer range.  The PicoPro with LBS is a Class 3R laser product, whereas the Smart Beam with laser “illumination” of LCOS is only Class 1.   The measured brightness of the PicoPro was about 32 lumens, as rated, when cold but dropped under 30 when heated up.  The UO, while rated at 60 lumens, was about 48 lumens when cold and about 45 when warmed up, significantly below its “spec.”

Now, on to the main discussion of resolution.  The picture at the top of this article shows the center crop from a 720P test pattern generated by both projectors, with the Smart Beam image on the left and the PicoPro on the right.   There is also an inset of the Smart Beam's 1-pixel-wide test pattern near the PicoPro's 1-pixel-wide pattern for comparison. This test pattern shows a series of 1-pixel-, 2-pixel-, and 3-pixel-wide horizontal and vertical lines.

What you should hopefully notice is that the UO clearly resolves even the 1-pixel-wide lines and its black lines are black, whereas on the Celluon the 1-pixel-wide lines are at best blurry and even the 2- and 3-pixel-wide lines don't get to a very good black level (as in, the contrast is very poor).  And the center is the very best case for the Celluon LBS, whereas for the UO with its 100% offset it is a medium case (the best case is the lower center).

The worst case for both projectors is one of the upper corners, and below is a similar comparison of their upper right corners.  As before, I have included an inset of the UO's single-pixel image.

ur-img_9783-celluon-with-uo-overlay

Side By Side Upper-Right Corner Patterns (click on image for full size picture)

What you should notice is that while there are still distinct 1-pixel-wide lines in both directions from the UO projector, the 1-pixel-wide lines in the case of the Celluon LBS are a blurry mess.  Clearly it can't resolve 1-pixel-wide lines at 720P.

Because of the 100% offset optics, the best case for the UO projector is at the bottom of the image (this is true of almost any 100% offset optics), and this case is not much different from the center case for the Celluon projector (see below):

lcen-celluon-with-uo-overlay

Below is a side-by-side picture I took (click on it for a full-size image). The camera's “white point” was an average between the two projectors (the Celluon is too red/blue-and-green deficient, and the UO is red deficient). The image below is NOT what I used for the cropped test patterns above, as the 1-pixel features were too near the resolution limit of the Canon 70D camera (5472 by 3648 pixels).  So I used individual shots of each projector to double the camera's “sampling” of the projected images.

side-by-side-img_0339-celluon-uo

For the Celluon PicoPro image I used the picture below (originally taken in RAW but digital lens corrected, cropped, and later converted to JPG for posting – click on image for full size):

img_9783-celluon-with-uo-overlay

For the UO Smart Beam image, I used the following image (also taken in RAW, digitally lens-corrected, straightened slightly, cropped, and later converted to JPG for posting):

img_0231-uo-test-chart

As is my usual practice, I am including the test pattern (in lossless PNG format) below for anyone who wants to verify and/or challenge my results:

interlace res-chart-720P G100A

I promise I will publish any pictures from anyone who can show better results with the PicoPro or any other LBS projector (or the UO projector, for that matter) using the test pattern above (or a similar one); I went to considerable effort to take the best possible PicoPro image that I could with a Canon 70D camera.

Celluon Laser Beam Scanning Power Consumption (Over 6 Watts at 32 Lumens)

On the left is a series of power measurements I made on the Celluon PicoPro projector, which has an optical engine designed by Sony using a Microvision scanning mirror.  The power was calculated based on the voltage and current coming from the battery while using the HDMI input.

The first 6 measurements were with a solid image of the black/white/color indicated.  For the last 3 measurements I used an image that was half black on the left and half white on the right, an image with the top half black and the bottom half white, and a screen of 1-pixel-wide vertical stripes.    The reason for the various colors/patterns was to gain some additional insight into the power consumption (to be covered in a future article).  In addition to the power (in Watts), I added a column with the delta power from the black image.

Celluon PicoPro Battery IMG_8069

Picture of Celluon PicoPro Battery

The Celluon PicoPro consumes 2.57 Watts for a fully black image (there are color lines at the bottom, presumably for laser brightness calibration) and 6.14W for a 32-lumen full-white image.   When you consider that a smartphone running with GPS on consumes only about 2.5W and a smartphone LCD at full brightness consumes about 1W to 1.5W, over 6W is a lot of power (Displaymate has an excellent article on smartphone displays that includes their power consumption).   The Celluon has a 3260mAh / 12.3Wh battery, which is bigger than what goes into even large smartphones (and fills most of the left side of the case).

So why does the Celluon unit not need a fan? The answer is A) it only outputs 32 lumens and B) it uses a lot of thermal management built into the case to spread the heat from the projector.  In the picture below I have shown some of the key aspects of the thermal management.  I have flipped over the projector and indicated with dashed rectangles where the thermal pads (a light blue color) contact the projector unit.  In addition to the cast aluminum body that holds the lasers and the optics, which acts as a heat sink to spread the heat, there is gray flexible heat-spreading material lining the entire top and bottom of the case, plus, more hidden, a heat sink amalgamation essentially dedicated to the lasers, as well as aluminum fins around the sides of the case.

2015-07-22_Case Heat Sinking 003

The heat-spreading material on the left (as viewed) top of the case is pretty much dedicated to the battery, but all the rest of the heat spreading, particularly along the bottom of the case, goes to the projector.

The most interesting feature is that there is a dedicated heat path from the area where the lasers are held in the cast body to a hidden heat sink chamber, or what I have nicknamed “the thermal corset.”   You should notice that there are three (3) light blue heat pads on the right side of the case top and that the middle one is isolated from the other two.  This middle one is also thicker and goes through a hole in the main case body to a chamber that is filled with a heat sink material and then covered with an outer case.   This also explains why the Celluon unit looks like it is in two parts from the outside.

Don't get me wrong, having a fanless projector is desirable, but it is not due to the “magic” of using lasers.  Quite the contrary: the Celluon unit has comparatively poor lumens per Watt, taking about double the power that a similar DLP projector would for the same lumens.

You may notice in the table that if you add up the “delta” red, green, and blue, the total is a lot more than the delta white.  The reason for this is that the Celluon unit never puts out “pure,” fully saturated primary colors.  It always mixes in a significant amount of the other two colors (I have verified this with several methods, including using color filters over the output and using a spectrometer).    This has to be done (and is done with LED projectors as well) so that the colors called for by standard movies and pictures are not over-saturated (if you don't do this, green grass, for example, will look like it is glowing).

Another interesting result is that the device consumes more power if I put up a pattern where the left half is black and the right half is white rather than one with the top half black and the bottom half white.   This probably has something to do with laser heating and the lasers not getting a chance to cool down between lines.

I also put up a pattern with alternating 1-pixel-wide vertical lines, and it should be noted that its power is between that of the left/right half-screen image and the full-white image.

So what does this mean in actual use?   With “typical” movie content, the image averages about 25% to 33% of full white (depending on the movie), so the projector will be consuming about 4 Watts, which with a 12.3Wh battery means about 3 hours of run time.   But if you are web browsing, the content is often more like 90% of full white, so it will be consuming over 6W, or 4 to 6 times what a typical smartphone display consumes.    Note this is before you add in the power consumed in getting and processing the data (say, from the internet).
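
A quick sketch of that run-time and efficiency arithmetic (the ~4W movie-content average is the estimate from the paragraph above; the other figures are the measurements):

    BATTERY_WH = 12.3          # Celluon PicoPro battery capacity
    FULL_WHITE_W = 6.14        # measured full-white power
    LUMENS = 32                # measured white output

    print(f"Luminous efficacy: {LUMENS / FULL_WHITE_W:.1f} lumens per Watt")
    for label, watts in (("movie content (~4 W average)", 4.0),
                         ("web browsing (~90% white)", FULL_WHITE_W)):
        print(f"{label}: ~{BATTERY_WH / watts:.1f} hours of battery")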

Conclusion

The Celluon projector may be fanless, but not because it is efficient.  From a product perspective, it does do a good job with its “thermal corset” of hiding/managing the heat.

This study works from the “top down,” measuring the power and seeing where the heat goes in the case; next time I plan to work some “bottom-up” numbers to help show what causes the high power consumption and how it might change in the future.