Archive for LCOS

Avegant “Light Field” Display – Magic Leap at 1/100th the Investment?

Surprised at CES 2017 – Avegant Focus Planes (“Light Field”)

While at CES 2017 I was invited to Avegant’s suite and was expecting to see a new and improved and/or lower cost version of the Avegant Glyph. The Glyph was hardly revolutionary; it is a DLP-based, non-see-through near eye display built into a set of headphones with reasonably good image quality. Based on what I was expecting, it seemed like a bit much to be signing an NDA just to see what they were doing next.

But what Avegant showed was essentially what Magic Leap (ML) has been claiming to do in terms of focus planes/”light fields” with vergence & accommodation. But Avegant had accomplished this with likely less than 1/100th the amount of money ML is reported to have raised (ML has raised to date about $1.4 billion). In one stroke they made ML more believable and at the same time raised the question of why ML needed so much money.

What I saw – Technology Demonstrator

I was shown a headset with two HDMI cables for video and a USB cable for power and sensor data, all bundled together and going to an external desktop computer. A big plus for me was that there was enough eye relief that I could wear my own glasses (I have severe astigmatism so just diopter adjustments don’t work for me). The picture at left is of the same or a similar prototype to the one I wore. The headset was a bit bulkier than say Hololens, plus the bundle of cables coming out of it. Avegant made it clear that this was an engineering prototype and nowhere near a finished product.

The mixed reality/see-through headset merges the virtual world with the see-through real world. I was shown three (3) mixed reality (MR) demos, a moving Solar System complete with asteroids, a Fish Tank complete with fish swimming around objects in the room and a robot/avatar woman.

Avegant makes the point that the content was easily ported from Unity into their system, with the fish tank model coming from the Monterey Bay Aquarium and the woman and solar system being downloaded from the Unity community open source library. The 3-D images were locked to the “real world,” taking this from simple AR into MR. The tracking was not perfect, nor did I care; the point of the demo was the focal planes, and lots of companies are working on tracking.

It is easy to believe that by “turning the crank” they can eliminate the bulky cables, and that the tracking and locking between the virtual and real world will improve. It was a technology capability demonstrator, and on that basis it succeeded.

What Made It Special – Multiple Focal Planes / “Light Fields”

What ups the game from say Hololens and takes it into the realm of Magic Leap is that it supported simultaneous focal planes, what Avegant calls “Light Fields” (a bit different from true “light fields” as I see it). The user could change what they were focusing on within the depth of the image and bring things that were close or far into focus. In other words, multiple focuses are simultaneously presented to the eye. You could also see behind objects a bit by shifting your eyes. This is clearly something optically well beyond Hololens, which does simple stereoscopic 3-D and in no way presents multiple focus points to the eye at the same time.

In short, what I was seeing in terms of vergence and accommodation was everything Magic Leap has been claiming to do. But Avegant has clearly spent only a very small fraction of the development cost, and it was at least portable enough that they had it set up in a hotel room, with optics that look to be economical to make.

Now it was not perfect, nor was Avegant claiming it to be at this stage. I could see some artifacts, in particular lots of what looked like faint diagonal lines. I’m not sure if these were a result of the multiple focal planes or some other issue such as a bug.

Unfortunately, the only “through the lens” video currently available is at about 1:01 in Avegant’s “Introducing Avegant Light Field” Vimeo video. It lasts only a few seconds and does not demonstrate the focusing effects well.

Why Show Me?

So why were they showing it to me, an engineer known to be skeptical of demos? They knew of my blog, which is why I was invited to see the demo. Avegant was in some ways surprisingly open about what they were doing and answered most, but not all, of my technical questions. They appeared to be making an effort to make sure people understand that it really works. It seems clear they wanted someone who would understand what they had done and could verify that it is something different.

What They Are Doing With the Display

While Avegant calls their technology “Light Fields,” it is implemented with (directly quoting them) “a number of fixed digital focal planes, and then interpolate the planes in-between them.” Multiple focus planes have many of the same characteristics as classical light fields, but require much less image data to be simultaneously presented to the eye, saving the power that would otherwise go into generating and displaying image data, much of which the eye would not “see”/use.

They are currently using a 720p DLP per eye for the display engine but they said they thought they could support other display technologies in the future. As per my discussion on Magic Leap from November 2016, DLP has a high enough field rate that they could support displaying multiple images with the focus changing between images if you can change the focus fast enough. If you are willing to play with (reduce) color depth, DLP could support a number of focus planes. Avegant would not confirm if they use time sequential focus planes, but I think it likely.
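To get a feel for the field-rate budget involved, here is a minimal back-of-the-envelope sketch in Python; the 60 Hz frame rate and the maximum color-field rate are illustrative assumptions on my part, not figures from Avegant or Texas Instruments.

```python
# Back-of-the-envelope budget for time-sequential focus planes on a
# field-sequential-color display (all numbers are illustrative assumptions).

FRAME_RATE_HZ = 60        # assumed output frame rate per eye
MAX_FIELD_RATE_HZ = 1440  # assumed maximum color-field rate of the display
COLORS = 3                # R, G, B fields

def color_repeats_per_frame(focus_planes):
    """How many times each color can be repeated per frame when the
    available color fields are split across sequential focus planes."""
    fields_per_frame = MAX_FIELD_RATE_HZ / FRAME_RATE_HZ
    return fields_per_frame / (focus_planes * COLORS)

for planes in (1, 2, 3, 6):
    print(f"{planes} focus plane(s): {color_repeats_per_frame(planes):.1f} "
          f"repeats of each color per frame")
```

The point of the sketch is simply that every added focus plane eats into the color repeats (or color bit depth) available per frame.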

They are using “birdbath optics” per my prior article, with a beam splitter and spherical semi-mirror/combiner (see picture at left). With a DLP illuminated by LEDs, they can afford the higher light losses of the birdbath design and still support a reasonable amount of transparency to the real world. Note, waveguides also tend to lose/waste a large amount of light. Avegant said that the current system was 50% transparent to the real world but that they could make it more transparent (by wasting more light).

Very importantly, a birdbath optical design can be very cheap (on the order of only a few dollars) whereas waveguides can cost many tens of dollars (reportedly Hololens’ waveguides cost over $100 each). The birdbath optics can also support a very wide field of view (FOV), something generally very difficult/expensive to support with waveguides. The optical quality of a birdbath is generally much better than the best waveguides. The downside of the birdbath compared to waveguides is that it is bulkier and does not look as much like ordinary glasses.

What they would not say – Exactly How It Works

The one key thing they would not say is how they are supporting the change in focus between focal planes. The obvious way to do it would be with some kind of electromechanical device such as a moving focus element or a liquid filled lens (the obvious suspects). In a recent interview, they repeatedly said that there were no moving parts and that it was “economical to make.”

What They are NOT Doing (exactly) – Mechanical Focus and Eye/Pupil Tracking

After meeting with Avegant at CES I decided to check out their recent patent activity and found US 2016/0295202 (‘202). It shows a birdbath optics system (but with a non-see-through curved mirror). This configuration with a semi-mirror curved element would seem to do what I saw. In fact, it is very similar to what Magic Leap showed in their US application 2015/0346495.

Avegant’s ‘202 application uses a “tuning assembly 700” (some form of electro-mechanical focus).

It also uses eye tracking 500 to know where the pupil is aimed. Knowing where the pupil is aimed would, at least in theory, allow them to generate a focus plane for where the eye is looking and an out-of-focus plane for everything else. At least in theory that is how it would work, but this might be problematical (no fear, this is not what they are doing, remember).

I specifically asked Avegant about the ‘202 application and they said categorically that they were not using it and that the applications related to what they are using have not yet been published (I suspect they will be published soon, perhaps part of the reason they are announcing now). They categorically stated that there were “no moving parts” and that they “did not eye track” for the focal planes. They stated that the focusing effect would even work with, say, a camera (rather than an eye) and was in no way dependent on pupil tracking.

A lesson here is that even small companies file patents on concepts that they don’t use. But still, this application gives insight into what Avegant was interested in doing and some clues as to how they might be doing it. Eliminate the eye tracking and substitute a non-mechanical focus mechanism that is rapid enough to support 3 to 6 focus planes, and it might be close to what they are doing (my guess).

A Caution About “Demoware”

A big word of warning here about demoware. When seeing a demo, remember that you are being shown what makes the product look best and examples that might make it look not so good are not shown.

I was shown three short demos that they picked; I had no choice and could not pick my own test cases. I also don’t know exactly the mechanism by which it works, which makes it hard to predict the failure mode, as in what type of content might cause artifacts. For example, everything I was shown was very slow moving. If they are using sequential focus planes, I would expect to see problems/artifacts with fast motion.

Avegant’s Plan for Further Development

Avegant is in the process of migrating away from requiring a big PC and onto mobile platforms such as smartphones. Part of this is continuing to address the computing requirement.

Clearly they are going to continue refining the mechanical design of the headset and will either get rid of or slim down the cables and have them go to a mobile computer. They say that all the components are easily manufacturable, and this I would tend to believe. I do wonder how much image data they have to send, but it appears they are able to do it with just two HDMI cables (one per eye). It would seem they will be wire tethered to a (mobile) computing system. I’m more concerned about how the image quality might degrade with, say, fast moving content.

They say they are going to be looking at other (than the birdbath) combiner technology; one would assume a waveguide of some sort to make the optics thinner and lighter. But going to waveguides could hurt image quality, increase cost, and may further limit the FOV.

Avegant is leveraging the openness of Unity to support getting a lot of content generation for their platform. They plan on a Unity SDK to support this migration.

They said they will be looking into alternatives for the DLP display; I would expect LCOS and OLED to be considered. They said that they had also thought about laser beam scanning but their engineers objected to trying it for eye safety reasons; engineers are usually the first guinea pigs for their own designs, and a bug could be catastrophic. If they are using time-sequential focal planes, which is likely, then other technologies such as OLED, LCOS, or laser beam scanning cannot generate sequential planes fast enough to support more than a few (1 to 3) focal planes per 1/60th of a second on a single device at maximum resolution.

How Important is Vergence/Accommodation (V/A)?

The simple answer is that it appears that Magic Leap raised $1.4B by demoing it. But as they say, “all that glitters is not gold.” The V/A conflict issue is real, but it mostly affects content that virtually appears “close”, say inside about 2 meters/6 feet.

It’s not clear that there aren’t simpler, less expensive, and/or lower power ways to deal with V/A conflict for “everyday use,” such as pupil tracking. Maybe (I don’t know) it would be enough to simply change the focus point when the user is doing close-up work rather than presenting multiple focal planes to the eye simultaneously.

The business question is whether solving V/A alone will make AR/MR take off. I think the answer is clearly no; this is not the last puzzle piece to be solved before AR/MR takes off. It is one of a large number of issues yet to be solved. Additionally, while Avegant says they have solved it economically, what is economical is relative. It still adds weight, power, processing, and cost, and it has negative impacts on image quality; the classic “squeezing the balloon” problem.

Even if V/A added nothing and cost nothing extra, there are still many other human factor issues that severely limit the size of the market. At times like this, I like to remind people of the Artificial Intelligence boom in the 1980s (over 35 years ago) that it seemed all the big and many small companies were chasing as the next era of computing. There were lots of “breakthroughs” back then too, but the problem was bigger than all the smart people and money could solve.

BTW, if you want to know more about V/A and related issues, I highly recommend reading papers and watching videos by Gordon Wetzstein of Stanford. In particular, note his work on “compressive light field displays,” which he started while at MIT. He does an excellent job of taking complex issues and making them understandable.

Generally Skeptical About The Near Term Market for AR/MR

I’m skeptical that, with or without Avegant’s technology, the Mixed Reality (MR) market is really set to take off for at least 5 years (and likely more). I’ve participated in a lot of revolutionary markets (early video game chips, home/personal computers, graphics accelerators, Synchronous DRAMs, as well as various display devices) and I’m not a Luddite/flat-earther; I simply understand the challenges still left unsolved, and there are many major ones.

Most of the market forecasts for huge volumes in the next 5 years are written by people who don’t have a clue as to what is required; they are more science fiction writers than technologists. You can already see companies like Microsoft with Hololens, and before them Google with Google Glass, retrenching/regrouping.

Where Does Avegant Go Business Wise With this Technology?

Avegant is not a big company. They were founded in 2012. My sources tell me that they have raised about $25M, and I have heard that they have only sold about $5M to $10M worth of their first product, the Avegant Glyph. I don’t see the Glyph ever being a high volume product with a lot of profit to support R&D.

A related aside: I have yet to see a Glyph “in the wild” being used, say, on an airplane (where they would make the most sense). Even though the Glyph and other headsets exist, people given a choice still, by vast percentages, prefer larger smartphones and tablets for watching media on the go. The Glyph sells for about $500 now and is very bulky to store, whereas a tablet easily slips into a backpack or other bag and the display is “free”/built in.

But then, here you have this perhaps “key technology” that works and that does something Magic Leap has raised over $1.4 billion to try and do. It is possible (having not thoroughly tested either one) that Avegant’s is better than ML’s. Avegant’s technology is likely much more cost effective to make than ML’s, particularly if ML’s depends on using their complex waveguide.

Having not seen the details of either Avegant’s or ML’s method, I can’t say which is “best,” both image-wise and in terms of cost, nor whether, from a patent perspective, Avegant’s is different from ML’s.

So Avegant could try and raise money to do it on their own, but they would have to raise a huge amount to last until the market matures and compete with much bigger companies working in the area. At best they have solved one (of many) interesting puzzle pieces.

It seems obvious (at least to me) that the more likely good outcome for them would be as a takeover target for someone that has the deep pockets to invest in mixed reality for the long haul.

But this should certainly make the Magic Leap folks and their investors take notice. With less fanfare, and a heck of a lot less money, Avegant has a solution to the vergence/accommodation problem that ML has made such a big deal about.

Near-Eye Bird Bath Optics Pros and Cons – And IMMY’s Different Approach

Why Birdbath Optics? Because the Alternative (Waveguides) Must Be Worse (and a teaser)

The idea for this article started when I was looking at the ODG R-9 optical design with OLED microdisplays. They combined an OLED microdisplay that is not very bright in terms of nits with a well known “birdbath” optical design that has very poor light throughput. It seems like a horrible combination. I’m fond of saying, “when intelligent people choose a horrible design, the alternative must have seemed worse.”

I’m going to “beat up” so to speak the birdbath design by showing how some fundamental light throughput numbers multiply out and why the ODG R-9 I measured at CES blocks so much of the real world light. The R-9 also has a serious issue with reflections. This is the same design that a number of publications considered among the “best innovations” of CES; it seems to me that they must have only looked at the display superficially.

Flat waveguides such as those used by Hololens, Vuzix, Wave Optics, and Lumus, as well as those expected from Magic Leap, get most of the attention, but I see a much larger number of designs using what is known as a “birdbath” and similar optical designs. Waveguides are no secret these days, and the fact that so many designs still use birdbath optics tells you a lot about the issues with waveguides. Toward the end of this article, I’m going to talk a little about the IMMY design that replaces part of the birdbath design.

As a teaser, this article is to help prepare for an article on an interesting new headset I will be writing about next week.

Birdbath Optics (So Common It Has a Name)

The birdbath combines two main optical components, a spherical mirror/combiner (part-mirror) and a beam splitter. The name “birdbath” comes from the spherical mirror/combiner looking like a typical birdbath. It is used because it is generally comparatively inexpensive to downright cheap while also being relatively small/compact and having good overall image quality. The design fundamentally supports a very wide FOV, which is at best difficult to support with waveguides. The big downsides are light throughput and reflections.

A few words about Nits (Cd/m²) and Micro-OLEDs

I don’t have time here to get into a detailed explanation of nits (Cd/m²). Nits are a measure of light at a given angle, whereas lumens are the total light output. The simplest analogy is to a water hose with a nozzle (apropos here since we are talking about birdbaths). Consider two spray patterns, one with a tight jet of water and one with a wide fan pattern, both outputting the exact same total amount of water per minute (lumens in this analogy). The one with the tight pattern would have high water pressure (nits in this analogy) over a narrow angle, whereas the fan spray would have lower water pressure (nits) over a wider angle.

Additionally, it would be relatively easy to put something in the way of the tight jet and turn it into a fan spray, but there is no way to turn the fan spray into a jet. This applies to light as well: it is much easier to go from high nits over a narrow angle to lower nits over a wide angle (say with a diffuser), but you can’t easily go the other way.

Light from an OLED is like the fan spray, only it covers a 180 degree hemisphere. This can be good for a large flat panel where you want a wide viewing angle, but it is a problem for a near eye display where you want to funnel all the light into the eye, because so much of the light will miss the pupil of the eye and be wasted. With an LED you have a relatively small point of light that can be funneled/collimated into a tight “jet” of light to illuminate an LCOS or DLP microdisplay.

The combination of the light output from LEDs and the ability to collimate that light means you can easily get tens of thousands of nits with an LED-illuminated LCOS or DLP microdisplay, whereas OLED microdisplays typically only have 200 to 300 nits. This is a major reason why most see-through near eye displays use LCOS and DLP over OLEDs.

Basic Non-Polarizing Birdbath (example, ODG R-9)

The birdbath has two main optical components, a flat beam splitter and a spherical mirror. In the case of see-through designs, the spherical mirror is a partial mirror so that the spherical element acts as a combiner. The figure below is taken from an Osterhout Design Group (ODG) patent and shows a simple birdbath using an OLED microdisplay, as in their ODG R-9. Depending on various design requirements, the curvature of the mirror, and the distances, the lenses 16920 in the figure may not be necessary.

The light from the display device, in the case of the ODG R-9 an OLED microdisplay, is first reflected by the beam splitter away from the eye and perpendicular (on-axis) to the curved combiner, so that a simple spherical combiner will uniformly magnify and move the apparent focus point of the image (if not “on axis,” the image will be distorted and the magnification will vary across the image). The curved combiner (partial mirror) has minimal optical distortion on light passing through it.

Light Losses (Multiplication is a Killer)

A big downside to the birdbath design is the loss of light. The image light must make two passes at the beam splitter, one reflective and one transmissive, with reflective (Br) and transmissive (Bt) percentages of light. The light making it through both passes is Br x Bt. A 50/50 beam splitter might be about 48% reflective and 48% transmissive (with say a 4% combined loss), and the light throughput (Br x Bt) in this example is only 48% x 48% = ~23%. And the “50/50” ratio is the best case; if we assume a nominally 80/20 beam splitter (still with 4% total loss) we get 78% x 18% = ~14% of the light making it through the two passes.

Next we have the light loss of the spherical combiner. This is a trade-off between the image light being reflected (Cr) and the real-world light being transmitted (Ct), where Cr + Ct is less than 1 due to losses. Generally you want Cr to be low so that Ct can be high and you can see out (otherwise it is not much of a see-through display).

So let’s say the combiner has Cr=11% and Ct=75%, combined with the 50/50 beamsplitter (with its ~4% loss) from above. The net light throughput assuming a “50/50” beam splitter and a 75% transmissive combiner is Br x Cr x Bt = ~2.5%!!! These multiplicative losses lose all but a small percentage of the display’s light. And consider that the “real world” net light throughput is Ct x Bt, which would be 75% x 48% = 36%, which is not great and would be too dark for indoor use.

Now let’s say you want the glasses to be at least 80% transmissive so they would be considered usable indoors. You might have the combiner at Ct=90%, making Cr=6% (with 4% loss), and then Bt=90%, making Br=6%. This gives a real world transmission of about 90% x 90% = 81%. But then you go back and realize the display light equation (Br x Cr x Bt) becomes 6% x 6% x 90% = ~0.3%. Yes, only about 3/1000ths of the starting image light makes it through.
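As a sanity check on the arithmetic above, here is a small Python sketch that multiplies out both the display-light path and the real-world path for the two example splitter/combiner choices; the percentages are the same illustrative numbers used in the text, not measured values.

```python
# Birdbath throughput: image light reflects off the beam splitter, reflects
# off the combiner, then passes back through the beam splitter; real-world
# light passes through the combiner and the beam splitter once each.

def birdbath(br, bt, cr, ct):
    display_light = br * cr * bt   # fraction of display light reaching the eye
    real_world    = ct * bt        # fraction of real-world light reaching the eye
    return display_light, real_world

# ~50/50 beam splitter (4% loss) with an 11%-reflective, 75%-transmissive combiner
print(birdbath(br=0.48, bt=0.48, cr=0.11, ct=0.75))  # ~(0.025, 0.36)

# "80% see-through" target: ~90%-transmissive beam splitter and combiner
print(birdbath(br=0.06, bt=0.90, cr=0.06, ct=0.90))  # ~(0.003, 0.81)
```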

Why the ODG R-9 Is Only About 4% to 5% “See-Through”

OK, now back to the specific case of the ODG R-9. The ODG R-9 has an OLED microdisplay that most likely has about 250 nits (200 to 250 nits is commonly available today), and they need to get about 50 nits (roughly) to the eye from the display to have decent image brightness indoors in a dark room (or one where most of the real world light is blocked). This means they need a total throughput of 50/250 = 20%. The best you can do with two passes through a beam splitter (see above) is about 23%. This forces the spherical combiner to be highly reflective with little transmission. You need something that reflects 20/23 = ~87% of the light and is only about 9% transmissive. The real world light making it through to the eye is then about 9% x 48% (Ct x Bt), or about 4.3%.
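Working the same model backwards for the R-9, the sketch below solves for the combiner reflectivity needed to hit a target eye brightness; the 250 nit panel and 50 nit target are rough assumptions, as noted above.

```python
# Estimate the combiner needed for the ODG R-9 (assumed numbers from the text).

panel_nits  = 250.0       # assumed OLED microdisplay output
target_nits = 50.0        # rough brightness wanted at the eye indoors
br, bt      = 0.48, 0.48  # ~50/50 beam splitter with ~4% combined loss

needed_throughput = target_nits / panel_nits       # 20%
combiner_reflect  = needed_throughput / (br * bt)  # ~87%
combiner_transmit = 0.96 - combiner_reflect        # assuming ~4% combiner loss
see_through       = combiner_transmit * bt         # real-world light to the eye

print(f"combiner reflects ~{combiner_reflect:.0%}, "
      f"transmits ~{combiner_transmit:.0%}; "
      f"net see-through ~{see_through:.1%}")
```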

There are some other effects such as the amount of total magnification, and I don’t know exactly what their OLED display is outputting nor the exact nits at the eyepiece, but I believe my numbers are in the ballpark. My camera estimates for the ODG R-9 came in at between 4% and 5%. When you are blocking about 95% of the real world light, are you really much of a “see-through” display?

Note, all this is BEFORE you consider adding, say, optical shutters or something like Varilux® light blocking. Normally the birdbath design is used with non-see-through designs (where you don’t have the see-through losses) or, for see-through designs, with DLP® or LCOS devices illuminated with much higher nits (which can be in the tens of thousands) so they can afford the high losses of light.

Seeing Double

There are also issues with getting a double image off of each face of the plate beam splitter and other reflections. Depending on the quality of each face, a percentage of light is going to reflect or pass through where you don’t want it. This light will be slightly displaced based on the thickness of the beamsplitter. And because the light makes two passes, there are two opportunities to cause double images. Any light that is reasonably “in focus” is going to show up as a ghost/double image (for good or evil, your eye has a wide dynamic range and can see even faint ghost images). Below is a picture I took with my iPhone camera of a white and clear menu through the ODG R-9. I counted at least 4 ghost images (see colored arrows).

As a sort of reference, you can see the double image effect of the beamsplitter going in the opposite direction to the image light with my badge and the word “Media” and its ghost (in the red oval).

Alternative Birdbath Using Polarized Light (Google Glass)

Google Glass used a different variation of the birdbath design. They were willing to accept a much smaller field of view and thus could reasonably embed the optics in glass. It is interesting here to compare and contrast this design with the ODG one above.

First they started with an LCOS microdisplay illuminated by LEDs, which can be very much brighter and produce more collimated light, resulting in much higher (it can be orders of magnitude) starting nits than an OLED microdisplay can output. The LED light is passed through a polarizing beam splitter that will pass about 45% of the light as P polarization to the LCOS device (245). Note, a polarizing beam splitter passes one polarization and reflects the other, unlike the partially reflecting beam splitter in the ODG design above. The LCOS panel rotates the light to be seen to S polarization, so the beam splitter will then reflect about 98% (with say 2% loss) of the S light.

The light then goes to a second polarizing beam splitter that is also acting as the “combiner” through which the user sees the real world. This beam splitter is set up to pass about 90% of the S light and reflect about 98% of the P light (polarizing beam splitters are usually much better/more-efficient in reflection). You should notice that they have a λ/4 (quarter wave) film between the beam splitter and the spherical mirror, which will rotate the polarization 90 degrees (turning it from S to P) after the light passes through it twice. This λ/4 “trick” is commonly used with polarized light. And since you don’t have to look through the mirror, it can be say 98% reflective, with say another 3% loss for the λ/4 film.

With this design, about 45% (one pass through the beamsplitter) of the real world light makes it through, but only light polarized the “right way” makes it through, which makes looking at, say, LCD monitors problematical. By using the quarter wave film, the design is pretty efficient AFTER you lose about 55% of the LED light in polarizing it initially. There are also fewer reflection issues because all the films and optics are embedded in glass, so you don’t get the air-to-glass index mismatches off the two surfaces of a relatively thick plate that cause unwanted reflections/double images.
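Multiplying out the approximate percentages quoted above gives a feel for the polarized-birdbath efficiency; this is a rough sketch using my reading of those numbers, not Google’s actual specifications.

```python
# Polarized birdbath (Google Glass style) using the approximate percentages
# quoted above (rough assumptions, not Google's specifications).

polarize       = 0.45         # unpolarized LED light passed as P by the first PBS
pbs1_reflect_s = 0.98         # first PBS reflects the S light returned by the LCOS
pbs2_pass_s    = 0.90         # second PBS (combiner) passes S toward the mirror
mirror_and_qwp = 0.98 * 0.97  # ~98% mirror with ~3% loss in the quarter-wave film
pbs2_reflect_p = 0.98         # returning P light reflects toward the eye

display_path = polarize * pbs1_reflect_s * pbs2_pass_s * mirror_and_qwp * pbs2_reflect_p
real_world   = 0.45           # real-world light loses over half at the polarizing combiner

print(f"display light to eye ~{display_path:.0%}, real world ~{real_world:.0%}")
```

The display path works out to roughly 37% of the unpolarized LED light, i.e. quite efficient once past the unavoidable polarization loss.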

Google Glass design has a lot of downsides too. There is nothing you can do to get the light throughput of the real world much above 45% and there are always the problems of looking through a polarizer. But the biggest downside is that it cannot be scaled up for larger fields of view and/or more eye relief. As you scale this design up the block of glass becomes large, heavy and expensive as well as being very intrusive/distorting in looking through a big thick piece of glass.

Without getting too sidetracked, Lumus in effect takes the one thick beam splitter and piece-wise cuts it into multiple smaller beam splitters to make the glass thinner. But this also means you can’t use the spherical mirror of a birdbath design with it, so you require optics before the beam splitting, and the light losses of the piece-wise beam splitting are much larger than with a single beamsplitter.

Larger Designs

An alternative design would mix the polarizing beamsplitters of the Google Glass design above with the configuration of the ODG design above. And this has been done many times through the years with LCOS panels, which use polarized light (an example can be found in this 2003 paper). The spherical mirror/combiner will be a partial non-polarizing mirror so you can see through it, and a quarter waveplate is used between the spherical combiner and the polarizing beam splitter. You are then stuck with about 45% of the real world light times the light throughput of the spherical combiner.

A DLP with a “birdbath” would typically use a non-polarizing beam splitter with a design similar to the ODG R-9, but replacing the OLED microdisplay with a DLP and its illumination. As an example, Magic Leap did this with a DLP but added a variable focus lens to support focus planes.

BTW, by the time you polarized the light from an OLED or DLP microdisplay, there would not be much, if any, efficiency advantage to using polarizing beamsplitters. Additionally, the light from the OLED is so diffused (varied in angles) that it would likely not behave well going through the beam splitters.

IMMY – Eliminating the Beamsplitter

The biggest light efficiency killer in the birdbath design is the combined reflective/transmissive passes via the beamsplitter. IMMY effectively replaces the beamsplitter of the birdbath design with two small curved mirrors that correct for the image being reflected off-axis from the larger curved combiner. I have not yet seen how well this design works in practice, but at least the numbers would appear to work better. One can expect only a few percentage points of light being lost off each of the two small mirrors, so that maybe 95% of the light from the OLED display makes it to the large combiner. Then you have the combiner reflection percentage (Cr) multiplied by about 95% rather than the roughly 23% of the birdbath beam splitter.

The real world light also benefits as it only has to go through a single combiner transmissive loss (Ct) and no beamsplitter (Bt) losses. Taking the ODG R-9 example above and assuming we started with a 250 nit OLED and want 50 nits to the eye, we could get there with about a 75% transmissive combiner. The numbers are at least starting to get into the ballpark where improvements in OLED microdisplays could fit, at least for indoor use (outdoor designs without sunshading/shutters need on the order of 3,000 to 4,000 nits).
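The sketch below contrasts the classic birdbath with an IMMY-style two-mirror relay using the rough numbers above; the 95% mirror efficiency and 4% combiner loss are my assumptions for illustration.

```python
# Contrast the classic birdbath with an IMMY-style two-mirror relay using
# the rough numbers above (assumptions, not measured values).

panel_nits, target_nits = 250.0, 50.0
relays = {
    "birdbath":   0.48 * 0.48,  # two passes through a ~50/50 beam splitter
    "IMMY-style": 0.95,         # a few percent lost over two small mirrors
}

for name, relay in relays.items():
    cr = (target_nits / panel_nits) / relay  # combiner reflectivity needed
    ct = 0.96 - cr                           # assuming ~4% combiner loss
    print(f"{name}: combiner reflects ~{cr:.0%}, leaving ~{ct:.0%} transmissive")
```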

It should be noted that IMMY says they also have a “Variable transmission outer lens with segmented addressability” to support outdoor use and variable occlusion. Once again, this is their claim; I have not yet tried it out in practice, so I don’t know the issues/limitations. My use of IMMY here is to contrast it with the classical birdbath designs above.

A possible downside to the IMMY multi-mirror design is bulk/size, as seen below. Also notice the two adjustment wheels for each eye. One is for interpupillary distance, to make sure the optics line up centered with the pupils, which varies from person to person. The other knob is a diopter (focus) adjustment, which also suggests you can’t wear these over your normal glasses.

As I have said, I have not seen IMMY’s design in person to see how well it works and what faults it might have (nothing is perfect), so this is in no way an endorsement of their design. The design is so straightforward and such a seemingly obvious solution to the beam splitter loss problem that it makes me wonder why nobody has used it earlier; usually in these cases, there is a big flaw that is not so obvious.

See-Though AR Is Tough Particularly for OLED

As one person told me at CES, “Making a near eye display see-through generally more than doubles the cost,” to which I would add, “it also has serious adverse effects on the image quality.”

The birdbath design wastes a lot of light, as does every other see-through design. Waveguide designs can be equally or more light wasteful than the birdbath. At least on paper, the IMMY design would appear to waste less than most others. But to make a device say 90% see-through, at best you start by throwing away over 90% of the image light/nits generated, and often more than 95%.

The most common solution today is to start with an LED illuminated LCOS or DLP microdisplay so you have a lot of nits to throw at the problem and just accept the light waste. OLEDs are still orders of magnitude away in brightness/nits from being able to compete with LCOS and DLP by brute force.

 

CES 2017 AR, What Problem Are They Trying To Solve?

Introduction

First off, this post is a few weeks late. I got sick on returning from CES and then got busy with some other pressing activities.

At left is a picture that caught me next to the Lumus Maximus demo at CES, from Imagineality’s “CES 2017: Top 6 AR Tech Innovations.” Unfortunately, they missed that in the Lumus booth at about the same time were people from Magic Leap and Microsoft’s Hololens (it turned out we all knew each other from prior associations).

Among Imagineality’s top 6 “AR Innovations” were ODG’s R-8/R-9 Glasses (#1) and Lumus’s Maximus 55 degree FOV waveguide (#3). From what I heard at CES and saw in the writeups, ODG and Lumus did garner a lot of attention. But by necessity, these types of lists are pretty shallow in their evaluations; what I try to do on this blog is go a bit deeper into the technology and how it applies to the market.

The near eye display companies I looked at during CES include Lumus, ODG, Vuzix, Real Wear, Kopin, Wave Optics, Syndiant, Cremotech, QD Laser, and Blaze (division of eMagin), plus several companies I met with privately. As interesting to me as their technologies were their different takes on the market.

For this article, I am mostly going to focus on the Industrial / Enterprise market. This is where most of the AR products are shipping today. In future articles, I plan to go into other markets and do more of a deep dive on the technology.

What Is the Problem They Are Trying to Solve?

I have had a number of people ask me what was the best or most interesting AR thing I saw at CES 2017, and I realized that this was at best an incomplete question. You first need to ask, “What problem are they trying to solve?” Which leads to “how well does it solve that problem?” and “how big is that market?”

One big takeaway I had at CES, having talked to a number of different companies, is that the various headset designs were, intentionally or not, often aimed at very different applications and use cases. It’s pretty hard to compare a headset that almost totally blocks a user’s forward view but has a high resolution display to one that is a lightweight information device that is highly see-through but with a low resolution image.

Key Characteristics

AR means a lot of different things to different people. In talking to a number of companies, I found they were worried about different issues. Broadly you can separate them into two classes:

  1. Mixed Reality – ex. Hololens
  2. Informational / “Data Snacking”- ex. Google Glass

Most of the companies were focused on industrial / enterprise / business uses, at least for the near future, and in this market the issues include:

  1. Cost
  2. Resolution/Contrast/Image Quality
  3. Weight/Comfort
  4. See-through and/or look over
  5. Peripheral vision blocking
  6. Field of view (small)
  7. Battery life per charge

For all the talk about mixed reality (ala Hololens and Magic Leap), most of the companies selling product today are focused on helping people “do a job.” This is where they see the biggest market for AR today. It will be “boring” to the people wanting the “world of the future” mixed reality being promised by Hololens and Magic Leap.

You have to step back and look at the market these companies are trying to serve. There are people working on a factory floor or maybe driving a truck where it would be dangerous to obscure a person’s vision of the real world. They want 85% or more transparency, very lightweight and highly comfortable so it can be worn for 8 hours straight, and almost no blocking of peripheral vision. If they want to fan out to a large market, they have to be cost effective which generally means they have to cost less than $1,000.

To meet the market requirements, they sacrifice field of view and image quality. In fact, they often want a narrow FOV so it does not interfere with the user’s normal vision. They are not trying to watch movies or play video games; they are trying to give necessary information to a person doing a job and then get out of the way.

Looking In Different Places For the Information

I am often a hard audience. I’m not interested in the marketing spiel; I’m looking for the target market/application, the facts and figures, and how it is being done. I want to measure things, while the demos in the booths are all about trying to dazzle the audience.

As a case in point, let’s take ODG’s R-9 headset. Most people were impressed with the image quality from ODG’s optics with a 1080p OLED display, which was reasonably good (they still had some serious image problems caused by their optics that I will get into in future articles).

But what struck me was how dark the see-through/real world was when viewed in the demos. From what I could calculate, they are blocking about 95% of the real world light in the demos. They also are too heavy and block too much of a person’s vision compared to other products; in short they are at best going after a totally different market.

Industrial Market

Vuzix is representative of the companies focused on industrial / enterprise applications. They are using waveguides with about 87% transparency (although they often tint them or use photochromic light-sensitive tinting). They also locate the image toward the outside of the user’s view so it interferes less with the forward view even when an image is displayed (note in the image below-right that the exit port of the waveguide is on the outside and not in the center as it would be on, say, a Hololens).

The images at right were captured from a Robert Scoble interview with Paul Travers, CEO of Vuzix. BTW, the first ten minutes of the video are relatively interesting on how Vuzix waveguides work but after that there is a bunch of what I consider silly future talk and flights of fancy that I would take issue with. This video shows the “raw waveguides” and how they work.

Another approach to this category is Realwear. They have a “look-over” display that is not see-through, but their whole design is made to not block the rest of the user’s forward vision. The display is on a hinge so it can be totally swung out of the way when not in use.

Conclusion

What drew the attention of most of the media coverage of AR at CES was how “sexy” the technology was and this usually meant FOV, resolution, and image quality. But the companies that were actually selling products were more focused on their user’s needs which often don’t line up with what gets the most press and awards.

 

ODG R-8 and R-9 Optic with a OLED Microdisplays (Likely Sony’s)

ODG Announces R-8 and R-9 OLED Microdisplay Headsets at CES

It was not exactly a secret, but Osterhout Design Group (ODG) formally announced their new R-8 headset with dual 720p displays (one per eye) and R-9 headset with dual 1080p displays. According to their news release, “R-9 will be priced around $1,799 with initial shipping targeted 2Q17, while R-8 will be less than $1,000 with developer units shipping 2H17.”

Both devices use OLED microdisplays but with different resolutions (the R-9 has twice the pixels). The R-8 has a 40 degree field of view (FOV), which is similar to Microsoft’s Hololens, and the R-9 has about a 50 degree FOV.

The R-8 appears to be marketed more toward “consumer” uses with its lower price point and lack of an expansion port, while ODG is targeting the R-9 at more industrial uses with modular expansion. Among the expansions that ODG has discussed are various cameras and better real world tracking modules.

ODG R-7 Beam Splitter Kicks Image Toward Eye

With the announcement came much better pictures of the headsets, and I immediately noticed that their optics were significantly different than I previously thought. Most importantly, I noticed in an ODG R-8 picture that the beam splitter is angled to kick the light away from the eye, whereas the prior ODG R-7 had a simple beam splitter that kicks the image toward the eye (see below).

ODG R-8 and R-9 Beam Splitter Kicks Image Away From Eye and Into A Curved Mirror

The ODG R-8 (and R-9, but it is harder to see on the available R-9 pictures) does not have a simple beam splitter but rather a beam splitter and curved mirror combination. The side view below (with my overlays of the outline of the optics, including some that are not visible) shows that the beam splitter kicks the light away from the eye and toward a partial curved mirror that acts as a “combiner.” This curved mirror will magnify and move the virtual focus point and then reflect the light back through the beam splitter to the eye.

On the left I have taken Figure 169 from ODG’s US Patent 9,494,800. Light from the “emissive display” (ala OLED) passes through two lenses before being reflected into the partial mirror. The combination of the lenses and the mirror act to adjust the size and virtual focus point of the displayed image. In the picture of the ODG R-8 above I have taken the optics from Figure 169 and overlaid them (in red).

According to the patent specification, this configuration “form(s) a wide field of view” while “The optics are folded to make the optics assembly more compact.”

At left I have cropped the image and removed the overlay so you can see the details of the beam splitter and curved mirror joint.  You hopefully can see the seam where the beam splitter appears to be glued to the curved mirror suggesting the interior between the curved mirror and beam splitter is hollow. Additionally there is a protective cover/light shade over the outside of the curved mirror with a small gap between them.

The combined splitter/mirror is hollow to save weight and cost. It is glued together to keep dust out.

ODG R-6 Used A Similar Splitter/Mirror

I could not find a picture of the R-8 or R-9 from the inside, but I did find a picture on the “hey Holo” blog that shows the inside of the R-6 that appears to use the same optical configuration as the R-8/R-9. The R-6 introduced in 2014 had dual 720p displays (one per eye) and was priced at $4,946 or about 5X the price of the R-8 with the same resolution and similar optical design.  Quite a price drop in just 2 years.

ODG R-6, R-8, and R-9 Likely Use Sony OLED Microdisplays

Interestingly, I could not find anywhere where ODG says what display technology they used in the 2014 R-6, but the most likely device is the Sony ECX332A 720p OLED microdisplay that Sony introduced in 2011. Following this trend, it is likely that the ODG R-9 uses the newer Sony ECX335 1080p OLED microdisplay and the R-8 uses the ECX332 or a follow-on version. I don’t know any other company that has both 720p and 1080p OLED microdisplays, and the timing of the Sony and ODG products seems to fit. It is also very convenient for ODG that both panels are the same size and could use the same or very similar optics.

Sony had a 9.6 micron pixel pitch on a 1024 by 768 OLED microdisplay back in 2011, so for Sony the pixel pitch has gone from 9.6 microns in 2011 to 8.2 microns on the 1080p device. This is among the smallest OLED microdisplay pixel pitches I have seen, but it is still more than 2x linearly and 4x in area bigger than the smallest LCOS (several companies have LCOS pixel pitches in the 4 micron or less range).
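For reference, the pixel-pitch comparison works out as follows (the ~4 micron LCOS pitch is the approximate figure mentioned above):

```python
# Pixel pitch comparison (8.2 micron OLED vs. ~4 micron LCOS, from the text).
oled_pitch_um, lcos_pitch_um = 8.2, 4.0
linear = oled_pitch_um / lcos_pitch_um
print(f"linear: {linear:.1f}x, area: {linear ** 2:.1f}x")  # ~2.1x and ~4.2x
```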

It appears that ODG used an OLED microdisplay for the R-6 then switched (likely for cost reasons) to LCOS and a simple beam splitter for the R7 and then back to OLEDs and the splitter/mirror optics for the R-8 and R-9.

Splitter/Combiner Is an Old Optic Trick

This “trick” of mixing lenses with a spherical combiner partial mirror is an old idea/trick. It often turns out that mixing refractive (lenses) with mirror optics can lead to a more compact and less expensive design.

I have seen a beam splitter/mirror combination used many times. The ODG design is a little different in that the beam splitter is sealed/mated to the curved mirror, which, with the pictures available earlier, made it hard to see. Likely as not this has been done before too.

This configuration of beam splitter and curved mirror even showed up in Magic Leap applications, such as Fig. 9 from 2015/0346495 shown at right. I think this is the optical configuration that Magic Leap used with some of their prototypes, including the one seen by “The Information.”

Conclusion/Trends – Turning the Crank

The ODG optical design, while it may seem a bit more complex than a simple beam splitter, is actually probably simpler/easier to make than doing everything with lenses before the beam splitter. Likely they went to this technique to support a wider FOV.

Based on my experience, I would expect that the ODG optical design will be cleaner/better than the waveguide designs of Microsoft’s Hololens. The use of OLED microdisplays should give ODG superior contrast, which will further improve the perceived sharpness of the image. While not as apparent to the casual observer, as I have discussed previously, OLEDs won’t work with the diffractive/holographic waveguides that Hololens and Magic Leap are using.

What is also interesting is that in terms of resolution and basic optics, the R-8 with 720p is about 1/5th the price of the military/industrial grade 720p R-6 of about 2 years ago. While the R-9, in addition to having a 1080p display, has some modular expansion capability, one would expect a follow-on product with 1080p, a larger FOV, and more sensors in the price range of the R-8 in the not too distant future, perhaps with integration of the features from one or more of the R-9’s add-on modules; this, as we say in the electronics industry, “is just a matter of turning the crank.”

Everything VR & AR Podcast Interview with Karl Guttag About Magic Leap

With all the buzz surrounding Magic Leap and this blog’s technical findings about Magic Leap, I was asked to do an interview by the “Everything VR & AR Podcast” hosted by Kevin Harvell. The podcast is available on iTunes and by direct link to the interview here.

The interview starts with about 25 minutes of my background, starting with my early days at Texas Instruments. So if you just want to hear about Magic Leap and AR, you might want to skip ahead a bit. In the second part of the interview (about 40 minutes) we get into discussing how I went about figuring out what Magic Leap was doing. This includes discussing how the changes in the U.S. patent system signed into law in 2011 with the America Invents Act helped make the information available for me to study.

There should be no great surprises for anyone that has followed this blog. It puts in words and summarizes a lot that I have written about in the last 2 months.

Update: I listened to the podcast and noticed that I misspoke a few times; it happens in live interviews. An unfathomable mistake is that I talked about graduating college in 1972, but that was high school; I graduated from Bradley University with a B.S. in Electrical Engineering in 1976 and then received an MSEE from The University of Michigan in 1977 (and joined TI in 1977).

I also think I greatly oversimplified the contribution of Mark Harward as a co-founder at Syndiant. Mark did much more than just supply designers; he was the CEO, an investor, and ran the company while I “played” with the technology, but I think Mark’s best skill was in hiring great people. Also, Josh Lund, Tupper Patnode, and Craig Waller were co-founders.

 

Kopin Entering OLED Microdisplay Market

Kopin Making OLED Microdisplays

Kopin announced today that they are getting into the OLED microdisplay business. This is particularly notable because Kopin has been a long time (since 1999) manufacturer of transmissive LCD microdisplays used in camera viewfinders and near eye display devices. They also bought Forth Dimension Displays back in 2011, a maker of high resolution ferroelectric reflective LCOS used in higher end near eye products.

OLED Microdisplays Trending in AR/VR Market

With the rare exception of the large and bulky Meta 2, microdisplays (LCOS, DLP, OLED, and transmissive LCD) dominate the AR/MR see-through market. They also are a significant factor in VR and other non-see-through near eye displays.

Kopin’s entry seems to be part of what may be a trend toward OLED microdisplays in near eye products. ODG’s next generation “Horizon” AR glasses are switching from LCOS (used in the current R-7) to OLED microdisplays. Epson, which was a direct competitor to Kopin in transmissive LCD, switched to OLED microdisplays in their new Moverio BT-300 AR glasses announced back in February.

OLED Microdisplays Could Make VR and Non-See-Through Headsets Smaller/Lighter

Today most of the VR headsets are following Oculus’s use of large flat panels with simple optics. This leads to large bulky headsets, but the cost of OLED and LCD flat panels is so low compared to other microdisplays with their optics that they win out. OLED microdisplays have been far too expensive to compete on price with the larger flat panels, but this could change as there are more entrants into the OLED microdisplay market.

OLEDs Don’t Work With Waveguides As Used By Hololens and Magic Leap

It should be noted that the broad spectrum and diffuse light emitted by OLEDs is generally incompatible with flat waveguide optics such as those used by Hololens and expected from Magic Leap (ML). So don’t expect to see OLEDs being used by Hololens and ML anytime soon unless they radically redesign their optics. Illuminated microdisplays like DLP and LCOS can be lit by narrower spectrum light sources such as LEDs and even lasers, and the light can be highly collimated by the illumination optics.

Transmissive LCD Microdisplays Can’t Compete As Resolution Increases

If anything, this announcement from Kopin is the last nail in the coffin of the transmissive LCD microdisplay. OLED microdisplays have the advantage over transmissive micro-LCD of being able to go to higher resolutions and smaller pixels, keeping the overall display size down for a given resolution. OLEDs consume less power for the same brightness than transmissive LCDs, and OLEDs also have much better contrast. As resolution increases, transmissive LCDs cannot compete.

OLED Microdisplays Are More of a Mixed Set of Pros and Cons Compared to LCOS and DLP

There is a mix of pros and cons when comparing OLED microdisplays with LCOS and DLP. The pros for OLED over LCOS and DLP include:

  1. Significantly simpler optical path (illumination path not in the way). Enables optical solutions not possible with reflective microdisplays
  2. Lower power for a given brightness
  3. Separate RGB subpixels so there is no field sequential color breakup
  4. Higher contrast.

The advantages for LCOS and DLP reflective technologies over OLED microdisplays include:

  1. Smaller pixels, which equal a smaller display for a given resolution. DLP and LCOS pixels are typically from 2 to 10 times smaller in area per pixel.
  2. Ability to use narrow band light sources which enable the use of waveguides (flat optical combiners).
  3. Higher brightness
  4. Longer lifetime
  5. Lower cost even including the extra optics and illumination

Up until recently, the cost of OLED microdisplays was so high that only defense contractors and other applications that could afford the high cost could consider them. But that seems to be changing. Also, historically the brightness and lifetimes of OLED microdisplays were limited, but companies are making progress.

OLED Microdisplay Competition

Kopin is far from being the first and certainly is not the biggest entry in the OLED microdisplay market. But Kopin does have a history of selling volume into the microdisplay market. The list of known competitors includes:

  1. Sony appears to be the biggest player. They have been building OLED microdisplays for many years for use in camera viewfinders. They are starting to bring higher resolution products to the market and bring the costs down.
  2. eMagin is a 23-year-old “startup”. They have a lot of base technology and are a “pure play” stock-wise. But they have failed to break through and are in danger of being outrun by bigger companies.
  3. MicroOLED – a small French startup – I am not sure where they really stand.
  4. Samsung – nothing announced, but they have all the technology necessary to make them. Update: Ron Mertens of OLED-Info.com informed me that it was rumored that the second generation of Google Glass was considering a Samsung OLED microdisplay and that Samsung had presented a paper on the subject going back to 2011.
  5.  LG – nothing announced but they have all the technology necessary to make them.

I included Samsung and LG above not because I have seen or heard of them working on OLED microdisplays, but because I would be amazed if they didn’t at least have a significant R&D effort given their sets of expertise and their extreme interest in this market.

For More Information:

For more complete information on the OLED microdisplay market, you might want to go to OLED-info, which has been following both large flat panel and small OLED microdisplay devices for many years. They also have two reports available, OLED Microdisplays Market Report and OLED for VR and AR Market Report.

For those who want to know more about Kopin’s manufacturing plan, Chris Chinnock of Insight Media has an interesting article outlining Kopin’s fabless development strategy.

Magic Leap: Focus Planes (Too) Are a Dead End

What Magic Leap Appears to be Doing

For this article I would like to dive down on the most likely display and optics Magic Leap (ML) is developing for their Product Equivalent (PEQ). The PEQ was discussed in the “The Information” story “The Reality Behind Magic Leap.” As I explained in my November 20, 2016 article Separating Magic and Reality (before the Dec 8th “The Information” story), the ML patent application US 2016/0327789 best fits the available evidence, and if anything the “The Information” article reinforces that conclusion. Recapping the evidence:

  1. ML uses a “spatial light modulator” as stated in “The Information”
  2. Most likely an LCOS spatial light modulator; an Oct. 27th, 2016 Business Insider article citing “KGI Securities analyst Ming-Chi Kuo, who has a reputation for being tapped into the Asian consumer electronics supply chain” claims ML is using a Himax LCOS device.
  3. Focus planes to support vergence/accommodation per many ML presentations and their patent applications
  4. Uses waveguides which fit the description and pictures of what ML calls a “Photonics Chip”
  5. Does not have a separate focus mechanism as reported in the “The Information” article.
  6. Could fit the form factor as suggested in “The Information”
  7. It’s the only application that shows a serious optical design and that also uses what could be considered a “Photonics Chip.”

I can’t say with certainty that the optical path is that of application 2016/0327789. It is just the only optical path in the ML patent applications that fits all the available evidence and has a chance of working.

Field of View (FOV)

Rony Abovitz, ML CEO, is claiming a larger FOV. I would think ML would not want to have lower angular resolution than Hololens. Keeping the same 1.7 arc minutes per pixel angular resolution as Hololens and ODG’s Horizon, a 1080p device would give a horizontal FOV of about 54.4 degrees.

Note, there are rumors that Hololens is going to be moving to a 1080p device next year, so ML may still not have an advantage by the time they actually have a product. There is a chance that ML will just use a 720p device, at least at first, and accept a lower angular resolution of, say, 2.5 arc-minutes or more to get into the 54+ degree FOV range. Supporting a larger FOV is no small trick with waveguides and is one thing ML might have over Hololens; but then again, Hololens is not standing still.
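To make the FOV versus angular resolution trade-off easy to play with, here is a minimal sketch in Python; the pixel counts and arc-minute figures are just the ones discussed above, not inside information.

```python
def horizontal_fov_deg(pixels_across: int, arcmin_per_pixel: float) -> float:
    """Approximate horizontal FOV (degrees) for a given pixel count and angular resolution."""
    return pixels_across * arcmin_per_pixel / 60.0

# 1080p-wide (1920 pixel) display at a Hololens-like 1.7 arc-minutes per pixel
print(horizontal_fov_deg(1920, 1.7))  # ~54.4 degrees
# 720p-wide (1280 pixel) display pushed to ~2.5 arc-minutes per pixel
print(horizontal_fov_deg(1280, 2.5))  # ~53.3 degrees
```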

Sequential Focus Planes Domino Effect

The support of vergence/accommodation appears to be a paramount issue with ML. Light fields are woefully impractical for any reasonable resolution, so ML in their patent application and some of their demo videos show the concept of “focus planes.” But for every focus plane an image has to be generated and displayed.

Having more than one display per eye, plus the optics to combine the multiple displays, would be both very costly and physically large. So the only rational way ML could support focus planes is to use a single display device and sequentially display the focus planes. But as I will outline below, using sequential focus planes to address vergence/accommodation comes at the cost of hurting other aspects of visual comfort.

Expect Field Sequential Color Breakup If Magic Leap Supports “Focus Planes”

Both high resolution LCOS and DLP displays use “field sequential color,” where a single pixel array displays one color plane at a time. To get the colors to fuse together in the eye, they repeat the same colors multiple times per frame of an image. Where I have serious problems with ML using Himax LCOS is that instead of repeating colors to reduce color breakup, they will instead be showing different images to support sequential focus planes. Even with just two focus planes, as suggested in “The Information,” the rate at which colors are repeated to help them fuse in the eye is cut in half.

With Hololens, which also uses field sequential color LCOS, one can already detect color breakup. Cutting the color update rate by a factor of 2 or more will make this problem significantly worse.
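To make the “cut in half” point concrete, here is a minimal sketch in Python; the total color-field rate used is purely an illustrative assumption, not a known Himax or ML specification.

```python
def color_repeats_per_frame(color_fields_per_second: float, frame_rate_hz: float,
                            num_colors: int, num_focus_planes: int) -> float:
    """How many times each color can be repeated per frame, per focus plane."""
    fields_per_frame = color_fields_per_second / frame_rate_hz
    return fields_per_frame / (num_colors * num_focus_planes)

# Illustrative numbers only: 1080 color fields/second, 60 Hz frames, RGB (3 colors)
print(color_repeats_per_frame(1080, 60, 3, 1))  # 6.0 repeats per color with a single plane
print(color_repeats_per_frame(1080, 60, 3, 2))  # 3.0 repeats per color with two focus planes
```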

Another interesting factor is that field sequential color breakup tends to be more noticeable in people’s peripheral vision, which is more motion/change sensitive. This means the problem will tend to get worse as the FOV increases.

I have worked many years with field sequential display devices, specifically LCOS. Based on this experience, I expect that the human vision system will do a poor job of “fusing” the colors at such slow color field update rates, and I would expect people to see a lot of field sequential color breakup, particularly when objects move.

In short, I expect a lot of color breakup to be noticeable if ML supports focus planes with a field sequential color device (LCOS or DLP).

Focus Planes Hurt Latency/Lag and Will Cause Double Images

An important factor in human comfort is the latency/lag between any head movement and the display reacting; too much lag causes user discomfort. A web search will turn up thousands of references on this problem.

To support focus planes, ML must use a display fast enough to run at least 120 frames per second. But to support just two focus planes it will take them 1/60th of a second to sequentially display both planes. Thus they have increased the total latency/lag from the time they sense movement until the display is updated by ~8.33 milliseconds, and this is on top of any other processing latency. So really, focus planes trade off one discomfort issue, vergence/accommodation, for another, latency/lag.
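Here is the timing argument as a minimal sketch in Python, assuming the focus planes are shown one after the other at 120 planes per second as described above.

```python
def added_latency_ms(plane_rate_hz: float, num_focus_planes: int) -> float:
    """Extra display-update latency from showing focus planes sequentially,
    relative to showing a single plane at the same plane rate."""
    single_plane_ms = 1000.0 / plane_rate_hz
    return num_focus_planes * single_plane_ms - single_plane_ms

# Two planes at 120 planes/second: the second plane adds ~8.33 ms
# on top of whatever sensing and processing latency already exists.
print(added_latency_ms(120.0, 2))  # ~8.333
```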

Another issue which concerns me is how well sequential focus planes are going to fuse in the eye. With fast movement, the eye/brain visual system takes its own asynchronous “snapshots” and tries to assemble and line up the information. But as with field sequential color, it can put time-sequential information together wrong, particularly if some objects in the image move and others don’t. The result will be double images; with sequential focus planes, double images would be unavoidable with fast movement either in the virtual world or when a person moves their eyes. These problems will be compounded by field sequential color breakup.

Focus Planes Are a Dead End – Might Magic Leap Have Given Up On Them?

I don’t know all the behind-the-scenes issues with what ML told investors, and maybe ML has been hemmed in by their own words and demos to investors. But as an engineer with most of my 37 years in the industry working with image generation and display, it looks to me that focus planes cause bigger problems than they solve.

What gets me is that they should have figured out that focus planes were hopeless in the first few months (in much less time if someone who knew what they were doing was there). Maybe they were ego driven and/or they built too much around the impression they made with their “Beast” demo system (a big system using DLPs). Then maybe they hand-waved away the problems sequential focus planes cause, thinking they could fix them somehow, or hoped that people wouldn’t notice the problems. It would certainly not be the first time that a company committed to a direction and then felt it had gone too far to change course. Then there is always the hope that “dumb consumers” won’t see the problems (in this case I think they will).

It is clear to me that, like Fiber Scanning Displays (FSD), focus planes are a dead end, period, full stop. Vergence/accommodation is a real issue, but only for objects that get reasonably close to the user. I think a much more rational way to address the issue is to use sensors to track the eyes/pupils and adjust the image accordingly; since the eye’s focus changes relatively slowly, it should be possible to keep up. In short, move the problem from the physical display and optics domain (which will remain costly and problematic) to the sensor and processing domain (which will come down in cost much more rapidly).

If I’m at Hololens, ODG, or any other company working on AR/MR systems and accept that vergence/accommodation is a problem that needs to be solved, I’m going to solve it with eye/pupil sensing and processing, not by screwing up everything else by doing it with optics and displays. ML’s competitors have had enough warning to already be well into developing solutions, if they weren’t already before ML made such a big deal about this well-known issue.

The question I’m left with is if and when Magic Leap figured this out, and whether they were too committed to focus planes, by ego or by what they told investors, to change course at that point. I have not found evidence so far in their patent applications that they tried to change course, but patent applications run about 18 months or more behind what a company decides to do. And if they don’t use focus planes, they would have to admit that they are much closer to Hololens and other competitors than they would like the market to think.

Magic Leap – Fiber Scanning Display Follow UP

Some Newer Information On Fiber Scanning

Through some discussions and further searching I found some more information about Fiber Scanning Displays (FSD) that I wanted to share. If anything, this material further supports the contention that Magic Leap (ML) is not going to have a high resolution FSD anytime soon.

Most of the images available are about fiber scanning used as an endoscope camera and not as a display device. The images are of things like body parts, so they really don’t show resolution or the amount of distortion in the image. Furthermore, most of the images are from 2008 or older, which leaves quite a bit of time for improvement. I have found some information that was generated in the 2014 to 2015 time frame that I would like to share.

Ivan Yeoh’s 2015 PhD dissertation

[Image: laser-projected test pattern from Yeoh’s 2015 dissertation]

In terms of more recent fiber scanning technology, Ivan Yeoh’s name seems to be a common link. Shown at left is a laser-projected image and the source test pattern from Ivan Yeoh’s 2015 PhD dissertation “Online Self-Calibrating Precision Scanning Fiber Technology with Piezoelectric Self-Sensing” at the University of Washington. It is the best quality image of a test pattern or known image that I have found of an FSD anywhere. The dissertation is about how to use feedback to control the piezoelectric drive of the fiber. While his work is about endoscope calibration, he nicely included this laser-projected image.

The drive resulted in 180 spirals, which would nominally be 360 pixels across at the equator of the image, at a 50Hz frame rate. But based on the resolution chart, the effective resolution is about 1/8th of that, or only ~40 pixels, although about half of this “loss” is due to resampling a rectilinear image onto the spiral. You should also note that there is considerably more distortion in the center of the image, where the fiber is moving more slowly.

Yeoh also included some good images at right showing how he had previously used a calibration setup to manually calibrate the endoscope before use, as it would go out of calibration due to various factors, including temperature. These are camera images, and based on the test charts they are able to resolve about 130 pixels across, which is reasonably close to the Nyquist limit for a spiral with 360 samples across. As expected, the center of the image, where the fiber is moving the slowest, is the most distorted.

While a nominally 360-pixel-across camera is still very low resolution by today’s standards, it is still 4 to 8 times better than the resolution of the laser-projected image. Unfortunately, Yeoh was concerned with distortion and does not really address resolution issues in his dissertation. My resolution comments are based on measurements I could make from the images he published, copied above.
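Here is the spiral-scan resolution arithmetic used above as a minimal sketch in Python; the 1/8 effective-resolution factor comes from the measurement off the test chart, not from theory.

```python
def nominal_pixels_across(num_spirals: int) -> int:
    """A spiral scan with N turns nominally samples 2*N pixels across its equator."""
    return 2 * num_spirals

nominal = nominal_pixels_across(180)   # 360 nominal pixels across
projected_effective = nominal / 8      # ~45, in line with the ~40 measured off the test chart
camera_nyquist_limit = nominal / 2     # 180; the ~130 pixels the camera resolves is in this ballpark
print(nominal, projected_effective, camera_nyquist_limit)
```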

Washington Patent Application Filed in 2014

Yeoh is also the lead inventor on the University of Washington patent application US 2016/0324403, filed in 2014 and published in June 2016. At left is Fig. 26 from that application. It is supposed to be of a checkerboard pattern, which you may be able to make out. The figure is described as using a “spiral in and spiral out” process where, rather than having a retrace time, they just reverse the scan. This application appears to be related to Yeoh’s dissertation work. Yeoh is shown as living in Fort Lauderdale, FL, on the application, near Magic Leap’s headquarters. Yeoh is also listed as an inventor on the Magic Leap application US 2016/0328884 “VIRTUAL/AUGMENTED REALITY SYSTEM HAVING DYNAMIC REGION RESOLUTION” that I discussed in my last article. It would appear that Yeoh works or has worked for Magic Leap.

2008 YouTube Video

[Image: ideal versus actual spiral scan]

Additionally, I would like to include some images from a 2008 YouTube video that kmanmx from the Reddit Magic Leap subreddit alerted me to. While this is old, it has a nice picture of the fiber scanning process, both as a whole and with a close-up image near the start of the spiral process.

For reference, on the close-up image I have added the size of a “pixel” for a 250 spiral / 500 pixel image (red square) and what a 1080p pixel (green square) would be if you cropped the circle to a 16:9 aspect ratio. As you can hopefully see, the spacing and jitter errors in the scan process are several 1080p pixels in size. While this information is from 2008, the more recent evidence above does not show a tremendous improvement in resolution.

Other Issues

So far I have mostly concentrated on the issue of resolution, but there are other serious issues that have to be overcome. What is interesting in the Magic Leap and University of Washington patent literature is the lack of patent activity to address the other issues associated with generating a fiber scanned image. If Magic Leap were serious and had solved these issues with FSD, one would expect to see patent activity in making FSD work at high resolution.

One major issue that may not be apparent to the casual observer is controlling/driving the lasers over an extremely large dynamic range. In addition to supporting the typical 256 levels (8 bits) per color and an overall brightness adjustment based on the ambient light, the speed of the scan varies by a large amount, and they must compensate for this or end up with a very bright center where the scan is moving more slowly. When you combine it all together, they would seem to need to control the lasers over a greater than 2000:1 dynamic range, from a dim pixel at the center to the brightest pixel at the periphery.
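As a rough illustration of where a number like 2000:1 comes from, here is a minimal sketch in Python; the edge-to-center scan-speed ratio is an assumed illustrative value, not a measured one.

```python
def laser_dynamic_range(gray_levels: int, scan_speed_ratio: float) -> float:
    """Rough required laser drive range: gray levels multiplied by how much faster
    the spot moves at the edge of the spiral than near the center."""
    return gray_levels * scan_speed_ratio

# 8-bit (256 level) gray scale and an assumed ~10:1 edge-to-center speed ratio
print(laser_dynamic_range(256, 10.0))  # 2560:1, i.e. "greater than 2000:1"
```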

Conclusion

Looking at all the evidence, there is just nothing to convince me that Magic Leap is anywhere close to having perfected an FSD to the point where it could be competitive with a conventional display device like LCOS, DLP, or Micro-OLED, much less reach the 50-megapixel resolutions they talk about. Overall, there are reasons to doubt that an electromechanical scanning process is going to compete in the long run with an all-electronic method.

It very well could be that Magic Leap had hoped FSD would work and/or that it was just a good way to convince investors that they had a technology that would lead to super high resolution in the future. But there is zero evidence that they have seriously improved on what the University of Washington has done. They may still be pursuing it as an R&D effort, but there is no reason to believe that they will have it in a product anytime soon.

All roads point to ML using either LCOS (per Business Insider in October 2016) or DLP, which, based on what I have heard, is in some prototypes. This would mean they will likely have either a 720p or 1080p resolution display, the same as others such as Hololens (which will likely have a 1080p version soon).

The whole FSD effort is about trying to break through the physical pixel size barrier of conventional technologies. There are various physics issues (diffraction is becoming serious) and material issues that will likely make it tough to make physical pixels much smaller than 3 microns.

Even if there were a display resolution breakthrough (which I doubt, based on the evidence), there are questions as to whether that resolution could make it through the optics. As the display resolution improves, the optics also have to improve or they will limit the resolution. This particularly concerns me with the waveguide technologies I have seen to date, which appear to be at the heart of Magic Leap’s optics.

Magic Leap – The Display Technology Used in their Videos

So, what display technology is Magic Leap (ML) using, at least in their posted videos? I believe the videos rule out a number of the possible display devices, and by a process of elimination they leave only one likely technology. Hint: it is NOT the laser fiber scanning prominently shown in a number of ML patents and in articles about ML.

Qualifiers

Magic Leap could be posting deliberately misleading and/or deliberately bad videos to throw off people analyzing them, but I doubt it. It is certainly possible that the display technology shown in the videos is a prototype that uses different technology from what they are going to use in their products. I am hearing that ML has a number of different levels of systems, so what is being shown in the videos may or may not be what they go to production with.

A “Smoking Gun Frame” 

So with all the qualifiers out of the way, below is a frame capture from Magic Leap’s “A New Morning” video, taken while they are panning the headset and camera. The panning causes temporal (time-based) frame/shutter artifacts in the form of partial ghost images, a result of the camera and the display running asynchronously and/or at different frame rates. This one frame, along with other artifacts you don’t see when playing the video, tells a lot about the display technology used to generate the image.

If you look at the left red oval, you will see at the green arrow a double/ghost image starting and continuing below that point. This is where the camera caught the display in its update process. Also, if you look at the right side of the image, you will notice that the lower 3 circular icons (in the red oval) have double images while the top one does not (the 2nd from the top has a faint ghost, as it is at the top of the field transition). By comparison, there is no double image of the real world’s lamp arm (see center red oval), verifying that the roll bar is from the ML image generation.

Update 2016-11-10: For those who want to look at it, I have uploaded the whole 1920×1080 frame capture; click on the thumbnail at left to see it (I left in the highlighting ovals that I overlaid).

Update 2016-11-14: I found a better “smoking gun” frame, below, at 1:23 in the video. In this frame you can see the transition from one frame to the next. When playing the video, the frame transition slowly moves up from frame to frame, indicating that the camera and display are asynchronous but at almost the same frame rate (or an integer multiple thereof, like 1/60th or 1/30th).

[Image: frame capture at 1:23 showing the frame-to-frame transition]
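For those who want to see why a slowly crawling frame transition implies nearly matched but asynchronous rates, here is a minimal sketch in Python; the frame rates are illustrative assumptions, since the actual camera and display rates are not known.

```python
def tear_line_period_s(display_hz: float, camera_hz: float) -> float:
    """Seconds for the rolling frame-transition ("tear") line to drift through one
    full frame height when the display and camera run at slightly different rates."""
    beat_hz = abs(display_hz - camera_hz)
    return float("inf") if beat_hz == 0 else 1.0 / beat_hz

# Illustrative numbers only: a 60.00 Hz display captured by a 59.94 Hz camera
print(tear_line_period_s(60.00, 59.94))  # ~16.7 seconds for the tear line to crawl through the frame
```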

In addition to the “smoking gun” frame above, I have looked at the “A New Morning” video as well as the “ILMxLAB and ‘Lost Droids’ Mixed Reality Test” and the early “Magic Leap Demo,” which are stated to be “Shot directly through Magic Leap technology . . . without use of special effects or compositing.” I was looking for any other artifacts that would be indicative of the various possible technologies.

Display Technologies it Can’t Be

Based on the image above and other video evidence, I think it is safe to rule out the following display technologies:

  1. Laser Fiber Scanning Display – either a single fiber or multiple fiber scanning display as shown in Magic Leap’s patents and articles (and which their CTO is famous for working on prior to joining ML). A fiber scan display scans in a spiral (or, if fibers are arrayed, an array of spirals) with a “retrace/blanking” time to get back to the starting point. This blanking would show up as diagonal black line(s) and/or flicker in the video (sort of like the horizontal black retrace line on an old CRT). Also, if it were laser fiber scanning, I would expect to see evidence of laser speckle, which is not there; laser speckle will come through even if the image is out of focus. There is nothing in this image or its video to suggest a scanning process with blanking, or that lasers are being used at all. Through my study of laser beam scanning (and I am old enough to have photographed CRTs), there is nothing in the still frame or the videos that is indicative of a scanning process with a retrace.
  2. Field Sequential DLP or LCOS – There is absolutely no field sequential color rolling, flashing, or flickering in the video or in any still captures I have made. Field sequential displays show only one color at a time, very rapidly; when these rapid color field changes beat against the camera’s scanning/shutter process, they show up as color variances and/or flicker, not as a simple double image. This is particularly important because it has been reported that Himax, which makes field sequential LCOS devices, is making projector engines for Magic Leap. So either they are not using Himax, or they are changing technology for the actual product. I have seen many years of DLP and LCOS displays, both live and through many types of video and still cameras, and I see nothing that suggests field sequential color is being used.
  3. Laser Beam Scanning with a mirror – As with CRTs and fiber scanning, there has to be a blanking/retrace period between frames, which would show up in the videos as a roll bar (dark and/or light) that would roll/move over time. I’m including this just to be complete, as it was never suggested anywhere with respect to ML.
UPDATE Nov 17, 2016

Based on other evidence that has recently come in, even though I have not found video evidence of field sequential color artifacts in any of the Magic Leap videos, I’m more open to thinking it could be LCOS or (less likely) DLP, and that maybe the camera sensor is doing more to average out the color fields than other cameras I have used in the past.

Display Technologies That it Could Be 

Below are a list of possible technologies that could generate video images consistent with what has been shown by Magic Leap to date including the still frame above:

  1. Micro-OLED (about 10 known companies) – Very small OLEDs on silicon or similar substrates. A list of some of the known makers is given at OLED-Info (Epson has recently joined this list, and I would bet that Samsung and others are working on them internally). Micro-OLEDs are both A) small enough to inject an image into a waveguide for a small headset and B) have display characteristics that behave the way the image in the video is behaving.
  2. Transmissive Color Filter HTPS (Epson) – While Epson has been making transmissive color filter HTPS devices, their most recent headset has switched to a Micro-OLED panel, suggesting they themselves are moving away. Additionally, while Meta’s first generation used Epson’s HTPS, they moved to a large OLED (with a very large spherical reflective combiner). This technology is challenged in going to higher resolution and smaller size.
  3. Transmissive Color Filter LCOS (Kopin) – Kopin is the only company making color filter transmissive LCOS, but they have not been very active lately as a component supplier, and they have serious issues with a roadmap to higher resolution and smaller size.
  4. Color Filter Reflective LCOS – I’m putting this here more for completeness, as it is less likely. While in theory it could produce the images, it generally has lower contrast (which would translate into a lack of transparency and a milkiness to the image) and lower color saturation. This would fit with Himax as a supplier, as they have color filter LCOS devices.
  5. Large Panel LCD or OLED – This would suggest a large headset doing something similar to the Meta 2. I would tend to rule this out because it would go against everything else Magic Leap shows in their patents and what they have said publicly; it’s just that it could have generated the image in the video.
And the “Winner” is I believe . . . Micro-OLED (see update above) 

By a process of elimination, including getting rid of the “possible but unlikely” ones from above, the evidence strongly points to a Micro-OLED display device. Let me say, I have no personal reason to favor it being Micro-OLED; if anything, one could argue that, based on my experience, it might be to my advantage for it to be LCOS.

Before I started any serious analysis, I didn’t have an opinion. I started out doubtful that it was a field sequential or scanning (fiber/beam) device due to the lack of any indicative artifacts in the video, but it was the “smoking gun” frame that convinced me: if the camera was catching temporal artifacts, it should have been catching the other artifacts as well.

I’m basing this conclusion on the facts as I see them. Period, full stop. I would be happy to discuss this conclusion (if asked rationally) in the comments section.

Disclosure . . . I Just Bought Some Stock Based on My Conclusion and My Reasoning for Doing So

The last time I played this game of “what’s inside,” I was the first to identify that a Himax LCOS panel was inside Google Glass, which resulted in Himax’s market cap going up almost $100M in a couple of hours. I had zero shares of Himax when this happened; my technical conclusion now, as it was then, is based on what I saw.

Unlike my call on Himax in Google Glass, I have no idea which company makes the device Magic Leap appears to be using, nor whether Magic Leap will change technologies for their production device. I have zero inside information and am basing this entirely on the information I have given above (you have been warned). Not only is the information public, but it is based on videos that are many months old.

I looked at the companies on the OLED Microdisplay List by www.oled-info.com (which has followed OLED for a long time). It turned out all the companies were either part of a very large company or were privately held, except for one, namely eMagin.

I have known of eMagin since 1998, and they have been around since 1993. They essentially mirror Microvision, which does laser beam scanning and was also founded in 1993, a time when you could go public without revenue. eMagin has spent/lost a lot of shareholder money and is worth about 1/100th of its peak in March 2000.

I have NOT done any serious technical, due diligence, or other stock analysis of eMagin and I am not a stock expert. 

I’m NOT saying that eMagin is in Magic Leap. I’m NOT saying that Micro-OLED is necessarily better than any other technology. All I am saying is that I think someone’s Micro-OLED technology is being used in the Magic Leap prototype, and that Magic Leap is such a hotly followed company that it might (or might not) affect the stock price of companies making Micro-OLEDs.

So, unlike the Google Glass and Himax case above, I decided to place a small (for me) “stock bet” on my ability to identify the technology (but not the company) by buying some eMagin stock on the open market at $2.40 this morning, 2016-11-09 (symbol EMAN). I’m just putting my money where my mouth is, so to speak (and NOT, once again, giving stock advice), and playing a hunch. I’m just making a full disclosure in letting you know what I have done.

My Plans for Next Time

I have some other significant conclusions I have drawn from looking at Magic Leap’s video about the waveguide/display technology that I plan to show and discuss next time.

Near Eye AR/VR and HUD Metrics For Resolution, FOV, Brightness, and Eyebox/Pupil

I’m planning on following up on my earlier articles about AR/VR Head Mounted Displays (HMDs), which also relate to Heads-Up Displays (HUDs), with some more articles, but first I would like to get some basic technical concepts out of the way. It turns out that the metrics we care about for projectors, while related, don’t work for measuring HMDs and HUDs.

I’m going to try to give some “working man’s” definitions rather than precise technical definitions. I will give a few real-world examples and calculations to show you some of the challenges.

Pixels versus Angular Resolution

Pixels are pretty well understood, at least with today’s displays that have physical pixels, like LCDs, OLEDs, DLP, and LCOS. Scanning displays, like CRTs and laser beam scanning, generally have additional resolution losses due to imperfections in the scanning process, and as my other articles have pointed out, they have much lower resolution than physical-pixel devices.

When we get to HUDs and HMDs, we really want to consider the angular resolution, typically measured in “arc-minutes,” which are 1/60th of a degree; simply put, this is the angular size that a pixel covers from the viewing position. Consumers in general don’t understand arc-minutes, so many companies have in the past talked in terms of a display of a certain size and resolution viewed from a given distance, for example a 60-inch diagonal 1080p set viewed at 6 feet; but since the size of the display, the resolution, and the viewing distance are all variables, it is hard to compare displays or even say what this means for a near-eye device.

A common “standard” for good resolution is 300 pixels per inch viewed at 12 inches (considered reading distance), which translates to about one arc-minute per pixel. People with very good vision can actually distinguish about twice this resolution, down to about 1/2 an arc-minute, in their central vision, but for most purposes one arc-minute is a reasonable goal.

One nice thing about the one-arc-minute-per-pixel goal is that the math is very simple: multiply the degrees in the FOV horizontally (or vertically) by 60 and you have the number of pixels required to meet the goal. If you stray much below the goal, then you are into 1970s-era “chunky pixels.”
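Both rules of thumb are easy to check; here is a minimal sketch in Python.

```python
import math

def arcmin_per_pixel_at(ppi: float, viewing_distance_in: float) -> float:
    """Angular size of one pixel, in arc-minutes, for a given pixel density and viewing distance."""
    pixel_pitch_in = 1.0 / ppi
    return math.degrees(math.atan(pixel_pitch_in / viewing_distance_in)) * 60.0

def pixels_needed(fov_degrees: float, goal_arcmin: float = 1.0) -> float:
    """Pixels required across a FOV to meet an angular resolution goal."""
    return fov_degrees * 60.0 / goal_arcmin

print(arcmin_per_pixel_at(300, 12))            # ~0.95: "300 ppi at 12 inches" is about one arc-minute
print(pixels_needed(150), pixels_needed(135))  # 9000.0 8100.0: the numbers in the next section's title
```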

Field of View (FOV) and Resolution – Why 9,000 by 8100 pixels per eye are needed for a 150 degree horizontal FOV. 

As you probably know, the human retina has variable resolution. The human eye has a roughly elliptical FOV of about 150 to 170 degrees horizontally by 135 to 150 degrees vertically, but the generally good discriminating FOV is only about 40 degrees (+/-20 degrees) wide, with reasonably sharp vision, the macula, covering about 17-20 degrees and the fovea, with the very best resolution, covering only about 3 degrees of the eye’s visual field. The eye/brain processing is very complex, however, and the eye moves to aim the higher-resolving part of the retina at the subject of interest; one would want something on the order of the one-arc-minute goal in the central part of the display (and since a variable-resolution display would be a very complex matter, it ends up being the goal for the whole display).

Going back to our 60-inch 1080p display viewed from 6 feet, the pixel size in this example is ~1.16 arc-minutes and the horizontal field of view will be about 37 degrees, just about covering the generally good resolution part of the eye’s retina.

[Image: Oculus Rift headset, via Extreme Tech]

Now let’s consider the latest Oculus Rift VR display. It specs 1200 x 1080 pixels with about a 94-degree horizontal by 93-degree vertical FOV per eye, or a very chunky ~4.7 arc-minutes per pixel; in terms of angular resolution this is roughly like looking at an iPhone 6 or 7 from 5 feet away (or, conversely, like your iPhone pixels being 5X as big). To get to the 1 arc-minute per pixel goal of, say, viewing today’s iPhones at reading distance (say you want to virtually simulate your iPhone), they would need a 5,640 by 5,580 display per eye, or a single OLED display with about 12,000 by 7,000 pixels (allowing for a gap between the eyes for the optics)!!! If they wanted to cover the 150 by 135 degree FOV, we are then talking 9,000 by 8,100 pixels per eye, or about a 20,000 by 9,000 flat panel requirement.
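Here is the same arithmetic as a minimal sketch in Python, using the Rift numbers quoted above.

```python
def arcmin_per_pixel(fov_degrees: float, pixels: int) -> float:
    """Average angular size of one pixel across a FOV, in arc-minutes."""
    return fov_degrees * 60.0 / pixels

# Oculus Rift per the specs quoted above: 1200 x 1080 pixels over ~94 x 93 degrees per eye
print(arcmin_per_pixel(94, 1200))  # ~4.7 arc-minutes per pixel
# Pixels per eye needed at the one-arc-minute goal (FOV in degrees times 60):
print(94 * 60, 93 * 60)            # 5640 5580 for the Rift's FOV
print(150 * 60, 135 * 60)          # 9000 8100 for a 150 x 135 degree FOV
```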

Not as apparent, but equally important, is that the optics to support these kinds of resolutions would be, if even possible, exceedingly expensive. You need extremely high precision optics to bring the image into focus from such a short range. You can forget about the lower-cost and lower-weight Fresnel optics (with their “God ray” issues) used in the Oculus Rift.

We are into what I call “silly number territory” that will not be affordable for well beyond 10 years. There are even questions whether any known technology could achieve these resolutions in a size that could fit on a person’s head, as there are a number of physical limits on pixel size.

People in gaming are apparently living with this appallingly low (1970s-era TV game) angular resolution for games and videos (although the God rays can be very annoying depending on the content), but clearly it is not a replacement for a good high resolution display.

Now let’s consider Microsoft’s Hololens. Its most criticized issue is its smaller (relative to VR headsets such as Oculus) FOV of about 30 by 17.5 degrees. It has a 1268 by 720 pixel display per eye, which translates into about 1.41 arc-minutes per pixel, which while not horrible is short of the goal above. If they had used a 1920×1080 (full HD) microdisplay, which are becoming available, then they would have been very near the 1 arc-minute goal at this FOV.

Let’s understand here that it is not as simple as changing out the display; they would also have to upgrade the “light guide” that they use as a combiner to support the higher resolution. Still, this is all reasonably possible within the next few years. Microsoft might even choose to grow the FOV to around 40 degrees horizontally and keep the lower angular resolution with a 1080p display. Most people will not seriously notice a 1.4X angular resolution difference (but they will at about 2X).
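Plugging the Hololens numbers into the same arc-minute arithmetic (a minimal sketch in Python):

```python
def arcmin_per_pixel(fov_degrees: float, pixels: int) -> float:
    """Average angular size of one pixel across a FOV, in arc-minutes."""
    return fov_degrees * 60.0 / pixels

print(arcmin_per_pixel(30, 1268))  # ~1.42: today's Hololens
print(arcmin_per_pixel(30, 1920))  # ~0.94: near the one-arc-minute goal with a 1080p microdisplay
print(arcmin_per_pixel(40, 1920))  # ~1.25: growing to a 40-degree FOV while keeping 1080p
```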

Commentary on FOV

I know people want everything, but I really don’t understand the criticism of the FOV of Hololens. What we see here is a bit of “choose your poison.” With existing affordable (or even not so affordable) technology you can’t support a wide field of view and simultaneously good angular resolution; it is simply not realistic. One can imagine optics that would let you zoom between a wide FOV with lower angular resolution and a smaller FOV with higher angular resolution. This zooming function could perhaps be controlled by the content or by feedback from the user’s eyes and/or brain activity.

Lumens versus Candelas per Meter Squared (cd/m2 or nits)

With an HMD or HUD, what we care about is the light that reaches the eye. In a typical front projector system, only an extremely small percentage of the light that goes out of the projector reflects off the screen and makes it back to any person’s eye; the vast majority of the light goes to illuminating the room. With an HMD or HUD, all we care about is the light that makes it into the eye.

Projector lumens, or luminous flux, simply put, are a measure of the total light output, usually measured while outputting a solid white image. To get the light that makes it to the eye, we have to account for the light hitting a screen and then being absorbed, scattered, and reflected back at an angle that gets it to the eye. Only an exceedingly small percentage (a small fraction of 1%) of the projected light will make it into the eye in a typical front projector setup.

With HMDs and HUDs we talk about brightness in terms of candelas per meter squared (cd/m2), also referred to as “nits” (while considered an obsolete term, it is still often used because it is easier to write and say). Cd/m2, or luminance, is a measure of brightness in a given direction, which tells us how bright the light appears to the eye looking in a particular direction. For a good quick explanation of lumens and cd/m2, I would recommend a Compuphase article.


Hololens appears to be “luminosity challenged” (lacking in cd/m2) and has resorted to putting a sunglasses-like tint on the outer shield, even for indoor use. The light-blocking shield is clearly a crutch to make up for a lack of brightness in the display. Even with the shield, it can’t compete with bright light outdoors, which is 10 to 50 times brighter than a well-lit indoor room.

This of course is not an issue for VR headsets, typified by the Oculus Rift, which totally block the outside light, but it is a serious issue for AR-type headsets; people don’t normally wear sunglasses indoors.

Now let’s consider a HUD. A common automotive spec for a HUD in sunlight is 15,000 cd/m2, whereas a typical smartphone is between 500 and 600 cd/m2, or about 1/30th of the luminance needed. When you are driving a car down the road, you may be driving toward the sun, so you need a very bright display in order to see it.

The way HUDs work, you have a “combiner” (which may be the car’s windshield) that combines the generated image with light from the real world. A combiner typically reflects only about 20% to 30% of the light, which means that the display before the combiner needs on the order of 30,000 to 50,000 cd/m2 to support the 15,000 cd/m2 seen in the combiner. When you consider that your smartphone or computer monitor only has about 400 to 600 cd/m2, it gives you some idea of the optical tricks that must be played to get a display image that is bright enough.
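A minimal sketch of that combiner arithmetic in Python, treating the combiner reflectance as the only loss (real systems have additional losses):

```python
def display_luminance_needed(target_cd_m2: float, combiner_reflectance: float) -> float:
    """Luminance the source display must produce so the image reflected off the
    combiner reaches the target luminance (ignoring other optical losses)."""
    return target_cd_m2 / combiner_reflectance

# 15,000 cd/m2 target as seen in a ~30% reflective combiner; a less
# reflective combiner needs proportionally more.
print(display_luminance_needed(15000, 0.30))  # 50,000 cd/m2 before the combiner
```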

You will see many “smartphone HUDs” that simply have a holder for a smartphone and a combiner (semi-mirror), such as the one pictured at right on Amazon or on crowdfunding sites, but rest assured they will NOT work in bright sunlight and are only marginal in typical daylight conditions. Even with combiners that block more than 50% of the daylight (not really much of a see-through display at that point), they don’t work in daylight. There is a reason why companies are making purpose-built HUDs.

Cd/m2 is also a big issue for outdoor head-mounted display use. Depending on the application, they may need 10,000 cd/m2 or more, and this can become very challenging with some types of displays while keeping within the power and cooling budgets.

At the other extreme, at night or in dark indoor settings, you might want the display to have less than 100 cd/m2 to avoid blinding the user to their surroundings. Note the SMPTE spec for movie theaters is only about 50 cd/m2, so even at 100 cd/m2 you would be about 2X the brightness of a movie theater. If the device must go from bright sunlight to night use, you could be talking over a 1,500 to 1 dynamic range, which turns out to be a non-trivial challenge to do well with today’s LEDs or lasers.

Eye-Box and Exit Pupil

Since AR HMDs and HUDs generate images for a user’s eye in a particular place, yet need to compete with the ambient light, the optical system is designed to concentrate light in the direction of the eye. As a consequence, the image will only be visible within a given solid angle, the “eye-box” (with HUDs) or “pupil” (with near-eye displays). There is also a trade-off between the size of the eye-box or pupil and ease of use: the bigger the eye-box or pupil, the easier the device will be to use.

With HUD systems there is a pretty simple trade-off between eye-box size, cd/m2, and the lumens that must be generated. Some optical tricks can help avoid needing an extremely bright and power-hungry light source. Conceptually, a HUD is in some ways like a head-mounted display but with very long eye relief. With such large eye relief and the ability of the person to move their whole head, the eye-box for a HUD is significantly larger than the exit pupil of near-eye optics. Because the eye-box is so much larger, a HUD is going to need much more light to work with.

For near eye optical design, getting a large exit pupil is a more complex issue as it comes with trade-offs in cost, brightness, optical complexity, size, weight, and eye-relief (how far the optics are from the viewer’s eye).

With too small a pupil and/or too much eye relief, a near-eye device is difficult to use, as any small movement of the device keeps you from seeing the whole image. Most people’s first encounter with an exit pupil is with binoculars or a telescope, where the image cuts off unless the optics are centered well on the user’s eye.

Conclusions

While I can see that people are excited about the possibilities of AR and VR technologies, I still have a hard time seeing how the numbers add up, so to speak, for what I would consider to be a mass market product. I see people being critical of Hololens’ lower FOV without being realistic about how it could go higher without drastically sacrificing angular resolution.

Clearly there can be product niches where the device could serve, but I think people have unrealistic expectations for how fast the field of view can grow for a product like Hololens. For “real work,” I think the lower field of view and higher angular resolution approach (as with Hololens) makes more sense for more applications. Maybe game players in the VR space are more willing to accept 1970s-type angular resolution, but I wonder for how long.

I don’t see any technology that will be practical in high volume (or even at a very high price in low volume) that is going to simultaneously deliver the angular resolution and FOV that some people want. AR displays are often brightness challenged, particularly for outdoor use. Layered on top of these issues are size, weight, cost, and power consumption, which we will have to save for another day.