Tag Archive for Osterhout Design Group

Near Eye Displays (NEDs): Gaps In Pixel Sizes

I get a lot of questions to the effect of “what is the best technology for a near eye display (NED)?” There really is no “best,” as every technology has its strengths and weaknesses. I plan to write a few articles on this subject as it is way too big for a single article.

Update 2017-06-09: I added the Sony Z5 Premium 4K cell phone size LCD to the table. Its “pixel” is about 71% the linear dimension of the Samsung S8’s, or about half the area, but still much larger than any of the microdisplay pixels. One thing I should add is that most cell phone makers are “cheating” on what they call a pixel. The Sony Z5 Premium’s “pixel” really only has 2/3rds of an R, G, and B per pixel it counts. It also arranges them in a strange 4-pixel zigzag that causes beat frequency artifacts when displaying full resolution 4K content (GSMARENA’s close-up pictures of the Z5 Premium fail to show the full resolution in both directions). Similarly, Samsung goes with RGBG-type patterns that only have 2/3rds of the full pixels in the way they count resolution. These “tricks” in counting are OK when viewed with the naked eye at beyond 300 “pixels” per inch, but become more problematical/dubious when used with optics to support VR.

Today I want to start with the issue of pixel size as shown in the table at the top (you may want to pop the table out into a separate window as you follow this article). To give some context, I have also included a few major direct view categories of displays as well. I have grouped the technologies into the colored bands in the table. I have given the pixel pitch (distance between pixel centers) as well as the pixel area (the square of the pixel pitch, assuming square pixels). Then, to give some context for comparison, I have compared the pitch and area relative to a 4.27-micron (µm) pixel pitch, which is about the smallest being made in large volume. There are also columns showing how big the pixel would be in arcminutes when viewed from 25cm (250mm ≈ 9.84 inches), which is the commonly accepted near focus point. Finally, there is a column showing how much the pixel would have to be magnified to equal 1 arcminute at 25cm, which gives some idea of the optics required.

In the table, I tried to use the smallest available pixel in a given technology that is being produced, with the exception of “micro-iLED,” for which I could not get solid information (thus the “?”). In the case of LCOS, the smallest field sequential color (FSC) pixel I know of is the 4.27µm one by my old company Syndiant, used in their new 1080p device. For the OLED, I used eMagin’s 9.3µm pixel, and for the DLP, their 5.4 micron pico pixel. I used the LCOS/smallest pixel as the baseline to give some relative comparisons.

One thing that jumps out in the table is the fairly large gap in pixel sizes between the microdisplays and the other technologies. For example, you can fit over 100 4.27µm LCOS pixels in the area of a single Samsung S8 OLED pixel, or 170 LCOS pixels in the area of the pixel used in the Oculus CV1. To be more extreme, you can fit over 5,500 LCOS pixels in a single 55-inch TV pixel.
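To sanity check these comparisons, here is a minimal Python sketch of the table math. The 4.27µm and 9.3µm pitches come from the discussion above; the Samsung S8, Oculus CV1, and 55-inch TV (assumed 4K) pitches are my rough estimates, so treat the output as approximate:

```python
import math

ARCMIN_PER_RAD = 60 * 180 / math.pi   # ~3437.75 arcminutes per radian
VIEW_DIST_MM = 250.0                  # standard near-vision distance (25cm)
BASE_PITCH_UM = 4.27                  # smallest volume-production LCOS pixel (baseline)

def pixel_stats(pitch_um):
    """Return (pitch ratio, area ratio, arcmin at 25cm, magnification to 1 arcmin)."""
    pitch_ratio = pitch_um / BASE_PITCH_UM
    area_ratio = pitch_ratio ** 2
    arcmin_at_25cm = (pitch_um * 1e-3 / VIEW_DIST_MM) * ARCMIN_PER_RAD
    mag_to_1_arcmin = 1.0 / arcmin_at_25cm
    return pitch_ratio, area_ratio, arcmin_at_25cm, mag_to_1_arcmin

# Rough pitches in µm: LCOS and OLED microdisplay from above, the rest are my estimates
for name, pitch in [("LCOS (Syndiant)", 4.27), ("OLED microdisplay", 9.3),
                    ("Samsung S8", 44.6), ("Oculus CV1", 55.7), ("55-inch 4K TV", 317.0)]:
    pr, ar, am, mag = pixel_stats(pitch)
    print(f"{name:18s} {pr:5.1f}x pitch  {ar:7.0f}x area  "
          f"{am:5.2f} arcmin @25cm  {mag:5.1f}x to reach 1 arcmin")
```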

Big Gap In Near Eye Displays (NEDs)

The main point of comparison for today is between the microdisplay pixels, which range from about 4.27µm to about 9.6µm in pitch, and the direct view OLED and LCD displays in the 40µm to 60µm range that have been adapted with optics to be used in VR headsets (NEDs). Roughly we are looking at one order of magnitude in pixel pitch and two orders of magnitude in area. Perhaps the most direct comparison is the microdisplay OLED pixel at 9.3 microns versus the Samsung S8: a 4.8X linear and about a 23X area difference.

So why is there this huge gap? It comes down to making the active matrix array circuitry that drives the technology. Microdisplays are made on semiconductor integrated circuits while direct view displays are made on glass and plastic substrates using comparatively huge and not very good transistors. The table below is based on one in a 2006 article by Mingxia Gu, then at Kent State University (it is a little out of date, but it lists the various transistor types used in display devices).

The difference in transistors largely explains the gap: microdisplays use transistors made in I.C. fabs, whereas direct view displays fabricate their larger and less conductive transistors on top of glass or plastic substrates at much lower temperatures.

Microdisplays

Within the world of I.C.’s, microdisplays use very old/large transistors, often using nearly obsolete semiconductor processes. This is both an effort to keep the cost down and a consequence of the fact that most display technologies need higher voltages than smaller transistor sizes would support.

There are both display physics and optical diffraction reasons which limit making microdisplay pixels much smaller than 4µm. Additionally, as the pixel size gets below about 6 microns, the optical cost of enlarging the pixel to be seen by the human eye starts to escalate, so headset optics makers want 6+ micron pixels, which are much more expensive to make. To a first order, microdisplay costs in volume are a function of the area of the display, so smaller pixels mean less expensive devices for the same resolution.

The problem for microdisplays is that even using old I.C. fabs, the cost per square millimeter is extremely high compared to TFT on glass/plastic, and yields drop as the size of the device grows, so doubling the pixel pitch could result in an 8X or more increase in cost. While it sounds good to be using old/depreciated I.C. fabs, it may also mean they do not have the best/newest/highest yielding equipment, or worse yet, the facilities get closed down as obsolete.

The net result is that microdisplays are nowhere near cost competitive with “re-purposed” cell phone technology for VR if you don’t care about size and weight. They are the only way to do a small, lightweight headset and really the only way to do AR/see-through displays (save the huge Meta 2 bug-eye bubble).

I hope to pick up this subject more in some future articles (as each display type could be a long article in and of itself). But for now, I want to get on to the VR systems with larger flat panels.

Direct View Displays Adapted for VR

Direct view VR headsets (ex. Oculus, HTC Vive, and Google Cardboard) have leveraged direct view display technologies developed for cell phones. They then put simple optics in front of the display so that people can focus the image when the display is put so near the eye.

The accepted standard for human “near vision” is 25cm/250mm/9.84-inches. This is about as close as a person can focus and is used for comparing effective magnification. With simple (single/few lens) optics you are not so much making the image bigger per se, but rather moving the display closer to the eye and then using the optics to enable the eye to focus. A typical headset uses a roughly 40mm focal length lens and then puts the display at the focal length or less (e.g. 40mm or less) from the lens. Putting the display at the focal length of the lens makes the image focus at infinity/far away.

Without getting into all the math (which can be found on the web), the result is that a 40mm focal length nets an angular magnification (relative to viewing at 25cm) of about 6X. So, for example, looking back at the table at the top, the Oculus pixel (similar in size to the HTC Vive), which would be about 0.77 arcminutes at 25cm, ends up appearing to cover about 4.7 arcminutes (which are VERY large/chunky pixels) and about a 95 degree FOV (this depends on how close the eye gets to the lens; for a great explanation of this subject and other optical issues with the Oculus CV1 and HTC Vive see this Doc-Ok.org article).
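Here is a minimal sketch of that calculation; the ~56µm pixel pitch is my estimate of an Oculus CV1 class pixel, and the simple-magnifier formula M ≈ 250mm / f is an idealization that ignores eye relief and lens distortion:

```python
import math

ARCMIN_PER_RAD = 60 * 180 / math.pi

def arcmin_at(pitch_mm, dist_mm=250.0):
    """Angle subtended by one pixel at the given viewing distance, in arcminutes."""
    return (pitch_mm / dist_mm) * ARCMIN_PER_RAD

pixel_pitch_mm = 0.0557           # ~Oculus CV1 class pixel pitch (my estimate)
focal_length_mm = 40.0            # typical VR headset lens

raw = arcmin_at(pixel_pitch_mm)               # pixel viewed directly at 25cm
magnification = 250.0 / focal_length_mm       # simple-magnifier angular magnification (~6.25X)
apparent = raw * magnification                # apparent pixel size through the headset lens

print(f"{raw:.2f} arcmin at 25cm -> x{magnification:.2f} -> {apparent:.1f} arcmin in the headset")
# ~0.77 arcmin -> x6.25 -> ~4.8 arcmin, in line with the ~4.7 arcminutes quoted above
```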

Improving VR Resolution  – Series of Roadblocks

For reference, 1 arcminute per pixel is considered near the limit of human vision, and most “good resolution” devices try to be under 2 arcminutes per pixel and preferably under 1.5. So let’s say we want to keep the ~95 degree FOV but improve the angular resolution by 3X linearly to about 1.5 arcminutes; we have several (bad) options:

  1. Get someone to make a pixel that is 3X smaller linearly or 9X smaller in area. But nobody makes a pixel this size that can support about 3,000 pixels on a side. A microdisplay (I.C. based) will cost a fortune (like over $10,000/eye, if it could be made at all), and nobody makes transistors that are cheap and compatible with displays that are small enough. But let’s for a second assume someone figures out a cost effective display; then you have the problem that you need optics that can support this resolution, and not the cheap low resolution optics with terrible chroma aberrations, god rays, and astigmatism that you can get away with at 4.7 arcminute pixels.
  2. Use say the Samsung S8 pixel size (a little smaller) and make two 3K by 3K displays (one for each eye). Each display will be about 134mm or about 5.26 inches on a side, and the width of the two displays plus the gap between them will end up at about 12 inches wide (see the sketch after this list). So think in terms of strapping a large iPad Pro in front of your face, only now it has to be about 100mm (~4 inches) in front of the optics (or about 2.5X as far away as on the current headsets). Hopefully you are starting to get the picture: this thing is going to be huge and unwieldy, and you will probably need shoulder bracing in addition to head straps. Not to mention that the displays will cost a small fortune along with the optics to go with them.
  3. Some combination of 1 and 2 above.
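As a rough check on the panel size in option 2 (the ~44.6µm pitch is my estimate of a Samsung S8 class pixel and the 1.5-inch gap between displays is an assumption):

```python
pixel_pitch_um = 44.6          # approx. Samsung S8 class pixel pitch (assumption)
pixels_per_side = 3000         # ~3K x 3K per eye for ~1.5 arcminutes over ~95 degrees

side_mm = pixels_per_side * pixel_pitch_um / 1000.0   # length of one display edge
side_in = side_mm / 25.4
total_width_in = 2 * side_in + 1.5                    # two panels plus an assumed 1.5-inch gap

print(f"Each display: {side_mm:.0f}mm ({side_in:.2f} inches) per side")
print(f"Two panels plus gap: ~{total_width_in:.0f} inches wide")   # ~134mm per side, ~12 inches total
```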
The Future Does Not Follow a Straight Path

I’m trying to outline above the top level issues (there are many more). Even if/when you solve the display cost/resolution problem, lurking behind that is a massive optical problem to sustain that resolution. These are the problems “straight line futurists” just don’t get; they assume everything will just keep improving at the same rate it has in the past, not realizing they are starting to bump up against some very non-linear problems.

When I hear about “Moore’s Law” being applied to displays I just roll my eyes and say that they obviously don’t understand Moore’s Law and the issues behind it (and why it kept slowing down over time). Back in November 2016, Oculus Chief Scientist Michael Abrash made some “bold predictions” that by 2021 we would have 4K (by 4K) per eye and a 140 degree FOV with 2 arcminutes per pixel. He upped my example above by 1.33X more pixels and upped the FOV by almost 1.5X, which introduces some serious optical challenges.

At times like this I like to point out the Super Sonic Transport or SST of the 1960’s. The SST seemed inevitable for passenger travel; after all, in less than 50 years passenger aircraft went from nothing to the jet age. Yet today, over 50 years later, passenger aircraft still fly at about the same speed. Oh, and by the way, in the 1960’s they were predicting that we would be vacationing on the moon by now and having regular flights to Mars (heck, we made it to the moon in less than 10 years). We certainly could have 4K by 4K displays per eye and a 140 degree FOV by 2021 in a head mounted display (it could be done today if you don’t care how big it is), but expect it to be more like the cost of flying supersonic and not a consumer product.

It is easy to play armchair futurist and assume “things will just happen because I want them to happen.” The vastly harder part is to figure out how it can happen. I lived through I.C. development from the late 1970’s through the mid 1990’s, so I “get” learning curves and rates of progress.

One More Thing – Micro-iLED

I included in the table at the top Micro Inorganic LEDs, also known as just Micro-LEDs (I’m using iLED to make it clear these are not OLEDs). They are getting a lot of attention lately, particularly after Apple bought LuxVue and Oculus bought InfiniLED. These essentially use very small “normal/conventional” LEDs that are mounted (essentially printed) on a substrate. The fundamental issue is that red requires a very different crystal from blue and green (and even they have different levels of impurities). So they have to make individual LEDs and then combine them (or maybe someday grow the dissimilar crystals on the common substrate).

The allure is that iLEDs have some optical properties that are superior to OLEDs. They have a tighter color spectrum, are more power efficient, can be driven much brighter, have fewer issues with burn-in, and in some cases have less diffuse (better collimated) light.

These Micro-iLEDs are being used in two ways: to make very large displays by companies such as Sony, Samsung, and NanoLumens, or supposedly very small displays (LuxVue and InfiniLED). I understand how the big display approach works; there is lots of room for the LEDs, and these displays are very expensive per pixel.

With the small display approach, they seem to have the double issue of being able to cut very small LEDs and then effectively “print” the LEDs on a TFT substrate similar to, say, OLEDs. What I don’t understand is how these are supposed to be smaller than, say, OLEDs, which would seem to be at least as easy to make on similar TFT or similar transistor substrates. They don’t seem to “fit” in near eye, but maybe there is something I am missing at this point in time.

Near-Eye Bird Bath Optics Pros and Cons – And IMMY’s Different Approach

Why Birdbath Optics? Because the Alternative (Waveguides) Must Be Worse (and a teaser)

The idea for this article started when I was looking at the ODG R-9 optical design with OLED microdisplays. They combined an OLED microdisplay that is not very bright in terms of nits with a well known “birdbath” optical design that has very poor light throughput. It seems like a horrible combination. I’m fond of saying, “when intelligent people choose a horrible design, the alternative must have seemed worse.”

I’m going to “beat up” so to speak the birdbath design by showing how some fundamental light throughput numbers multiply out and why the ODG R-9 I measured at CES blocks so much of the real world light. The R-9 also has a serious issue with reflections. This is the same design that a number of publications considered among the “best innovations” of CES; it seems to me that they must have only looked at the display superficially.

Flat waveguides such as those used by Hololens, Vuzix, Wave Optics, and Lumus, as well as those expected from Magic Leap, get most of the attention, but I see a much larger number of designs using what is known as a “birdbath” and similar optical designs. Waveguides are no secret these days, and the fact that so many designs still use birdbath optics tells you a lot about the issues with waveguides. Toward the end of this article, I’m going to talk a little about the IMMY design that replaces part of the birdbath design.

As a teaser, this article is to help prepare for an article on an interesting new headset I will be writing about next week.

Birdbath Optics (So Common It Has a Name)

The birdbath combines two main optical components, a spherical mirror/combiner (part-mirror) and a beam splitter. The name “birdbath” comes from the spherical mirror/combiner looking like a typical birdbath. It is used because it is generally comparatively inexpensive to downright cheap, while also being relatively small/compact and having good overall image quality. The design fundamentally supports a very wide FOV, which is at best difficult to support with waveguides. The big downsides are light throughput and reflections.

A few words about Nits (Cd/m²) and Micro-OLEDs

I don’t have time here to get into a detailed explanation of nits (Cd/m²). Nits measure the light in a given angle, whereas lumens measure the total light output. The simplest analogy is to a water hose with a nozzle (apropos here since we are talking about birdbaths). Consider two spray patterns, one with a tight jet of water and one with a wide fan pattern, both outputting the exact same total amount of water per minute (lumens in this analogy). The one with the tight pattern would have high water pressure (nits in this analogy) over a narrow angle, whereas the fan spray would have lower water pressure (nits) over a wider angle.

Additionally, it would be relatively easy to put something in the way of the tight jet and turn it into a fan spray, but there is no way to turn the fan spray into a jet. This applies to light as well: it is much easier to go from high nits over a narrow angle to lower nits over a wide angle (say with a diffuser), but you can’t go the other way easily.

Light from an OLED is like the fan spray, only it covers a 180 degree hemisphere. This can be good for a large flat panel where you want a wide viewing angle, but it is a problem for a near eye display where you want to funnel all the light into the eye, because so much of the light will miss the pupil of the eye and be wasted. With an LED you have a relatively small point of light that can be funneled/collimated into a tight “jet” of light to illuminate an LCOS or DLP microdisplay.

The combination of the light output from LEDs and the ability to collimate that light means you can easily get tens of thousands of nits with an LCOS or DLP illuminated microdisplay, whereas OLED microdisplays typically only have 200 to 300 nits. This is a major reason why most see-through near eye displays use LCOS and DLP over OLEDs.

Basic Non-Polarizing Birdbath (example, ODG R-9)

The birdbath has two main optical components, a flat beam splitter and a spherical mirror. In the case of a see-through design, the spherical mirror is a partial mirror, so the spherical element acts as a combiner. The figure below is taken from an Osterhout Design Group (ODG) patent and shows a simple birdbath using an OLED microdisplay such as in their ODG R-9. Depending on various design requirements, the curvature of the mirror, and the distances, the lenses 16920 in the figure may not be necessary.

The light from the display device, in the case of the ODG R-9 an OLED microdisplay, is first reflected away from the eye so that it hits the curved combiner perpendicularly (on-axis), which lets a simple spherical combiner uniformly magnify and move the apparent focus point of the image (if not “on-axis,” the image will be distorted and the magnification will vary across the image). The curved combiner (partial mirror) has minimal optical distortion on light passing through it.

Light Losses (Multiplication is a Killer)

A big downside to the birdbath design is the loss of light. The image light must make two passes at the beam splitter, one reflective and one transmissive, with reflective (Br) and transmissive (Bt) percentages of light. The light making it through both passes is Br x Bt. A 50/50 beam splitter might be about 48% reflective and 48% transmissive (with say a 4% combined loss), and the light throughput (Br x Bt) in this example is only 48% x 48% = ~23%. And the “50/50” ratio is the best case; if we assume a nominally 80/20 beam splitter (with still 4% total loss), we get 78% x 18% = ~14% of the light making it through the two passes.

Next we have the light loss of the spherical combiner. This is a trade-off between image light being reflected (Cr) versus real world light being transmitted (Ct), where Cr + Ct is less than 1 due to losses. Generally you want the Cr to be low so the Ct can be high so you can see out (otherwise it is not much of a see-through display).

So let’s say the combiner has Cr=11% and Ct=75% with about 4% loss, along with the 50/50 beamsplitter. The net light throughput assuming a “50/50” beam splitter and a 75% transmissive combiner is Br x Cr x Bt = ~2.5% !!! These multiplicative losses lose all but a small percentage of the display’s light. And consider that the “real world” net light throughput is Ct x Bt, which would be 75% x 48% = 36%, which is not great and would be too dark for indoor use.

Now let’s say you want the glasses to be at least 80% transmissive so they would be considered usable indoors. You might have the combiner Ct=90%, making Cr=6% (with 4% loss), and then Bt=90%, making Br=6%. This gives a real world transmission of about 90% x 90% = 81%. But then you go back and realize the display light equation (Br x Cr x Bt) becomes 6% x 6% x 90% = 0.3%. Yes, only about 3/1000ths of the starting image light makes it through.
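The multiplication above is easy to capture in a small sketch; the percentages below are the illustrative numbers from this article, not measured values for any particular product:

```python
def birdbath_throughput(Br, Bt, Cr, Ct):
    """Return (display path throughput, real world see-through) for a birdbath.

    Br/Bt: beam splitter reflectance/transmittance (the image light hits it twice).
    Cr/Ct: curved combiner reflectance (image light) / transmittance (real world).
    """
    display_path = Br * Cr * Bt   # reflect off splitter, reflect off combiner, back through splitter
    see_through = Ct * Bt         # real world light: through the combiner, then once through the splitter
    return display_path, see_through

# "50/50" splitter (48%/48% with ~4% loss) and a modestly reflective combiner
print(birdbath_throughput(Br=0.48, Bt=0.48, Cr=0.11, Ct=0.75))  # ~(0.025, 0.36): ~2.5% display, ~36% see-through

# Tuned for ~80%+ see-through glasses
print(birdbath_throughput(Br=0.06, Bt=0.90, Cr=0.06, Ct=0.90))  # ~(0.003, 0.81): ~0.3% display, ~81% see-through
```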

Why the ODG R-9 Is Only About 4% to 5% “See-Through”

OK, now back to the specific case of the ODG R-9. The ODG R-9 has an OLED microdisplay that most likely has about 250 nits (200 to 250 nits is commonly available today), and they need to get about 50 nits (roughly) to the eye from the display to have a decent image brightness indoors in a dark room (or one where most of the real world light is blocked). This means they need a total throughput of 50/250 = 20%. The best you can do with two passes through a beam splitter (see above) is about 23%. This forces the spherical combiner to be highly reflective with little transmission. You need something that reflects 20/23 = ~87% of the light and is only about 9% transmissive. The real world light making it through to the eye is then about 9% x 48% (Ct x Bt), or about 4.3%.

There are some other effects, such as the amount of total magnification, and I don’t know exactly what their OLED display is outputting or the exact nits at the eyepiece, but I believe my numbers are in the ballpark. My camera estimates for the ODG R-9 came in at between 4% and 5%. When you are blocking about 95% of the real world light, are you really much of a “see-through” display?
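Working the same arithmetic backwards from a brightness target reproduces those numbers; the 250 and 50 nit figures are the rough assumptions stated above, not ODG’s specifications:

```python
display_nits = 250.0      # assumed OLED microdisplay output
target_nits = 50.0        # rough brightness needed at the eye indoors
Br = Bt = 0.48            # best case "50/50" beam splitter with ~4% total loss

needed_total = target_nits / display_nits   # 20% total throughput required
Cr = needed_total / (Br * Bt)               # combiner reflectance forced to ~87%
Ct = 1.0 - Cr - 0.04                        # what is left for see-through, assuming ~4% combiner loss
see_through = Ct * Bt                       # real world light reaching the eye

print(f"Required combiner reflectance: {Cr:.0%}, leaving Ct of about {Ct:.0%}")
print(f"Real world see-through: {see_through:.1%}")   # ~4.4%, in line with my 4% to 5% camera estimates
```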

Note, all this is BEFORE you consider adding say optical shutters or something like Varilux® light blocking. Normally the birdbath design is used with non-see through designs (where you don’t have the see-through losses) or with DLP® or LCOS devices illuminated with much higher nits (can be in the 10’s of thousands) for see through designs so they can afford the high losses of light.

Seeing Double

There are also issues with getting a double image off of each face of the plate beam splitter and other reflections. Depending on the quality of each face, a percentage of light is going to reflect or pass through where you don’t want it. This light will be slightly displaced based on the thickness of the beamsplitter. And because the light makes two passes, there are two opportunities to cause double images. Any light that is reasonably “in focus” is going to show up as a ghost/double image (for good or evil, your eye has a wide dynamic range and can see even faint ghost images). Below is a picture I took with my iPhone camera of a white and clear menu through the ODG R-9. I counted at least 4 ghost images (see colored arrows).

As a sort of reference, you can see the double image effect of the beamsplitter going in the opposite direction to the image light with my badge and the word “Media” and its ghost (in the red oval).

Alternative Birdbath Using Polarized Light (Google Glass)

Google Glass used a different variation of the birdbath design. They were willing to accept a much smaller field of view and thus could reasonably embed the optics in glass. It is interesting here to compare and contrast this design with the ODG one above.

First, they started with an LCOS microdisplay illuminated by LEDs, which can be very much brighter and provide more collimated light, resulting in much higher (it can be orders of magnitude) starting nits than an OLED microdisplay can output. The LED light is passed through a polarizing beam splitter that will pass about 45% P light to the LCOS device (245). Note that a polarizing beam splitter passes one polarization and reflects the other, unlike the partially reflecting beam splitter in the ODG design above. The LCOS panel will rotate the light that is to be seen to S polarization, so that the beam splitter will reflect about 98% (with say a 2% loss) of the S light.

The light then goes to a second polarizing beam splitter that is also acting as the “combiner” that the user sees the real world through. This beam splitter is set up to pass about 90% of the S light and reflect about 98% of the P light (they are usually much better/more efficient in reflection). You should notice that they have a λ/4 (quarter wave) film between the beam splitter and the spherical mirror, which will rotate the light’s polarization by 90 degrees (turning it from S to P) after it passes through the film twice. This λ/4 “trick” is commonly used with polarized light. And since you don’t have to look through the mirror, it can be say 98% reflective, with say another 3% loss for the λ/4.

With this design, about 45% (one pass through the beamsplitter) of the real world light makes it through, but only light polarized the “right way” makes it through, which makes looking at, say, LCD monitors problematical. By using the quarter wave film the design is pretty efficient AFTER you lose about 55% of the LED light in polarizing it initially. There are also fewer reflection issues because all the films and optics are embedded in glass, so you don’t get the air-to-glass index mismatches off the two surfaces of a relatively thick plate that cause unwanted reflections/double images.
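Chaining the rough percentages above gives a feel for why this path is relatively efficient once the light is polarized. These are the approximate numbers quoted in this article (the LCOS panel’s own reflectance and other small losses are ignored):

```python
# Display path: LED -> PBS1 (pass P) -> LCOS (rotates to S) -> PBS1 (reflect S)
#   -> PBS2 (pass S) -> quarter-wave film + curved mirror + quarter-wave film (S becomes P)
#   -> PBS2 (reflect P) -> eye
pbs1_pass_P    = 0.45   # initial polarization throws away ~55% of the unpolarized LED light
pbs1_reflect_S = 0.98
pbs2_pass_S    = 0.90
mirror_reflect = 0.98
quarter_wave   = 0.97   # ~3% loss for the two passes through the quarter-wave film
pbs2_reflect_P = 0.98

display_path = (pbs1_pass_P * pbs1_reflect_S * pbs2_pass_S *
                mirror_reflect * quarter_wave * pbs2_reflect_P)
see_through = 0.45      # one pass of (correctly polarized) real world light through PBS2

print(f"Display path: ~{display_path:.0%} of the LED light")   # ~37%
print(f"Real world see-through: ~{see_through:.0%}")           # ~45%
```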

The Google Glass design has a lot of downsides too. There is nothing you can do to get the light throughput of the real world much above 45%, and there are always the problems of looking through a polarizer. But the biggest downside is that it cannot be scaled up for larger fields of view and/or more eye relief. As you scale this design up, the block of glass becomes large, heavy, and expensive, as well as very intrusive/distorting to look through.

Without getting too sidetracked, Lumus in effect takes the one thick beam splitter and piece-wise cuts it into multiple smaller beam splitters to make the glass thinner. But this also means you can’t use the spherical mirror of a birdbath design with it, so you require optics before the beam splitting, and the light losses of the piece-wise beam splitting are much larger than with a single beamsplitter.

Larger Designs

An alternative design would mix the polarizing beamsplitters of the Google Glass design with the configuration of the ODG design above. And this has been done many times through the years with LCOS panels that use polarized light (an example can be found in this 2003 paper). The spherical mirror/combiner will be a partial non-polarizing mirror so you can see through it, and a quarter waveplate is used between the spherical combiner and the polarizing beam splitter. You are then stuck with about 45% of the real world light times the light throughput of the spherical combiner.

A DLP with a “birdbath” would typically use the non-polarizing beam splitter with a design similar to the ODG R-9, but replacing the OLED microdisplay with a DLP and illumination. As an example, Magic Leap did this with a DLP but added a variable focus lens to support focus planes.

BTW, by the time you polarized the light from an OLED or DLP microdisplay, there would not be much if any efficiency advantage to using polarizing beamsplitters. Additionally, the light from the OLED is so diffused (varied in angles) that it would likely not behave well going through the beam splitters.

IMMY – Eliminating the Beamsplitter

The biggest light efficiency killer in the birdbath design is the combined reflective/transmissive passes through the beamsplitter. IMMY effectively replaces the beamsplitter of the birdbath design with two small curved mirrors that correct for the image being reflected off-axis from the larger curved combiner. I have not yet seen how well this design works in practice, but at least the numbers would appear to work better. One can expect only a few percentage points of light being lost off of each of the two small mirrors, so that maybe 95% of the light from the OLED display makes it to the large combiner. Then you have the combiner reflection percentage (Cr) multiplying by about 95% rather than the roughly 23% of the birdbath beam splitter.

The real world light also benefits as it only has to go through a single combiner transmissive loss (Ct) and no beamsplitter (Bt) losses. Taking the ODG R-9 example above and assuming we started with a 250 nit OLED and wanted 50 nits at the eye, we could get there with about a 75% transmissive combiner. The numbers are at least starting to get into the ballpark where improvements in OLED microdisplays could fit, at least for indoor use (outdoor designs without sunshading/shutters need on the order of 3,000 to 4,000 nits).
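Running the same style of numbers for the IMMY approach; the 95% figure for the two small mirrors and the 250/50 nit targets are the assumptions stated above:

```python
display_nits = 250.0
target_nits = 50.0
two_mirror_throughput = 0.95    # a few percent lost off each of the two small curved mirrors

# Solve for the combiner reflectance needed to hit the brightness target
Cr = target_nits / (display_nits * two_mirror_throughput)   # ~21%
Ct = 1.0 - Cr - 0.04                                        # ~75% see-through, assuming ~4% loss

print(f"Combiner reflectance needed: ~{Cr:.0%}, see-through: ~{Ct:.0%}")
# Versus the birdbath above, which needed an ~87% reflective combiner and ended up only ~4-5% see-through
```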

It should be noted that IMMY says they also have a “Variable transmission outer lens with segmented addressability” to support outdoor use and variable occlusion. Once again, this is their claim; I have not yet tried it out in practice, so I don’t know the issues/limitations. My use of IMMY here is to contrast it with the classical birdbath designs above.

A possible downside to the IMMY multi-mirror design is bulk/size, as seen below. Also notice the two adjustment wheels for each eye. One is for interpupillary distance, to make sure the optics line up centered with the pupils, which varies from person to person. The other knob is a diopter (focus) adjustment, which also suggests you can’t wear these over your normal glasses.

As I have said, I have not seen IMMY’s headset in person to see how well it works and what faults it might have (nothing is perfect), so this is in no way an endorsement of their design. The design is so straightforward and such a seemingly obvious solution to the beam splitter loss problem that it makes me wonder why nobody has been using it earlier; usually in these cases, there is a big flaw that is not so obvious.

See-Through AR Is Tough, Particularly for OLED

As one person told me at CES, “Making a near eye display see-through generally more than doubles the cost,” to which I would add, “it also has serious adverse effects on the image quality.”

The birdbath design wastes a lot of light, as does every other see-through design. Waveguide designs can be equally or more light wasteful than the birdbath. At least on paper, the IMMY design would appear to waste less than most others. But to make a device say 90% see-through, at best you start by throwing away over 90% of the image light/nits generated, and often more than 95%.

The most common solution today is to start with an LED illuminated LCOS or DLP microdisplay so you have a lot of nits to throw at the problem and just accept the light waste. OLEDs are still orders of magnitude away in brightness/nits from being able to compete with LCOS and DLP by brute force.

 

CES 2017 AR, What Problem Are They Trying To Solve?

Introduction

First off, this post is a few weeks late. I got sick on returning from CES and then got busy with some other pressing activities.

At left is a picture that caught me next to the Lumus Maximus demo at CES, from Imagineality’s “CES 2017: Top 6 AR Tech Innovations“. Unfortunately, they missed that in the Lumus booth at about the same time were people from Magic Leap and Microsoft’s Hololens (it turned out we all knew each other from prior associations).

Among Imagineality’s top 6 “AR Innovations” were ODG’s R-8/R-9 Glasses (#1) and Lumus’s Maximus 55 degree FOV waveguide (#3). From what I heard at CES and saw in the writeups, ODG and Lumus did garner a lot of attention. But by necessity, these types of lists are pretty shallow in their evaluations; what I try to do on this blog is go a bit deeper into the technology and how it applies to the market.

Among the near eye display companies I looked at during CES were Lumus, ODG, Vuzix, Real Wear, Kopin, Wave Optics, Syndiant, Cremotech, QD Laser, and Blaze (a division of eMagin), plus several companies I met with privately. As interesting to me as their technologies were their different takes on the market.

For this article, I am mostly going to focus on the industrial / enterprise market. This is where most of the AR products are shipping today. In future articles, I plan to go into other markets and more of a deep dive on the technology.

What Is the Problem They Are Trying to Solve?

I have had a number of people ask me what was the best or most interesting AR thing I saw at CES 2017, and I realized that this is at best an incomplete question. You first need to ask, “What problem are they trying to solve?” which leads to “how well does it solve that problem?” and “how big is that market?”

One big takeaway I had at CES, having talked to a number of different companies, is that the various headset designs were, intentionally or not, often aimed at very different applications and use cases. It’s pretty hard to compare a headset that almost totally blocks a user’s forward view but has a high resolution display to one that is a lightweight information device that is highly see-through but with a low resolution image.

Key Characteristics

AR means a lot of different things to different people. In talking to a number of companies, you find they are worried about different issues. Broadly you can separate them into two classes:

  1. Mixed Reality – ex. Hololens
  2. Informational / “Data Snacking” – ex. Google Glass

Most of the companies were focused on industrial / enterprise / business uses, at least for the near future, and in this market the issues include:

  1. Cost
  2. Resolution/Contrast/Image Quality
  3. Weight/Comfort
  4. See-through and/or look over
  5. Peripheral vision blocking
  6. Field of view (small)
  7. Battery life per charge

For all the talk about mixed reality (ala Hololens and Magic Leap), most of the companies selling product today are focused on helping people “do a job.” This is where they see the biggest market for AR today. It will be “boring” to the people wanting the “world of the future” mixed reality being promised by Hololens and Magic Leap.

You have to step back and look at the market these companies are trying to serve. There are people working on a factory floor or maybe driving a truck where it would be dangerous to obscure a person’s vision of the real world. They want 85% or more transparency, very lightweight and highly comfortable so it can be worn for 8 hours straight, and almost no blocking of peripheral vision. If they want to fan out to a large market, they have to be cost effective which generally means they have to cost less than $1,000.

To meet the market requirements, they sacrifice field of view and image quality. In fact, they often want a narrow FOV so it does not interfere with the user’s normal vision. They are not trying to watch movies or play video games; they are trying to give the necessary information to a person doing a job and then get out of the way.

Looking In Different Places For the Information

I am often a hard audience. I’m not interested in the marketing spiel; I’m looking for the target market/application, the facts and figures, and how it is being done. I want to measure things when the demos in the booths are all about trying to dazzle the audience.

As a case in point, let’s take ODG’s R-9 headset, most people were impressed with the image quality from ODG’s optics with a 1080p OLED display, which was reasonably good (they still had some serious image problems caused by their optics that I will get into in future articles).

But what struck me was how dark the see-through/real world was when viewed in the demos. From what I could calculate, they are blocking about 95% of the real world light in the demos. They also are too heavy and block too much of a person’s vision compared to other products; in short they are at best going after a totally different market.

Industrial Market

Vuzix is representative of the companies focused on industrial / enterprise applications. They are using waveguides with about 87% transparency (although they often tint them or use photochromic light-sensitive tinting). They also locate the image toward the outside of the user’s view so that it stays out of the way even when an image is displayed (note in the image below-right that the exit port of the waveguide is on the outside and not in the center as it would be on, say, a Hololens).

The images at right were captured from a Robert Scoble interview with Paul Travers, CEO of Vuzix. BTW, the first ten minutes of the video are relatively interesting on how Vuzix waveguides work but after that there is a bunch of what I consider silly future talk and flights of fancy that I would take issue with. This video shows the “raw waveguides” and how they work.

Another approach to this category is Realwear. They have a “look-over” display that is not see-through, but their whole design is made to not block the rest of the user’s forward vision. The display is on a hinge so it can be totally swung out of the way when not in use.

Conclusion

What drew the attention of most of the media coverage of AR at CES was how “sexy” the technology was, and this usually meant FOV, resolution, and image quality. But the companies that were actually selling products were more focused on their users’ needs, which often don’t line up with what gets the most press and awards.

 

ODG R-8 and R-9 Optics with OLED Microdisplays (Likely Sony’s)

ODG Announces R-8 and R-9 OLED Microdisplay Headsets at CES

It was not exactly a secret, but Osterhout Design Group (ODG) formally announced their new R-8 headset with dual 720p displays (one per eye) and their R-9 headset with dual 1080p displays. According to their news release, “R-9 will be priced around $1,799 with initial shipping targeted 2Q17, while R-8 will be less than $1,000 with developer units shipping 2H17.”

Both devices use OLED microdisplays but with different resolutions (the R-9 has twice the pixels). The R-8 has a 40 degree field of view (FOV), which is similar to Microsoft’s Hololens, and the R-9 has about a 50 degree FOV.

The R-8 appears to be marketed more toward “consumer” uses with its lower price point and lack of an expansion port, while ODG is targeting the R-9 at more industrial uses with modular expansion. Among the expansion modules that ODG has discussed are various cameras and better real world tracking modules.

ODG R-7 Beam Splitter Kicks Image Toward Eye

With the announcement came much better pictures of the headsets, and I immediately noticed that their optics were significantly different than I previously thought. Most importantly, I noticed in an ODG R-8 picture that the beam splitter is angled to kick the light away from the eye, whereas the prior ODG R-7 had a simple beam splitter that kicks the image toward the eye (see below).

ODG R-8 and R-9 Beam Splitter Kicks Image Away From Eye and Into A Curved Mirror

The ODG R-8 (and the R-9, but it is harder to see in the available R-9 pictures) does not have a simple beam splitter but rather a beam splitter and curved mirror combination. The side view below (with my overlays of the outline of the optics, including some that are not visible) shows that the beam splitter kicks the light away from the eye and toward a partial curved mirror that acts as a “combiner.” This curved mirror will magnify and move the virtual focus point and then reflects the light back through the beam splitter to the eye.

On the left I have taken Figure 169 from ODG’s US Patent 9,494,800. Light from the “emissive display” (ala OLED) passes through two lenses before being reflected into the partial mirror. The combination of the lenses and the mirror act to adjust the size and virtual focus point of the displayed image. In the picture of the ODG R-8 above I have taken the optics from Figure 169 and overlaid them (in red).

According to the patent specification, this configuration “form(s) at wide field of view” while “The optics are folded to make the optics assembly more compact.”

At left I have cropped the image and removed the overlay so you can see the details of the beam splitter and curved mirror joint.  You hopefully can see the seam where the beam splitter appears to be glued to the curved mirror suggesting the interior between the curved mirror and beam splitter is hollow. Additionally there is a protective cover/light shade over the outside of the curved mirror with a small gap between them.

The combined splitter/mirror is hollow to save weight and cost. It is glued together to keep dust out.

ODG R-6 Used A Similar Splitter/Mirror

I could not find a picture of the R-8 or R-9 from the inside, but I did find a picture on the “hey Holo” blog that shows the inside of the R-6 that appears to use the same optical configuration as the R-8/R-9. The R-6 introduced in 2014 had dual 720p displays (one per eye) and was priced at $4,946 or about 5X the price of the R-8 with the same resolution and similar optical design.  Quite a price drop in just 2 years.

ODG R-6, R-8, and R-9 Likely Use Sony OLED Microdisplays

Interestingly, I could not find anywhere where ODG says what display technology they used in the 2014 R-6, but the most likely device is the Sony ECX332A 720p OLED microdisplay that Sony introduced in 2011. Following this trend, it is likely that the ODG R-9 uses the newer Sony ECX335 1080p OLED microdisplay and the R-8 uses the ECX332A or a follow-on version. I don’t know of any other company that has both 720p and 1080p OLED microdisplays, and the timing of the Sony and ODG products seems to fit. It is also very convenient for ODG that both panels are the same size and could use the same or very similar optics.

Sony had a 9.6 micron pixel on a 1024 by 768 OLED microdisplay back in 2011, so for Sony the pixel pitch has gone from 9.6 microns in 2011 to 8.2 microns on the 1080p device. This is among the smallest OLED microdisplay pixel pitches I have seen, but it is still more than 2X linearly and 4X in area bigger than the smallest LCOS (several companies have LCOS pixel pitches in the 4 micron or less range).

It appears that ODG used an OLED microdisplay for the R-6, then switched (likely for cost reasons) to LCOS and a simple beam splitter for the R-7, and then went back to OLEDs and the splitter/mirror optics for the R-8 and R-9.

Splitter/Combiner Is an Old Optic Trick

This “trick” of mixing lenses with a spherical combiner partial mirror is an old one. It often turns out that mixing refractive optics (lenses) with mirror optics can lead to a more compact and less expensive design.

I have seen a beam splitter/mirror combination used many times. The ODG design is a little different in that the beam splitter is sealed/mated to the curved mirror, which, with the pictures available earlier, made it hard to see. Likely as not this has been done before too.

This configuration of beam splitter and curved mirror even showed up in Magic Leap applications, such as Fig. 9 from 2015/0346495 shown at right. I think this is the optical configuration that Magic Leap used with some of their prototypes, including the one seen by “The Information.”

Conclusion/Trends – Turning the Crank

The ODG optical design, while it may seem a bit more complex than a simple beam splitter, is actually probably simpler/easier to make than doing everything with lenses before the beam splitter. Likely they went to this technique to support a wider FOV.

Based on my experience, I would expect that the ODG optical design will be cleaner/better than the waveguide designs of Microsoft’s Hololens. The use of OLED microdisplays should give ODG superior contrast, which will further improve the perceived sharpness of the image. While not as apparent to the casual observer, as I have discussed previously, OLEDs won’t work with the diffractive/holographic waveguides such as Hololens and Magic Leap are using.

What is also interesting is that in terms of resolution and basic optics, the R-8 with 720p is about 1/5th the price of the military/industrial grade 720p R-6 of about 2 years ago. While the R-9, in addition to having a 1080p display, has some modular expansion capability, one would expect a follow-on product with 1080p, a larger FOV, and more sensors in the price range of the R-8 in the not too distant future, perhaps with integration of the features from one or more of the R-9’s add-on modules; this, as we say in the electronics industry, “is just a matter of turning the crank.”

ODG R-9: A Peek Behind the Video Curtain

Introduction

With all the hype about Hololens and Magic Leap (ML), Osterhout Design Group (ODG) often gets overlooked. ODG has not spent as much (but is still spending 10’s of millions). ODG has many more years working in the field, albeit primarily in the military/industrial market.

I don’t know about all the tracking, image generation, wireless, and other features, but ODG should have the best image quality of the three (ODG, Hololens, and ML).  Their image quality was reasonably well demonstrated in a short “through the optics” video ODG made (above and below are a couple crops from frames of that video). While you can only tell so much from a YouTube video (which limits the image quality), they are not afraid to show reasonably small text and large white areas (both of which would show up problems with lesser quality displays).

Update 2016-12-26: A reader, “Paul,” wrote that he has seen the “cars and ball” demo live, and that while the display was locked down, the cubes were movable in the demo. Paul did not know where the computing was done, and it could have been done on a separate computer. So it is possible that I got the dividing line between what was “real” and what was preplanned a bit off. I certainly don’t think that they detected that there was a clear and a black cube, and much of the demo had to have been pre-planned/staged. Certainly it is not a demonstration of what would happen if you were wearing the headset.

Drawn To Contradictions

As I wrote last time, I’m not a fan of marketing hyperbole, and I think calling their 1080p per eye a “4K experience” is at best deliberately confusing. I also had a problem with what independent reporter Jame Mackie said about the section of the video starting at 2:29 with the cars and balls in it, linked to here. What I was seeing was not what he was describing.

The sequence starts with a title slide saying, “Shot through ODG smart-glasses with an iPhone 6,” which I think is true as far as what is written. But the commentary by Jame Mackie was inaccurate and misleading:

So now for a real look at how the Holograms appear, as you can see the spatial and geometric tracking is very good. What really strikes me is the accuracy and positioning.  Look how these real life objects {referring to the blocks} sit so effortlessly with the Holograms

I don’t know what ODG told the reporter or if he just made it up, but at best the description is very misleading. I don’t believe there is any tracking being done and all the image rendering  was generated off-line.

What Real Virtual Reality Looks Like

Before getting into detail on the “fake” part of the video, it is instructive to look at a “real” clip. In another part of the video there is a sequence showing replacing the tape in a label maker (starting at 3:25).

In this case, they hand-held the camera rig with the glasses. In the first picture below you can see on the phone that they are inserting a virtual object, circled in green on the phone, and missing in the “real world.”

As the handheld rig moves around, the virtual elements move and track with the camera movement reasonably well. There is every indication that what you are seeing is what they can actually do with tracking and image generation. The virtual elements in three clips from the video are circled in green below.

The virtual elements in the real demonstration are simple, with no lighting effects or reflections off the table. Jame Mackie in the video talks as if he actually tried this demonstration rather than just describing what he thinks the video shows.

First Clue – Camera Locked Down

The first clue that the cars and balls video was a setup/staged video is that the camera/headset never moves. If the tracking and everything was so good, why not prove it by moving the rig with the headset and camera?

Locking the camera down makes it vastly easier to match up pre-recorded/drawn material. As soon as you see the camera locked down with a headset, you should be suspicious of whether some or all of the video has been faked.

Second Clue – Black Cube Highlights Disappeared

Take a look at the black cube below showing the camera rig setup and particularly the two edges of the black cube inside the orange ovals I added. Notice the highlight on the bottom half of each edge and how it looks like the front edge of the clear plastic cube. It looks to me like the black cube was made from a clear cube with the inside colored black. 

Now look at the crop at left from the first frames showing the through-the-iPhone-and-optics view. The highlight on the clear cube is still there, but strangely the highlights on the black cube have disappeared. Either they switched out the cube or the highlights were taken out in post processing. It is hard to tell because the lighting is so dim.

Third Clue – Looks Too Good – Can’t Be Real Time

2016-12-16 Update: After thinking about it some more, the rendering might be in real time. They probably knew there would be a clear and a black box and rendered accordingly, with simpler rendering than ray tracing. Unknown is whether the headset or another computer did the rendering.

According to comments by “Paul,” he has seen the system running. The headset was locked down, which is a clue that there is some “cheating” going on, but he said the blocks were not in a fixed location.

Looking “too good” is a big giveaway. The cars in the video with all their reflections were clearly using much more complex ray-tracing that was computed off-line. Look at all the reflections of the cars at left. There are cars reflecting off both the table and the clear cube, and the flashing light on the police car also acts like a light source in the way it reflects off the cube.

4th Clue: How Did The Headset Know The Cube Was Clear?

One of the first things that I noticed was the clear cube. How are the cameras and sensors going to know it is clear and how it will reflect/refract light? That would be a lot of expensive sensing and processing to figure this out just to deal with this case.

5th Clue: Black Cube Misaligned

On the right is a crop from a frame where the reflection of the car is wrong. From prior frames, I have outlined the black cube with red lines. But the yellow car is visible when it should be hidden by the black cube. There is also a reflection in the side of the cube around where the rendered image is expecting the black cube to be (the orange line shows the reflection point).

How It Was Done

2016-12-26 Updates (in blue): Based on the available evidence, the video uses some amount of misdirection. The video was pre-rendered using a ray tracing computer model, with a clear cube and a perfect shiny black cube on a shiny black table being modeled. They knew that a clear and a black cube would be in the scene and locked down the camera. They may have used the sensors to detect where the blocks were to know how to render the image.

They either didn’t have the sensing and tracking ability or the rendering ability to allow the camera to move.

Likely the grids you see in the video are NOT the headset detecting the scene but exactly the opposite; they are guides to the person setting up the “live” shot as to where to place the real cubes to match where they were in the model. They got the black cube in slightly the wrong place.

The final video was shot through the optics, but the cars and balls were running around the clear and black cubes, which were assumed to be in place when the video was rendered. No tracking, surface detection, or complex rendering was required, just the ability to play back a pre-recorded video.

Comments

I’m not trying to pick on ODG. Their hype is so far less than what I have seen from Hololens and Magic Leap. I don’t mind companies “simulating” what images will look like provided they indicate they are simulated effects. I certainly understand that through-the-optics videos and pictures will not look as good as simulated images. But when they jump back and forth between real and simulated effects and other tricks, you start to wonder what is “real.”

ODG R-9 (Horizon): 1080p Per Eye, Yes Really

Lazy Reporting – The Marketing Hyperbole’s Friend

While I have not seen ODG’s R-9 in person yet, I fully expect that it will look a lot better than Microsoft’s Hololens. I even think it will look better in terms of image quality than what I think ML is working on. But that is not the key point of this article.

But there is also a layer of marketing hyperbole and misreporting going on that I wanted to clear up. I’m just playing referee here and calling them like I see them.

ODG 4K “Experience” with 2K (1080p) Per Eye


2016-12-28 Update – It appears I was a bit behind on the marketing hype vernacular being used in VR. Most VR displays today, such as Oculus, take a single flat panel and split it between two eyes. So each eye sees less than half (some pixels are cut off) of the pixels. Since bigger is better in marketing, VR makers like to quote the whole flat panel size and not the resolution per eye. 

ODG’s “marketing problem” is that historically a person working with near eye displays would talk in terms of “resolution per eye,” but this would not be as big by 2X as what the flat panel based VR companies market. Rather than being at a marketing hype disadvantage, ODG apparently has adopted the VR flat panel vernacular, however misleading it might be.


I have not met Jame Mackie nor have I watched a lot of his videos, but he obviously does not understand display technology well, and I would take anything he says about video quality with a grain of salt. He should have understood that ODG’s R-9 is not “4K,” as in the title of his YouTube video: ODG 4K Augmented Reality Review, better than HoloLens ?. And specifically he should have asked questions when the ODG employee stated at about 2:22, “it’s two 1080p displays to each eye, so it is offering a 4K experience.”

What the ODG marketing person was, I think, trying to say was that somehow having 1080p (also known as 2K) for each eye was like having 2 times 2K or a “4K equivalent”; it is not. In stumbling to try and make the “4K equivalent” statement, the ODG person simply tripped over his own tongue and said that there were two 1080p devices per eye, when he meant to say there were two 1080p devices in the glasses (one per eye). Unfortunately, Jame Mackie didn’t know the difference, did not realize that this would have been impossible in the R-9’s form factor, and didn’t follow up with a question. So the false information got copied into the title of the video and was left as if it were true.

VRMA’s Micah Blumberg Asks The Right Questions and Gets The Right Answer – 1080p Per Eye

This can be cleared up in the following video interview with Nima Shams, ODG’s VP of Headworn: “Project Horizon” AR VR Headset by VRMA Virtual Reality Media. When asked by Micah Blumberg starting at about 3:50 into the video, “So this is the 4K headset,” Nima Shams responds, “so it is 1080p to each eye,” to which Blumberg astutely makes sure to clarify with, “so we’re seeing 1080p right now and not 4K,” to which Nima Shams responds, “okay, yeah, you are seeing 2K to each eye independently.” And they even added an overlay in the video, “confirmed 2K per eye” (see inside the red circle I added).

A Single 1080p OLED Microdisplay Per Eye

Even with “only” a 1080p OLED microdisplay per eye and a simple optical path, the ODG R-9 should have superior image quality compared to Hololens:

  1. OLEDs should give better contrast than Hololens’ Himax LCOS device
  2. There will be no field sequential color breakup with head or image movement as there can be with Hololens
  3. They have about the same pixels per arc-minute as Hololens, but with more pixels they increase the FOV from about 37 degrees to about 50 degrees (see the rough estimate after this list).
  4. Using a simple plate combiner rather than the torturous path of Hololens’ waveguide, I would expect the pixels to be sharper with little visible chroma aberration and no “waveguide glow” (out of focus light around bright objects). So even though the angular resolution of the two is roughly the same, I would expect the R-9 to look sharper/higher resolution.
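A rough angular resolution estimate behind point 3 (the 1280-pixel-wide Hololens figure and the simple FOV-divided-by-pixels approach are my assumptions; real optics add distortion):

```python
def arcmin_per_pixel(h_pixels, h_fov_deg):
    """Crude estimate: horizontal FOV spread evenly across the horizontal pixel count."""
    return (h_fov_deg * 60.0) / h_pixels

print(f"Hololens (assumed 1280 wide, ~37 deg): ~{arcmin_per_pixel(1280, 37):.2f} arcmin/pixel")  # ~1.7
print(f"ODG R-9 (1920 wide, ~50 deg):          ~{arcmin_per_pixel(1920, 50):.2f} arcmin/pixel")  # ~1.6
# Roughly comparable angular resolution, with the R-9 spending its extra pixels on a wider FOV
```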

The known downsides compared to Hololens:

  1. The ODG R-9 does not appear to have enough “eye relief” to support wearing glasses.
  2. The device puts a lot of weight on the nose and ears of the user.

I’m not clear about the level of tracking, but ODG’s R-9 does not appear to have the number of cameras and sensors that Hololens has for mapping/locking to the real world. We will have to wait and see for more testing on this issue. I also don’t have information on how comparable the level of image and other processing done by the ODG R-9 is relative to Hololens.

Conclusion

Micah Blumberg showed the difference between just repeating what he is told and knowing enough to ask the right followup question. He knew that ODG’s 4K marketing message was confusing and that what he was being told was at odds with that message, so he made sure to clarify it. Unfortunately, while James Mackie got the “scoop” on the R-9 being the product name for Horizon, he totally misreported the resolution and other things in his report (more on that later).

Lazy and ill-informed reporters are the friend and amplifier of marketing hyperbole. It appears that ODG is trying to equate having dual 1080p displays (one per eye) with something like “4K,” which it really is not. You need 1080p (also known as 2K) per eye to do stereo 1080p, but that is not the same as “4K,” which is defined as 3840×2160 resolution, or 4 times the spatial resolution of 1080p. Beyond this, qualifiers like “4K experience,” which has no real meaning, are easily dropped, and ill-informed reporters will report it as “4K,” which does have a real meaning.
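
To make the counting concrete, here is a minimal sketch of the pixel arithmetic (plain Python, purely illustrative):

```python
# Pixel counts: two 1080p displays do not add up to "4K" spatial resolution.
pixels_1080p = 1920 * 1080      # ~2.07 million pixels per display
pixels_uhd4k = 3840 * 2160      # ~8.29 million pixels

print(pixels_uhd4k / pixels_1080p)      # 4.0 -> "4K" has 4x the pixels of 1080p
print(2 * pixels_1080p / pixels_uhd4k)  # 0.5 -> even counting both eyes' panels,
                                        #        that is half the pixels of true 4K,
                                        #        and each eye still sees a 1080p image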

Also, my point is not meant to pick on ODG; they just happen to be the case at hand. Unfortunately, most of the display market is “liars poker.” Companies are fudging on display specs all the time. I rarely see a projector that meets or exceeds its “spec” lumens. Resolutions are often spec’ed in misleading ways (such as specifying the input rather than the “native” resolution). Contrast is another place where “creative marketing” is heavily used. The problem is that because “everyone is doing it,” people feel they have to just to keep up.

The problem for me comes when I have to deal with people that have read false or misleading information. It gets hard to separate truth from marketing exaggeration.

This also goes back to why I didn’t put much stock in the magazine reports about how Magic Leap looked. These reports were made by people who were easy to impress and likely not knowledgeable about display devices. They probably could not tell the display resolution to within 2X in each direction and would not notice even moderately severe image problems. If they were shown a flashy looking demo they would assume it was high resolution.

One More Thing – Misleading/Fake “True Video”

It will take a while to explain (maybe next time), but I believe the James Mackie video also falsely indicates at 2:29 (the part with the cars and the metal balls on the table) that what is being shown is how the ODG R-9 works.

In fact, while the images of the cars and balls are generated by the R-9, the tracking of the real world and the reflections off the surfaces are a well orchestrated FAKE. Basically they were playing a pre-rendered video through the glasses (so that part is likely real). But the clear and black boxes on the table were props there to “sell the viewer” that this was being rendered on the fly. There also appears to be some post-processing in the video. Most notably, it looks like the black box was modified in post production. There are several clues in the video that will take a while to explain.

To be fair to ODG, the video itself does not claim to be unprocessed, but the way it is presented within James Mackie’s video is extremely misleading to say the least. It could be that the video was taken out of context.

For the record, I do believe the video starting at 4:02, which I have analyzed before, is a genuine through-the-optics video and is correctly identified as such on the video. I’m not sure about the “tape replacement” video at 3:23; I think it may be genuine, or it could be some clever orchestration.

Kopin Entering OLED Microdisplay Market

Kopin Making OLED Microdisplays

Kopin announced today that they are getting into the OLED microdisplay business. This is particularly notable because Kopin has been a long time (since 1999) manufacturer of transmissive LCD microdisplays used in camera viewfinders and near eye display devices. They also bought Forth Dimension Displays back in 2011, a maker of high resolution ferroelectric reflective LCOS used in higher end near eye products.

OLED Microdisplays Trending in AR/VR Market

With the rare exception of the large and bulky Meta 2, microdisplays (LCOS, DLP, OLED, and transmissive LCD) dominate the AR/MR see-through market. They are also a significant factor in VR and other non-see-through near eye displays.

Kopin’s entry seems to be part of what may be a trend toward OLED microdisplays in near eye products. ODG’s next generation “Horizon” AR glasses are switching from LCOS (used in the current R7) to OLED microdisplays. Epson, which was a direct competitor to Kopin in transmissive LCD, switched to OLED microdisplays in their new Moverio BT-300 AR glasses announced back in February.

OLED Microdisplays Could Make VR and Non-See-Through Headsets Smaller/Lighter

Today most VR headsets follow Oculus’s use of large flat panels with simple optics. This leads to large, bulky headsets, but the cost of OLED and LCD flat panels is so low compared to microdisplays with their optics that they win out. OLED microdisplays have been far too expensive to compete on price with the larger flat panels, but this could change as there are more entrants into the OLED microdisplay market.

OLEDs Don’t Work With Waveguides As Used By Hololens and Magic Leap

It should be noted that the broad spectrum and diffuse light emitted by OLEDs is generally incompatible with flat waveguide optics such as those used by Hololens and expected from Magic Leap (ML). So don’t expect to see OLEDs being used by Hololens or ML anytime soon unless they radically redesign their optics. Illuminated microdisplays like DLP and LCOS can be lit by narrower spectrum light sources such as LEDs and even lasers, and the light can be highly collimated by the illumination optics.

Transmissive LCD Microdisplays Can’t Compete As Resolution Increases

If anything, this announcement from Kopin is the last nail in the coffin of the transmissive LCD microdisplay. OLED microdisplays can go to higher resolutions with smaller pixels, keeping the overall display size down for a given resolution. They consume less power for the same brightness than transmissive LCDs and have much better contrast. As resolution increases, transmissive LCDs simply cannot compete.

OLED Microdisplays: More Of A Mixed Set of Pros and Cons Compared to LCOS and DLP

There is a mix of pros and cons when comparing OLED microdisplays with LCOS and DLP. The pros for OLED over LCOS and DLP include:

  1. Significantly simpler optical path (illumination path not in the way). Enables optical solutions not possible with reflective microdisplays
  2. Lower power for a given brightness
  3. Separate RGB subpixels so there is no field sequential color breakup
  4. Higher contrast.

The advantages for LCOS and DLP reflective technologies over OLED microdisplays include:

  1. Smaller pixels mean a smaller display for a given resolution. DLP and LCOS pixels are typically from 2 to 10 times smaller in area per pixel (see the rough pitch-versus-area sketch after this list).
  2. Ability to use narrow band light sources which enable the use of waveguides (flat optical combiners).
  3. Higher brightness
  4. Longer lifetime
  5. Lower cost even including the extra optics and illumination
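
Since area goes as the square of the pixel pitch, a modest difference in linear pitch turns into a large difference in silicon area, which is what point 1 above is getting at. A minimal sketch (the pitch values are illustrative placeholders, not measured numbers):

```python
# Pitch vs. area: a pixel N times larger in pitch is N*N times larger in area.
def area_ratio(larger_pitch_um, smaller_pitch_um):
    return (larger_pitch_um / smaller_pitch_um) ** 2

print(round(area_ratio(6.0, 4.0), 1))    # pitch 1.5x larger  -> ~2.2x the area
print(round(area_ratio(9.5, 4.5), 1))    # pitch ~2.1x larger -> ~4.5x the area
print(round(area_ratio(13.0, 4.0), 1))   # pitch ~3.3x larger -> ~10.6x the area
```

In other words, the “2 to 10 times smaller in area” range corresponds to only a roughly 1.5x to 3x difference in linear pitch.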

Up until recently, the cost of OLED microdisplays was so high that only defense contractors and other applications that could afford the high cost could consider them. But that seems to be changing. Also, historically the brightness and lifetimes of OLED microdisplays were limited, but companies are making progress.

OLED Microdisplay Competition

Kopin is far from being the first and certainly is not the biggest entry in the OLED microdisplay market. But Kopin does have a history of selling volume into the microdisplay market. The list of known competitors includes:

  1. Sony appears to be the biggest player. They have been building OLED microdisplays for many years for use in camera viewfinders. They are starting to bring higher resolution products to the market and bring the costs down.
  2. eMagin is a 23-year-old “startup.” They have a lot of base technology and are a “pure play” stock-wise. But they have failed to break through and are in danger of being outrun by big companies.
  3. MicroOLED – small French startup – not sure where they really stand.
  4. Samsung – nothing announced, but they have all the technology necessary to make them. Update: Ron Mertens of OLED-Info.com informed me that it was rumored that the second generation of Google Glass was considering a Samsung OLED microdisplay and that Samsung had presented a paper on the subject going back to 2011.
  5.  LG – nothing announced but they have all the technology necessary to make them.

I included Samsung and LG above not because I have seen or heard of them working on OLED microdisplays, but because I would be amazed if they didn’t at least have a significant R&D effort given their sets of expertise and their extreme interest in this market.

For More Information:

For more complete information on the OLED microdisplay market, you might want to go to OLED-Info, which has been following both large flat panel and small OLED microdisplay devices for many years. They also have two reports available, OLED Microdisplays Market Report and OLED for VR and AR Market Report.

For those who want to know more about Kopin’s manufacturing plan, Chris Chinnock of Insight Media has an interesting article outlining Kopin’s fabless development strategy.

Magic Leap: “The Information” Article

The Information: The Reality Behind Magic Leap

The online news magazine “The Information” released the article “The Reality Behind Magic Leap” on Dec. 8th, 2016, by Reed Albergotti, and the story gave a link to this blog, so you may be a new reader. The article appears to be well researched, and I understand that “The Information” has a reputation as a reliable news source. The article also dovetails nicely on the business side with what I have been discussing on the technical side of this blog. The magazine is a paid publication, but there is a summary on The Verge along with their added commentary, and a lot of the text from the article has shown up in discussion forums about Magic Leap (ML).

For this blog post, I am going to try to put 2+2 together between what I have figured out on the technical side and what Mr. Albergotti reported on the business side. Note, I have not seen what he has seen, so I am reading between the lines somewhat, but hopefully it will give a more complete picture.

The Magic Leap Prototypes

The article states “Magic Leap CEO Rony Abovitz acknowledged that the prototypes used different technology.” This blog has identified the early prototypes as:

  1. A DLP based prototype that uses a variable focus lens to produce “focus planes” by generating different images for different distances and changing the focus between images, supporting maybe 3 to 6 focus planes. This is probably their earliest one and is what the article calls “The Beast,” described as the “size of a refrigerator.”
  2. One or more OLED based variations, once again using an electrically controlled focus element, where ML made a smaller helmet version. The article discussed only one version, dubbed “WD3,” but I suspect that they had variations of this one with different capabilities (as in maybe a WD1, WD2, WD3 and maybe more). Based on the video evidence, I believe a version that could only change focus was used for their Oct. 14, 2015 “through the technology” video. Their later “A New Morning” and “Lost Droids” videos appear to use Micro-OLED based optics that supported at least two simultaneous focus planes by running the OLED at 120Hz to generate two sequential 60Hz “focus plane” images and changing the focus between each one (see the small sketch after this list).
  3. The LCOS version that uses their “Photonic Chip” and supports about 2 focus planes with no moving focusing optics (according to the article); this is what the article dubbed the “PEQ” prototype.
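
For the time-multiplexed focus planes in item 2, the arithmetic is simple: each focus plane only gets the display’s native refresh rate divided by the number of planes. A minimal sketch (Python; the 4-plane case is hypothetical, just to show the trade-off):

```python
# Time-multiplexed focus planes: per-plane refresh = native refresh / number of planes.
def rate_per_focus_plane(native_hz, num_planes):
    return native_hz / num_planes

print(rate_per_focus_plane(120, 2))  # 60.0 Hz per plane, as described for the OLED prototype
print(rate_per_focus_plane(120, 4))  # 30.0 Hz per plane -> hypothetical; likely too slow/flickery
```

This is also why a slower panel technology limits how many focus planes are practical, a point that comes up again below.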

If you want to get more into the gory technical details of how the above work, I would suggest one of my earlier articles titled “Magic Leap – Separating Magic and Reality“. And if you really want to get dirty, read the ML patent applications they reference, but be prepared for a long read as they cover a lot of totally different concepts.

As this blog has been reporting (and for which I have gotten criticism on some of the online discussion forums), the much discussed “fiber scanning display” (FSD) has not been perfected, and with it goes any chance of making the “light field display” ML has talked so much about. Quoting the article, ”Magic Leap relegated the fiber scanning display to a long-term research project in hopes that it might one day work, and significantly pared back on its light field display idea.”

Possible Scenario – A Little Story

Based on my startup and big company experiences, I think I understand roughly how it went down. Please take the rest of this section as reasonable speculation and reading between the lines of known information. I am going to play Columbo (old TV series reference) below and give my theory.

Startups have sometimes been described as “jumping out of a plane and sewing a parachute on the way down.” This appears to be the case with Magic Leap. They had a rough idea of what they wanted to do, were able to build an impressive demo system, and with some good hand waving convinced investors they could reduce it to a consumer headset.

They recruited Brian Schowengerdt, co-founder and Chief Scientist, who worked on the fiber scanning display (FSD) technology and the issue of vergence and accommodation at the University of Washington. Mr. Schowengerdt is clearly a smart person who added a lot of credibility to Rony Abovitz’s dreams. The problem with “university types” is that they often don’t appreciate what it takes to go from R&D to a real high volume product.

The “new optical people” built “The Beast” prototype using DLPs and electrically controlled focusing lenses to support multiple focus planes, to address the vergence and accommodation issue. They then used the “Jedi Hand Wave Mind Trick” (ordinary hand waving may not be enough) to show the DLP engine, the crude low resolution FSD display from the U of W, some non-functional waveguides, and a mock-up of how wonderful it would be someday with a simple application of money and people (if you can dream it you can build it, right?).

This got them their “big fish,” Google, which was attuned to the future of near eye displays with its investment in Google Glass and all the big noise around Oculus Rift. There is phenomenal FOMO (Fear of Missing Out) going on with AR/VR/MR. The fact that they got a lot of money from a big name company became its own publicity and fund raising engine. ML then got showered with money that they hoped could cover the bet. Having Google invest publicly also became its own shield against any question of whether it would work.

All the money gave them a lot of altitude to try and build the proverbial parachute on the way down. But sometimes the problem is harder than all the smart people and money can solve. As I have pointed out on this blog, making the fiber scanning display work at high resolution is no small task, if not impossible. They came to realize at some point, probably early on, that the FSD was not going to happen in a meaningful time frame.

So “plan B” became to use an existing, working display technology to give a similar visual effect, even if much reduced in resolution. The Beast was way too big and expensive to cost reduce, and they needed more demo systems that were easier to make.

So then they made the WDx based on OLEDs. But there is a fatal flaw with using OLEDs (and it tripped me up at first when looking at the videos). While OLEDs make the design much easier and smaller, the nature of the light they put out means they don’t work with the wonderfully flat waveguides (what ML calls their “Photonics Chip”) that ML has convinced investors are part of their secret sauce.

So if they couldn’t use the Photonics Chip with OLEDs and the FSD is a no-go, what do you tell investors, that both of your secret sauces are a bust? So in parallel they worked on plan “C,” which is to use LCOS panels with LED light sources that will work with some type of waveguide, which they will dub the “Photonics Chip.”

But then there is a fly in the ointment. Microsoft starts going public with their Hololens system making Magic Leap look like they are way behind the giant Microsoft that can spend even more money than ML can raise. They need to show something to stay relevant. They start with totally fake videos and get called on the carpet for being obviously fake. So they need a “Magic Leap Technology” (but not the optics they are actually planning on using) demo.

The “Beast System” with its DLPs and field sequential color will not video well. The camera will reveal to any knowledgeable expert what they are using. So for the video they press into service the WDx OLED systems that will video better. By clever editing and only showing short clips, they can demonstrate some focus effects while not showing the limitations of the WDx prototypes. These videos then make ML seem more “real” and keep people from asking too many embarrassing questions.

A problem here is that LCOS is much slower than DLP, and thus they may only be able to support about 2 focus planes. I also believe, from 16 years working with LCOS, that this is likely to look like crap to the eye due to color field breakup; but reapplying the Jedi Mind Trick, maybe two focus planes will work and people won’t notice the color field breakup. And thus you have the PEQ, which still does not work well or they would be demoing with it rather than the helmet sized WD3.

I suspect that Reed Albergotti from “The Information” had gotten the drop on ML by doing some good investigative journalism work. He told them he was going to run with the story, and ML decided to see if they could do damage control and invited him in. But apparently he was prepared and still saw the holes in their story.

Epilogue: It sounds like Mr. Schowengerdt has been put off to the side, having served his usefulness in raising money. They used the money to hire other optical experts who knew how to design the optics they would actually be using. He may still be playing around with the FSD to keep alive the dream of a super high resolution display someday and maybe the next to impossible high resolution light fields (I would suggest reading “The Horse Will Talk Fable” to gain insight into why they would keep doing this as an “R&D” program).

I’m probably a little off in the details, but it probably went down something like the above. If not, hopefully you found it an amusing story. BTW, if you want to make a book and/or movie out of this original story, please consider it my copyrighted work (c) 2016 (my father was and two brothers are patent lawyers, and I learned about copyright as a small child at my father’s knee).

Lessons Learned

In my experience, startups that succeed in building their product have more than a vague idea of what they want to do and HOW they are going to do it. They realize that money and smart people can’t cure all ills. Most importantly, they understand where they have risk and have at most A SINGLE serious risk. They then focus on making sure they cover that risk. In the case of Magic Leap, they had multiple major risks in many different areas. You can’t focus on the key risk when there are so many, and that is a prescription for product failure no matter how much money is applied.

It’s even possible the “smart money” that invested realized that ML was unlikely to totally succeed but thought that with money and smart people they might spin out some valuable technology and/or patents. The “equation works” if they multiply a hoped-for $100B/year market by even a small chance of success. If a big name places what is for them a small bet, it is surprising how much money will follow along, assuming the big name investor has done all the hard work of due diligence.

Even if they get past the basic technology risk and get the PEQ running, they will then have the problem of building a high volume product; worse yet, they are building their own factory. And then we have the 90/90 rule, which states, “it takes 90% of the effort to get 90% of the way there and then another 90% to solve the last 10%.” When you have a fully working prototype that behaves well (which by the reports ML has NOT achieved yet), you have just made it to the starting line; then you have to make it manufacturable at a reasonable cost and yield. Others have said it is really 90/90/90, where there is a third 90%. This is where many a Kickstarter company has spun its wheels.

Magic Leap & Hololens: Waveguide Ego Trip?

The Dark Side of Waveguides

Flat and thin waveguides are certainly impressive optical devices. It is almost magical how you can put light into what looks a lot like a thin plate of glass, have a small image go in on one side, and then, with total internal reflection (TIR) inside the glass, have the image come out in a different place. They are coveted by R&D people for their scientific sophistication and loved by industrial designers because they look so much like ordinary glass.

But there is a “dark side” to waveguides, at least every one that I have seen. To make them work, the light follows a torturous path: it often has to be bent by about 45 degrees to couple into the waveguide and then by roughly 45 degrees to couple out, in addition to rattling off the two surfaces while it TIRs. The image is just never the same quality when it goes through all this torture. Some of the light does not make all the turns and bends correctly and comes out in the wrong places, which degrades the image quality. A major effect I have seen in every diffractive/holographic waveguide is what I have come to call “waveguide glow.”

Part of the problem is that when you bend light, whether by refraction or by using diffraction or holograms, the various colors of light bend slightly differently based on wavelength. The diffraction gratings/holograms are tuned for each color, but invariably they have some effect on the other colors; this is a particular problem if the colors don’t have a narrow spectrum that is exactly matched by the waveguide. Even microscopic defects cause some light to follow the wrong path, and invariably a grating/hologram meant to bend, say, green will also affect the direction of, say, blue. Worse yet, some of the light gets scattered and causes the waveguide glow.

To the right is a still frame from a “through the lens” video taken through a Hololens headset. Note, this is actually through the optics and NOT the video feed that Microsoft and most other people show. What you should notice is a violet colored “glow” beneath the white circle. There is usually also a tendency to have a glow or halo around any high contrast object/text, but it is most noticeable when there is a large bright area.

For these waveguides to work at all, they require very high quality manufacturing which tends to make them expensive. I have heard several reports that Hololens has very low yields of their waveguide.

I haven’t, nor have most people that have visited Magic Leap (ML), seen through ML’s waveguide. What ML shows most if not all of their visitors are prototype systems that use non-waveguide optics, as I discussed last time. Maybe ML has solved all the problems with waveguides; if they have, they will be the first.

I have nothing personally against waveguides. They are marvels of optical science that require very intelligent people to design and very high precision manufacturing to make. It is just that they always seem to hurt image quality and they tend to be expensive.

Hololens – How Did Waveguides Reduce the Size?

Microsoft acquired their waveguide technology from Nokia. It looks almost like they found this great bit of technology that Nokia had developed and decided to build a product around it. But then when you look at Hololens (left) there is the shield to protect the lenses (often tinted, but I picked a clear shield so you could see the waveguides). On top of this there is all the other electronics and the frame to mount it on the user’s head.

The space savings from using waveguides over a much simpler flat combiner is a drop in the bucket.

ODG Same Basic Design for LCOS and OLED

I’m picking Osterhout Design Group (ODG) for the comparison below because they demonstrate a simpler, more flexible, and better image quality alternative to using a waveguide. I think it makes a point. Most people probably have not heard of them, but I have known of them for about 8 or 9 years (I have no relationship with them at this time). They have done mostly military headsets in the past and burst onto the public scene when Microsoft paid them about $150 million for a license to their I.P. Beyond this, they just raised another $58 million from V.C.’s. Still, this is chump change compared to what Hololens and Magic Leap are spending.

Below is the ODG R7 LCOS based glasses (with one of the protective covers removed). Note the very simple flat combiner. It is extremely low tech and much lower cost compared to the Hololens waveguide. To be fair, the R7 does not have as much in the way of sensors and processing as Hololens.

[Image: ODG R7 with a cover removed]

The point here is that by the time you put the shield on the Hololens, what difference does having a flat waveguide make to the overall size? Worse yet, the image quality from the simple combiner is much better.

Next, below are ODG’s next generation Horizon glasses, which use a 1080p Micro-OLED display. They appear to have a somewhat larger combiner (I can’t tell from the available pictures whether it is flat or slightly curved) to support the wider FOV, and a larger outer cover, but pretty much the same design. The remarkable thing is that they can use a similar optical design with the OLEDs and the whole thing is about the same size, whereas the Hololens waveguide won’t work at all with OLEDs due to the broad spectrum light OLEDs emit.

[Image: ODG Horizon with ~50 degree FOV]

ODG put up a short video clip through the optics of the Micro-OLED based Horizon (they don’t come out and say that it is, but the frame is from the Horizon and the image motion artifacts are from an OLED). The image quality appears to be much better than anything I have seen from waveguide optics (you can’t be too quantitative from a YouTube video). There is none of the “waveguide glow.”

They were even willing to show text images with both clear and white backgrounds that look reasonably good (see below). It looks more like a monitor image except for the fact that it is translucent. This is hard content to display because you know what it is supposed to look like, so you know when something is wrong. Also, that large white area would glow like mad on any waveguide optics I have seen.

The clear text on a white background is a little hard to read at small size because it is translucent, but that is a fundamental issue with all see-through displays. The “black” is whatever is in the background, and the “white” is the combination of the light from the image and the real world background. See-through displays are never going to be as good as opaque displays in this regard.
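
One way to see the fundamental limit is to treat the background light as an offset added to both the “white” and the “black” of the image. A minimal sketch (Python; the luminance values are made-up illustrations, not measurements of any product):

```python
# Effective contrast of a see-through display: the eye sees image light plus
# background light as "white" and the background light alone as "black".
def see_through_contrast(display_nits, background_nits):
    white = display_nits + background_nits
    black = background_nits
    return white / black

print(round(see_through_contrast(400, 50), 1))   # ~9.0:1 in a dim room (assumed values)
print(round(see_through_contrast(400, 500), 1))  # ~1.8:1 against a bright background (assumed values)
```

The brighter the real-world background, the more washed out the image gets, no matter how good the display itself is.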

Hololens and Magic Leap – Cart Before the Horse

It looks to me like Hololens and Magic Leap both started with a waveguide display as a given and then built everything else around it. They overlooked that they were building a system. Additionally, they needed to get it into many developers’ hands as soon as possible to work out the myriad of other sensor, software, and human factors issues. The waveguide became a bottleneck, and from what I can see of Hololens, an unnecessary burden. As my fellow TI Fellow Gene Frantz and I used to say when we were on TI’s patent committee, “it is often the great new invention that causes the product to fail.”

I have not seen (and few if any people outside of Magic Leap have seen) an image through ML’s production combiner; maybe they will be the first to make one that looks as good as a simpler combiner solution (I tend to doubt it, but it is not impossible). But what has leaked out is that they have had problems getting systems to their own internal developers. According to Business Insider’s Oct. 24th article (with my added highlighting):

“Court filings reveal new secrets about the company, including a west coast software team in disarray, insufficient hardware for testing, and a secret skunkworks team devoted to getting patents and designing new prototypes — before its first product has even hit the market.”

From what I can tell of what Magic Leap is trying to do, namely focus planes to support vergence/accommodation, they could have achieved this faster with more conventional optics. It might not have been as sleek or “magical” as the final product, but it would have done the job, shown the advantage (assuming it is compelling) and got their internal developers up and running sooner.

It is even more obvious for Hololens. Using a simple combiner would have added trivially to the design size while reducing the cost and getting the SDKs into more developers’ hands sooner.

Summary

It looks to me that both Hololens and likely Magic Leap put too much emphasis on using waveguides, which had a domino effect on other decisions, rather than making a holistic system decision. The way I see it:

  1. The waveguide did not dramatically make Hololens smaller (the jury is still out for Magic Leap – maybe they will pull a rabbit out of the hat). Look at ODG’s designs; they are every bit as small.
  2. The image quality is worse with waveguides than with simpler combiner designs.
  3. Using waveguides boxed them into using only display devices that were compatible with their waveguides. Most notably, they can’t use OLEDs or other display technologies that emit broader spectrum light.
  4. Even if it were smaller, it is more important to get more SDKs into developers’ hands (internal and/or external) sooner rather than later.

Hololens and Magic Leap appear to be banking on getting waveguides into volume production in order to solve all the image quality and cost problems with them. But it will depend on a lot of factors, some of which are not in their control, namely, how hard it is to make them well and at a price that people can afford. Even if they solve all the issues with waveguides, it is only a small piece of their puzzle.

Right now ODG seems to be taking more of the original Apple/Wozniak approach; they are finding elegance in a simpler design. I still have issues with what they are doing, but in the area of combining the light and image quality, they seem to be way ahead.

AR/MR Combiners Part 2 – Hololens

[Image: Hololens combiner teardown photo and patent figure]

Microsoft’s Hololens is perhaps the most well known device using flat “waveguide” optics to “combine” the real world with computer graphics. Note there are no actual “holograms” anywhere in Hololens by the scientific definition.

At left is a picture from the Verge teardown of a Hololens SDK engine and a figure from US Patent Application 2016/0231568. I have added some red and green dots to the “waveguides” in the Verge picture to help you see their outlines.

A diffraction grating is a type of Diffractive Optical Element (DOE) and has a series of very fine linear structures with a period/repeated spacing on the order of the wavelengths of light (as in extremely small). A diffraction grating acts like a lens/prism to bend the light, and as an unwanted side effect the light is also split/separated by wavelength (see top figure at left) as well as having its polarization affected. A simple grating would split the light symmetrically in two directions (top figure at left), but as the patent points out, if the structure is tilted, then more of the light will go in the desired direction (bottom figure at left). This very small structure (on the order of the wavelength of the light) must be formed on the surface of the flat waveguide.
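
To see why a grating treats each color differently, here is a minimal sketch of the first-order grating equation, d·sin(θ) = m·λ (Python; the grating pitch is a hypothetical value chosen so green lands near 45 degrees, not a number from the Hololens design):

```python
import math

m = 1                                      # first diffraction order
d_nm = 520 / math.sin(math.radians(45.0))  # hypothetical pitch (~735 nm) that sends 520 nm green to 45 degrees

for wavelength_nm in (450, 520, 630):      # blue, green, red
    theta = math.degrees(math.asin(m * wavelength_nm / d_nm))
    print(wavelength_nm, "nm ->", round(theta, 1), "degrees")
# -> roughly 37.7, 45.0, and 59.0 degrees: the same structure bends each
#    wavelength by a different amount, the root of the color/haze issues
#    discussed elsewhere in this article.
```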

Optical waveguides use the fact that once light enters glass or clear plastic at a certain angle or shallower, it will totally reflect, what is known as Total Internal Reflection or TIR. The TIR critical angle is around 45 degrees for the typical glass and plastics with their coatings used in optics.
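
For reference, the critical angle follows directly from Snell’s law: θc = arcsin(n_outside / n_inside). A minimal sketch (Python; n ≈ 1.5 is ordinary glass, and the 1.7 case is just an illustration of a higher-index material):

```python
import math

def critical_angle_deg(n_inside, n_outside=1.0):
    # Total internal reflection sets in when the ray's angle from the surface
    # normal exceeds arcsin(n_outside / n_inside).
    return math.degrees(math.asin(n_outside / n_inside))

print(round(critical_angle_deg(1.5), 1))  # ~41.8 degrees for ordinary glass into air
print(round(critical_angle_deg(1.7), 1))  # ~36.0 degrees for a higher-index glass
```

That is in the same ballpark as the “around 45 degrees” figure above; higher-index materials have a smaller critical angle, so they can TIR over a wider range of ray angles, which is one reason waveguide designers like high-index glass.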

Hololens uses the diffraction grating (52 in Fig. 3B above) to bend or “incouple” the light so that it will TIR (see figure at right). The light then TIRs off the flat surfaces within the glass and hits a triangular “fold zone” (in Fig. 3B above), which causes the light to turn ~90 degrees down to the “exit zone” DOE (16 in Fig. 3B). The exit zone DOE reduces the angle of the light so it will no longer TIR and can exit the glass toward the eye.

Another function of the waveguides, particularly the exit waveguide 16, is to perform “pupil expansion,” or slightly diffusing the light so that the image can be viewed from a wider angle. Additionally, it is waveguide 16 that the user sees the real world through, and invariably there has to be some negative effect from seeing the world through a slightly diffuse diffraction grating.

Hololens is far from the first to use DOEs to enter and exit a flat waveguide (there are many examples), and they appear to have acquired the basic technology from Nokia’s efforts of about 10 years ago. Others have used holographic optical elements (HOEs), which perform similar functions to DOEs, and still others have used more prismatic structures in their waveguides, but each of these alternatives solves some issues at the expense of others.

A big issue for the flat combiners I have seen to date has been chromatic aberration, the breaking up of white light into colors, plus out of focus and haze effects. Bending the light at about 45 degrees is like going through a prism: the colors separate, follow slightly different paths through the waveguide, and are put back together by the exit grating. The process is not perfect, and thus there is some error/haze/blur that can be multiple pixels wide. Additionally, as pointed out earlier, the user is invariably looking at the real world through the structure meant to cause the light to exit from the waveguide toward the eye, and it has to have at least some negative effect.

There is a nice short 2013 article on flat combiners (one author being a Google employee) that discusses some of the issues with various combiners, including the Nokia one on which Hololens is based. In particular they stated:

“The main problems of such architecture are the complexity of the master fabrication and mass replication as well as the small angular bandwidth (related to the resulting FOV). In order to mimic the holographic Bragg effect, sub-wavelength tilted structures with a high aspect ratio are needed, difficult to mass replicate for low cost volume production”  

Based on what I have heard from a couple of sources, the yield is indeed currently low, and thus the manufacturing cost of the Hololens combiner is high. This may or may not be a solvable problem (in terms of meeting a consumer acceptable price) with volume production.

While the Hololens combiner is a marvel of optical technology, one has to go back and try to understand why they wanted a thin flat combiner rather than the vastly simpler (and less expensive, maybe by over 10X) tilted flat combiner that Osterhout Design Group (ODG), for example, is currently using. Maybe it is for some planned greater advantage in the long term, but when you look at the current Hololens flat combiner, the size/width of the combiner would seem to have little effect on the overall size of the resulting device. Interestingly, Microsoft has spent about $150 million in licensing fees to ODG.

Conclusions

Now step back and look at the size of the whole Hololens structure with the concentric bands going around the user’s head. There is an inner band to grip the user’s head while the electronics are held in the outer band. There is a large nose bridge to distribute the weight on the person’s nose and a big curved shield (usually dark tinted) in front of the combiner. You have to ask, did the flat optical combiner make a difference?

I don’t know the reasons/rationale/advantages behind why Hololens has gone with a vastly more complex combiner structure. Clearly at present, it does not give a significant (if any) size advantage. It almost looks like they had this high tech combiner technology and decided to use it regardless (maybe it was the starting point of the whole program).

Microsoft is likely investing several billion dollars into Hololens. Google likely spent over $1 billion on the comparatively very simple Google Glass (not to mention their investment in Magic Leap). Closely related, Facebook spent $2 billion to acquire Oculus Rift. Certainly big money is being thrown around, but is it being spent wisely?

Side Comments: No Holograms Anywhere to be Found

“Holograms” is simply the marketing name Microsoft has given to its Mixed Reality (MR) imagery. It is rather funny to see technical people who know better stumble around saying things like “holograms, but not really holograms, . . .” Unfortunately, due to the size and marketing clout of Microsoft, others such as Metavision have started calling what they are doing “holograms” too (but this does not make it true).

Then again, probably over 99% of what the public thinks are “holograms” are not. Usually they are simple optical combiner effects caused by partial reflections off of glass or plastic.

Perhaps ironically, while Microsoft talks of holograms and calls the product “Hololens,” there are, as best I can find, no holograms used anywhere, not even static ones that could have been used in the waveguide optics (they use diffraction gratings instead).

Also interestingly, the patent application is assigned to Microsoft Technology Licensing, LLC, a company recently separated from Microsoft Inc. This would appear to be in anticipation of future patent licensing/litigation (see for example).

Next Time on Combiners

Next time on this subject, I plan on discussing Magic Leap, the “startup” with $1.4 billion invested, and what it looks like they may be doing. I was originally planning on covering it together with Hololens, but it became clear that it was too much to cover in one article.