Near-Eye Bird Bath Optics Pros and Cons – And IMMY’s Different Approach

Why Birdbath Optics? Because the Alternative (Waveguides) Must Be Worse (and a Teaser)

The idea for this article started when I was looking at the ODG R-9 optical design with OLED microdisplays. They combined an OLED microdisplay that is not very bright in terms of nits with a well-known “birdbath” optical design that has very poor light throughput. It seems like a horrible combination. I’m fond of saying, “when intelligent people choose a horrible design, the alternative must have seemed worse.”

I’m going to “beat up,” so to speak, the birdbath design by showing how some fundamental light throughput numbers multiply out and why the ODG R-9 I measured at CES blocks so much of the real-world light. The R-9 also has a serious issue with reflections. This is the same design that a number of publications considered among the “best innovations” of CES; it seems to me that they must have only looked at the display superficially.

Flat waveguides such as those used by Hololens, Vuzix, Wave Optics, and Lumus, as well as the one expected from Magic Leap, get most of the attention, but I see a much larger number of designs using what is known as a “birdbath” and similar optical designs. Waveguides are no secret these days, and the fact that so many designs still use birdbath optics tells you a lot about the issues with waveguides. Toward the end of this article, I’m going to talk a little about the IMMY design that replaces part of the birdbath design.

As a teaser, this article is to help prepare for an article on an interesting new headset I will be writing about next week.

Birdbath Optics (So Common It Has a Name)

The birdbath combines two main optical components, a spherical mirror/combiner (partial mirror) and a beam splitter. The name “birdbath” comes from the spherical mirror/combiner looking like a typical birdbath. It is used because it is generally comparatively inexpensive to downright cheap while also being relatively small/compact and having good overall image quality. The design fundamentally supports a very wide FOV, which is at best difficult to support with waveguides. The big downsides are light throughput and reflections.

A few words about Nits (Cd/m²) and Micro-OLEDs

I don’t have time here to get into a detailed explanation of nits (cd/m²). Nits measure light at a given angle, whereas lumens measure the total light output. The simplest analogy is to a water hose with a nozzle (apropos here since we are talking about birdbaths). Consider two spray patterns, one a tight jet of water and one a wide fan pattern, both outputting exactly the same total amount of water per minute (lumens in this analogy). The one with the tight pattern would have high water pressure (nits in this analogy) over a narrow angle, whereas the fan spray would have lower water pressure (nits) over a wider angle.

Additionally, it would be relatively easy to put something in the way of the tight jet and turn it into a fan spray, but there is no way to turn the fan spray into a jet. This applies to light as well: it is much easier to go from high nits over a narrow angle to lower nits over a wide angle (say with a diffuser), but you can’t easily go the other way.

Light from an OLED is like the fan spray, only it covers a 180-degree hemisphere. This can be good for a large flat panel where you want a wide viewing angle, but it is a problem for a near-eye display, where you want to funnel all the light into the eye; much of the light will miss the pupil of the eye and is wasted. With an LED you have a relatively small point of light that can be funneled/collimated into a tight “jet” of light to illuminate an LCOS or DLP microdisplay.

The combination of the light output from LEDs and the ability to collimate that light means you can easily get tens of thousands of nits with an LED-illuminated LCOS or DLP microdisplay, whereas OLED microdisplays typically only have 200 to 300 nits. This is a major reason why most see-through near-eye displays use LCOS and DLP over OLEDs.

Basic Non-Polarizing Birdbath (example, ODG R-9)

The birdbath has two main optical components, a flat beam splitter and a spherical mirror. In the case of see-through designs, the spherical mirror is a partial mirror, so the spherical element acts as a combiner. The figure below is taken from an Osterhout Design Group (ODG) patent and shows a simple birdbath using an OLED microdisplay such as in their ODG R-9. Depending on various design requirements, the curvature of the mirror, and the distances, the lenses 16920 in the figure may not be necessary.

The light from the display device (in the case of the ODG R-9, an OLED microdisplay) is first reflected away from the eye and perpendicular (on-axis) to the curved mirror/combiner, so that a simple spherical combiner will uniformly magnify and move the apparent focus point of the image (if not on-axis, the image will be distorted and the magnification will vary across the image). The curved combiner (partial mirror) has minimal optical distortion on light passing through it.

Light Losses (Multiplication is a Killer)

A big downside to the birdbath design is the loss of light. The image light must make two passes at the beam splitter, one reflective and one transmissive, with reflective (Br) and transmissive (Bt) percentages of light. The light making it through both passes is Br x Bt. A 50/50 beam splitter might be about 48% reflective and 48% transmissive (with say a 4% combined loss), and the light throughput (Br x Bt) in this example is only 48% x 48% = ~23%. And a “50/50” ratio is the best case; if we assume a nominally 80/20 beam splitter (still with a 4% total loss) we get 78% x 18% = ~14% of the light making it through the two passes.
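These two-pass numbers multiply out easily; here is a minimal Python sketch (the function name and the even split of the ~4% loss between the two passes are my own assumptions):

```python
def two_pass_throughput(reflect_ratio: float, loss: float = 0.04) -> float:
    """Fraction of display light surviving one reflective and one
    transmissive pass through the same beam splitter."""
    br = reflect_ratio - loss / 2          # effective reflectance (Br)
    bt = (1.0 - reflect_ratio) - loss / 2  # effective transmittance (Bt)
    return br * bt

print(two_pass_throughput(0.5))  # "50/50": 0.48 * 0.48 = ~23%
print(two_pass_throughput(0.8))  # "80/20": 0.78 * 0.18 = ~14%
```

Note that the “50/50” split maximizes the product Br x Bt, which is why it is the best case for the image light.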

Next we have the light loss of the spherical combiner. This is a trade-off of image light being reflected (Cr) versus real-world light being transmitted (Ct), where Cr + Ct is less than 1 due to losses. Generally you want Cr to be low so that Ct can be high so you can see out (otherwise it is not much of a see-through display).

So let’s say the combiner has Cr = 11% and Ct = 75%, with about a 4% loss, along with the 50/50 beam splitter. The net display light throughput assuming a “50/50” beam splitter and a 75% transmissive combiner is Br x Cr x Bt = ~2.5%! These multiplicative losses lose all but a small percentage of the display’s light. And consider that the “real world” net light throughput is Ct x Bt, which would be 75% x 48% = 36%, which is not great and would be too dark for indoor use.

Now let’s say you want the glasses to be at least 80% transmissive so they would be considered usable indoors. You might make the combiner Ct = 90% and thus Cr = 6% (with a 4% loss), and then Bt = 90%, making Br = 6%. This gives a real-world transmission of about 90% x 90% = 81%. But then you go back and realize the display light equation (Br x Cr x Bt) becomes 6% x 6% x 90% = ~0.3%. Yes, only about 3/1000ths of the starting image light makes it through.
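The display-light and real-world-light equations can be put side by side in a small sketch (the function names are mine; the numbers are the example values above):

```python
def display_throughput(br: float, cr: float, bt: float) -> float:
    # Image light: reflects off the splitter, reflects off the combiner,
    # then transmits back through the splitter.
    return br * cr * bt

def real_world_throughput(ct: float, bt: float) -> float:
    # Real-world light: transmits through the combiner, then the splitter.
    return ct * bt

# "50/50" splitter with an 11%-reflective / 75%-transmissive combiner:
print(display_throughput(0.48, 0.11, 0.48))  # ~2.5% of the display light
print(real_world_throughput(0.75, 0.48))     # ~36% of the real world

# Pushed toward ~81% see-through (6%/90% splitter and combiner):
print(display_throughput(0.06, 0.06, 0.90))  # ~0.3% (3/1000ths)
print(real_world_throughput(0.90, 0.90))     # ~81%
```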

Why the ODG R-9 Is Only About 4% to 5% “See-Through”

OK, now back to the specific case of the ODG R-9. The ODG R-9 has an OLED microdisplay that most likely has about 250 nits (200 to 250 nits is commonly available today), and they need to get about 50 nits (roughly) to the eye from the display to have decent image brightness indoors in a dark room (or one where most of the real-world light is blocked). This means they need a total throughput of 50/250 = 20%. The best you can do with two passes through a beam splitter (see above) is about 23%. This forces the spherical combiner to be highly reflective with little transmission. You need something that reflects 20/23 = ~87% of the light and is only about 9% transmissive. The real-world light making it through to the eye is then about 9% x 48% (Ct x Bt), or about 4.3%.
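Working the R-9 numbers backwards, under the same assumptions (a 250-nit display, roughly 50 nits needed at the eye, a 48%/48% splitter, and a ~4% combiner loss), these are my own variable names for the article’s figures:

```python
display_nits = 250.0              # assumed OLED microdisplay output
target_nits = 50.0                # rough brightness needed at the eye
splitter_two_pass = 0.48 * 0.48   # best-case "50/50" splitter, ~23%

needed_total = target_nits / display_nits     # 20% total throughput needed
needed_cr = needed_total / splitter_two_pass  # combiner must reflect ~87%
ct = 1.0 - needed_cr - 0.04                   # leaves ~9% transmission
see_through = ct * 0.48                       # real world: Ct x Bt
print(needed_cr, see_through)                 # ~0.87, ~0.044
```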

There are some other effects, such as the amount of total magnification, and I don’t know exactly what their OLED display is outputting or the exact nits at the eyepiece, but I believe my numbers are in the ballpark. My camera estimates for the ODG R-9 came in at between 4% and 5%. When you are blocking about 95% of the real-world light, are you really much of a “see-through” display?

Note, all this is BEFORE you consider adding, say, optical shutters or something like Varilux® light blocking. Normally the birdbath design is used with non-see-through designs (where you don’t have the see-through losses) or, for see-through designs, with DLP® or LCOS devices illuminated with much higher nits (which can be in the tens of thousands) so they can afford the high losses of light.

Seeing Double

There are also issues with getting a double image off each face of the plate beam splitter, as well as other reflections. Depending on the quality of each face, a percentage of light is going to reflect or pass through where you don’t want it to. This light will be slightly displaced based on the thickness of the beam splitter. And because the light makes two passes, there are two opportunities to cause double images. Any light that is reasonably “in focus” is going to show up as a ghost/double image (for good or evil, your eye has a wide dynamic range and can see even faint ghost images). Below is a picture I took with my iPhone camera of a white and clear menu through the ODG R-9. I counted at least 4 ghost images (see colored arrows).

As a sort of reference, you can see the double image effect of the beamsplitter going in the opposite direction to the image light with my badge and the word “Media” and its ghost (in the red oval).

Alternative Birdbath Using Polarized Light (Google Glass)

Google Glass used a different variation of the birdbath design. They were willing to accept a much smaller field of view and thus could reasonably embed the optics in glass. It is interesting here to compare and contrast this design with the ODG one above.

First, they started with an LCOS microdisplay illuminated by LEDs, which can produce very much brighter and more collimated light, resulting in much higher (by orders of magnitude) starting nits than an OLED microdisplay can output. The LED light is passed through a polarizing beam splitter that will pass about 45% of the light (the P polarization) to the LCOS device (245). Note that a polarizing beam splitter passes one polarization and reflects the other, unlike the partially reflecting beam splitter in the ODG design above. The LCOS panel rotates the light that is to be seen to S polarization, which the beam splitter will then reflect at about 98% (with say a 2% loss).

The light then goes to a second polarizing beam splitter that also acts as the “combiner” through which the user sees the real world. This beam splitter is set up to pass about 90% of the S light and reflect about 98% of the P light (they are usually much better/more efficient in reflection). You should notice that there is a λ/4 (quarter wave) film between the beam splitter and the spherical mirror, which will rotate the polarization 90 degrees (turning it from S to P) after the light passes through it twice. This λ/4 “trick” is commonly used with polarized light. And since you don’t have to look through the mirror, it can be, say, 98% reflective, with say another 3% loss for the λ/4.
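Chaining the quoted percentages together shows why this polarized design is fairly efficient once the initial polarization loss is paid. Every number below is a rough figure from the text, and the variable names are my own:

```python
polarize       = 0.45  # unpolarized LED light surviving initial polarization
pbs1_reflect   = 0.98  # first PBS reflecting the S light from the LCOS panel
pbs2_pass_s    = 0.90  # second PBS (combiner) passing S toward the mirror
mirror         = 0.98  # spherical mirror reflectivity
quarter_wave   = 0.97  # ~3% loss in the lambda/4 film
pbs2_reflect_p = 0.98  # second PBS reflecting the now-P light to the eye

efficiency = (polarize * pbs1_reflect * pbs2_pass_s *
              mirror * quarter_wave * pbs2_reflect_p)
print(efficiency)  # ~37% of the raw LED light reaches the eye
```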

With this design, about 45% of the real world makes it through (one pass through the beam splitter), but only light polarized the “right way” makes it through, which makes looking at, say, LCD monitors problematic. By using the quarter wave film, the design is pretty efficient AFTER you lose about 55% of the LED light in polarizing it initially. There are also fewer reflection issues because all the films and optics are embedded in glass, so you don’t get the air-to-glass index mismatches off the two surfaces of a relatively thick plate that cause unwanted reflections/double images.

The Google Glass design has a lot of downsides too. There is nothing you can do to get the light throughput of the real world much above 45%, and there are always the problems of looking through a polarizer. But the biggest downside is that it cannot be scaled up for larger fields of view and/or more eye relief. As you scale this design up, the block of glass becomes large, heavy, and expensive, as well as very intrusive/distorting to look through.

Without getting too sidetracked, Lumus in effect takes the one thick beam splitter and piece-wise cuts it into multiple smaller beam splitters to make the glass thinner. But this also means you can’t use the spherical mirror of a birdbath design with it, so you require optics before the beam splitting, and the light losses of the piece-wise beam splitting are much larger than with a single beam splitter.

Larger Designs

An alternative design would mix the polarizing beam splitters of the Google Glass design above with the configuration of the ODG design above. And this has been done many times through the years with LCOS panels, which use polarized light (an example can be found in this 2003 paper). The spherical mirror/combiner will be a partial non-polarizing mirror so you can see through it, and a quarter waveplate is used between the spherical combiner and the polarizing beam splitter. You are then stuck with about 45% of the real-world light times the light throughput of the spherical combiner.

A DLP with a “birdbath” would typically use a non-polarizing beam splitter with a design similar to the ODG R-9, but replacing the OLED microdisplay with a DLP and its illumination. As an example, Magic Leap did this with a DLP, adding a variable focus lens to support focus planes.

BTW, by the time you polarized the light from an OLED or DLP microdisplay, there would not be much if any efficiency advantage to using polarizing beam splitters. Additionally, the light from an OLED is so diffuse (varied in angles) that it would likely not behave well going through the beam splitters.

IMMY – Eliminating the Beamsplitter

The biggest light efficiency killer in the birdbath design is the combined reflective/transmissive passes through the beam splitter. IMMY effectively replaces the beam splitter of the birdbath design with two small curved mirrors that correct for the image being reflected off-axis from the larger curved combiner. I have not yet seen how well this design works in practice, but at least the numbers would appear to work better. One can expect only a few percentage points of light to be lost off each of the two small mirrors, so maybe 95% of the light from the OLED display makes it to the large combiner. Then you have the combiner reflection percentage (Cr) multiplying by about 95% rather than the roughly 23% of the birdbath beam splitter.

The real-world light also benefits, as it only has to go through a single combiner transmissive loss (Ct) and no beam splitter (Bt) losses. Taking the ODG R-9 example above and assuming we started with a 250-nit OLED and want 50 nits at the eye, we could get there with about a 75% transmissive combiner. The numbers are at least starting to get into the ballpark where improvements in OLED microdisplays could fit, at least for indoor use (outdoor designs without sun shading/shutters need on the order of 3,000 to 4,000 nits).
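A rough comparison of the two approaches with the same 250-nit OLED can be sketched as follows (the ~95% combined mirror efficiency and the function names are my assumptions):

```python
def birdbath_display_throughput(cr: float, br: float = 0.48,
                                bt: float = 0.48) -> float:
    # Classic birdbath: two splitter passes plus the combiner reflection.
    return br * cr * bt

def immy_display_throughput(cr: float, mirror_eff: float = 0.95) -> float:
    # IMMY: two small mirrors at a few percent loss each replace the splitter.
    return mirror_eff * cr

cr = 0.21  # combiner reflectivity paired with a ~75% transmissive combiner
print(250 * birdbath_display_throughput(cr))  # ~12 nits at the eye
print(250 * immy_display_throughput(cr))      # ~50 nits at the eye
```

The same ~21%-reflective combiner that leaves a birdbath starved for light gets an IMMY-style design to the ~50-nit target, which is the whole point of dropping the beam splitter.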

It should be noted that IMMY says they also have “Variable transmission outer lens with segmented addressability” to support outdoor use and variable occlusion. Once again this is their claim, I have not yet tried it out in practice so I don’t know the issues/limitations. My use of IMMY here is to contrast it with the classical birdbath designs above.

A possible downside to the IMMY multi-mirror design is bulk/size, as seen below. Also notice the two adjustment wheels for each eye. One is for interpupillary distance, to make sure the optics line up centered with the pupils, which varies from person to person. The other knob is a diopter (focus) adjustment, which also suggests you can’t wear these over your normal glasses.

As I have said, I have not seen IMMY’s design in person to see how it works and what faults it might have (nothing is perfect), so this is in no way an endorsement of their design. The design is such a straightforward and seemingly obvious solution to the beam splitter loss problem that it makes me wonder why nobody has been using it earlier; usually in these cases, there is a big flaw that is not so obvious.

See-Through AR Is Tough, Particularly for OLED

As one person told me at CES, “Making a near-eye display see-through generally more than doubles the cost,” to which I would add, “it also has serious adverse effects on the image quality.”

The birdbath design wastes a lot of light, as does every other see-through design. Waveguide designs can be equally or more light wasteful than the birdbath. At least on paper, the IMMY design would appear to waste less than most others. But to make a device, say, 90% see-through, at best you start by throwing away over 90% of the image light/nits generated, and often more than 95%.

The most common solution today is to start with an LED-illuminated LCOS or DLP microdisplay so you have a lot of nits to throw at the problem and just accept the light waste. OLEDs are still orders of magnitude away in brightness/nits from being able to compete with LCOS and DLP by brute force.



  1. Except that eMagin has much brighter OLED microdisplays.

    June 02, 2016

    Immediately following the presentation of the paper, eMagin demonstrated for the first time in public a direct patterned OLED microdisplay that can reach a maximum luminance of 4,500 nits with vivid colors and in full video mode.

    Who do you suppose Zeiss might have been referring to? Sony?


    “The display’s shortcomings in bright light should be helped, if not entirely remedied, by much brighter OLED displays coming later this year. ”

    1. From what I could gather at CES, the eMagin “2,000 nit and 4,500 nit” displays are “lab prototypes” that don’t have a lifetime specification. The Sony 200 to 250 nit displays are what you can buy.

      1. I guess we shall see.

        Interesting that eMagin PPS & volume began spiking on Monday 3/6/17, just days after this article about IMMY on Saturday 3/4/17. Did someone get some scoop at GDC?

        GDC 2017: Eyes-On with IMMY’s 60-Degree AR + VR Optics

        IMMY – Dual XGA (1024×768) OLED’s

        eMagin 3Q 10Q :

        We are completing qualification of a new 0.48 inch diagonal full color XGA format microdisplay utilizing the same proven 9.6-micron color pixel used in its WUXGA and SXGA096 product lines. This new product is targeted at industrial and commercial markets looking for a cost effective medium resolution microdisplay. Deliveries are scheduled to begin in the first quarter of 2017.

        From Upload VR article :

        A consumer version is planned for the future, but IMMY sees their solution more for industrial enterprise uses in the near term with consumer applications coming shortly after that.

        1. I highly suspect IMMY is using the Sony OLED microdisplays, just like almost everyone else I have talked to using OLED microdisplays. The only product I know of with eMagin in it is the “Blaze” night-vision headsets, and they are made by a division of eMagin that also makes the IR sensor.

          Sony’s ECX331DB-6 has up to 500 nits, which is brighter than the 300 nits of Sony’s 1080p device (see the table at: Sony has had this device in production since at least 2011 (see: and it has been used in camera viewfinders, which gives them some volume.

          1. It could very well be that IMMY was doing a prototype with eMagin about a year ago and then things may have changed, or maybe Mr Naids put 2+2 together wrong. Note that IMMY specifically says they are making an XGA (1024×768) display and eMagin only lists having SVGA (800×600) and SXGA (1280×1024) but no XGA, whereas Sony has been making an XGA OLED microdisplay for about 6 years.

          2. Maybe I just can’t read black and white, but it sure looks to me like eMagin is delivering XGA in the first quarter of 2017.

            eMagin 3Q 10Q :

            We are completing qualification of a new 0.48 inch diagonal full color XGA format microdisplay utilizing the same proven 9.6-micron color pixel used in its WUXGA and SXGA096 product lines. This new product is targeted at industrial and commercial markets looking for a cost effective medium resolution microdisplay. Deliveries are scheduled to begin in the first quarter of 2017.

          3. Your reading is correct 🙂 (Oops, I missed that you had provided this before). They appear to be mapping to a slightly smaller pixel than Sony’s XGA (which has about a 9.9 micron versus eMagin’s 9.6 micron pixel pitch), but probably close enough to be somewhat interchangeable (although the electronics would be different). Based on what I have heard from multiple sources, it is hard to believe that eMagin is going to be cost and production competitive with Sony, which has been in volume production for 5+ years on the product. Don’t get me wrong, I would root for eMagin, being a small company, but I am hearing that the vast majority of companies working on near-eye OLEDs are going with Sony.

            Both Sony and eMagin have been hard to work with, but for different reasons. eMagin’s “rap” is that they are expensive, and Sony’s “rap” is that it is hard to get through to someone at the company who will work with you (give you specs and sell development kits).

    2. Regarding Zeiss comments:
      They are only talking (at least in the article) about a 17-degree FOV, so they don’t need many pixels, and with lower resolution they can go brighter. With the “Fresnel” optics they are obviously not making the passes through a beam splitter. But it looks like they will need the next generation of brightness just to get to a decent place at 17 degrees FOV. My guess is they are still not talking about outdoor use, where you want 3,000 to 4,000 nits; and if you want the display to be 90% transparent, then you need to start with 30,000 to 40,000 nits, or about 100X brighter than today’s OLED microdisplays, not just 2X or even 10X.

        1. From what I could find, those appeared to all be “camera feeds” and not through the optics. It is very hard to shoot through the lens with dynamic lighting conditions, as the eye deals with high dynamic range differently than a camera.

          The Elbit optical train (per the patent) looks somewhat similar to IMMY, only they combine reflective and refractive optics so they don’t lose as much light as with the “birdbath.” They appear to be assuming outdoor sunglasses that could be, say, 80% reflective / 20% transmissive. The upper end of what I am hearing OLEDs getting to in production is about 1,000 nits, which would deliver say about 150 nits to the eye (after all losses). At about 20% transmissive you can cut the ambient light to a range where it would be workable, at least if you are not looking in the “wrong” direction.

          Whether blocking, say, 80% of the ambient light is acceptable is application specific. Some applications might be able to tolerate it but not others. Some companies offer different combiners that reflect/transmit different amounts of light so that they can adapt for indoor/outdoor use.

          Starting with 30,000+ Nits is so you can work outdoors without requiring some pretty severe blocking of the ambient light.

          1. According to this article, sunglasses should be designed to block 75-90% of ambient light.

            “For comfortable vision on sunny days, sunglasses should block 75 to 90 percent of visible light.”


            I also found a video of Everysight Raptor with brief shot through the lens glimpse and have requested more from the company .


            Sorry if I’m not understanding what you are trying to say, but I see professional bikers, who need to have great visibility, using an AR device during all types of daylight conditions. They don’t seem to have any problems reading the display while at the same time riding their bikes in all types of traffic; however, according to your calculations the 1,000 nit (or possibly less) OLED microdisplay is inadequate and needs to be orders of magnitude brighter?

          2. I think it comes down to what will work in a specific application versus a general device. There are not, as far as I am aware, variable combiners, although some designs let you switch between combiners or use a Varilux-type shade, but those don’t change the combining ratio.

            The issues come down to how much you block the forward vision and whether the image will show up. It also makes a big difference if you are just data snacking and will accept that the image will disappear or be impossible to read sometimes. Companies that are building commercial display devices to sell for outdoor use typically want 3,000+ nits, and automotive HUD designs typically specify 15,000 nits, and this is after combiners that transmit more than 80% of the light. Most people don’t wear sunglasses all the time, and when they do, they put them on only when necessary.

            With the EverySight design, the combiner and sunglasses are one and the same; if you take off the sunglasses you lose the display. I would wonder how it would work in a wide variety of lighting conditions, from bright sunlight (into and away from the sun), to dim light, to night. With EverySight, they can assume a bicycle situation where you are looking more down than out, and you may accept that you can’t see the display against, say, the sky (something you can’t assume with, say, a car HUD).

            Just quickly running through some numbers. Below is a table of typical ambient lighting. If you had a “matte white” surface, the nits are given by dividing the lux numbers below by π. Most things out on the road are not matte white, so they would give off less light; their “albedo” or reflectance knocks this down. New concrete has an albedo of 55%, old asphalt 12%, and green grass about 25%. Also, with a bicycle you are typically looking more down than out (at the road ahead of you). But let’s take full sunlight: you start with about 100,000 lux, divide by π, and multiply by the albedo. Using new concrete this gives about 17,500 nits; with old asphalt you are at about 3,800 nits. In this situation you may want only 20% transmissive sunglasses to knock down the ambient light, but even then the new concrete is transmitting 3,500 nits, which is too bright to see against with less than 1,000 nits. But what happens if it gets cloudy? Then you might have only about 1,000 lux (1/100th the light of a sunny day), and the concrete is only giving off 175 nits and is the brightest thing around (everything else you want to see is darker); if you only transmit 20%, you will be able to see the concrete but not much else. In between, there are probably many “Goldilocks” conditions where it does work; you can turn your head to look at something darker to see the display.

            The point of all this is the huge dynamic range of light that you have to deal with and how this drives the need for very high nits in outdoor conditions.

            0 lux – 100 lux: Pitch black to dim interior lighting
            100 lux – 500 lux: Residential indoor lighting
            500 lux – 1,500 lux: Bright indoor lighting (kitchens, offices, stores)
            1,000 lux – 5,000 lux: Outdoor lighting in shade or an overcast sky
            3,000 lux – 10,000 lux: Shadow cast by a person in direct sunlight
            10,000 lux – 25,000 lux: Full daylight not in direct sunlight
            20,000 lux – 50,000 lux: Indoor sunlight falling on a desk near a window
            50,000 lux – 75,000 lux: Indoor direct sunlight through a window
            100,000 lux – 120,000 lux: Outdoor direct sunlight
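The lux-to-nits conversion used above (divide the illuminance by π for a diffuse “matte” surface, then scale by the albedo) sketches out as follows; the function name is my own:

```python
import math

def surface_nits(illuminance_lux: float, albedo: float) -> float:
    """Approximate luminance of a diffuse (Lambertian) surface."""
    return illuminance_lux / math.pi * albedo

print(surface_nits(100_000, 0.55))  # full sun on new concrete: ~17,500 nits
print(surface_nits(100_000, 0.12))  # full sun on old asphalt:  ~3,800 nits
print(surface_nits(1_000, 0.55))    # overcast on concrete:     ~175 nits
```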

          3. If you check out the various video feeds from the Everysight Twitter site, you will notice that different videos have symbology displayed in different colors. Perhaps the user can adjust the color scheme to provide the best contrast for the environment they currently find themselves in and overcome some of the problems you are talking about.

            As the technology progresses perhaps presets can be easily selected based on conditions similar to how preset equalizer selections are available in audio systems .

            I don’t fully understand all the technical aspects as I am not an optics expert but I do appreciate the complexity of problem and your patience in helping to explain things .

          4. Ah no. . . Mostly color does not work like that. Using an “unnatural color” might help it stand out a little bit, but the dominant factor is nits (which take into account the human perception of light, as they are derived from lumens). If you color balance for “white” with typical wavelengths, very roughly 65% of the lumens come from green, 30% from red, and only 5% from blue. There are a lot of variables that will affect these percentages, including the wavelengths and the “white” point, but green is about 2X red, and red is about 5X blue. Thus in a see-through display you pretty much can’t use blue by itself; if you want something blueish, you use cyan (blue plus green) to make it show. But note, colors come by SUBTRACTING from white.

            So what you see in HUDs and see-through displays is a lot of white, green, red, yellow (green plus red), orange (essentially darker yellow), and cyan (blue plus green). By convention, red usually means “bad” and green means “good,” and since you don’t know what it will be seen against, subtle differences in color don’t work. Making something “darker” really just makes it more transparent.

            Fundamentally you need the nits to make the image stand out on a bright background, and there is a huge dynamic range that a human encounters on a daily basis, from full sun to night/dark with dim lighting. If you go with external darkening (Varilux or electronic shutters), you darken the real world but don’t help the throughput from the display device, so you will need a LOT of darkening to see the display.

          5. So you’re telling me contrast plays no role ?

            Why then is black print used on white paper instead of just a higher brightness white print ?

          6. I guess I am going to have to type more s-l-o-w-l-y :-). It is all about contrast; contrast is the difference between the dark and the light. Sharpness can be defined as contrast at an edge.

            You can have, say, text on a background where both are the same brightness but differ in color, and while you will be able to make it out, it will be very hard to read. When you generate text colors, you don’t ever want to pick two colors with the same perceived brightness. Also, you have almost no resolution in blue (blue gets no respect; it has few lumens and you can’t resolve it, so there is little to no contrast in blue); I sometimes say blue just changes the color.

            With a see-through display your “black” is whatever is in the background as seen through the combiner. Black = clear. You can only add light. So let’s say your display has a black level of Db and a white level of Dw. If the display’s contrast is 100:1 (which is not great and about what Google Glass had), then Dw is 100X brighter than Db. Then you have the background/real world as seen through the combiner; call this Rb. When you look through the combiner, for white you see Dw+Rb and for black you have Db+Rb. So your net contrast is (Dw+Rb)/(Db+Rb).

            For practical purposes, except at night or in a dark room, Rb is much greater than Db, and the equation above effectively becomes Dw/Rb, or simply the display’s white level over the real world as seen through the combiner. As I wrote before, you have to factor in the illumination of the background (say, by the sun), the albedo (how much of the sunlight gets absorbed by what is in the real world), and how much of the real-world light you block in the combining optics.
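The net-contrast equation is simple to play with numerically; the nit values below are illustrative assumptions, not measurements:

```python
def see_through_contrast(dw: float, db: float, rb: float) -> float:
    """Net contrast of a see-through display: the display's white and
    black levels (dw, db) each add to the real-world background rb
    seen through the combiner."""
    return (dw + rb) / (db + rb)

dw = 50.0        # display white level, nits at the eye
db = dw / 100.0  # 100:1 display contrast -> 0.5 nit black level
print(see_through_contrast(dw, db, rb=5.0))    # dim room: 10:1
print(see_through_contrast(dw, db, rb=500.0))  # bright background: ~1.1:1
```

The same 50-nit display drops from a usable 10:1 down to nearly 1:1 as the background brightens, which is the dynamic-range problem in a nutshell.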

          7. Thanks for your patience; however, it appears the technical aspects are above my pay grade, so I will have to use practical evidence and common sense to make some determinations.

            1. Assuming ODG is using the Sony 500-nit 1080p OLED microdisplay, it appears to me that this through-the-lens video, shot outside but not in direct sunlight, appears to have excellent readability and see-through capability. I’m assuming the “see-through” capability you are talking about is in the areas without data displayed.


            2. eMagin CEO Andrew Sculley stated this in a PR referring to their 4,500-nit Ultra High Brightness Display:

            The new full color DPD display, designated OLED-ULT, provides brightness that meets or exceeds the requirements for the augmented reality (AR) and virtual reality (VR) markets.


            3. Kopin is supplying military avionics LCD displays at 17,000 nits. Bright enough for military avionics but not enough for a consumer?

            WESTBOROUGH, Mass.–(BUSINESS WIRE)– Kopin® Corporation (NASDAQ: KOPN), a leading developer of innovative wearable computing technologies and solutions today announced it has received a production order for high-brightness color SXGA displays (>5,000 fL or ~17,000 nits) in support of full-color helmet-mounted display (HMD) products for US Navy’s MH-60 helicopters.


            4. You stated “Starting with 30,000+ Nits is so you can work outdoors without requiring some pretty severe blocking of the ambient light.”

            However, on a sunny day you would likely want to block 75-90% of ambient light.

            “For comfortable vision on sunny days, sunglasses should block 75 to 90 percent of visible light.”


            Therefore the evidence points me to Andrew Sculley’s statement that 4,500 nits “meets or exceeds the requirements for the augmented reality (AR) and virtual reality (VR) markets.”

          8. 1. For any one case, they can “balance up” the ambient and display light. If they had taken that display into full sunlight, the display would be unreadable (there is a huge range in outdoor lighting, about 100 to 1 in daylight from shade to full sun, versus indoors where it typically varies about 5 to 1). They are also darkening the scene severely, which the camera is correcting. You will notice that the people walking by are NOT wearing sunglasses. You need to be able to adjust the brightness by about 100x to work outdoors with dynamically changing lighting conditions.

            2. My understanding, having asked a number of people, is that this is an “experimental/lab” device and not one available for sale.

            3. Starting with 17,000 nits is just getting into the ballpark of what is needed, but I don’t know all the conditions and requirements. Kopin has said that see-through displays in daylight need more than 3,000 nits. Starting with 17,000 nits suggests that they could be about 80% see-through with a combiner and some reasonable other losses. At the same time, this supports my contention that you need more than just 1,000 or even 4,500 nits to do see-through outdoors.

            4. Andrew Sculley’s remark has no context, and 4,500 nits is certainly not enough for see-through outdoor use in bright sunlight. Blocking 75% to 90% of the light means that everything in shadows will be black, not something you would necessarily want. You would have to flip up/take off the glasses, and lose the display, as light conditions vary or to see things in shadows. The 3,000+ nits figure is something that multiple companies have said is required.
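            To make the arithmetic in points 3 and 4 concrete, here is a rough budget sketch. The 75% “other losses” figure, and the assumption that the combiner reflects roughly whatever it does not transmit, are illustrative assumptions of mine, not measured values:

```python
def display_nits_at_eye(panel_nits, combiner_transmission, other_losses=0.75):
    """Display light reaching the eye: the combiner reflects roughly what
    it does not transmit, times other optical losses (both assumed)."""
    reflectivity = 1.0 - combiner_transmission
    return panel_nits * reflectivity * other_losses

# A 17,000-nit panel behind an 80% see-through combiner:
print(display_nits_at_eye(17_000, 0.80))   # 2550.0 nits, near the 3,000-nit figure
# A 4,500-nit panel in the same optics falls far short:
print(display_nits_at_eye(4_500, 0.80))    # 675.0 nits
```

            Under these assumptions, a 4,500-nit panel would force you to either darken the real world or accept a washed-out image outdoors.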

  2. There have been a number of off-axis designs, particularly in aeronautic/military helmets. They usually have long lens trains to correct for off-axis mirror aberrations. For some reason, I don’t think I’ve ever seen a convex-concave ellipsoid mirror pair.

    I was thinking about OLED microdisplays and retinal condition display, wondering if by placing one of those fibre-optic faceplates on an OLED microdisplay, with lenses on each of the fibre ends, one could get collimated output. Thence a convex-concave ellipsoid magnifying pair is placed in train to give the angle difference to each pixel output and thereby achieve the retinal condition of an infinite-focus display? (or at the very least a very slowly increasing circle of confusion for contra-display focus?)

    1. Off-axis certainly has been done. It is tricky to keep in check the distortion/uniform-magnification, focus, and other image quality issues. The IMMY design is interesting in that it looks to be very simple (if maybe a little bulky); I will be curious to see the resultant image quality.

      I was talking with someone the other day about something similar: trying to increase the nits with OLEDs. With just a simple lens over the whole image, the nits go up and down in proportion to the net area. As you say, what you need is a microlens over each pixel, something they have been doing for a number of years with cameras. These lenses have to be mated right up against the pixel or else you don’t gain anything; you can’t, say, add them to the outside of the packaged device as you suggested, as too much of the light has already spread out/escaped to do much good.

      There is a little thing (really a big thing) called étendue that basically says light rays can only become more random (similar to 2nd law of thermodynamics) that I think applies to your trying to get infinite focus from an OLED. The way to get infinite focus is to start with a very high f-number light source such as a laser. You can use lasers to illuminate an LCOS or DLP device and get near infinite focus. But you can’t start with a diffuse light source and put the étendue genie back in the bottle.

  3. Karl, good day.

    Did you know this news? Augmented reality helmet maker Daqri is cutting about 25% of its nearly 330-person-strong workforce worldwide, or about 80 people, multiple people have told Business Insider, and shutting down its industrial helmet project.

    Microsoft has paused (or shut down?) the HoloLens project. Why?

    A photo of a new Magic Leap prototype (CPU engine: a Tegra X1 developer board). It is far away from mass production.

    1. Thanks for the links. I have been saying for years that AR/MR is being over-hyped. The problem with “re-targeting for industrial business” is that it is a tiny market. The factory worker and the UPS driver are not going to use Hololens; it is too big and bulky, does a bunch of things they don’t need, and is even dangerous to use in most environments. There is some market for simple informational displays for workers that need to keep their hands free, but most of these need something very simple, light, and not very expensive. No matter how you slice it, it does not add up to a market in dollars that matches the hype. The “industrial/business” market is where, like Google Glass before it, Hololens goes to hide while they struggle to find a use case.

      Hololens has a hard time getting past the core geek market. I saw a lot of Hololens in booths at CES, and their use was DUMB. It was a pain to wait to see a presentation that you could have seen on a computer monitor. I remember at one booth they had a series of “screens” you looked at to see a presentation; my thought was that they needed a lot of personnel to hand out and manage the $3,000 headsets to simulate a $200 LCD monitor that would look better. The business case for Hololens is maybe for a few designers, and in the game world I think the idea of Mixed Reality is very dubious; homes are so different and “cluttered” that I don’t see how developers will generate a lot of content. And the industrial case for Hololens is almost non-existent; people are not going to wear a big, bulky helmet that blocks so much of their vision in most industrial environments; it is dangerous. I’m waiting to see a Hololens “use case” beyond NASA and some very contrived demos.

      Magic Leap has claimed that the Business Insider picture is of a data collection system and not the PEQ (product equivalent). I don’t have enough information to know if this is true or not. I do think this has been so over-hyped that it will not live up to expectations. I would not be at all surprised if they implode and become a zombie company (a company with a failed product but with a lot of investor money in the bank). I think they are a company that did some early demos that impressed the big money people. Maybe at the beginning they thought they could greatly improve the technology and reduce the cost, and they got showered with money. They have then been trying to use that money to buy the technology they need; the problem for them is that what they promised could not be bought; sometimes the problem is bigger than all the smart people and money can solve. Even if it works and does everything they hope, I am dubious about the use model resulting in that big a market; this could be a repeat of 3-D TV at best.

      VR is a bit of a different animal, as they can make it cheap with flat panel displays and it mostly works for the game market; even VR has issues as to how much it can grow; can they get beyond the core game market?

    1. Thanks,

      The Birdbath design keeps coming up again and again, but normally with either LCOS or DLP, and only with OLED for non-see-through use because of the light loss. It was curious that ODG used it for a see-through application with OLED. IMMY’s approach is interesting in that it starts out 4X more efficient without the double pass through the beamsplitter. This is still not really enough IMO for outdoor use with commonly available (non-R&D-prototype) OLED microdisplays, but it is a huge improvement. IMMY does use “shuttering” of the external light, which, while not totally wrong, will have to block too much of the outdoor light.
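      The “4X” comes from the beam splitter being traversed twice in a birdbath. A toy multiply-out, assuming an idealized 50/50 splitter and lossless mirrors (real coatings are worse):

```python
BEAM_SPLITTER = 0.5        # idealized 50/50 beam splitter

# Birdbath: image light passes the splitter once toward the combiner
# and once again on the way back to the eye.
birdbath_throughput = BEAM_SPLITTER * BEAM_SPLITTER   # 0.25
# IMMY-style mirror train: no beam-splitter pass for the image light.
mirror_throughput = 1.0

print(birdbath_throughput)                      # 0.25
print(mirror_throughput / birdbath_throughput)  # 4.0
```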

      An interesting thing with IMMY is that their shutter is selective. It can’t be “pixel accurate,” as it will be out of focus, but is more like the local dimming used on the better (“direct LED” rather than side-illuminated) LCD TVs.

      1. The selective shuttering is done with liquid crystal, I assume? Does this also have issues with acting as a polarizing filter on external light sources (like LCD monitors)?

        1. That’s a very good question. I don’t know what IMMY is using for the local dimming.

          Using a polarizing LC would be the obvious way to do it. The downsides would include losing over 50% of the real-world light all the time, plus the issues with polarization of everything in the real world. There are LC modes/types that don’t depend on/work by polarization, and then there is electrochromic glass, most famously used in the new Boeing 787. All the non-polarizing LCs and the e-c glass switch much more slowly than the polarizing LCs used for making displays (one reason they are not used). If what IMMY uses for light blocking is non-polarizing, then it likely could not keep up with changing motion on the main display.

    1. I could not find a link to “Augmented Reality’s impact by Gaia Dempsey, cofounder of DAQRI” by Scoble, just a 2015 talk by her with a similar title.

      Daqri just went through a pretty significant layoff, suggesting they are not fulfilling their vision. They seem to have big expensive helmets that block a lot of the forward vision. This would seem to leave them in a very small corner of the market.

      1. I tried their helmet at the MAVRIC conference in Cork last Tuesday; it doesn’t really block out the forward vision or the extreme peripheries. Ambient light is a bit smeared at the middle of the sides because of the clear waveguide protectors, but overall pretty good.

        Their tracking also seemed very solid; the demo was in the main conference room with only a few chairs as unique objects, mostly carpet (lack of keypoints), and people moving around (occlusion and non-stationary objects).

        1. Everything is relative. I have not tried one, but from the pictures they darken the forward vision quite a bit, and both the helmet and the “glasses” appear to isolate the viewer. This may be appropriate for some tasks, but for many others they would seem to be dangerous. These are not the kind of things you are going to have average people in most jobs wear on a regular basis.

          Daqri’s price point is pretty high (reportedly $5,000 to $15,000 per unit).

          Compare them with say Real Wear and Vuzix which are focused on keeping the forward vision free. To be fair, these products are more aimed at simple AR information than immersion.

  4. Karl,

    The effect of external backlighting when comparing LCD/LCOS vs. OLED “see-through” capabilities is not considered in your analysis.

    Because OLED only emits light from pixels that are turned on, only a relatively small amount of the surface area is “obstructed” by the displayed light. I would guess when symbology is displayed only perhaps 5-10% of the surface area is “obstructed,” compared to LCOS and LCD, which will always have some light cast upon 100% of the surface area. Let’s call this “backlight haze.”

    A discussion by Uwe Vogel, Ph.D., Fraunhofer, can be seen in this video beginning @ 14:20:

    Obviously “backlight haze” would be something that proponents of LCD/LCOS would want to ignore or downplay, and it is likely harder to quantify.

    I’m no expert, but wouldn’t a “brute force” brightness approach exacerbate “backlight haze”?

    Secondly, wouldn’t the “backlight haze” create additional problems when used with waveguides, where light is already difficult to control?

    I believe the problems associated with “backlight haze” are one of the reasons for the growing trend of companies moving toward OLED microdisplays, including:

    ODG moving from LCOS to OLED micro display
    Moverio BT-300 Si OLED microdisplay
    IMMY XGA OLED microdisplay
    Wave Optics OLED micro display
    Zeiss Smart Optic OLED 800x600 microdisplay
    eSight XGA OLED microdisplay

    Everysight – (owned by Elbit and likely based off the Skylens – OLED micro display)

    Penny OLED microdisplay

    Hololens V3 – expected 2019 – unknown final display type

    Magic Leap – unknown final display type

    Apple – rumored to be based off the Zeiss Smart Optics & OLED micro display

    Sony – SED-E1 SmartEyeglass monochrome OLED micro display- future products likely OLED as they manufacture them .

    Samsung – produced OLED micro display prototype

    1. There is definitely a trend toward OLED, and background “haze” is an issue. Certainly for “see-through” displays most of the pixels need to be off or near-off for you to see through, so you can leverage the peak brightness of OLEDs.

      But you are still talking almost two orders of magnitude in terms of cd/m² (nits) between the OLEDs and DLP and LCOS. Thus when ODG goes to OLED they are only about 5% transmissive, because they have to block a lot of light so you can even see the display. Micro OLEDs are over an order of magnitude too dim for making really “see-through” displays with, say, 80+% transmission of the real world. The IMMY design seems to do about a 4X better job of closing the gap than the birdbath, but it is still going to be on the dim side of things for a see-through display, and it incorporates a variable shutter.
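      A sketch of why a dim display forces low transmissivity: the display light at the eye has to stay comfortably brighter than the ambient light leaking through. The 2:1 target and the nit values are hypothetical, just to show the shape of the trade-off:

```python
def max_transmission(display_at_eye_nits, ambient_nits, contrast_target=2.0):
    """Largest real-world transmission that keeps the display
    `contrast_target` times brighter than the background seen through it."""
    return display_at_eye_nits / (contrast_target * ambient_nits)

# ~100 nits of display light at the eye vs. a 1,000-nit outdoor scene:
print(max_transmission(100, 1000))    # 0.05 -> block ~95% of the real world
# The same display indoors against a 50-nit scene:
print(max_transmission(100, 50))      # 1.0 -> no blocking needed
```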

  5. Hi Karl,
    Great blog. I regularly stop by.
    You haven’t mentioned the possibility of reflecting selective RGB wavelengths on the beam-splitter and combiner surface (ideally those wavelengths matched to the image light source). There will still be losses, but see-through could be improved (even with some “missing” RGB bands) and image light losses would be reduced. What do you think?

    1. A very good point. I certainly know about triple bandpass/notch filters in combiners and even mentioned them in the Navdy patent application. I have never used one in practice but have looked at using them several different times, and I thought about including it in the article on combiners but must have forgotten to include it. It is an error of omission rather than commission, as I am focused on the issues at hand when I write an article. Perhaps the most widely used triple notch filter is Dolby’s 3-D, where they use a different set of primary colors per eye. If you have a headset you would not need to use different primaries for each eye, but for efficiency you would want narrow bandwidth light sources.

      I’m pretty sure that at least some military HUDs use green notch filters with green-only HUDs, based on some pictures I saw when designing Navdy. The reason I say this is that I could see a color shift in the part of the real world seen through the HUD in the pictures. The issue with triple notch filter mirrors/combiners is that while they reflect the colors you want, they also notch them out of the real world. To get a very tight notch, and thus have less effect on the real world, you need a lot of coatings, and you need to do it for 3 colors, which may get expensive. If you don’t have tight notches then you will see a lot of real-world color shift.

      But as you say, in theory with a triple notch filter you could get very good light throughput for both the image and the real world. Triple notch filters are no big secret to people that know optics, so I would guess there are practical reasons, either cost and/or color shift, why they are not used. There is even a company that specifically makes them for HUDs, and other companies make triple notch filters as well.

    1. I would think it might be possible, but I would think they would have very diffuse light, which is a problem. Maybe they could put microlenses over each pixel. I don’t “get” MicroLEDs, at least not for quite a long time. I know Apple has bought into them so they are interested, but Apple is “interested” in a lot of technologies.

      MicroLED pixels are what I call a “tween-er” size. They are smaller than necessary for direct viewing and way too big for see-through displays. Watch out for the quoted pixel pitches, as they often quote them for a single-color (a la green-only) display. The full color (RGB) ones have a pixel pitch on the order of 40 microns. Direct view displays such as an iPhone are on the order of 78 microns. A typical microdisplay used in AR/MR has pixels between 7 and 10 microns. You need different semiconductor materials to make red versus blue and green, which makes shrinking the pixels difficult. I also would think the cost per pixel would be very large compared to the other technologies.

      The rumor is that Apple will try them in the iWatch. Another place they might fit with their larger pixels is in VR headsets if they can get the resolution up without being too expensive. Using flat panel OLEDs in headsets starts with pixels that are really too big but they are comparatively cheap relative to microdisplays grown on a semiconductor substrate; the result is that you get wide FOV but lousy angular resolution.

      There is a “display chasm” between flat panel pixels and semiconductor substrate pixels due to the size of the transistors for controlling the pixels. I’m not sure if MicroLED is the worst of both sides of the chasm, with pixels that are too big or cost that is too high.

  6. In a previous post (sorry I couldn’t reply there so I replied here) you wrote that for a near-eye outdoor display or “HUD” for say a motorcycle you want 3,000 to 4,000 nits whereas for a CAR HUD you may want 15,000 or more nits. Why is this difference? Is 3,000 to 4,000 for a see through display?

    1. I think the primary reason for the difference is that you can’t turn the car or move it up or down to avoid bright light. In the case of a helmet/glasses, you can look down or to the side, and you won’t deliberately look straight out into a bright sky. But in a car you have no choice, as the display is bolted to the car and the only way to move it is to change the direction of the car.

      1. In a car the existing HUDs project the information so it appears a few degrees below the driver’s forward vision and it is usually visible against the dark background of the road. I would assume that it would be the opposite for a see through HMD. For example google glass, where the display was slightly above the right eye, the information would appear against the horizon (if outside) and for that reason it would need to be brighter. Of course you can turn your head down as you mentioned but those devices were meant to be used with the head-up.

        Another question I have about see through HMDs, many claim that they can project the image far away, 10m if not infinity. Assuming you are using a tiny display producing 10,000nits, in order for the virtual image to be readable at a large distance it needs to be magnified. Given that the micro displays are tiny, let’s assume they are 1cmx3cm (a typical LCOS), and a virtual image at 10m would need to be 100cmx300cm, that’s a 100x magnification. The virtual image has an area that is 10000 larger than the real image. When you magnify an image you also reduce its intensity quadratically. So the magnified virtual image would have an intensity of 1nits. How is that possible, I must be assuming something wrong or is there some trick the HMDs use?

        1. You started with the false premise that Google Glass (GG) did anything right :-). With all the money they put into GG, it was almost comical how bad it was and how many people thought it would succeed. I think this is also a lesson for today’s AR/MR market; the general public, and worse, most of the people writing about AR/MR, don’t really understand the issues and problems involved.

          GG made MANY mistakes, and one of them was placing the image above the eye, which is ergonomically TERRIBLE. It is stressful to look up with just your eyes. Think about it: do you hold a book over your head to read it? Your eye muscles are more relaxed when you look down. Generally when you look up you will also turn your head up, which is also not as comfortable (I guess primordially, we were more worried about snakes and tigers than birds attacking us). Most other near-eye display devices put the image either straight out or looking down. The one thing GG did was flush out of the labs a bunch of other half-baked developments and cause a business feeding frenzy in AR/MR.

          It is not so much “Heads Up” as not having to look so far down and away from what is going on. In a car, you really don’t want the imagery up in the critical vision of the road, which is a very narrow band. You don’t want, say, the HUD map to obscure the person you might hit. Frankly, you want the “HUD” image to be seen against the hood of the car. To me, the more important function of the HUD is to put the image into your far vision rather than to be “see-through.” If you are looking at the road through the HUD, then it is blocking things you need to see when driving. It takes some time to change the focus of your eyes, about 1/3 of a second for a young person, and it generally gets slower as you age and as your range of focus reduces.

          You are a bit mixed up on the issues of magnification, apparent focus, and nits; it gets complicated, so let’s see if I can help. What affects the nits is how much the image is magnified, not where it appears to be focused.

          It is a “marketing simplification” (some would say hype) to talk in terms of an X-diagonal image seen from Y feet away. When you are talking “virtual” images, it is best to think in terms of angle of view. For reference, 300 pixels per inch at reading distance (about 12 inches) is about 1 arcminute per pixel, which is close to the limit of a person’s center (fovea) vision; if you make the image and pixels 10X bigger and view them from 10X farther away, the angular resolution is the same. For a near-to-eye display, the optics don’t so much magnify the image as change the focus, as the display is very near your eye and your eye can’t focus that close. Think about it: if you put a 1/4″ square right in front of your eye it will cover your entire vision and appear HUGE and block out a mountain in the distance. Also, without going into all the complications of the optics, when you magnify something you also move its apparent focus.
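          The 1-arcminute rule of thumb above is easy to check numerically (assuming 300 pixels per inch viewed from 12 inches):

```python
import math

pitch_in = 1 / 300          # pixel pitch of a 300-ppi display, in inches
viewing_in = 12             # reading distance, in inches

arcmin = math.degrees(math.atan(pitch_in / viewing_in)) * 60
print(round(arcmin, 2))     # ~0.95 arcminute per pixel

# Making the image and pixels 10x bigger and viewing from 10x farther
# away leaves the angular resolution unchanged:
arcmin_10x = math.degrees(math.atan((10 * pitch_in) / (10 * viewing_in))) * 60
print(round(arcmin_10x, 2)) # same value
```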

          As for “nits,” I like to use the analogy of a hose with water. Assuming the same water per second comes out of the hose, you could have a wide spray with little pressure or a narrow jet with high pressure. For a HUD you get high nits by narrowing the light to a tight “jet.” Nits (candelas per meter squared) are a measure of light per solid angle. If you looked directly into a projector you would see a very bright image in the projection lens (too bright to look at, as the “nits” are so high). When the projected image hits a screen, the light is diffused to create a “real” image at the screen; the light is scattered and the nits are reduced by being spread out over an angle. Back to the water analogy: imagine putting a finger in the way to spray the water. With a HUD, you end up trading the angle over which you can see the image, “the eyebox,” for the high nits. With a HUD you only worry about the driver seeing the image, not the passengers (the eyebox is only big enough for the driver to see it). The bigger the eyebox, the lower the nits for the same starting illumination.
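          The hose analogy in rough numbers: for a fixed amount of light, nits scale inversely with the area and solid angle (eyebox) the light is spread over. All values below are made up for illustration, and losses are ignored:

```python
def luminance_nits(flux_lumens, image_area_m2, solid_angle_sr):
    """Luminance (nits) = flux / (area * solid angle), losses ignored."""
    return flux_lumens / (image_area_m2 * solid_angle_sr)

flux = 10.0                 # lumens from the projector (hypothetical)
area = 0.008                # ~5-inch-diagonal physical image, in m^2

print(luminance_nits(flux, area, 0.5))    # wide eyebox: 2,500 nits
print(luminance_nits(flux, area, 0.05))   # 10x tighter eyebox: 25,000 nits
```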

          So to summarize, what counts in terms of nits in a HUD is how big the image is at the point where it physically (not virtually) exists, and how big the eyebox is. For example, with a typical combiner HUD (such as Navdy) the physical image is about 5 inches diagonally, about the size of an iPhone-X Plus (a little smaller than the combiner/lens so you can move your head).

          1. Thank you very much for taking the time to write such a detailed explanation. I think I begin to understand a little better now. So it is not the size of the virtual image that determines the brightness; it is the size of the “real” image on the combiner.

            So for the case of Navdy the physical image on the combiner is 5 inches diagonally and the virtual image is 6 ft beyond that. If you wanted to move the apparent focus to 30ft you would have to magnify the virtual image 5 times (otherwise it would be too small to read). Would the physical image enlarge 5 times as well, would it remain the same size or magnify by a smaller amount ?

          2. More or less; I am an electrical engineer by training, so my optical descriptions are “casual.” On Navdy the combiner is a mirror, and the image in it is “virtual.” The apparent size of the image is a function of how far away you are from the combiner, so the 5″ is only approximate; it is roughly what you would see if you had a ruler taped to the combiner (something I did at times). If you, say, put a camera on the image, you would find it focuses about 6 ft or so behind the combiner (something else I did, with a DSLR and a test chart behind it). Like the magnification, the virtual focus point varies with the distance from your eye to the combiner.

            With a spherical combiner you have 3 variables: the curvature of the mirror, the distance from the real/projected screen image to the combiner, and the distance of your eye from the combiner. The focal point of a curved combiner is at 1/2 the radius of curvature. On Navdy, the screen is between the focus point and the combiner. As you move the screen toward the focus point of the combiner, the magnification and apparent focus point go to infinity (it gets very unstable; small imperfections get greatly magnified), so it gets VERY non-linear as you get near the focus point. The magnification and focus are related, but not linearly, and are a function of how far your eye is from the combiner.
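            The blow-up near the focus point can be seen with the standard mirror equation (1/f = 1/d_o + 1/d_i, with f = R/2); the distances below are in arbitrary units, chosen only to illustrate the non-linearity:

```python
def virtual_image(radius_of_curvature, d_object):
    """Virtual image distance and magnification for a concave mirror
    with the object inside the focal length (the Navdy-style setup)."""
    f = radius_of_curvature / 2.0
    d_image = 1.0 / (1.0 / f - 1.0 / d_object)   # negative: virtual image
    magnification = -d_image / d_object
    return d_image, magnification

R = 200.0                                # focal length f = 100
for d in (50, 80, 95, 99):               # screen approaching the focus point
    di, m = virtual_image(R, d)
    print(d, round(di, 1), round(m, 1))  # image distance and mag diverge
```

            Moving the screen from half the focal length to 99% of it takes the magnification from 2x to 100x, which is why small imperfections get so magnified near the focus.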

          3. Once again thank you for the response. I really appreciate it.

            I have a basic understanding of the physics for optics, and what you described makes sense. So I guess one advantage of the near-eye displays is that the distance of the combiner to the eye is fixed, and because this is a short distance (1cm-5cm), the “real” image on the combiner doesn’t have to be large, which means you don’t have to use a lot of power to produce it. It is one thing producing a tiny image (1cm x 2cm) with 10,000 nits and another thing producing a larger image (10cm x 20cm) with 10,000 nits. The larger one would require 100 times more power.

            So you are not an optical engineer? What part of Navdy did you design, the optical or the electronic?

          4. The big advantage of near eye in terms of “nits” is you can have a very small pupil/eyebox and thus concentrate the light.

            A big issue right now is that a lot of companies would like to use OLED microdisplays, but they have relatively low nits. The brightest that are available for sale, and not just prototypes, only have about 1,000 nits. 1,000 nits would be way too much if shot directly into the eye, but if you are doing a see-through display (which should be at least 85% transparent) you are going to lose/waste all but about 5% to 10% of the display’s light. OLEDs put out very diffuse light and you can’t really get it back together again. LCOS and DLP start with collimated light, and the light rays come off the display highly collimated (not spreading) and thus with extremely high nits (easy to get 30,000+).

            I primarily worked on the optics and the overall design. We had another person that had a lot of experience in board design and firmware so I could concentrate on the optics. I’m not an optical engineer but I have a lot of practical experience. I looked at what some others had done and got help where I needed it. I ran a lot of experiments and took measurements rather than doing optical simulations (frankly it was cheaper and faster). From my years in working with displays, I knew what I was looking for in the way of optical components (I needed a few tricky components) and I had connections with makers of small projectors. Late in the program we hired an optics person.

  7. In regard to near-eye birdbath displays, why can’t you have a single combiner at a 45-degree angle (or something similar) and get rid of the beam splitter, which is the main source of light loss? For example, in the IMMY figure, eliminate the intermediate mirrors and have the display reflect directly off a curved combiner?

    1. There are various trade-offs. You can use a simple combiner IF you do all the optical work to adjust the focus point (virtual distance) of the image first. For example, Osterhaut Design Group (ODG) did this with their R7. It gets tough to give a wide FOV this way, and there are issues with the “pupil” (how much you can move the eye up/down/left/right and still see the image). Optically the birdbath is simple, supports a wide FOV, and is comparatively cheap; thus you see it used a lot. What IMMY is doing would seem obvious, but it is tricky when you go off-axis; if it weren’t tricky, everyone would be doing it (I don’t know what limitations it might have).

      Usually when you go off-axis you introduce distortion and/or focus issues. What IMMY does is have a series of curved mirrors to correct this distortion. The nice thing about mirrors is that they don’t introduce chromatic aberrations (prism-like color breakup), but they have their limitations. IMMY’s design looks to be bigger than, say, a birdbath design would be. We will have to see how well IMMY’s works.

  8. KarlG, I am not that technical when it comes to optics (Software Engineer by trade, so bear with me), and I work for a company that is heavily interested in looking at / using different AR products (we have several internal use cases that are promising). We have been evaluating several different products and have been testing them in-house, like the DAQRI helmet, HoloLens, and ODG glasses. Each (as you noted in your article and comments above) has different pros and cons; yours focuses more on the optics side though.

    One of the articles posted in December regarding a patent by Microsoft is about combining a light field and a waveguide to try to improve on the current HoloLens optic design. Can you comment on that and what you think? Do you think it will help at all or improve the current downsides of waveguide-only?

    1. The December article is hyping a patent application that is little more than a wish/concept. You can file for a patent on things that are totally impractical or will never work. The patent application is so thin on details that it is barely a concept. Trying to get two displays in series to work has MASSIVE problems, both in terms of light throughput and image quality. I would give them a zero percent chance of getting it to work within 10 years and less than a 0.1% chance within 20 (the life of the possible patent).

      Frankly, every headset today has major drawbacks. That is not to say that there are not some good niche markets. But nothing that is see-through is going to have great image quality. People’s expectations are way out of sync with what is possible to make, and they have been over-hyped in the press. They have visions of something they have seen in a movie that was done in post-production with special effects and think that it is possible to actually build. They have visions of something thin with perfect image quality and a wide field of view, small and light like a pair of sunglasses, and what they get is a DAQRI at $5,000+ or a HoloLens at $3,000.

      I tell people you need to really understand the market you are going after and what the customer will accept. There is a danger in projecting things as moving as fast as they did in semiconductors back in the 1980s and ’90s. Display devices are on a much slower “learning curve.”

  9. I know it’s been a while since this thread was active, but all I can add is that I have worked with the said device – and it works well – indoors and outdoors. I can’t go into specifics in terms of problems and solutions. I can say that it works really well as it is, in version 1. Improvements on the imager side, and changes to the imager further down the line, will definitely deliver a product that is better than current ones in the market.
    I have worn the usual contenders available in the market; most have FOV issues, vignetting issues, or a combination of the two. The said device has few of either issue, even in the prototype version.
    Karl, I think you should go to the upcoming CES to have a look and talk with them, as I indicated in an email sent a few days ago.
