AR/MR Optics for Combining Light for a See-Through Display (Part 1)

In general, people find the combining of an image with the real world somewhat magical; we see this with heads-up displays (HUDs) as well as Augmented/Mixed Reality (AR/MR) headsets. Unlike the Star Wars R2-D2 projection into thin air, which was pure movie magic (i.e., fake/impossible), light rays need something to bounce off to redirect them from the image source into a person's eye. We call this optical device that combines the computer image with the real world a "combiner."

In effect, a combiner works like a partial mirror. It reflects or redirects the display light to the eye while letting light through from the real world. This is not, repeat not, a hologram, despite it being mistakenly called one by several companies today. Over 99% of what people think of or call "holograms" today are not holograms at all, but rather simple optical combining (also known as the Pepper's Ghost effect).

I’m only going to cover a few of the more popular/newer/more interesting combiner examples. For a more complete and more technical survey, I would highly recommend a presentation by Kessler Optics. My goal here is not to make anyone an optics expert but rather to give insight into what companies are doing and why.

With headsets, the display device(s) is too near for the human eye to focus, and there are other issues, such as making a big enough “pupil/eyebox” so that the alignment of the display to the eye is not overly critical. With one exception (the Meta 2), there are separate optics that move the apparent focus point out (usually they try to put it in a person’s “far” vision, as this is more comfortable when mixing with the real world). In the case of Magic Leap, they appear to be taking the focus issue to a new level with “light fields,” which I plan to discuss in the next article.

With combiners, there is both the effect you want, i.e., redirecting the computer image into the person’s eye, and the potentially undesirable effects the combiner will cause in seeing through it to the real world. A partial list of the issues includes:

  1. Dimming
  2. Distortion
  3. Double/ghost images
  4. Diffraction effects of color separation and blurring
  5. Seeing the edge of the combiner

In addition to the optical issues, the combiner adds weight, cost, and size. Then there are aesthetic issues, particularly how they make the user’s eyes look and whether they affect how others see the user’s eyes; humans are very sensitive to how other people’s eyes look (see the Epson BT-300 below as an example).

FOV and Combiner Size

There is a lot of desire to support a wide Field Of View (FOV), and for combiners, a wide FOV means the combiner has to be big. The wider the FOV and the farther the combiner is from the eye, the bigger the combiner has to get (there is no way around this fact; it is a matter of physics). One way companies “cheat” is to not support a person wearing their glasses at all (as Google Glass did).

The simple (not taking everything into effect) equation (in Excel) to compute the minimum width of a combiner is =2*TAN(RADIANS(A1/2))*B1, where A1 is the FOV in degrees and B1 is the distance from the eye to the farthest part of the combiner. Glasses are typically about 0.6 to 0.8 inches from the eye, and allowing for the size of the glasses and their frames, you want about 1.2 inches or more of eye relief. For a 40-degree-wide FOV at 1.2 inches, this translates to 0.9″; at 60 degrees, 1.4″; and at 100 degrees, 2.9″, which starts becoming impractical (typical lenses on glasses are about 2″ wide).

For very wide FOV displays (over 100 degrees), the combiner has to be so near your eye that supporting glasses becomes impossible. The formula above will let you try your own assumptions.
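If you want to play with the same calculation outside of Excel, here is a minimal Python equivalent (my own sketch, using the same simplified geometry as the formula above):

    import math

    def min_combiner_width(fov_deg, eye_to_combiner):
        # Width subtended by the FOV at the combiner's distance from the eye:
        # 2 * tan(FOV/2) * distance, the same simplification as the Excel formula.
        return 2 * math.tan(math.radians(fov_deg / 2)) * eye_to_combiner

    for fov in (40, 60, 100):
        # Distances in inches; 1.2" is the eye relief used in the examples above.
        print(fov, round(min_combiner_width(fov, 1.2), 1))  # 0.9, 1.4, 2.9 inches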

Popular/Recent Combiner Types (Part 1)

Below, I am going to go through the most common beam combiner options. I’m going to start with the simpler/older combiner technologies and work my way to the “waveguide” beam splitters of some of the newest designs in Part 2. I’m going to try to hit on the main types, but there are many big and small variations within each type.

Solid Beam Splitter (Google Glass and Epson BT-300)

These are often used with a polarizing beam splitter when using LCOS microdisplays, but they can also be simple mirrors. They are generally kept small due to weight and cost issues, as with the Google Glass at left. Due to their small size, the user will see the blurry edges of the beam splitter in their field of view, which is considered highly undesirable. Also, as seen in the Epson BT-300 picture (at right), they can make a person’s eyes look strange. As seen with both the Google Glass and Epson, they have been used with the projector engine(s) on the sides.

Google Glass has only about a 13-degree FOV (and did not support using a person’s glasses) and about 1.21 arc-minutes/pixel angular resolution, which is on the small end compared to most other headset displays. The BT-300 has about a 23-degree horizontal FOV (and enough eye relief to support most glasses) with 1280×720 pixels per eye, giving it a 1.1 arc-minutes/pixel angular resolution. Clearly these are on the low end of what people are expecting in terms of FOV, and the solid beam splitter quickly becomes too large, heavy, and expensive as the FOV grows. Interestingly, both are on the small end in terms of apparent pixel size.
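As an aside, the angular resolution figures above are just the FOV spread over the display’s horizontal pixels. A quick Python sketch of that arithmetic (assuming Google Glass’s 640×360 panel, which matches the ~1.2 arc-minute figure):

    def arcmin_per_pixel(fov_deg, pixels_across):
        # 60 arc-minutes per degree, divided across the pixels spanning the FOV.
        return fov_deg * 60 / pixels_across

    print(arcmin_per_pixel(13, 640))   # Google Glass: ~1.2 arc-minutes/pixel
    print(arcmin_per_pixel(23, 1280))  # Epson BT-300: ~1.1 arc-minutes/pixel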

Spherical/Semi-Spherical Large Combiner (Meta 2)

While most of the AR/MR companies today are trying to make flatter combiners to support a wide FOV with small microdisplays for each eye, Meta has gone in the opposite direction with dual very large semi-spherical combiners and a single OLED flat panel to support an “almost 90 degree FOV”. Note in the picture of the Meta 2 device that there are essentially two hemispheres integrated together with a single large OLED flat panel above.

Meta 2 uses a 2560 by 1440 pixel display that is split between the two eyes. Allowing for some overlap, there will be about 1200 pixels per eye to cover the 90-degree FOV, resulting in rather chunky/large (similar to Oculus Rift) 4.5 arc-minutes/pixel, which I find somewhat poor (a high-resolution display would be closer to 1 arc-minute/pixel).

The effect of the dual spherical combiners is to act as a magnifying mirror that also moves the focus point out in space so the user’s eye can focus. The amount of magnification and the apparent focus point are a function of A) the distance from the display to the combiner, B) the distance from the eye to the combiner, and C) the curvature. I’m pretty familiar with this optical arrangement since the optical design I did at Navdy had a similarly curved combiner, but because the distances from the display to the combiner and from the eye to the combiner were so much greater, the curvature was less (larger radius).
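For those who want to put numbers on this, the relationship is essentially the spherical mirror equation. Below is a rough Python sketch; the dimensions are made up for illustration and are not Meta’s or Navdy’s actual design values:

    def apparent_focus(display_to_combiner, eye_to_combiner, radius):
        # Spherical mirror equation: 1/do + 1/di = 1/f, with f = R/2.
        f = radius / 2
        di = 1 / (1 / f - 1 / display_to_combiner)  # negative => virtual image
        magnification = -di / display_to_combiner
        return eye_to_combiner - di, magnification   # focus distance from the eye

    # Hypothetical numbers (mm): display 80mm from a 200mm-radius combiner,
    # eye 50mm from the combiner -> image appears ~450mm away, magnified ~5x.
    print(apparent_focus(80, 50, 200))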

I wonder if their very low angular resolution was a result of their design choice of the large spherical combiner and the OLED displays available that they could use. To get the “focus” correct, they would need a smaller (more curved) radius for the combiner, which also increases the magnification and thus the big chunky pixels. In theory they could swap out the display for something with higher resolution, but it would take over double the horizontal resolution to have a decent angular resolution.

I would also be curious how well this large a plastic combiner will keep its shape over time. It is a coated mirror, and thus any minor perturbations are doubled. Additionally, any strain in the plastic (and there is always stress/strain in plastic) will cause polarization issues, say when viewing an LCD monitor through it. It is interesting because it is so different, although the basic idea has been around for a number of years, such as from a company called Link (see picture on the right).

Overall, Meta is bucking the trend toward smaller and lighter, and I find their angular resolution disappointing. The image quality, based on some online see-through videos (see for example this video), is reasonably good, but you really can’t tell angular resolution from the video clips I have seen. I do give them big props for showing REAL/TRUE videos through their optics.

It should be noted that their system, at $949 for a development kit, is about 1/3 the price of the Hololens and the ODG R-7 (each with only 720p per eye), though higher than the BT-300 at $750. So at least on a relative basis, they look to be much more cost effective, if quite a bit larger.

Tilted Thin Flat or Slightly Curved (ODG)

With a wide-FOV tilted combiner, the microdisplay and optics are located above in a “brow” with the plate tilted (about 45 degrees), as shown at left on an Osterhout Design Group (ODG) model R-7 with 1280 by 720 pixel microdisplays per eye. The R-7 has about a 37-degree FOV and a comparatively OK 1.7 arc-minutes/pixel angular resolution.

Tilted plate combiners have the advantage of being the simplest and least expensive way to provide a large field of view while being relatively lightweight.

The biggest drawback of the plate combiner is that it takes up a lot of volume/distance in front of the eye since the plate is tilted at about 45 degrees from front to back. As the FOV gets bigger, the volume/distance required also increases.
ODG is now talking about a next model called “Horizon” (early picture at left). Note in the picture how the combiner (see red dots) has become much larger. They claim a >50-degree FOV, and with a 1920 x 1080 display per eye, this works out to an angular resolution of about 1.6 arc-minutes/pixel, which is comparatively good.

Their combiner is bigger than absolutely necessary for the ~50 degree FOV.  Likely this is to get the edges of the combiner farther into a person’s peripheral vision to make them less noticeable.

The combiner is still tilted, but it looks like it may have some curvature to it, which will tend to act as a last stage of magnification and move the focus point out a bit. The combiner in this picture is also darker than the older R-7’s and may have additional coatings on it.

ODG has many years of experience and has done many different designs (for example, see this presentation on LinkedIn). They certainly know about the various forms of flat optical waveguides, such as the one Microsoft’s Hololens is using, that I am going to be talking about next time. In fact, Microsoft licensed patents from ODG for about $150M US.

Today, flat or slightly curved thin combiners like ODG is using are probably the best all-around technology in terms of size, weight, cost, and, perhaps most importantly, image quality. Plate combiners don’t require the optical “gymnastics” and the level of technology and precision that the flat waveguides require.

Next time — High Tech Flat Waveguides

Flat waveguides using diffractive optical elements (DOE) and/or holographic optical elements (HOE) are what many think will be the future of combiners. They certainly are the most technically sophisticated. They promise to make the optics thinner and lighter, but the question is whether they have the optical quality and yield/cost to compete yet with simpler methods like what ODG is using on the R-7 and Horizon.

Microsoft and Magic Leap are each spending literally over $1B US, and both are going with some form of flat, thin waveguides. This is a subject unto itself that I plan to cover next time.

 

30 Comments

  1. I’d also be very interested to know what the famous light fields in the Magic Leap are and how they compare to existing technology.

    • Magic Leap certainly qualifies for the (Winston Churchill) saying “a riddle wrapped in a mystery inside an enigma,” and I am working on breaking it down. I’ve decided to break up the next article into at least 2 parts. I’m first going to discuss Microsoft’s Hololens because it is a comparatively straightforward approach to Mixed Reality. Then I plan on delving into Magic Leap, and the more I look at Magic Leap, the more there is to it.

      I don’t know how much you know about “light fields,” but there is a good paper on one approach to near-eye light fields by NVIDIA. In this solution, NVIDIA effectively generates 120 pixels’ worth (an array of 8×15=120 light field images) of information per pixel seen. Clearly, Magic Leap could not be taking such a brute-force approach or the resolution would have to be extremely low or the complexity, power/processing, and cost extremely high. To even have a hope of being real, they must have come up with a way to highly compress/reduce both the computation and image generation requirements per “pixel” that is perceived.

      Magic Leap has certainly impressed people with big money. But it would seem they are taking on a problem that will require a lot more than $1.4B.

  2. Karl, thank you for one of the most comprehensive introductions to AR optics on the web. Cannot wait for you to publish the next part.
    What do you think about Lumus’ tech?

    • Thanks, but I’m certainly not the most comprehensive; some of the sources I cite are more comprehensive. What I am trying to do is translate the optics into more layman’s terms and concepts, and to boil things down to just the elements that are helpful in understanding the technology without needing a course in optics.

      So far I have passed over Lumus because I am trying to get to the $1B+ players. I was going to cover Lumus in my first article, but it was just too much. I first saw their combiner at SID in 2011. It is a different waveguide from, say, the Hololens one that I just posted about. In Lumus’s/your case, the image light is “side injected” to get the angle for total internal reflection, which is simpler (and perhaps better image quality) than using diffraction (like Hololens), and it doesn’t need the 90-degree turn like Hololens. For anyone reading this comment who doesn’t know (I’m sure you know better than I), Lumus uses a series of prisms built into the combiner/waveguide to cause the image to “exit” the combiner (see for example: https://www.engadget.com/2014/01/10/lumus-wearable-computer-hands-on-ces/ and https://www.osapublishing.org/oe/fulltext.cfm?uri=oe-22-17-20705&id=299627).

      It has been a while since I have seen through a Lumus combiner, but back then I did notice “lines”/small gaps in the image caused by the multiple stacked prisms. I would also expect the exit prisms to cause some kind of distortion (perhaps chromatic aberrations) of the “real world,” but I have not had the chance to try one lately. I would think the stacked prism approach is likely expensive to produce, as the prisms have to be very precisely made to work, but it probably would do better than, say, the Hololens today (this is highly speculative as I do not have the two to compare).

      • Yuval,

        I would be interested in what you think the Lumus combiner’s advantages are over, say, the diffractive waveguide of Hololens and the simple tilted plate combiner that Osterhout Design Group is using.

        Karl

    • Thanks for the list.

      All but WaveOptics I was aware of previously. Rather than give a complete survey of all the combiners, I focused on those associated with companies making a big push into the market. It is a good point that there are MANY attempts at waveguides. They continue to amaze the public (and MANY people in the news media) because they have not broken through in the market. The movies have occasionally used them as a plot point (the Bourne movies come to mind), where they are shown with capabilities that reality doesn’t live up to (easy to do in post-production with a movie, not so in real time in the real world).

      Some forms of waveguides are being held out as the Holy Grail for near-eye displays, but to date they have filled only niche markets and have their drawbacks, which is why the public does not know about them and is amazed when seeing one.

      I am well aware of Digilens and have been to their offices a few times over the years but had not followed them of late. Interesting that they are working with BMW motorcycles. Doing a heads-up display (HUD), for a motorcycle in this case, is a different proposition from doing a high-resolution AR/MR system. With a HUD, you are generally presenting low-resolution, low-content (to be see-through) information, typically with a relatively small field of view.

      Below are a few random comments about the companies you cited:

      I was not aware of WaveOptics; they have a very small media/marketing footprint. They say they have patented technology, but I could not find a patent applied for by the company or the two founders. The Twitter image shows the device, I would assume (and it looks a bit like Hololens). One thing that jumps out from the Twitter picture is how non-transparent the display area is in the waveguide; this has to have a negative effect on the forward vision.

      TruLife Optics made a bit of a media splash in 2014 but not a whole lot since. They use holograms rather than diffraction gratings for injecting into and exiting the waveguide. From the few pictures, they don’t look to be that transparent, and I wonder how the real world looks when looking through them.

      • Karl,

        Regarding the WaveOptics tweet, I think it’s a little hard to tell from one photo what’s going on. You stated: “One thing that jumps out from the Twitter picture is how non-transparent the display area is in the waveguide; this has to have a negative effect on the forward vision.”

        It may be that they have intentionally turned up the brightness to overcome the high ambient light that is present, or they could be using the device in an immersive mode. It could also simply be a bad camera angle. I was impressed by what appears to be a rather large FOV.

        I also searched for patents and came up empty-handed.

        The founders came from BAE Systems, so maybe they have licensed some tech from them.

        http://www.waveoptics.co.uk/#team

        Also, Blippar has made an investment in WaveOptics:

        https://blippar.com/en/resources/blog/2015/06/16/blippar-invests-in-augmented-reality-display-pioneer-waveoptics/

        Blippar raised $54M earlier this year.

        https://techcrunch.com/2016/03/02/blippar-augmented-reality-search-engine-raises-54-million-series-d/

        I have been seeing more information originating from WaveOptics and believe they will be coming to the forefront with products soon.

      • Certainly you are correct that you really can’t tell a lot from just a photograph, and I didn’t mean it to be the last word. But the lack of transparency would not be caused by a displayed image in that picture. A displayed image would look lighter, not darker, in the exit area of the waveguide. Yes, it could appear to be less transparent due to the camera angle, so my comment is definitely only very preliminary, based on the little information that I have: the one photo.

        A key point I am trying to make is that you have to add up all the pluses and minuses of the various approaches. People tend to be unduly impressed by quick demos and become “instant experts”. One of the fundamental problems with a waveguide is that whatever redirects the image light to the person’s eye in the exit area of the waveguide has to simultaneously be seen through, and these two purposes are ALWAYS at odds. Also, there is an image quality issue with bending light at a tight angle, which usually shows up as color separation and/or focusing issues with various colors (chromatic aberrations).

        Thanks for all the other information about WaveOptics and I would love to learn more about them. Do you happen to know the FOV and/or pupil size of their waveguide and optics? There is certainly a lot of technical and financial interest in waveguide optics for near eye displays.

      • I stand corrected. I went back and took a closer look at the Facebook picture (I should have pulled it in and measured it; my eyes played tricks on me), and indeed the image in the exit area of the waveguide is brighter (not as much as the Hololens example you cited, but there is definitely an active image being projected in the WaveOptics tweet’s picture, which is obscuring the person’s eyes).

        BTW, there is a whole other social issue: are people going to be happy looking at people with these glowing images in front of their eyes? Humans are very sensitive to how other humans’ eyes look (one of those “primordial” things).

        I’m looking for two things from these waveguides: 1) how good the projected image looks, and 2) what effect they have on the real world. I think the second one in particular is overlooked. For example, if you look through a simple diffraction grating, you will see everything in the “real world” with color fringing due to the differences in wavelengths of light (a diffraction grating is classically used to separate colored light, just like a prism).

        It’s really tough when you have a single element that has to change the angle of the light it reflects while not having some (bad) effect on the light passing through it. At Navdy, I worked with a semi-mirror combiner (aluminum coated); the trade-off was pretty straightforward: in reflecting, say, 30% of the light, you then blocked about 30% of the incoming light, making the real world darker (a different trade-off, but one nonetheless). Any optical element that has light coming from two directions is going to have an effect on both directions (kind of a conservation-of-energy type of thing).

        What I am after is: what is the optical price you pay in the transmission (of the real world) to get the optical effect you want for the reflected computer image? Beyond this, we get into the many other metrics: the FOV, eyebox/pupil, and the MTF/resolution it can support.

  3. Hi, this is a very good article, and the PDF file from Kessler Optics is very good too. I was looking for more information on how the Hololens works, and the Kessler report seems to explain it when read in conjunction with the other NEDs discussed. Basically, it seems the focal point needs to be distanced.

    However, all the NEDs seem overly complicated besides ODG’s. I first became interested in NEDs after glimpsing the reflection from my smartphone on the inside of my sunglasses when taking them off (which was impressively clear and focused). It put me on a still-ongoing 2-year quest to explore NEDs, but the more one looks at the whole NED industry, the more bewildering it becomes.

    The VR cardboard suffices for now, but I believe either the ODG R-7 has taken a massive step or the Hololens will get there incrementally (which could take years). The 2017 CES show could see the latest ODG consumer device, which may catch the media’s attention.

    Anyway, thanks for the great article, and now that I have commented, I will never lose this page or website (assuming there are follow-up comments, which will be sent to my email 🙂

    • Thanks,

      People seem to think that “flat” is the “holy grail” of near-eye displays that you can see through. To make something “flat” requires a waveguide, basically a plate of glass that the light bounces around inside. The problem becomes how to get the light in and what makes it come out; both of these processes can hurt the image quality, and whatever redirects the light out toward the eye is something you have to look through, so it may degrade the “real world” view. ODG’s design simply puts in an angled mirror, which is simple and will have good optical quality, but it is not as thin from front to back; frankly, when you look at Hololens, I don’t see where there would be a significant difference for the overall system (there is so much stuff other than the waveguide/combiner).
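      As a small aside on why getting light into a waveguide is tricky: the light has to enter steeply enough to be trapped by total internal reflection. A quick back-of-the-envelope check (standard physics, not any particular company’s design):

          import math

          # Critical angle for total internal reflection: sin(theta_c) = 1 / n.
          # Rays hitting the glass/air surface at more than theta_c from the
          # normal stay trapped inside the plate and bounce along it.
          n_glass = 1.5  # typical optical glass/plastic
          print(round(math.degrees(math.asin(1.0 / n_glass)), 1))  # ~41.8 degrees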

      In many ways, Hololens demonstrates how they have worked everything around the waveguide technology they acquired from Nokia, and yet there is so much left to do to have a complete system.

      The first issue that needs to be addressed is that the human eye only starts to focus well at about 10″ away, and generally NEDs put the focus into your far vision. In most designs, you move the focus before the combiner, as the combiner pretty much acts like a mirror.

      Magic Leap (and others) feel it is important, if you are going to generate 3-D images, that the focus distance agrees with where the image appears to be in 3-D space (known as accommodation). Magic Leap’s patents at least show ways to accommodate/move focus with multiple waveguides.

      It is a very different game for the non-see-through VR devices (Oculus and Google Cardboard). There, all you have to do is move the focus, and there is no issue of seeing through the display. They are big and totally block your vision, addressing in many ways a different problem with a different set of constraints.

      • Thanks Karl,

        I slept on this and considered that the fact that ODG managed to get their FOV from 30 degrees up to 50 degrees may mean that in time the Hololens could increase its FOV. Just a simple upgrade idea, but important nonetheless.

        As many people have commented before elsewhere online, the whole NED industry seems to be making incremental advances rather than any giant steps.

        ODG glasses could be slimmed down to possibly a 1-inch protrusion from the face if the combiner and protective shield were somehow moulded together, or just have a combiner that is also protective, like traditional viewing glasses.

        Regarding the Oculus and Cardboard scenario, there do exist adjustable lenses which can focus at short and long distances. If that were combined with ultra-high-definition screens as well as fisheye camera lenses (that could take in a 90-degree FOV or more), then it would be possible to just have a slimmed-down version of the Oculus. The problem is the lag between the camera and the monitor, as taking in light and data is slow on smartphones as they are.

        What are your “projections” for the future…who or what NED would you bet on that will race ahead and become the standard NED? Please take a guess if you are not sure. I am just curious;)

        Regards,

  4. Hello Karl, you shared an equation to calculate the minimum width of a combiner, which I found useful. I also found the angular resolution equation in your other article very useful. I want to design a see-through HUD, but how can I calculate the eye box, eye relief, FOV, image distance, image size, and magnifying factor, all related to each other? Can you provide me a sample with equations, or do you know an article you can suggest?

    • I’m not an optics designer, so I can’t help you with all those details. Also, it sounds like you are designing a “near eye” see-through display based on factors like “eye relief.” With, say, an automotive HUD, the eye relief is huge. There are some articles out there on HUD design, but it is not clear from your question whether you are doing an automotive-type HUD or a near-eye HUD.

      • Dear Karl, to make it clear: I’m planning a near-eye HUD design. For example, for 15-20mm eye relief and a 20-degree FOV, with a display active area of 6.4mm horizontal and 3.84mm vertical, what should my eye box size be and how will I find it?

      • I’m not sure I fully understand your question, nor would I be the best person to answer it. I know a lot of “optical tricks,” but I am not an optics designer. What I can say is:

        15 to 20 mm is very small eye relief (a person’s glasses would be at best right up against the optics), so it sounds like you are not planning on supporting glasses. The less the eye relief, the easier the design and the less you need for the eyebox. A 20-degree FOV is fairly modest. I would not think it would be a very hard design, but as I said, I’m not an optical designer per se.

  5. Dear Karl

    Great article – loved reading it. You keep mentioning in this and your other articles that combiner optics like those used in the Meta and Mira headsets are readily available, and you also gave links to car-based HUDs like Navdy as an example.

    I couldn’t find much online in terms of companies in the US or China that provide them. Can you help with this?

    We are creating a tracked Augmented Reality headset as a research project and would love to know how to source these combiner optics.

    Thanks.

    • I think I either was not clear or you misunderstood me. It is tough to get one-offs, but I was commenting that combiners like those do not cost a lot to make. The big issue for a specific combiner is that you have to have a mold made for the plastic and get set up for manufacturing, and that can cost over $10,000 depending on the type of plastic; but after the mold is made and the manufacturing is set up, the per-unit cost is sub-$10 and can be sub-$5 depending on the size (it is probably more for the Meta 2, which is very large). There are a lot of ways to make one-off optics, but most of them are pretty expensive. The cheapest way to go is to hack/modify something that is already in production.

      I was pointing out that you can buy a somewhat similar cell phone reflective “HUD” combiner on Amazon for about $25. Unfortunately, these may not have the right radius of curvature for a headset, but you might look at taking one (or maybe two) of them and cutting them up (to make two smaller combiners, one per eye). The two I have played with are the Hudway (more transparent) and the Mpow (cheaper, but not very “see-through,” and with a larger radius of curvature).

      The big problem is that they both have a larger radius of curvature than you would want (quite a bit larger than, say, Mira’s, but not that different from the Meta 2) to focus a cell phone display as a “headset” as-is (basically just turning them upside down and using them like a headset). BUT if you locate the phone/display further away (than HUDs are designed for), the display will come into focus. With the Hudway, I found the phone needs to be about 2″/50mm away, whereas the larger-radius Mpow requires the phone to be about 4.5″/100mm away to focus when the combiner is about 2″ from the eye (the closer you want the eye to the combiner, the further away the phone would be). These distances might be acceptable if you are, say, locating the display in the top of a “visor”.

      So what you could do for experimenting is cut one of these in half (so you can then center the radius of curvature for each eye — and, as an improvement over Mira, you could support inter-pupil adjustment).

  6. Hey this is a really great write up. I appreciate you sharing your expertise with us all!

    • Yes, you have to move/change the focus of the image on the display device to at least 10 inches (254mm) away from the eye, but all the optics do this in one way or another. Most near-eye optics move the focus beyond 2 meters so that it is in a person’s “far focus.”

      In the pictures in the article you cited, there are three main ways to change the focus: either with a concave curved mirror, with refractive optics (lenses), or with a combination of refractive optics and concave mirrors.

      Many, such as the birdbath, use a spherical concave curved mirror (or partial mirror to support see-through), which is relatively simple. With the concave curved mirror, the “object” (display) has to be between the mirror and the “focus,” which is 1/2 the radius of curvature of the mirror. As the display approaches the focus, the image becomes more magnified and the apparent focus moves away; at the focus itself, the magnification and focus become “unstable” and the display’s focus moves toward infinity. The curvature of the mirror and the location of the display relative to the focus of the mirror set the apparent distance/focus of the display.
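      A minimal numerical illustration of that runaway behavior (my own sketch, assuming a 200mm-radius mirror, i.e., a 100mm focal length):

          def mirror_image(do_mm, radius_mm):
              # Mirror equation: 1/do + 1/di = 1/f, with f = R/2.
              f = radius_mm / 2
              di = 1 / (1 / f - 1 / do_mm)  # negative = virtual image behind mirror
              return di, -di / do_mm        # (image distance, magnification)

          for do in (60, 80, 95, 99):       # display moving toward the 100mm focus
              print(do, mirror_image(do, 200))
          # |di| and the magnification both blow up as the display nears the focus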

      Some designs will shoot at the concave mirror from an angle, and the mirror may not be spherical. In these cases, there will typically be refractive optics (lenses) or, in some cases (IMMY, for example), mirrors to correct for the off-axis distortion.

      Lastly, there are the refractive optics cases. In these cases, there are optics/lenses that are not shown in the pictures that move the apparent focus of the display. These optics are necessary whether there is a flat beam-splitting mirror or a waveguide. Note the waveguides are not the displays themselves but perform the function (very loosely speaking) of a flat mirror that would otherwise be larger in volume; there is a display with lenses in front of it before the light is injected into the waveguide. In some cases, such as with diffractive waveguides, the waveguides themselves have some optical power that affects the focus, but generally they will require other optics as well.

      You can’t just stick a transparent display device near the eye; there has to be some form of optics.

  7. Dear Sir,
    Can you please tell me how to design an ellipsoid combiner for an AR headset? By design, I mean how to decide what the curvature of the combiner should be, how far it should be placed, etc. Is there any software where I can do these calculations?
    Thanking you.

    • While it may seem simple, there is not a simple equation unless you are going to be using a simple spherical combiner. Also, I don’t have enough information about your configuration. The software you need for anything more than a spherical combiner is an optical modeling program, the most well known (and expensive) being Zemax.

      There are two main ways spherical combiners are used. One is the “off-axis” configuration, like Mira, iGlasses, and Meta 2, where a larger display is used with a very large combiner. Having the display and view off-axis distorts both the image and the focus (the apparent focus point changes quite dramatically from the top to the bottom of the image). The alternative is to use a “birdbath” configuration with a 45-degree semi-mirror so that the display and eye are both on-axis, and there is minimal distortion in both linearity and focus. The downside of the birdbath is the light loss from both the image and the eye’s view passing through the 45-degree semi-mirror.
