
Near Eye Displays (NEDs): Gaps In Pixel Sizes

I get a lot of questions to the effect of “what is the best technology for a near eye display (NED)?” There really is no “best,” as every technology has its strengths and weaknesses. I plan to write a few articles on this subject as it is way too big for a single article.

Update 2017-06-09: I added the Sony Z5 Premium 4K cell phone size LCD to the table. Its “pixel” is about 71% the linear dimension of the Samsung S8’s, or about half the area, but still much larger than any of the microdisplay pixels. One thing I should add is that most cell phone makers are “cheating” on what they call a pixel. The Sony Z5 Premium’s “pixel” really only has 2/3rds of an R, G, and B per pixel it counts. It also has them in a strange 4-pixel zigzag that causes beat frequency artifacts when displaying full resolution 4K content (GSMARENA’s close-up pictures of the Z5 Premium fail to show the full resolution in both directions). Similarly, Samsung goes with RGBG-type patterns that have only 2/3rds the full pixels in the way they count resolution. These “tricks” in counting are OK when viewed with the naked eye at beyond 300 “pixels” per inch, but become more problematic/dubious when used with optics to support VR.

Today I want to start with the issue of pixel size as shown in the table at the top (you may want to pop the table out into a separate window as you follow this article). To give some context, I have also included a few major direct view categories of displays. I have grouped the technologies into the colored bands in the table. I have given the pixel pitch (distance between pixel centers) as well as the pixel area (the square of the pixel pitch, assuming square pixels). To give some context for comparison, I have compared the pitch and area relative to a 4.27-micron (µm) pixel pitch, which is about the smallest being made in large volume. There are also columns showing how big the pixel would be in arcminutes when viewed from 25cm (250mm, ~9.84 inches), which is the commonly accepted near focus point, and finally a column showing how much the pixel would have to be magnified to equal 1 arcminute at 25cm, which gives some idea of the optics required.
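As a sanity check on those last two columns, here is a minimal Python sketch of the geometry (the 25cm viewing distance is from the table; the pixel pitches are the ones discussed below):

```python
import math

VIEW_DIST_MM = 250.0  # commonly accepted near focus point (25cm)

def pixel_metrics(pitch_um):
    # Angle subtended by one pixel when viewed from 25cm, in arcminutes
    arcmin = math.degrees(math.atan((pitch_um * 1e-3) / VIEW_DIST_MM)) * 60
    # Magnification needed for that pixel to appear as 1 arcminute at 25cm
    mag_to_1arcmin = 1.0 / arcmin
    return arcmin, mag_to_1arcmin

for name, pitch_um in [("LCOS (Syndiant)", 4.27), ("DLP pico", 5.4), ("OLED (eMagin)", 9.3)]:
    arcmin, mag = pixel_metrics(pitch_um)
    print(f"{name}: {arcmin:.3f} arcmin at 25cm, ~{mag:.0f}X to reach 1 arcmin")
```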

In the table, I tried to use the smallest available pixel in a given technology that was being produced, with the exception of “micro-iLED,” for which I could not get solid information (thus the “?”). In the case of LCOS, the smallest field sequential color (FSC) pixel I know of is the 4.27µm one by my old company Syndiant, used in their new 1080p device. For the OLED, I used the eMagin 9.3µm pixel, and for the DLP, their 5.4-micron pico pixel. I used the LCOS/smallest pixel as the baseline to give some relative comparisons.

One thing that jumps out in the table is the fairly large gaps in pixel sizes between the microdisplays and the other technologies. For example, you can fit over 100 4.27µm LCOS pixels in the area of a single Samsung S8 OLED pixel, or 170 LCOS pixels in the area of the pixel used in the Oculus CV1. Or, to be more extreme, you can fit over 5,500 LCOS pixels in one pixel of a 55-inch TV.
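The area comparisons are just the square of the pitch ratios; a quick check (the ~44.6µm Samsung S8 pitch here is my estimate from its roughly 570 pixels per inch):

```python
s8_pitch_um = 25.4 / 570 * 1000   # ~44.6µm, assuming ~570 "pixels" per inch for the S8
lcos_pitch_um = 4.27              # smallest volume LCOS pixel from the table

print((s8_pitch_um / lcos_pitch_um) ** 2)  # ~109, i.e., over 100 LCOS pixels per S8 pixel
```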

Big Gap In Near Eye Displays (NEDs)

The main point of comparison for today is the microdisplay pixels, which range from about 4.27µm to about 9.6µm in pitch, versus the direct view OLED and LCD displays in the 40µm to 60µm range that have been adapted with optics to be used in VR headsets (NEDs). Roughly, we are looking at one order of magnitude in pixel pitch and two orders of magnitude in area. Perhaps the most direct comparison is the microdisplay OLED pixel at 9.3 microns versus the Samsung S8: a 4.8X linear and a 23X area difference.

So why is there this huge gap? It comes down to making the active matrix array circuitry that drives the technology. Microdisplays are made on semiconductor integrated circuits, while direct view displays are made on glass and plastic substrates using comparatively huge and not very good transistors. The table below is based on one in a 2006 article by Mingxia Gu, then at Kent State University (it is a little out of date, but lists the various transistors used in display devices).

The difference in transistors largely explains the gap: microdisplays use transistors made in I.C. fabs, whereas direct view displays fabricate their larger and less conductive transistors on top of glass or plastic substrates at much lower temperatures.

Microdisplays

Within the world of I.C.s, microdisplays use very old/large transistors, often on nearly obsolete semiconductor processes. This is both to keep the cost down and because most display technologies need higher voltages than smaller transistors would support.

There are both display physics and optical diffraction reasons that limit making microdisplay pixels much smaller than 4µm. Additionally, as the pixel size gets below about 6 microns, the optical cost of enlarging the pixel to be seen by the human eye starts to escalate, so headset optics makers want 6+ micron pixels, which are much more expensive to make. To a first order, microdisplay costs in volume are a function of the area of the display, so smaller pixels mean less expensive devices for the same resolution.

The problem for microdisplays is that even using old I.C. fabs, the cost per square millimeter is extremely high compared to TFT on glass/plastic, and yields drop as the size of the device grows, so doubling the pixel pitch could result in an 8X or more increase in cost. While it sounds good to be using old/depreciated I.C. fabs, it may also mean they do not have the best/newest/highest yielding equipment, or worse yet, the facilities get closed down as obsolete.
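To illustrate why cost grows faster than area, here is a hedged sketch using a standard Poisson yield model; the defect density and base die area are purely illustrative assumptions, not figures from any actual fab:

```python
import math

def relative_cost(pitch_scale, defect_density_per_mm2=0.005, base_area_mm2=50.0):
    # Cost ~ wafer area consumed / yield, with Poisson yield model Y = exp(-D * A).
    # All numbers here are illustrative assumptions, not real fab data.
    area = base_area_mm2 * pitch_scale ** 2      # doubling pitch quadruples area
    yield_fraction = math.exp(-defect_density_per_mm2 * area)
    return area / yield_fraction

print(relative_cost(2.0) / relative_cost(1.0))   # ~8.5X: 4X area plus yield loss
```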

The net result is that microdisplays are nowhere near cost competitive with “re-purposed” cell phone technology for VR if you don’t care about size and weight. But they are the only way to do a small, lightweight headset, and really the only way to do AR/see-through displays (save the huge Meta 2 bug-eye bubble).

I hope to pick up this subject more in some future articles (as each display type could be a long article in and of itself). But for now, I want to get onto the VR systems with larger flat panels.

Direct View Displays Adapted for VR

Direct view VR headsets (e.g., Oculus, HTC Vive, and Google Cardboard) have leveraged direct view display technologies developed for cell phones. They then put simple optics in front of the display so that people can focus the image when the display is put so near the eye.

The accepted standard for human “near vision” is 25cm/250mm/9.84 inches. This is about as close as a person can focus and is used for comparing effective magnification. With simple (single/few lens) optics you are not so much making the image bigger per se, but rather moving the display closer to the eye and then using the optics to enable the eye to focus. A typical headset uses a roughly 40mm focal length lens and puts the display at the focal length or less (e.g., 40mm or less) from the lens. Putting the display at the focal length of the lens makes the image focus at infinity/far away.

Without getting into all the math (which can be found on the web), the result is that a 40mm focal length nets an angular magnification (relative to viewing at 25cm) of about 6X. So, for example, looking back at the table at the top, the Oculus pixel (similar in size to the HTC Vive’s), which would be about 0.77 arcminutes at 25cm, ends up appearing to cover about 4.7 arcminutes (which are VERY large/chunky pixels) and about a 95 degree FOV (this depends on how close the eye gets to the lens; for a great explanation of this subject and other optical issues with the Oculus CV1 and HTC Vive, see this Doc-Ok.org article).
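A rough sketch of that magnifier math, assuming the usual thin-lens simple-magnifier approximation (M = 250mm / f with the image at infinity):

```python
NEAR_POINT_MM = 250.0  # standard near vision distance (25cm)

def angular_magnification(focal_length_mm):
    # Simple magnifier with the display at the focal length (image at infinity)
    return NEAR_POINT_MM / focal_length_mm

M = angular_magnification(40.0)      # ~6.25X for a 40mm focal length lens
pixel_arcmin_at_25cm = 0.77          # Oculus CV1 pixel, from the table at the top
print(M * pixel_arcmin_at_25cm)      # ~4.8 arcmin/pixel, close to the ~4.7 cited
```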

Improving VR Resolution – A Series of Roadblocks

For reference, 1 arcminute per pixel is considered near the limit of human vision, and most “good resolution” devices try to be under 2 arcminutes per pixel, and preferably under 1.5. So let’s say we want to keep the ~95 degree FOV but improve the angular resolution by 3X linearly to about 1.5 arcminutes; we have several (bad) options:

  1. Get someone to make a pixel that is 3X smaller linearly or 9X smaller in area. But nobody makes a pixel this size that can support about 3,000 pixels on a side. A microdisplay (I.C. based) would cost a fortune (like over $10,000/eye, if it could be made at all), and nobody makes transistors that are cheap, compatible with displays, and small enough. But let’s for a second assume someone figures out a cost effective display; then you have the problem that you need optics that can support this resolution, and not the cheap low resolution optics with terrible chromatic aberration, god rays, and astigmatism that you can get away with at 4.7 arcminute pixels.
  2. Use, say, the Samsung S8 pixel size (or a little smaller) and make two 3K by 3K displays, one for each eye (see the sketch after this list). Each display will be about 134mm, or about 5.26 inches, on a side, and the width of the two displays plus the gap between them will end up at about 12 inches wide. So think in terms of strapping a large iPad Pro in front of your face, only now it has to be about 100mm (~4 inches) in front of the optics (or about 2.5X as far away as on the current headsets). Hopefully you are starting to get the picture: this thing is going to be huge and unwieldy, and you will probably need shoulder bracing in addition to head straps. Not to mention that the displays will cost a small fortune, along with the optics to go with them.
  3. Some combination of 1 and 2 above.
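To put numbers on option 2, a minimal sketch (the ~44.6µm S8-class pixel pitch and the gap between the two displays are my assumptions):

```python
pitch_mm = 0.0446            # ~Samsung S8-class pixel pitch (assumption)
pixels_per_side = 3000       # 3K x 3K per eye

display_side_mm = pitch_mm * pixels_per_side        # ~134mm (~5.3 inches) per display
gap_mm = 37                  # assumed gap between the two displays
total_width_in = (2 * display_side_mm + gap_mm) / 25.4
print(display_side_mm, total_width_in)              # ~134mm and ~12 inches wide
```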
The Future Does Not Follow a Straight Path

I’m trying to outline above the top level issues (there are many more). Even if/when you solve the display cost/resolution problem, lurking behind that is a massive optical problem to sustain that resolution. These are the problems “straight line futurists” just don’t get; they assume everything will just keep improving at the same rate it has in the past, not realizing they are starting to bump up against some very non-linear problems.

When I hear about “Moore’s Law” being applied to displays, I just roll my eyes and say that they obviously don’t understand Moore’s Law and the issues behind it (and why it kept slowing down over time). Back in November 2016, Oculus Chief Scientist Michael Abrash made some “bold predictions” that by 2021 we would have 4K (by 4K) per eye and a 140 degree FOV with 2 arcminutes per pixel. He upped my example above by 1.33X more pixels and upped the FOV by almost 1.5X, which introduces some serious optical challenges.

At times like this I like to point out the Supersonic Transport, or SST, of the 1960’s. The SST seemed inevitable for passenger travel; after all, in less than 50 years passenger aircraft went from nothing to the jet age. Yet today, over 50 years later, passenger aircraft still fly at about the same speed. Oh, and by the way, in the 1960’s they were predicting that we would be vacationing on the moon by now and having regular flights to Mars (heck, we made it to the moon in less than 10 years). We certainly could have 4K by 4K displays per eye and a 140 degree FOV by 2021 in a head mounted display (it could be done today if you don’t care how big it is), but expect it to be more like the cost of flying supersonic and not a consumer product.

It is easy to play armchair futurist and assume “things will just happen because I want them to happen.” The vastly harder part is to figure out how it can happen. I lived through I.C. development from the late 1970’s through the mid 1990’s, so I “get” learning curves and rates of progress.

One More Thing – Micro-iLED

I included in the table at the top Micro Inorganic LEDs, also known as just Micro-LEDs (I’m using iLED to make it clear these are not OLEDs). They are getting a lot of attention lately, particularly after Apple bought LuxVue and Oculus bought InfiniLED. These essentially use very small “normal/conventional” LEDs that are mounted (essentially printed) on a substrate. The fundamental issue is that red requires a very different crystal from blue and green (and even the blue and green crystals have different levels of impurities). So they have to make individual LEDs and then combine them (or maybe someday grow the dissimilar crystals on a common substrate).

The allure is that iLEDs have some optical properties that are superior to OLEDs’. They have a tighter color spectrum, are more power efficient, can be driven much brighter, have fewer issues with burn-in, and in some cases emit less diffuse (better collimated) light.

These Micro-iLEDs are being used in two ways: to make very large displays by companies such as Sony, Samsung, and NanoLumens, or supposedly very small displays (LuxVue and InfiniLED). I understand how the big display approach works: there is lots of room for the LEDs, and these displays are very expensive per pixel.

With the small display approach, they seem to have the double issue of being able to cut very small LEDs and effectively “print” the LEDs on a TFT substrate, similar to, say, OLEDs. What I don’t understand is how these are supposed to be smaller than, say, OLEDs, which would seem to be at least as easy to make on similar TFT or similar transistor substrates. They don’t seem to “fit” in near eye, but maybe there is something I am missing at this point in time.

Kopin Entering OLED Microdisplay Market

Kopin Making OLED Microdisplays

Kopin announced today that they are getting into the OLED microdisplay business. This is particularly notable because Kopin has been a long time (since 1999) manufacturer of transmissive LCD microdisplays used in camera viewfinders and near eye display devices. They also bought Forth Dimension Displays back in 2011, a maker of high resolution ferroelectric reflective LCOS used in higher end near eye products.

OLED Microdisplays Trending in AR/VR Market

With the rare exception of the large and bulky Meta 2, microdisplays (LCOS, DLP, OLED, and transmissive LCD) dominate the AR/MR see-through market. They are also a significant factor in VR and other non-see-through near eye displays.

Kopin’s entry seems to be part of what may be a trend toward OLED microdisplays in near eye products. ODG’s next generation “Horizon” AR glasses are switching from LCOS (used in the current R7) to OLED microdisplays. Epson, which was a direct competitor to Kopin in transmissive LCD, switched to OLED microdisplays in their new Moverio BT-300 AR glasses announced back in February.

OLED Microdisplays Could Make VR and Non-See-Through Headsets Smaller/Lighter

Today most of the VR headsets are following Oculus’s use of large flat panels with simple optics. This leads to large, bulky headsets, but the cost of OLED and LCD flat panels is so low compared to microdisplays with their optics that they win out. OLED microdisplays have been far too expensive to compete on price with the larger flat panels, but this could change as more entrants come into the OLED microdisplay market.

OLEDs Don’t Work With Waveguides As Used By Hololens and Magic Leap

It should be noted that the broad spectrum and diffuse light emitted by OLEDs is generally incompatible with the flat waveguide optics used by Hololens and expected from Magic Leap (ML). So don’t expect to see OLEDs being used by Hololens and ML anytime soon unless they radically redesign their optics. Reflective microdisplays like DLP and LCOS can be illuminated by narrower spectrum light sources such as LEDs and even lasers, and the light can be highly collimated by the illumination optics.

Transmissive LCD Microdisplays Can’t Compete As Resolution Increases

If anything, this announcement from Kopin is the last nail in the coffin of the transmissive LCD microdisplay. OLED microdisplays have the advantage over transmissive Micro-LCD of being able to go to higher resolutions and smaller pixels, keeping the overall display size down for a given resolution. OLEDs consume less power for the same brightness than transmissive LCDs, and they also have much better contrast. As resolution increases, transmissive LCDs cannot compete.

OLED Microdisplays: A More Mixed Set of Pros and Cons Compared to LCOS and DLP

There is a mix of pros and cons when comparing OLED microdisplays with LCOS and DLP. The pros for OLED over LCOS and DLP include:

  1. Significantly simpler optical path (illumination path not in the way). Enables optical solutions not possible with reflective microdisplays
  2. Lower power for a given brightness
  3. Separate RGB subpixels so there is no field sequential color breakup
  4. Higher contrast.

The advantages for LCOS and DLP reflective technologies over OLED microdisplays include:

  1. Smaller pixels, which equal a smaller display for a given resolution. DLP and LCOS pixels are typically from 2 to 10 times smaller in area per pixel.
  2. Ability to use narrow band light sources which enable the use of waveguides (flat optical combiners).
  3. Higher brightness
  4. Longer lifetime
  5. Lower cost even including the extra optics and illumination

Up until recently, the cost of OLED microdisplays was so high that only defense contractors and other applications that could afford the high cost could consider them, but that seems to be changing. Historically, the brightness and lifetimes of OLED microdisplays were also limited, but companies are making progress.

OLED Microdisplay Competition

Kopin is far from being the first, and certainly is not the biggest, entry in the OLED microdisplay market. But Kopin does have a history of selling volume into the microdisplay market. The list of known competitors includes:

  1. Sony appears to be the biggest player. They have been building OLED microdisplays for many years for use in camera viewfinders. They are starting to bring higher resolution products to the market and bring the costs down.
  2. eMagin is a 23-year-old “startup.” They have a lot of base technology and are a “pure play” stock-wise, but they have failed to break through and are in danger of being outrun by bigger companies.
  3. MicroOLED – a small French startup; not sure where they really stand.
  4. Samsung – nothing announced, but they have all the technology necessary to make them. Update: Ron Mertens of OLED-Info.com informed me that it was rumored that the second generation of Google Glass was considering a Samsung OLED microdisplay, and that Samsung had presented a paper on the subject going back to 2011.
  5. LG – nothing announced, but they have all the technology necessary to make them.

I included Samsung and LG above not because I have seen or heard of them working on OLED microdisplays, but because I would be amazed if they didn’t at least have a significant R&D effort, given their sets of expertise and their extreme interest in this market.

For More Information:

For more complete information on the OLED microdisplay market, you might want to go to OLED-Info, which has been following both large flat panel and small OLED microdisplay devices for many years. They also have two reports available, the OLED Microdisplays Market Report and the OLED for VR and AR Market Report.

For those who want to know more about Kopin’s manufacturing plan, Chris Chinnock of Insight Media has an interesting article outlining Kopin’s fabless development strategy.

AR/MR Optics for Combining Light for a See-Through Display (Part 1)

In general, people find the combining of an image with the real world somewhat magical; we see this with heads up displays (HUDs) as well as Augmented/Mixed Reality (AR/MR) headsets. Unlike Star Wars’ R2-D2 projection into thin air, which was pure movie magic (i.e., fake/impossible), light rays need something to bounce off of to redirect them from the image source into a person’s eye. We call this optical device that combines the computer image with the real world a “combiner.”

In effect, a combiner works like a partial mirror. It reflects or redirects the display light to the eye while letting light through from the real world. This is not, repeat not, a hologram, which it is being mistakenly called by several companies today. Over 99% of what people think of or call “holograms” today are not, but rather simple optical combining (also known as the Pepper’s Ghost effect).

I’m only going to cover a few of the more popular/newer/more interesting combiner examples. For a more complete and more technical survey, I would highly recommend a presentation by Kessler Optics. My goal here is not to make anyone an optics expert, but rather to give insight into what companies are doing and why.

With headsets, the display device(s) is too near for the human eye to focus, and there are other issues, such as making a big enough “pupil/eyebox” so the alignment of the display to the eye is not overly critical. With one exception (the Meta 2), there are separate optics that move the apparent focus point out (usually they try to put it in a person’s “far” vision, as this is more comfortable when mixing with the real world). In the case of Magic Leap, they appear to be taking the focus issue to a new level with “light fields,” which I plan to discuss in the next article.

With combiners there is both the effect you want, i.e., redirecting the computer image into the person’s eye, and the potentially undesirable effects the combiner will cause in seeing through it to the real world. A partial list of the issues includes:

  1. Dimming
  2. Distortion
  3. Double/ghost images
  4. Diffraction effects of color separation and blurring
  5. Seeing the edge of the combiner

In addition to the optical issues, the combiner adds weight, cost, and size. Then there are aesthetic issues, particularly whether they make the user’s eyes look strange or affect how others see the user’s eyes; humans are very sensitive to how other people’s eyes look (see the EPSON BT-300 below as an example).

FOV and Combiner Size

There is a lot of desire to support a wide Field Of View (FOV), and for combiners a wide FOV means the combiner has to be big. The wider the FOV and the farther the combiner is from the eye, the bigger the combiner has to get (there is no way around this fact; it is a matter of physics). One way companies “cheat” is to not support a person wearing their glasses at all (like Google Glass did).

The simple (not taking everything into effect) equation for the minimum width of a combiner is 2 × tan(FOV/2) × distance, or in Excel, =2*TAN(RADIANS(A1/2))*B1, where A1 is the FOV in degrees and B1 is the distance to the farthest part of the combiner. Glasses are typically about 0.6 to 0.8 inches from the eye, and allowing for the size of the glasses and their frames, you want about 1.2 inches or more of eye relief. For a 40 degree wide FOV at 1.2 inches, this translates to 0.9″; at 60 degrees, 1.4″; and for 100 degrees, it is 2.9″, which starts becoming impractical (typical lenses on glasses are about 2″ wide).
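The same formula as a small Python function, if you want to try your own assumptions (inches here, matching the numbers above):

```python
import math

def min_combiner_width(fov_deg, eye_to_combiner):
    # Minimum width = 2 * tan(FOV/2) * distance to the farthest part of the combiner
    return 2 * math.tan(math.radians(fov_deg / 2)) * eye_to_combiner

for fov_deg in (40, 60, 100):
    print(fov_deg, round(min_combiner_width(fov_deg, 1.2), 1))  # 0.9", 1.4", 2.9"
```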

For very wide FOV displays (over 100 degrees), the combiner has to be so near your eye that supporting glasses becomes impossible. The formula above will let you try your own assumptions.

Popular/Recent Combiner Types (Part 1)

Below, I am going to go through the most common beam combiner options. I’m going to start with the simpler/older combiner technologies and work my way to the “waveguide” beam splitters of some of the newest designs in Part 2. I’m going to try and hit on the main types, but there are many big and small variations within a type.

Solid Beam Splitter (Google Glass and Epson BT-300)

These often use a polarizing beam splitter when paired with LCOS microdisplays, but they can also be simple mirrors. They generally are small due to weight and cost issues, as with the Google Glass at left. Due to their small size, the user will see the blurry edges of the beam splitter in their field of view, which is considered highly undesirable. Also, as seen in the Epson BT-300 picture (at right), they can make a person’s eyes look strange. As seen with both the Google Glass and the Epson, they have been used with the projector engine(s) on the sides.

Google Glass has only about a 13 degree FOV (and did not support using a person’s glasses) and about 1.21 arcminutes/pixel angular resolution, which is on the small end compared to most other headset displays. The BT-300 has about a 23 degree horizontal FOV (and enough eye relief to support most glasses) and dual 1280×720 displays, one per eye, giving it a 1.1 arcminutes/pixel angular resolution. Clearly these are on the low end of what people are expecting in terms of FOV, and the solid beam splitter quickly becomes too large, heavy, and expensive as the FOV grows. Interestingly, both are on the small end in apparent pixel size.
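The angular resolution figures quoted here and below follow from dividing the horizontal FOV by the horizontal pixel count; a quick sketch:

```python
def arcmin_per_pixel(fov_deg, horizontal_pixels):
    # Rough figure of merit: total horizontal FOV in arcminutes / horizontal pixels
    return fov_deg * 60 / horizontal_pixels

print(arcmin_per_pixel(23, 1280))  # Epson BT-300: ~1.1
print(arcmin_per_pixel(37, 1280))  # ODG R-7 (below): ~1.7
print(arcmin_per_pixel(90, 1200))  # Meta 2, ~1200 pixels per eye (below): ~4.5
```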

Spherical/Semi-Spherical Large Combiner (Meta 2)

While most of the AR/MR companies today are trying to make flatter combiners to support a wide FOV with small microdisplays for each eye, Meta has gone in the opposite direction, with dual very large semi-spherical combiners and a single OLED flat panel to support an “almost 90 degree FOV.” Note in the picture of the Meta 2 device that there are essentially two hemispheres integrated together, with a single large OLED flat panel above.

Meta 2 uses a 2560 by 1440 pixel display that is split between the two eyes. Allowing for some overlap, there will be about 1,200 pixels per eye to cover the 90 degree FOV, resulting in rather chunky/large (similar to the Oculus Rift) 4.5 arcminutes/pixel, which I find somewhat poor (a high resolution display would be closer to 1 arcminute/pixel).

The effect of the dual spherical combiners is to act as a magnifying mirror that also moves the focus point out in space so the user can focus. The amount of magnification and the apparent focus point are a function of A) the distance from the display to the combiner, B) the distance from the eye to the combiner, and C) the curvature. I’m pretty familiar with this optical arrangement since the optical design I did at Navdy had a similarly curved combiner, but because the distances from the display to the combiner and from the eye to the combiner were so much greater, the curvature was less (a larger radius).

I wonder if their very low angular resolution was a result of their design choice of the large spherical combiner and the OLED displays available that they could use. To get the “focus” correct, they would need a smaller (more curved) radius for the combiner, which also increases the magnification and thus the big chunky pixels. In theory they could swap out the display for something with higher resolution, but it would take more than doubling the horizontal resolution to have a decent angular resolution.

I would also be curious how well this large of a plastic combiner will keep its shape over time. It is a coated mirror, and thus any minor perturbations are doubled. Additionally, any strain in the plastic (and there is always stress/strain in plastic) will cause polarization effect issues, say when viewing an LCD monitor through it. It is interesting because it is so different, although the basic idea has been around for a number of years, such as from a company called Link (see picture on the right).

Overall, Meta is bucking the trend toward smaller and lighter, and I find their angular resolution disappointing. The image quality, based on some online see-through videos (see for example this video), is reasonably good, but you really can’t tell angular resolution from the video clips I have seen. I do give them big props for showing REAL/TRUE videos through their optics.

It should be noted that their system, at $949 for a development kit, is about 1/3 the price of Hololens and the ODG R-7 (each with only 720p per eye), though higher than the BT-300 at $750. So at least on a relative basis, they look to be much more cost effective, if quite a bit larger.

Tilted Thin Flat or Slightly Curved (ODG)

With a wide FOV tilted combiner, the microdisplay and optics are located above in a “brow,” with the plate tilted (about 45 degrees), as shown at left on an Osterhout Design Group (ODG) model R-7 with a 1280 by 720 pixel microdisplay per eye. The R-7 has about a 37 degree FOV and a comparatively OK 1.7 arcminutes/pixel angular resolution.

Tilted plate combiners have the advantage of being the simplest and least expensive way to provide a large field of view while being relatively lightweight.

The biggest drawback of the plate combiner is that it takes up a lot of volume/distance in front of the eye, since the plate is tilted at about 45 degrees from front to back. As the FOV gets bigger, the volume/distance required also increases.

ODG is now talking about a next model called “Horizon” (early picture at left). Note in the picture how the combiner (see red dots) has become much larger. They claim to have a >50 degree FOV, and with a 1920 x 1080 display per eye, this works out to an angular resolution of about 1.6 arcminutes/pixel, which is comparatively good.

Their combiner is bigger than absolutely necessary for the ~50 degree FOV.  Likely this is to get the edges of the combiner farther into a person’s peripheral vision to make them less noticeable.

The combiner is still tilted, but it looks like it may have some curvature to it, which will tend to act as a last stage of magnification and move the focus point out a bit. The combiner in this picture is also darker than the one on the older R-7 and may have additional coatings on it.

ODG has many years of experience and has done many different designs (for example, see this presentation on LinkedIn). They certainly know about the various forms of flat optical waveguides, such as the ones Microsoft’s Hololens is using, that I am going to be talking about next time. In fact, Microsoft licensed patents from ODG for about $150M US.

Today, flat or slightly curved thin combiners like ODG is using are probably the best all-around technology in terms of size, weight, cost, and, perhaps most importantly, image quality. Plate combiners don’t require the optical “gymnastics” and the level of technology and precision that the flat waveguides require.

Next time — High Tech Flat Waveguides

Flat waveguides using diffractive optical elements (DOEs) and/or holographic optical elements (HOEs) are what many think will be the future of combiners. They certainly are the most technically sophisticated. They promise to make the optics thinner and lighter, but the question is whether they yet have the optical quality and yield/cost to compete with simpler methods like what ODG is using on the R-7 and Horizon.

Microsoft and Magic Leap are each spending literally over $1B US, and both are going with some form of flat, thin waveguides. This is a subject unto itself that I plan to cover next time.