CES 2018 (Part 1 – AR Overview)

First, I just added a grammar checker to my computer which should help make this blog more readable. I tried it out in the last two articles, and it picked up a “few” (ok, if over 30 counts as few) mistakes.  This blog has over 15,000 “active users” (as defined by Google Analytics), including executives at both small and large companies, and I would like to make it easier for them to read.


Attending CES is the proverbial drinking from a firehose. I had over 25 meetings in 4 days, on the show floors and in hotels all over the strip, from breakfast through dinner. This year I was mostly focused on Augmented Reality (AR) displays and optics, with some attention to automotive HUD and projection. Unfortunately, some of the best things I saw, I can’t talk about, and it is going to take some time to write up what I can discuss. I thought it would be best to start with a set of photos with some quick comments and then write some more in-depth articles later on selected topics.

I’m going to try and just present the various things I saw at CES without many comparisons or deep analysis. I had to make a bit of an exception for “diffractive waveguides,” as there are so many of them and they are a major topic concerning Microsoft Hololens and Magic Leap. It was tempting to go into a discussion of the pros and cons of the various optical solutions, but I will have to save most of that for another day as there is so much to cover.

Lumus Waveguide

Shown below is waveguide maker Lumus’s demo headset with dual 40-degree FOV waveguides, each driven by a Raontech 1080p field sequential color LCOS device. The headset is mocked up to look similar to Hololens; there are no functional cameras, sensors, or computers in it. There is plenty of eye relief to wear it over glasses, and you can easily see the wearer’s eyes. The waveguides transmit over 80% of real-world light to the eyes. Additionally, the “top shooter” design leaves the user’s peripheral vision to the sides and down unobscured.

Lumus has a multi-semi-mirror waveguide technology which is distinctly different from the diffractive waveguide technology of Hololens and Magic Leap.

Their image quality appears to be significantly better than that of diffractive waveguides. But I must caution that Lumus was only showing limited demo content, and I did not have the time or opportunity to try test patterns that might catch problems. As I often warn my readers, show-floor demos are usually set up with content that makes the product look good and avoid content that highlights issues.

While the 40-degree 1080p waveguide garnered the most attention in the Lumus booth, they were also showing their lower-cost and lower-resolution “side-shooter” designs along with a sleeker design concept. This design trades resolution for lower cost, lighter weight, and smaller size. It blocks more of the user’s peripheral vision, but not much more than a pair of sunglasses with large frames.

Lumus makes the optics and attaches the displays; the rest of the system is up to their OEM customers, so their prototypes are only mockups. The next company I am going to discuss, Vuzix, makes both the optics and the end product.

Vuzix Blade Monocular Diffractive Waveguide (With DLP Microdisplay)

Vuzix represents perhaps a more practical, stylish, and business-oriented approach to waveguide glasses. Vuzix has dropped all pretext of using them for watching movies and is demoing text and simple graphics content. The Blade is a monocular (single-eye) display aimed at providing basic information to the user.

The Vuzix Blade uses a diffractive waveguide custom made by them. Like Lumus, it transmits more than 80% of the real-world light to the eyes. The overall image quality suffers from being a diffractive waveguide, and there are noticeable color variations across the field of view; something that is inherent in all diffractive waveguides including Hololens and should be expected in Magic Leap’s if/when it comes out (so I am not just picking on Vuzix).

To begin with, the lenses are worn so close to the eye that glasses wearers will require prescription lenses, which is both a pro and a con. With the lenses closer to the eye, less weight rests on the person’s nose, and the glasses fit better and more comfortably. They have (only) a WVGA (854×480) DLP display mounted in portrait mode (long side running vertically) to allow them to move the image vertically in the user’s view and better position it. With the DLP, they can make the image very bright (more than 3,000 nits), as needed for outdoor use. The Blade connects wirelessly to a phone or computer and will run for about 1 hour with the display on continuously, or about 8 hours of “typical” intermittent use.

Vuzix is trying to build a product to fit their customers’ needs. They are going to lose out on specs and image quality to other companies trying to make the best demo. But the Blade is much more of a complete product, not a mock-up to show what they can do.

WaveOptics Diffractive Waveguides

WaveOptics invited me to their suite to see their latest work in diffractive waveguides made of both glass and plastic. They were showing a “top shooter” design running in a mocked-up set of glasses to suggest how it would look in a more finished product. Like Lumus, they are a waveguide component maker and not a whole-system maker. One thing that makes WaveOptics stand out a bit is that they support both glass and lower-cost plastic waveguides, whereas most other waveguide makers only use glass (which can be laminated for safety). WaveOptics claims to have made advances that allow their waveguides to be canted/angled to better wrap around a person’s head like normal glasses.

The picture (left) does not show the image, as it is very tricky and time-consuming to get a picture from a waveguide that looks anything like what you see with your eye. They also had several mockups of how their waveguides would look in a finished product. It is impossible to objectively compare the various waveguides when you have different setups and different content. I can say that I saw the “typical” diffractive waveguide issues (color non-uniformity across the field and “waveguide glow”), but I could not tell whether it was better or worse than any other diffractive waveguide.

DigiLens Diffractive Waveguide

Another diffractive waveguide maker is DigiLens. They use a “frozen LCD” process to make their diffraction gratings. I did not see them at CES (due to a mistake on my part), but I did meet with them at Display Summit in November and saw their technology. DigiLens is in the process of coming to market with consumer products for bicycle and motorcycle use.

As I wrote concerning the other diffractive waveguides, DigiLens’s waveguide has the same advantages (thin and reasonably transparent) and the same image issues associated with diffractive waveguides.

Broad Statement On Diffractive Waveguide Limitations (Including Magic Leap and Hololens)

I have seen many diffractive waveguides, including Microsoft Hololens and the ones above, and I’m sorry to take this aside, but it must be said. The same principle of physics that makes diffractive waveguides work, the bending of light based on its wavelength, is the source of their downfall when it comes to image quality. There is the additional problem that the image light must bounce many times across the waveguide’s exit grating, and on each encounter with the exit grating the image quality is degraded.
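To make the wavelength dependence concrete, the governing relation is the basic grating equation, sin(θm) = sin(θi) + m·λ/d: the exit angle depends directly on wavelength, so red, green, and blue leave the grating at different angles and take different paths through the waveguide. A minimal sketch in Python (the 700 nm grating pitch is a made-up illustrative value, not any vendor’s spec):

```python
import math

def diffraction_angle_deg(wavelength_nm, pitch_nm, incident_deg=0.0, order=1):
    """Grating equation: sin(theta_m) = sin(theta_i) + m * lambda / d."""
    s = math.sin(math.radians(incident_deg)) + order * wavelength_nm / pitch_nm
    return math.degrees(math.asin(s))

# Hypothetical 700 nm grating pitch; red, green, and blue diffract at
# noticeably different angles, the root cause of the color variation
# seen across all diffractive waveguides.
for name, wl in [("blue", 450), ("green", 532), ("red", 635)]:
    print(f"{name}: {diffraction_angle_deg(wl, 700):.1f} deg")
```

Red bends much more than blue through the same grating, which is why each manufacturer’s “tricks” can only partially compensate for color non-uniformity.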

Each manufacturer has their own “tricks.” Hololens uses Nokia’s slanted waveguide; Vuzix started with Nokia’s slanted waveguide but has developed their own method; DigiLens uses a UV-frozen LCD; and Magic Leap, based on their latest papers and patents, may be using dual nano-beams. But the fundamental principle of using a diffraction grating is the same for all of them. The tricks may reduce some of the negative effects, but they can’t eliminate them.

Diffractive waveguides may be used to make useful products within their limits. They are thin, light, and usually have good light transmission (often greater than 80%). They support the “look” of glasses. Companies appear to be getting better at manufacturing them, so costs should keep coming down. For low-resolution basic-information and “data snacking” applications such as Vuzix is targeting, diffractive waveguides might very well be useful, but they are never going to have great image quality as objectively measured.

Note, Lumus with their partial-mirror-based waveguide is in a different category, as they don’t depend on diffraction. They may have other optical issues (I have not had the chance to evaluate them). Their diffractive waveguide competitors suggest that Lumus waveguides are much more costly to make, but as I can’t get reliable cost data from any of the companies, this is impossible to prove one way or the other.

LetinAR Pin Mirror Optics – Something Very Different

LetinAR is a small Korean startup with clear optics that have “pin mirrors” embedded in them, which look like one or more dots in the clear material from far away. Their multi-pin-mirror 70-degree element is shown in the picture at left. The physics would seem to be related to pinhole cameras and to the University of North Carolina’s Pinlight Display, but LetinAR uses larger and fewer pin mirrors. LetinAR makes their optics out of both glass and plastic.

The fascinating thing about their technology is that while it seems very simple, the resultant image light has a very high “f-number,” which results in the image being in focus regardless of where your eye is focused, and as a result, the image is very sharp. Their pin mirror optics act on the light in a very different way than any of the waveguides.
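A rough way to see why a small aperture gives a large depth of focus: the retinal blur circle of a defocused point scales with the aperture diameter, so stopping the image light down to a small pin mirror shrinks the blur regardless of where the eye is focused. The sketch below uses simple geometric optics with illustrative numbers; the eye model and pin-mirror size are my assumptions, not LetinAR’s figures:

```python
def blur_diameter_mm(aperture_mm, eye_focal_mm=17.0,
                     focus_dist_mm=250.0, object_dist_mm=float("inf")):
    """Approximate retinal blur-circle diameter for a defocused point:
    blur ~ aperture * eye_focal_length * |1/object_dist - 1/focus_dist|."""
    return aperture_mm * eye_focal_mm * abs(1.0 / object_dist_mm - 1.0 / focus_dist_mm)

# Eye focused at 250 mm while the display image is effectively at infinity.
# A ~1 mm pin mirror vs. the eye's own ~4 mm pupil: blur scales linearly
# with aperture, so the pin mirror gives ~4x less defocus blur.
print(blur_diameter_mm(aperture_mm=4.0))
print(blur_diameter_mm(aperture_mm=1.0))
```

This is the same reason a pinhole camera is in focus everywhere; the open question (discussed below) is what the small aperture does to light efficiency.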

In my discussions with LetinAR, I was never able to understand the display light throughput of their system. One would expect a high f-number optical system to have low efficiency due to throwing away most of the more diffuse light. But they showed it using an OLED, which cannot supply that much light, and claimed it to be reasonably efficient.

They support a ~40-degree FOV with a single pin mirror and then add pin mirrors to increase the FOV. The user’s eye must be relatively close to the pin mirror for it to work, so people who wear glasses will require prescription lenses. Perhaps because of the pin mirrors’ depth of focus, the lenses may only need to correct the user’s view of the real world (I’m not sure, particularly concerning astigmatism).

On their 70-degree FOV optics with 15 pin mirrors, I noticed some distortion/misalignment as my eye moved from one pin mirror to the next. LetinAR assured me that this was due to it being an early prototype and that they need to improve their manufacturing tolerances.

DeepOptics Adaptive Lens

In Lumus’s booth, demonstrated with a Lumus waveguide, was an electrically controlled, variable-focus lens by Deepoptics. Waveguides, both Lumus’s semi-mirror-based and the diffractive type, require collimated (focused near infinity) light to work. As explored in my prior articles about Vergence Accommodation Conflict (VAC), to address VAC the focus of the light must be changed on or after exiting the waveguide.

Deepoptics uses a phase modulated liquid crystal sandwiched between optically transmissive control plates. In their current demo, they did not have any eye tracking to control the focus as would be required for a working VAC system, but they did demonstrate that they could electronically control focus.
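The focus change required is straightforward to quantify: collimated light appears at optical infinity, and a lens of power −1/d diopters after the waveguide moves the image to an apparent distance d. A minimal sketch (illustrative only; Deepoptics has not published its actual power range to my knowledge):

```python
def lens_power_diopters(apparent_distance_m):
    """Power (in diopters) an adaptive lens after the waveguide needs in order
    to move a collimated (infinity-focused) image to a given apparent distance.
    Negative power diverges the light so it appears to come from nearby."""
    return -1.0 / apparent_distance_m

# Apparent image distances of 0.5 m, 1 m, and 2 m:
for d in [0.5, 1.0, 2.0]:
    print(f"{d} m -> {lens_power_diopters(d):+.1f} D")
```

An eye-tracked VAC system would drive this power continuously to match where the user’s eyes are converging.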

Because it must sit between the output of the waveguide and the eye, changing the focus of the display image would also affect the real-world focus. Deepoptics demonstrated using a polarizer to polarize the real-world light to the opposite polarity of the display’s light. In this way, their adaptive lens would only change the focus of the display’s light. But this does mean negating the >80% see-through advantage of Lumus and other waveguides.

An alternative approach that would block less light would be to have an adaptive lens on both sides of the waveguide; the inner adaptive lens would correct for VAC, and the outer adaptive lens would undo the correction for the real-world. But how well dual corrective optics would work is yet to be seen.

Raontech LCOS and More Conventional Optics

Raontech is a maker of LCOS microdisplays, and they had a significant presence in the AR area of CES this year. In addition to their booth, their LCOS devices could be found in Lumus’s 1080p demo as well as in headsets from Mad Gaze and ThirdEye.

Raontech is currently shipping two different sizes of 720p field sequential color LCOS displays. One is smaller and less expensive; the other, used by Mad Gaze and ThirdEye, is larger and more costly but makes for better image quality with simpler optics. Lumus was using Raontech’s 1080p display, and Raontech was demonstrating a Quad-HD (quad 720p, or 2560 by 1440 pixel) device they recently developed.

Both Mad Gaze and ThirdEye appeared to be using the same optics that Raontech had developed for their panels. While the optics use a “birdbath” design (see my article on birdbath optics), the curved mirror is on the bottom rather than in front. This configuration results in significantly more real-world and display light getting through to the eye than, say, last year’s Osterhout Design Group R-8 and R-9 with the curved combiner in the path to the real world. Optically, Raontech’s design is very similar to Google Glass, only scaled up and rotated 90 degrees. The image quality is relatively good, but it still blocks about 60% of the real-world light (before any additional tinting is added) and polarizes it.
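As a back-of-the-envelope illustration of where losses like the ~60% come from, light budgets through combiner optics are multiplicative per stage. The stage values below are assumptions chosen for illustration, not measurements of Raontech’s design:

```python
from functools import reduce

def throughput(*stage_transmissions):
    """Multiply per-stage transmissions to get the overall light throughput."""
    return reduce(lambda a, b: a * b, stage_transmissions, 1.0)

# Illustrative real-world path: polarizing the light loses ~50%, and the
# remaining optics pass an assumed ~80% -> roughly 40% reaches the eye
# (i.e., ~60% blocked, consistent with the figure quoted above).
print(throughput(0.5, 0.8))

# Classic birdbath display path: light reflects off a 50/50 beamsplitter
# (x0.5), bounces off the curved mirror, then must transmit back through
# the same splitter (x0.5) -> at best ~25% of the display light survives.
print(throughput(0.5, 0.5))
```

The display-path loss is one reason birdbath designs need bright microdisplays like DLP or LCOS with strong illumination.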

The Raontech design gives reasonably good eye relief, so it can be worn over a person’s glasses. But this also means the lenses and optics are further away, which puts more of the weight on the nose and makes the headset look bigger overall. These are the types of design trade-offs that have to be made.

Because it is a birdbath with a beam splitter, the optics are bulky. ThirdEye said that they are looking at other optical designs for their future products. ThirdEye’s design looks bulkier than Mad Gaze’s due to the larger batteries built into their design, a classic trade-off of battery life versus looks.

Direct View Displays With Large Spherical Combiners

Meta 2 seems to be the one that started the trend of using a large flat panel display with dual large spherical combiners, what many are calling “bug-eye displays” for obvious reasons. In particular, I came across Dreamworld (one of whose founders came from Meta), Real Max, and Mira using bug-eye optics.

The bug-eye approach with either an LCD or OLED flat panel (either dedicated or using a phone’s display) has to be about the least expensive way to make a near-eye display with reasonably good image quality. The downsides: it is big and bulky; it blocks most (usually well more than 50%) of the real-world light; there is a blurry zone where the two spheres meet in the middle; it is only bright enough for indoor use; and it blocks the view of the user’s eyes. Additionally, it makes the user look like, well . . . a giant robot insect.

More Information From My Trip to CES Next Time

Next time I plan to cover information related to Micro-LEDs and Heads Up Displays (HUD) for automobiles. In the future, I also plan on going into more detail on some of the topics and devices I only briefly addressed in this article.


  1. Great write up on many companies and technologies I haven’t seen covered elsewhere – thank you!

    Any ETA when you can cover some of things you can’t talk about? months, years? Looking forward to it!

    Long-term, when AR glasses are ubiquitous, prescriptions will make a lot of sense, but they seem like a dead end for this stage: most of these early adopter units will be used by developers to demo applications to clients, press, etc. Each device will be used by a variety of people, not one user, and you can’t rely on your press or clients not to have glasses. Many of these companies’ devices seem like they will be hard to use for current real-world use cases if prescriptions are required. Your photo on the blog seems to indicate you wear glasses; how did you demo many of the displays?

    Along those lines, not in this article but a previous one- Magic Leap’s lack of peripheral vision (their design blocking sides) might be very deliberate for who the initial demo audience is- entertainment, artists, press, “wowing”. For a non-enterprise audience (or safety concern) , blocking off what isn’t manipulated is a benefit, to not be reminded how much isn’t affected. A lower peripheral vision will give the illusion a smaller FOV is more acceptable- only some of your view isn’t working.

    For some hololens demos we’ve actually masked over everything that wasn’t in FOV, so everything the user saw could be altered and was more immersive. Obviously, a product like this would be unsafe for actual real world use, but for certain clients or entertainment it is preferable. Not defending any of their choices, just that this choice to block off the sides might be very deliberate and this might trickle into other manufactures devices as well, depending on device audiences

    1. Thanks,

      I don’t have any control over when companies make their disclosures.

      The general feeling is that VR is not a big enough market for the big companies to care about. VR is seen as limited to a subsegment of the game market. A big problem is that it is too isolating.
      So the theory goes, AR is the next big thing after cell phones. But to make this happen, you want something that is more capable and portable for everyday use than a smartphone. Something you can take with you out into the real world.

      As you cut down a person’s peripheral vision and block/darken the real world, you are tending toward turning AR into a variation of VR where you can see through to a darkened room a little bit. This might be fine for a theme park attraction or a very limited demo, but it is not going to be the next big thing.

    1. Thanks,
      I think this is still something of a lab prototype. The big issue, as the article linked below points out, is that it can’t deal with color. Dealing with color, including single perceived colors made up of wider wavelength ranges, is a failing of diffractive waveguides as well. I have not read all the details, but likely the way it bends light is wavelength dependent.

      As such it might have some interesting applications in fields where color is not important or necessary. I’m mostly interested in displays and optics for sizable markets in the next 10 years, and most of these require “full” color.


      1. Some of the visible-wavelength metamaterial lenses in TiO or Al can at least handle several narrow colour bands if they use an analogue to a Bayer pattern/pixel for each target wavelength.

          1. I can’t find the bookmarks but it was something like this:

            It seems that each component block of an RGB building block will resonate for one tight wavelength group only.
            In at least some cases in these papers, angular multiplexing is needed; in at least some cases the focus of each target wavelength group must be tuned so they appear at the same plane.
            I’m assuming that the hologram of a mirror can be created and that pixel-wise amplitude modulation of the incident light (i.e., a 2D image) can be used with a pair of these as waveguide input and output gratings.

          2. I just took a quick look (I have a lot of other things going on right now), but these papers look to be highly experimental techniques that show interesting characteristics but are not close to being practical for commercialization.

  2. Can give more details on DigiLens “frozen LCD” please?

    I thought that instead of just a liquid crystal/polymer holographic grating, they had an active polymer-dispersed liquid crystal grating, whereby the liquid crystal would be electrically injected from holes in the substrate into the cavity in front of the surface-relief polymer holographic grating, and each colour band would have its grating activated sequentially to avoid colour cross-talk.
    The active switching gratings were also marketed for functions such as stacking at the exit pupil for switchable focus and display tiling.

    I’ve seen quite a few pinhole aperture display architectures but never one where the field of view increases with increasing number of pinholes. It suggests that each pin only samples part of the display.

    1. Jonathan Walden has a slide set on Linkedin Slide Share that gives some description, particularly slide 10 (marked 9).


      My understanding is that he freezes/cures/locks the LC with UV light and can then affect the non-cured/polymerized LC with an electric charge to switch it.

      Regarding the pinhole lens, it seemed to my eye that each pinhole covers an overlapping region.

  3. Interesting that many of these seem to be display technology prototypes rather than actual AR. Vuzix, for example, is basically just a screen on your face; it doesn’t interact with the real world in any greater sense than Google Glass back in the day. Are there even inertial sensors in the frame?

    It’s fair enough I suppose. True AR requires so many different techs to be perfected that companies tend to focus on just one or two.

    1. The expectations for AR greatly exceed what people know how to do. People see things done in post-production in movies and think that it is possible or soon will be. The “physics” involved with getting an image coupled to the eye while maintaining transparency is extremely difficult. I don’t believe the basic physics of diffractive waveguides will ever result in very good image quality, as I have written many times. I wish it were different, but those are the facts. If you want a very transparent and thin display and can accept the image quality results, then waveguides will have a market.

      Vuzix has been around for a long time and seems to be taking a pragmatic approach. They are not trying to show movies or the like, they are showing basic information to help people get a job done. They are living within the limitations of the waveguide technology and leveraging its advantages.

  4. Pin mirrors wins the prize for original problem solving approach. Can’t wait to check those out and to see where they can go with it! Thanks for the write-up Karl and congrats on dramatically reducing the typos 😉


  5. I lament the fact that so much effort and attention is made on this barely useful technology-fluff really-when so little is placed on producing rechargeable zinc air batteries. This battery technology is advanced enough there are 2 US companies manufacturing them commercially. These batteries will REVOLUTIONIZE the solar energy field and actually revolutionize life as we know it!! They are many times less expensive and lighter than lead acid and Li-ion and many times superior in every way. The two companies are LOS and Fluidic Energy (both private). I can’t buy their batteries and they won’t return my phone calls. I’m going to attempt to make my own, charge them only in Summer when solar is abundant and save them for Winter. Here efficiency is not a factor and a low tech version will be practical.

    1. There are huge efforts going into various forms of solid-state, super-fast-charging, and related batteries, particularly due to electric automobiles. If they were practical, we would have them already, and maybe we will in a few years. But even if the battery size and weight were reduced to near zero, there are still serious challenges for AR headsets in other areas, including optics. There is a reason that companies start by drawing “Ray-Ban”-looking glasses and end up with something looking like Hololens; it is not just the battery.

  6. Just had a thought as to why rechargeable zinc air batteries haven’t made it into the automotive field. This technology has been around for decades with one major hurdle until now: In repeated recharging the zinc forms dendrites which short the batteries out. They have come up with effective solutions, but I suspect with the rapid recharging by an alternator in a car their solutions still can’t handle this.

  7. We bought Meta 2 AR glasses because they claimed 90º of FOV. After receiving it, we measured it and it was only 60º diagonal. Did you try it? How is it possible they claim that and no one tells the truth about it?

    1. There are no “marketing police” short of people returning the product, or a few people will call someone out for a false claim on a blog.

      One look at how the Meta 2 is constructed should tell you that the 90-degree FOV was not possible. You can see in the picture the reflective coating on the combiner, which is pretty far away from the wearer’s eye. The FOV can’t be bigger than the angle subtended at the wearer’s eye by the reflective patch on the front combiner/shield.
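      This geometry is easy to sanity-check: the maximum possible FOV is the angle the reflective patch subtends at the eye, 2·atan(w / 2d). The numbers below are purely illustrative, not measured Meta 2 dimensions:

      ```python
      import math

      def max_fov_deg(combiner_width_mm, eye_distance_mm):
          """Upper bound on FOV: the angle the combiner's reflective patch
          subtends at the eye, FOV = 2 * atan(w / (2 * d))."""
          return 2.0 * math.degrees(math.atan(combiner_width_mm / (2.0 * eye_distance_mm)))

      # A 90-degree FOV requires the combiner patch to be about twice as
      # wide as its distance from the eye (w = 2 * d * tan(45) = 2 * d);
      # move it farther away and the maximum FOV shrinks.
      print(max_fov_deg(100, 50))
      print(max_fov_deg(100, 80))
      ```

      So a patch well forward of the eye simply cannot subtend 90 degrees unless it is implausibly wide.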

  8. Hi Karl,

    In light of the recent WaveOptics partnership with EV Group, what more can you say about the experience with their prototype waveguides? Are their FOV, brightness, eyebox superior to other waveguide offerings?


    1. While I saw Waveoptics at CES, I didn’t have a chance to do a technical evaluation.

      Subjectively, the glass version looks similar to Hololens in terms of image quality (similar issues). They have an advantage in allowing the waveguide to cant/slant to better wrap around a person’s head like normal glasses do, whereas the Hololens waveguide is viewed perpendicular to the eye. Waveoptics also has the option to use plastic rather than glass for the waveguide.

  9. Hi Karl, any idea on who or what kind of companies manufacture the semi-spherical combiners on both the MIRA and Meta 2, seeing that they are similar? Also, would it be right to assume that the curvature follows a standard? I was hoping to be able to get my hands on just the combiners.

    1. Most likely they are custom made by a Chinese optical company. I don’t know of any standard as the curvature and amount of coating would be custom.

      The amount of curvature you want is a function of the distance from the display to the combiner and then from the combiner to the eye as well as how much magnification and focus-change is desired.
