Intel AR Glasses Follow Up

Introduction

The day after I posted an article about news of Intel trying to sell their AR Glasses Unit, The Verge came out with an article and video about Intel’s AR glasses. The AR glasses are called Vaunt, and they come from Intel’s New Devices Group (NDG). The Verge article fills in some gaps, namely what Intel has done with the concept after acquiring the Swiss startups Lemoptix and Composyt (both spun out of EPFL).

The Verge article pretty much presents Intel’s use-case rationale for the glasses without judgment. On the surface, the progress is not that far from where EPFL was back in 2012, other than packaging it into a smaller form factor. I’m assuming the cell-phone-sized Microvision ShowWx projector used in the earlier prototype (see the Appendix at the end) has been replaced with the MEMS beam scanner from the Intel-acquired Lemoptix.

BTW, I’m not trying to pick on Intel or The Verge here. Intel is doing what companies do all the time: presenting their product in the most favorable light to reporters who don’t understand the underlying technology. So it is very easy for the reporter to accept and go along with the company’s story. It is also self-fulfilling, as companies will place exclusives with media sources that tend to report on technology favorably. In the extreme, cough . . . cough, a company like Magic Leap will go to Rolling Stone.

Hard Problems Unsolved – Limitations Become “Features”

Key issues relative to Vaunt discussed in my last article included the eye-box/pupil size, the low resolution of laser beam scanning (LBS) displays, and how the holographic mirror/lens would work with color images. I was curious to see how Vaunt would address these limitations, and the simple answer is they didn’t. Vaunt has a very small eye box, very low resolution (400 by 150 pixels according to The Verge), and still only the single color of red. All the hard problems are yet to be addressed. What you are left with is a very expensive toy. What’s more, I don’t think that, given 10X more time and money, it will solve the key problems while giving a good view of the real world.

The largest part of an LBS projector is the optics that combine the red, green, and blue lasers into a single beam, so doing red only has a major impact on size and cost. Using a single color also means that the holographic film that bends the light toward the eye can be tuned to the one wavelength and not deal with the optical issues that arise with three different wavelengths. A single color also reduces the problems the holographic film will have in diffracting real-world light, since it only has to deal with the one wavelength. If anyone else thought they could get away with a single color, their headset would be a lot smaller too.
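
As a rough illustration of why (using the simple grating equation rather than a full volume-hologram treatment, so the numbers are only indicative), the angle at which a grating of period $d$ sends light of wavelength $\lambda$, arriving at incidence angle $\theta_i$, into diffraction order $m$ is:

$$\sin\theta_m = \sin\theta_i + \frac{m\lambda}{d}$$

Because $\theta_m$ depends directly on $\lambda$, red (~640nm), green (~520nm), and blue (~450nm) get bent by noticeably different amounts, so a holographic film recorded for one wavelength cannot steer and focus all three colors to the same place without additional layers or correction.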

The application and patent I discussed last time were both attempts to address the inherently small eye box of near-eye laser scanning optics. Application 2015/0362734 (below left) proposed having the holographic film diffuse the image and then using a contact lens (developed by Innovega) to bring it into focus. This obviously is not user-friendly (you have to be fitted with contact lenses just to try it). Alternatively, Patent 9,846,307 (‘307, below right) shows using multiple projectors, but this quickly becomes expensive and extremely difficult to implement. The ‘307 patent figure also shows how, with a single projector, the light rays miss the eye if the eye moves.
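
To put some rough numbers on the eye-movement problem (my back-of-the-envelope figures, not Intel’s: assuming the pupil sits roughly 10mm in front of the eye’s center of rotation), a 15-degree glance shifts the pupil sideways by about:

$$10\,\text{mm} \times \sin(15^\circ) \approx 2.6\,\text{mm}$$

By my rough estimate, that is bigger than the entire eye box of this kind of laser scanning design (on the order of a millimeter or two), and it is before accounting for person-to-person differences in pupillary distance, which is why the rays in the ‘307 figure miss the eye as soon as it moves.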

Neither of these approaches to increasing the eye box is even remotely practical. I was particularly curious how Intel would address the eye box issue and, “surprise,” they didn’t. Intel has instead turned the small eye box into a “feature.” The Verge quotes Mark Eastwood, industrial design director of Intel’s NDG:

”We didn’t want the notification to appear directly in your line of sight,” says Eastwood. “We have it about 15 degrees below your relaxed line of sight. … An LED display that’s always in your peripheral vision is too invasive. … this little flickering light. The beauty of this system is that if you choose not to look at it, it disappears. It is truly gone.”

You have to admire some good spin. The reason the image disappears is that the eye box is so small that the image can only be seen when looking at it from a precise location (per the diagram in ‘307 above). Quoting the author of The Verge article, “you’ll need to have Vaunt glasses adjusted to your pupillary distance,” and this adjustment is shown in the video (left) as part of the process of getting the prescription glasses.

In The Verge video, the author says that it would be so natural to glance over and down to see a message (still from the video shown on the right). But first, the user would have to be alerted by some other means because the image is invisible unless you are looking precisely at it. Also, people are instinctively extremely sensitive to where other people’s eyes are looking, so glancing to look at messages is going to make the user look “shifty-eyed” as the author did in the video (right — hopefully, he was exaggerating).

Most companies go to the effort of making a larger eye box so the image disappears only when you want it to, not simply because you happened to look in a different direction. If they wanted your glance to control the display, they would track your eye.

The Verge makes the point that everything is small and low-powered. But then, supporting only a single-color, low-resolution image with a small eye box would be small and low-powered with any other display technology as well.

The good technical point is that because the Vaunt uses laser light, the image is “in focus” regardless of your prescription, and this is true. It could also be true using a panel-type display (LCOS or DLP) with laser illumination. What The Verge did not say (and may not have known) is that the same laser-light physics can be problematic for people with “floaters,” which most people develop between the ages of 40 and 65, because the floaters cast sharp shadows on the retina (I learned this first-hand when wearing an LBS headset).

So Much Was Not Shown

There are no through-the-lens videos and no indication of the eventual price or availability, but the video and article made it sound expensive. The author was only shown short demo video loops and apparently not the device fully working. A big issue for me is that they did not let the author take it outside; this is a big red flag for being “demoware.” I’m particularly concerned about the hologram’s diffraction-grating effects on the real world when looking through the glasses; going outside would expose the glasses to very bright light coming from all angles, which could reveal problems, specifically diffraction-grating color separation.

Conclusion – I Just Don’t See a Big Market For Vaunt

The eye box is one of those things most people would not think to ask about. Once you understand it, it becomes a problem that may never be solved cost-effectively. Laser beam scanning is chock-full of these types of problems that only become apparent when you dig down into the details. The simple drawings and presentations by LBS companies skip over these issues. There are reasons why, even though the concept has been around for over 27 years since the HIT Lab at the University of Washington proposed it, you have not seen single- or full-color LBS near-eye displays.

The question becomes: how big a market could there be for a display that is low resolution (fewer pixels than an Apple Watch), red only, has a small eye box, requires custom prescription glasses, and will have problems with users over 40 (when people typically start getting floaters in their eyes)? In other words, it is a “take what the technology gives you” approach. How much could you sell them for versus how much would they cost to make?
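
For a rough sense of scale (taking The Verge’s 400 by 150 figure at face value, and using the first-generation 38mm Apple Watch’s 272 by 340 panel as my point of comparison):

$$400 \times 150 = 60{,}000 \ \text{pixels} \quad \text{vs.} \quad 272 \times 340 = 92{,}480 \ \text{pixels}$$

So the Vaunt has roughly two-thirds the pixels of the smallest Apple Watch, and only in red.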

It is not just what the Vaunt glasses currently do; it is that after over five years of development and investment, the last two years or so with Intel’s backing, they have not been able to demonstrate solving any of the key problems with the technology. How can Intel think, at least according to Bloomberg’s article “Intel Is Said to Plan Sale of Majority Stake in AR Glasses Unit” (February 1, 2018), that it is worth $350M?

My reading of the tea leaves is that Intel has figured it has pumped enough money into it and wants someone else to take it off their hands, with all the seriously hard problems yet to be solved. It looks like Intel gave The Verge access to their developments to put “lipstick on a pig” in the effort to sell the group. Sorry for being so blunt, but that is honestly the way I see it.

Appendix: Some EPFL and Innovega Background to the Vaunt

After I wrote the first article, a user on Reddit alerted me to an old video by EPFL. The video gives a snapshot of where the technology was over five years ago and what has been accomplished since. Pretty much, the Vaunt is an integration of the EPFL technology into the form factor of the mockup. In 2012, they used a Microvision LBS projector (a product that lost tens of millions of dollars for Microvision, BTW), shown in the still captured on the left.

The video also showed a very short “through the optics” clip taken with an IR-sensitive camera labeled “Innovega.” This explains patent application 2015/0362734 with the special contact lens, which I commented last time looked like what Innovega has been talking about for years. Innovega (now branding the product as eMacula) has been trying to develop these special contacts since as far back as 2008.

Also shown in the video is a red laser shining on the lens and the diffraction pattern it creates (left). What is not shown is what happens to real-world images and light sources when viewed through the glasses. One would expect to see some diffraction effects.

Karl Guttag

12 Comments

  1. Excellent post, Karl. Kudos to the Intel engineers for squeezing the pieces into a small form factor. Any comments as to the battery life expected in the Vaunt glasses?

    • Hard to know the power consumption. It is only supporting low-resolution and sparse content. I would guess they only turn the display on for short periods. There is not a lot of room for the battery.

  2. Karl, good article. You dispel lots of myths in the display industry.
    What I don’t agree with is the following statement:
    “Also shown in the video is a red laser shining on the lens and the diffraction pattern it creates. What is not shown is what happens to real-world images and light sources when viewed through the glasses. One would expect to see some diffraction effects.”
    It’s actually possible to add the optical effect only to the reflective path and not the transparent path, especially when a laser is used. You have seen our NanoAR demo at CES, which strongly affects the reflected beam; the same effect doesn’t happen on the transparent light path.
    Yong-jing

    • Hi Karl, I don’t agree with the following:
      “But then, supporting only a single-color, low-resolution image with a small eye box would be small and low-powered with any other display technology as well.”
      If that were true, we would have been swamped for years with lightweight, small-form-factor HMD designs and products, even if monochrome, low resolution, or small FOV. But that’s not the case, as the technology is still, even in such cases, truly pushed. 🙂 Regards, Sara

      • Sara, you are free to disagree with me. But a large part of the microdisplay optics and light losses is in the color combining. If you only have a single color it will be much more efficient. If you collimate the light, which is easier with a single color, you can get more brightness with a small eye box.

        The problem is that people want color, and a small eye box is very user-unfriendly (very hard to get the image viewable with glasses).
        If you only wanted to have, say, a 320×240-pixel red (only) LCOS display, it could be made much cheaper than a red laser beam scanning display and would consume much less power.

        The problem is not making it, the problem is whether there is a real and not a theoretical market.

      • Hi Karl,
        Are you saying that you could get a 320×240 monochromatic LCOS display in a similar form factor to the Vaunt? What about a 640×480? How much would the size increase if you wanted to do full color?

        What intrigued me with the Vaunt was that it looked just like normal glasses from the outside. For example, I think there is a solid use case and market for sunglasses with an integrated HUD in the corner…if you can make them stylish and look like regular sunglasses.

      • The size will pretty much grow with resolution and color. Color adds a number of complications. You need to have three colors and then combine and homogenize them; this typically takes up over half the size of the projector. Vaunt looks like it is using a holographic mirror tuned to the red wavelength of the laser. This only works for red. If you have R, G, and B, they will each get bent and focused differently; thus, using red only is a big “cheat.” If you go full color, then it pretty much rules out a holographic mirror. Also, I would be curious what effect the holographic mirror has when you look at something (say white) with red in it (there will likely be some effect).

        If you only want a single color, LCOS can give great contrast; what trips it up in near-eye displays is that with field-sequential color the optics can’t be tuned to a single color and end up compromised. The tiny eye box is another cheat in the Vaunt design.

        If you have a single color, you could do a much cheaper version of a Vuzix- or WaveOptics-like design with a single-color waveguide, for example (going to a single-color waveguide would greatly simplify the design and reduce the cost). If you want color and moderate resolution (say WVGA), then Lumus seems to be the best today. At CES, Lumus demonstrated displays up to 1080p and greater than a 40-degree FOV with their optics.

        The point is that ordinary people want color displays (the military, in particular, will accept a single color to do a job), and full color majorly complicates the problem and makes the display bigger (be it LCOS or laser beam scanning). The optics can be made significantly simpler and smaller with a single color. On top of this, you also waste a lot of light in combining and homogenizing the colors; plus, depending on the optical design, with a single color you can tune all the optics to that specific wavelength to improve both efficiency and image quality.

  3. Hi Karl, Robert.
    Karl, thank you very much for your answer.
    Just three little reflections:
    - One single color may do a wonderful job or not. It depends on the application and on how good you are at using a proper symbology for the specific use case (huge impact).
    - And whether you are really taking advantage of the light weight or not at all; in the latter case, better to go RGB with all the heavy equipment and sensors, too.
    One should never forget the form factor in the evaluation; it is a scaling factor for this technology. No realistic evaluation can be done without it.
    - Even then, three colors may be put together onto a HOE in several ways, more or less efficiently, with a more or less lightweight design. I don’t think there is a simple formula in holography, which is based on interference and diffraction; it also depends a lot on the sharpness of the sources whether you can easily use three or not, and on how you recorded the HOE and its resulting spectrum.
    There are several other details coming from the material used and the way you recorded it, and again it is a compromise on the lightweight aspect.
    I don’t think that with one single color the optics are that simple, or of little value, given the correct form factor.
    Like Intel’s Vaunt, for example.

    Regards
    Sara

  4. Hi Karl,

    There are too many technical errors and approximations in your article. I am at PW this week; let’s meet.
