North’s Focals Laser Beam Scanning AR Glasses – “Color Intel Vaunt”

North’s Focals LBS AR Glasses Overview

North (formerly Thalmic Labs) announced a few days ago (Oct 23, 2018) the “Focals” laser beam scanning-based AR glasses (aside: a set of confusing names). The best article I saw on the “Focals” was from The Verge. The Verge, back in February 2018, covered a similar (now canceled) concept from Intel called the Vaunt.

A set of low-resolution (~300 by 300 pixels), small field of view (~15 degrees), custom-fitted AR glasses costing $999 might otherwise be a nothing announcement. But with both Amazon and Intel having invested in North/Thalmic, the Focals are garnering some attention from the press.

The Focals use laser beam scanning (LBS) to generate a low-resolution image (described by The Verge as 300 by 300 pixels, or about the resolution of an Apple Watch) with a holographic film to redirect the light back toward the eye. The basic “physics” behind the Focals, with red, green, and blue lasers, is the same as that of the single-color (red) Intel Vaunt AR glasses that I described in two articles (article one and article two) in February 2018.
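For a quick sanity check on what those numbers mean, here is the pixels-per-degree arithmetic as a minimal Python sketch. The resolution and FOV figures are The Verge’s estimates, and the 60 pixels-per-degree figure is the usual rule of thumb for 20/20 vision:

```python
# Back-of-envelope angular resolution of the Focals versus the human eye.
# The 300-pixel and 15-degree figures are The Verge's estimates, not
# official North specifications.

pixels_across = 300   # reported horizontal resolution
fov_degrees = 15      # reported field of view

ppd = pixels_across / fov_degrees
print(f"Focals: ~{ppd:.0f} pixels per degree")           # ~20 ppd

eye_ppd = 60          # rule-of-thumb limit for 20/20 vision
print(f"Fraction of 20/20 acuity: {ppd / eye_ppd:.0%}")  # ~33% per axis
```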

Laser Beam Scanning (LBS) and Its Midas Touch Consequences

LBS is the “King Midas touch” (the king whose touch turned everything into gold) of displays; it seems great until you understand the consequences. I have been describing the multitude of problems with LBS displays since this blog started in 2011. The “superpower” of the “infinite focus of laser beams” brings with it all manner of difficult-to-solve problems. The problems associated with using LBS in AR glasses include the following (a back-of-envelope sketch of the scanning constraint in problem 2 follows the list):

  1. Optics directing the laser light into the eye and the problems it causes with the view
  2. Low resolution and low frame rate electro-mechanical scanning
  3. Extremely small eye-box/pupil (the image disappears unless perfectly lined up with the eye)
  4. The complexity of combining three (or more) lasers into a single highly coaxial and tight set of beams for color
  5. Cost of lasers versus LEDs
  6. Controlling the brightness of a beam with a highly variable velocity
  7. Casting shadows on the retina due to “floaters” in the eye
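To make problem 2 in the list above concrete, here is a rough sketch of the line-rate arithmetic for a resonant MEMS scanning mirror. The 24 kHz mirror frequency is an assumed, typical value for this class of device, not a known Focals specification:

```python
# Rough line-rate arithmetic for a resonant MEMS scanning mirror.
# The mirror frequency below is an assumed, typical value for this class
# of device, not a North/Focals specification.

mirror_hz = 24_000   # assumed fast-axis resonant frequency (Hz)
frame_hz = 60        # target frame rate

# A bidirectional scan paints one line in each sweep direction, so the
# mirror yields two lines per cycle.
lines_per_frame = 2 * mirror_hz / frame_hz
print(f"Max scan lines at {frame_hz} Hz: {lines_per_frame:.0f}")   # 800

# Higher resolution or frame rate demands a faster (harder, costlier)
# mirror; the electro-mechanical scan is the bottleneck.
print(f"Max frame rate at 1080 lines: {2 * mirror_hz / 1080:.1f} Hz")  # ~44 Hz
```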

Holographic Film to Bend The Light Toward The Eye

The reason for the holographic film is that if they used a simple mirror reflection, with the angle of incidence equal to the angle of reflection, the light would miss the eye (see figure on the right). So they use a holographic film that acts as a tilted mirror to redirect the light at a sharper angle toward the eye.
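A minimal 2D sketch of the geometry makes the point; the coordinates below are illustrative values I picked, not North’s actual layout. The specular reflection lands tens of millimeters from the pupil, and the film has to act like a mirror tilted by roughly half the redirection angle:

```python
import numpy as np

# Why a plain mirror in the lens misses the eye, and what the holographic
# film must do instead. All geometry is illustrative (units are mm).

def reflect(d, n):
    """Reflect ray direction d about surface normal n (law of reflection)."""
    n = n / np.linalg.norm(n)
    return d - 2 * np.dot(d, n) * n

eye = np.array([0.0, 0.0])         # pupil location
hit = np.array([0.0, 20.0])        # point on the lens, ~20 mm from the eye
projector = np.array([35.0, 5.0])  # projector near the temple hinge

d = hit - projector
d = d / np.linalg.norm(d)          # incoming beam direction

# Case 1: plain mirror, surface normal pointing straight back at the eye.
r = reflect(d, np.array([0.0, -1.0]))
t = -hit[1] / r[1]                 # distance parameter to the eye's plane
print(f"Plain mirror lands at x = {hit[0] + t * r[0]:.1f} mm")  # ~-47 mm: a miss

# Case 2: holographic film acting as a tilted mirror. Its effective normal
# must bisect the reversed incoming ray and the desired ray toward the eye.
r_wanted = (eye - hit) / np.linalg.norm(eye - hit)
n_eff = -d + r_wanted
n_eff = n_eff / np.linalg.norm(n_eff)
tilt = np.degrees(np.arctan2(n_eff[0], -n_eff[1]))
print(f"Film must act like a mirror tilted ~{tilt:.0f} degrees")  # ~33 degrees
```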

Thalmic/North has several patents for embedding the holographic film in molded glasses. But embedding the film has its issues, including how much the film can be curved. North only supports a limited range of diopter (focus) corrections, with no astigmatism or bifocal support.

The film is going to have a negative effect on the view through it and will likely have problems similar to those of diffractive waveguides, such as those used by Intel and Magic Leap (see my articles on diffractive waveguides, including here). As The Verge described it, “The photopolymer material that serves as the display location isn’t noticeable for the most part, but when it catches the light, it looks like the glasses need to be wiped down.”

Laser Scanning AR’s Tiny Eye Box

North faced the same tiny eye-box issue as Intel did with their similar “Vaunt” glasses. Both the Verge article on the Intel Vaunt and the Verge article on the North Focals describe the need for custom fitting and the need to look directly at the image to see it at all.

The fundamental problem is that with laser scanning direct writing on the retina, the laser beams have to be aimed to go directly through the pupil of the eye. If the laser beam aim is off or the eye moves even slightly, then the laser beams miss the pupil, and no image is seen at all.
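A toy model shows how unforgiving this is; the pupil and beam sizes below are assumptions for illustration, not measured values:

```python
# Toy eye-box model (all numbers assumed for illustration). With direct
# retinal scanning, the swept bundle of beams converges through a spot
# roughly the size of the scanned beam at the pupil plane. If that spot
# falls outside the pupil, the image is gone entirely rather than dimming.

pupil_diameter_mm = 4.0   # typical indoor pupil
beam_spot_mm = 1.5        # assumed size of the converged beam bundle

# How far the spot center may wander from the pupil center before every
# beam misses the pupil and the display blanks out:
tolerance_mm = (pupil_diameter_mm - beam_spot_mm) / 2
print(f"Alignment tolerance: +/-{tolerance_mm:.2f} mm")  # +/-1.25 mm

# A millimeter or two of frame slip (glasses sliding down the nose) eats
# the whole tolerance, which is why custom fitting is mandatory.
```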

The problem is discussed in several patent applications assigned to both Thalmic/North and Intel. The figures below, taken from their applications/patents, illustrate the problem and the attempts to improve it. Both approaches try to replicate the laser beam so that when the eye moves relative to the beam, the laser light will still enter the pupil. Since both the Focals and the Vaunt have a tiny eye box, it appears neither technique was successfully implemented.

Tiny Field of View (FOV)

In the “Marketing Versus Reality” department, we have North’s “concept” image showing a huge clock with the date filling the FOV. On top of North’s stock image, I have taken a photo from the Verge article and then rotated and scaled it to overlay, in order to show the actual size of the holographic film. The image then has to fit inside the circle of the film, as roughly indicated by the dotted-line square. This is the difference between the ~60-degree FOV implied by the marketing concept and the ~15-degree FOV that they can actually support.
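As a back-of-envelope cross-check (the film diameter and eye relief below are my assumptions, not measurements of North’s hardware), the angular size of the film patch as seen from the eye bounds the FOV:

```python
import math

# The displayed image must fit inside the holographic film patch, so the
# film diameter and eye relief bound the FOV. Both numbers are assumed.

film_diameter_mm = 6.0   # assumed diameter of the film patch
eye_relief_mm = 20.0     # assumed lens-to-eye distance

fov = 2 * math.degrees(math.atan(film_diameter_mm / (2 * eye_relief_mm)))
print(f"Geometric FOV limit: ~{fov:.0f} degrees")  # ~17 degrees

# Consistent with the ~15-degree estimate, and nowhere near the ~60 degrees
# implied by the marketing concept image.
```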

Conclusion

The Verge had to run a correction after calling the company “North Labs,” but with a company named “North” and a product named “Focals,” name confusion is inevitable. Based on what they are planning to sell: expensive, custom-fit, low-resolution, small-FOV AR glasses, I don’t think we will have to worry about the name problem for too long.

What I don’t understand is why big companies like Amazon and Intel throw money into this type of concept without understanding the basic physics involved. You would think it would be far cheaper to do some good due diligence first. Ah well, at least they spent a lot less on the Focals than has been spent on Magic Leap. I would rather see companies shower money on startups that have a chance of bringing a good product to market.

Acknowledgment

I would like to thank Ron Padzensky for reviewing and making corrections to this article.

Karl Guttag

52 Comments

  1. Great analysis as usual, Karl.

    However, I’m surprised you’d call these “AR” glasses considering they support no AR capabilities. I’ve made the point in the past that merely being equipped with a transparent display does not make an HMD any more “AR” than an opaque head-worn display is VR — if you’re missing the sensing capabilities for content from the display to be spatially located, then you’ve just got smart glasses (transparent display) or 3D video goggles (opaque display).

    • I’m not sure who the official keeper of the words “Augmented Reality” (AR) is today, but I think I am using a common definition of simply overlaying a virtual image on the real world. Unfortunately, the definitions are rather loose with AR compared with VR. I think the term “Mixed Reality” came up because AR was being used so broadly, covering everything from Google Glass to HoloLens to Pokémon Go. I’ve been writing about AR since 2012 and sort of gave up at some point on making the distinction.

      As I am most interested in the display technology itself, I am more trying to describe the display functionality. I don’t care as much whether it is “locked” to the real world; that is something applied, using the display technology, at the system level. The same display and optics could be used to simply overlay information or, with more sensors and computer processing, to perform Mixed Reality. The display technology doesn’t change just because it is used in a different application. If I take the display and optics out of a system, does it cease to be an AR display?

      I get more offended when a company uses the term “Hologram” to describe “Mixed Reality” because there is a well-established definition of a hologram (https://www.kguttag.com/2016/10/21/armr-optics-for-combining-light-for-a-see-through-display-part-1/).

      In 2012 when I started writing about AR, everyone was calling Google Glass AR (https://www.kguttag.com/2012/03/03/augmented-reality-head-mounted-displays-part-1-real-or-not/). BTW I love the ADmented reality parody (https://www.youtube.com/watch?feature=player_embedded&v=_mRF0rBXIeg).

      You could argue that the definition is becoming tighter as to what AR now means and that might be true (don’t know as it is such a mess right now). You may be trying to put the genie back in the bottle. I wish there was a good definition (like there used to be for Holograms, although 99.9% of what people call holograms are “Pepper’s Ghost” effects).

      • Karl, I would agree with your assessment. In my mind, non-immersive, unregistered experiences are in fact AR experiences, as they do overlay data on the person’s real world. One term that is being used (which I shamelessly gave name to) is “assisted reality.” Mixed reality is the other end of the spectrum, but all should be considered AR experiences.

      • Another term that may apply here is a head-mounted heads-up display (HUD). The term “HUD” may better connote that it just displays information in the user’s view without locking it to the real world. Still, this definition can get muddy, as automotive “HUDs” are trying to draw virtual paths for the driver to follow, such as http://www.ti.com/content/dam/ticom/images/applications/automotive/car-head-up-display-dm8729.jpg and what Continental calls an “Augmented Reality HUD” http://continental-head-up-display.com/ar-hud/. I guess you could say that Continental is defining the HUD without the “AR” as being non-locked, but adding the “AR” implies some way of locking the image to the real world.

    • I agree with Ben – in some industry circles HMDs are just displays that may be transparent, whereas AR requires tracking capability to actually augment reality, hence the name. By this token, the Vuzix and North devices are HMDs, not AR – but their respective marketing departments want us to ignore reality completely and call it AR (in the same way as Microsoft referring to the images as Holograms in their Hololens!) 😉

      • Isn’t Microsoft calling the images in the Hololens “holograms” because they are attempting a Light Field Display that resolves the accommodation to vergence issue?

      • No they’re not doing that, but even if they were, the images are still not holograms by the fundamental definition of holography. Unfortunately physics terms are being misused 😉

      • Yes, the whole “Hololens having holograms” is a marketing concept and a misuse of the scientific term “Hologram.” What Hololens calls a “hologram” is a standard 3-D stereo image with SLAM to lock it to the real world, what others would call Mixed Reality.

        To Ben’s comment, there is nothing in Hololens to address vergence-accommodation, and I have never seen Microsoft claim that it does, nor have I seen them claim to have light fields. It is Magic Leap that has been making false claims about supporting “Light Fields” (they don’t, as Light Fields are defined). Magic Leap does have a feeble (poorly working) attempt at addressing vergence-accommodation with its dual layer of waveguides.

      • Unfortunately, AR is used by different people to mean different things. This is not like “Hologram,” which is a well-defined scientific term that is being abused. I like the term Mixed Reality (MR) as it seems to distinguish itself as “AR with SLAM” or other real-world mapping/locking.

        BTW, I am fond of saying that “often things that are hard with optical AR are easy with pass-through AR, and vice versa.” Things like hard-edge occlusion (pixel-level masking) are easy with pass-through AR and impossible to solve for the general case optically. At the same time, there are always going to be issues with lag and perfect alignment with pass-through AR.
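        A toy sketch of why the pass-through case is trivial (illustrative arrays standing in for camera and render frames, not any shipping compositor): once the real world is available as camera pixels, hard-edge occlusion is just a per-pixel mask.

        ```python
        import numpy as np

        # Pass-through AR occlusion as a per-pixel mask in the compositor.
        camera = np.full((4, 4, 3), 100, dtype=np.uint8)  # "real world" frame
        virtual = np.zeros_like(camera)
        virtual[1:3, 1:3] = [0, 255, 0]                   # a green virtual object
        mask = (virtual.sum(axis=-1) > 0)[..., None]      # where the object draws

        composite = np.where(mask, virtual, camera)       # hard-edged occlusion
        print(composite[:, :, 1])  # virtual pixels fully replace real pixels
        ```

        Optically, the equivalent would require a pixel-level light blocker at a plane conjugate to the real world, which has no general solution.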

      • Jose, thanks for the link – I never saw that one. As Karl says the meaning has become perhaps a bit too loose – however going by the Milgram paper, the definition of see-through AR under section 3.1 requires “realspace imaging” in order to “superimpose images conformally” – i.e. in order to appropriately generate content this requires tracking cameras and spatial mapping to a degree that is not available in North or Vuzix – which would therefore put them in the HMD category.

  2. Is there a future where this technique could get better? Is the eye-box problem harder than, or similarly difficult to solve as, the waveguide issues?

    Lumus seems like they currently have the best all-around display solution for most cases with their non-diffractive waveguide, but I am wondering whether this approach is closer to or further from a breakthrough.

    • Personally, I think laser beam scanning is “fool’s gold” for any headset display. It looks good until you really understand it. There are just so many drawbacks lurking beneath the surface of what sounds like a simple concept. In addition to some of the obvious problems with laser beam scanning, it can be the trickier “second order” things that laymen do not understand, like the eye box, that make it impossible. People have been talking about LBS for near-eye displays for ~30 years with nothing even mildly successful. The best case was the “scam” Microvision pulled off with earmarks to offload a bunch of useless headsets on the military with their Nomad project (see https://www.seattletimes.com/nation-world/45-million-for-a-boat-that-nobody-wanted/ and search for Microvision).

      Lumus seems to have one of the best options I have seen, certainly in the waveguide form factor. There are pros and cons to all the technologies; some, like LBS, have far more cons than pros.

      • Please explain your assertion that the patent applications address my concerns. There are many patents for LBS, and many people have theorized about using LBS for near-eye displays, with ZERO successful products. The Reddit group is primarily an echo chamber with fanatics telling each other how great it will be someday and trying to induce others to share their delusion. Based on the fact that the stock is down around the $1 range again, the stock market has voted on the possibility.

        BTW, I have never so much as seen or heard of an LBS display used with a waveguide. Can you point me to even one company, lab, university, or paper that has an LBS display going into a waveguide that they say works? There are some basic physics incompatibilities; short of scattering the light to randomize the light rays (which wastes most of it), I don’t see how it would even “work.”

  3. Karl,

    You have been consistent in your view that LBS is unsuitable for AR, which you summarize here. Thank you. We have discussed the subject before, in the context of Microvision’s 20+ year effort in LBS. I wonder, is there room in your mind for reconsideration of the insurmountability of the problems you identify for LBS in AR, given what seems like a concerted effort recently by Microsoft in LBS? In the last 3 years, patent application after patent application has been filed by MSFT seeking to address the very list of problems you identify LBS faces in the AR context. I’d be very interested to have your take on these efforts by MSFT. A significant number of these Microsoft LBS patent applications have been compiled at the Microvision subreddit in a timeline anticipating the 2019 launch of Hololens 2. Here is the reddit post in question.
    https://www.reddit.com/r/MVIS/comments/90izcb/mvismsft_hololens_timeline/

    • I would suggest that you are not going to get any good technical information in the Reddit Microvision echo chamber. What you are getting is a bunch of religious zealots trying to convince each other of why LBS is a good idea. Unfortunately, they have only a very superficial understanding of how it works in a system. A large company filing patents is not necessarily evidence of serious activity. The patents DON’T seriously address the technical problems (with one exception, which I will discuss) but merely throw a laser beam scanning engine in as one of the solutions or throw out possible improvements. There are many patents from Microsoft showing the use of other technologies, particularly LCOS.

      Patent 10,025,093 (www.freepatentsonline.com/10025093.pdf) by Microsoft does address one of the fundamental physics problems associated with using LBS with waveguides, namely that the angles of the light rays cannot be collimated to enter the light guide. They introduce an “Exit Pupil Expander” (EPE), which is effectively what the patent describes as a diffuser. Essentially, they are turning the laser scanning engine into a very expensive but small rear-projection TV (see figure 4, which I have labeled: http://www.kguttag.com/wp-content/uploads/2018/11/Microsoft-LBS-EPE-01.png). This whole process is highly inefficient in that the scattering will waste most of the light. But without the scattering, I don’t see how it would work at all.
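      As a rough illustration of the efficiency problem (both cone angles below are my assumptions, not figures from the Microsoft patent), a diffuser spreads each spot’s light over a wide cone while the waveguide only accepts a limited angular range:

      ```python
      import math

      # Fraction of scattered light falling within the waveguide's acceptance
      # cone. Both angles are assumed values for illustration only.

      diffuser_half_angle = 60.0   # assumed scattering half-angle (degrees)
      accept_half_angle = 20.0     # assumed waveguide acceptance half-angle

      def cone_solid_angle(half_angle_deg):
          """Solid angle of a cone: 2*pi*(1 - cos(theta))."""
          return 2 * math.pi * (1 - math.cos(math.radians(half_angle_deg)))

      fraction = cone_solid_angle(accept_half_angle) / cone_solid_angle(diffuser_half_angle)
      print(f"Light reaching the waveguide: ~{fraction:.0%}")  # roughly 12%
      ```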

      • Something subtle here also: you can’t have pupil replication/expansion on anything other than an afocal system… (Magic Leap patents are a bit economical with the truth). Think about what happens when you focus something: you introduce an image plane closer than infinity. If you introduce different path lengths, the image plane moves with each position. I’ll leave it to the reader to draw an optical diagram and see what happens 😉

      • Apologies, appears I hit reply to the wrong comment… should have been in relation to csineni’s comment 😉

  4. “This whole process is highly inefficient in that the scattering will waste most of the light.”

    That seems the least of their worries, considering laser intensity is easily scalable and is perhaps LBS’s biggest strength compared to all the other display technologies that struggle for luminance. So you are giving up something you have in great excess, and which you can easily compensate for as needed, in order to make up for some of the technology’s deficiencies and difficulties. That is about the best kind of tradeoff you will ever find in tech R&D.

    • Excuse me, but you don’t appear to have even a clue how any of this works and are just hand-waving. Laser scanning displays are worse in terms of lumens per Watt, and then you are going to throw away most of the light? Just crank up the lasers, battery, and heat? Give me a break.

      There are massive problems with using lasers in near-eye displays beyond getting the image into a diffractive waveguide. Laser scanning displays have been around for about 30 years and have failed time and again.
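      To put some rough numbers on it (every figure below is an assumption for illustration, not a measured value), the losses multiply, so the required electrical power climbs quickly:

      ```python
      # Crude power-budget sketch of why "just crank up the lasers" is not
      # free. All numbers are assumptions for illustration.

      target_lumens = 1.0          # modest to-the-eye brightness target
      laser_lm_per_watt = 150.0    # assumed laser luminous efficacy (electrical)
      optics_efficiency = 0.30     # assumed combiner/scan optics throughput
      diffuser_efficiency = 0.12   # assumed EPE/scattering loss (see above)

      watts = target_lumens / (laser_lm_per_watt * optics_efficiency * diffuser_efficiency)
      print(f"Laser electrical power: ~{watts * 1000:.0f} mW")  # ~185 mW

      # Every added loss multiplies battery drain and the heat dissipated
      # next to the user's head; "excess" laser power is not free to spend.
      ```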

      • You’re right, LBS has lots of issues – although SLEDs might resolve the narrow-linewidth/speckle issues, should an efficient green one appear. On top of all this, what happens if the mirror stops for some reason, shining an intense laser spot on the back of your retina?…

        30 years… Diffractive waveguides have been around longer than that…
        https://patents.google.com/patent/US4711512
        😉

  5. https://www.techradar.com/news/hololens-2-could-be-much-cheaper-and-lighter

    https://patentscope.wipo.int/search/en/detail.jsf?docId=US235602685&tab=PCTDESCRIPTION&queryString=ALLNAMES%3A%28Microsoft%29&recNum=1&maxRec=99463

    Karl,

    What technology do you think MSFT is using to make this possible? No doubt you have been correct about LBS since the start. Is it possible they are figuring out how to solve for your objections? Do you think MSFT just pulled the same scam on the military?

    • It is going to take a while to explain what I think Microsoft is doing. I hope to get an article out soon.

      I don’t think Microsoft was scamming the military per se, but I don’t think Hololens is the right solution for military use.
      Karl

  6. I understood. I am a young schoolboy working on a computerized lens project, and what I need is a mini projector that shows the image on the lens, as in your project. Could you please describe how it works?

  7. Hi Karl, thanks for the insightful article. Any idea, who might be behind the LBS assembly for the Focals? There’s not that many manufacturers in this field – one of them was Lemoptix (acquired by Intel, then, perhaps, passed all the IP to Thalmic)

    • I don’t know who is making the LBS engine for North. It is a very low-resolution display (nominally about 100×100 pixels), which means that just about anyone could be capable of making it. The “usual suspect” would be ST Micro, but it could be any of dozens of companies around the world.

  8. Hello Karl,
    In your opinion, is it possible to make mixed reality glasses that look like a normal pair of prescription glasses and if yes, what would be the best type of display?

    • First, sorry to be so long in responding. I was traveling followed by getting a bad case of the flu.

      There are many more variables that need to be tied down to answer this question. Those include resolution, brightness, eye-box, image quality, distortion of the real-world, and Field of View (FOV).

      There are several companies, including North (Focals), Bosch, Tooz, and Norm AR, that I have seen recently, all of which support a very narrow FOV and look a lot like normal prescription glasses.

      The North Focals and the Bosch glasses use direct laser beam scanning into the eye, which results in a tiny eye box; unless the glasses are precisely positioned relative to the user’s eye, you see nothing. They use a holographic mirror film to redirect the light toward the eye. The resolution is somewhere on the order of 160 by 100 pixels with a 9- to 12-degree FOV. Another major problem with direct laser scanning is that everyone over about 30 years old has “floaters” in the eye that they normally don’t notice, as the floaters are out of focus with normal light, but the floaters end up casting shadows on the retina with direct laser scanning.

      Tooz uses a Fresnel mirror and other optics molded into fairly normal-looking glasses, and they support prescription correction. They use Micro-OLED displays. The field of view and resolution are much better than North’s and Bosch’s, and they have a reasonable eye box. Still, you are looking at about 400 pixels wide by 640 pixels tall (the display is in “portrait mode”). Currently, there are some compromises in image quality that Tooz says they are working to correct.

      Norm AR by Human Capable is more of a concept from a small startup. They are planning on embedding a small, simple beam splitter in a lens; I have seen a non-functioning prototype. The known issue with this approach is the small FOV: the FOV is limited by the thickness of the beam splitter, which in turn drives the thickness of the lens in which it is embedded.

      More directly answering your question as to the best type of display: I think Micro-OLEDs are the best choice today, but they are limited in terms of brightness. Longer term, MicroLEDs should have all the technical advantages. The problem still becomes one of fitting optics that will not cause other issues.
