Near Eye AR/VR and HUD Metrics For Resolution, FOV, Brightness, and Eyebox/Pupil

I’m planning on following up my earlier articles about AR/VR Head Mounted Displays (HMD), which also relate to Heads Up Displays (HUD), with some more articles, but first I would like to get some basic technical concepts out of the way.  It turns out that the metrics we care about for projectors, while related, don’t work for measuring HMDs and HUDs.

I’m going to try and give some “working man’s” definitions rather than precise technical definitions.  I’ll be giving a few real-world examples and calculations to show you some of the challenges.

Pixels versus Angular Resolution

Pixels are pretty well understood, at least with today’s displays that have physical pixels like LCDs, OLEDs, DLP, and LCOS.  Scanning displays like CRTs and laser beam scanning generally have additional resolution losses due to imperfections in the scanning process, and as my other articles have pointed out, they have much lower resolution than the physical pixel devices.

When we get to HUDs and HMDs, we really want to consider the angular resolution, typically measured in “arc-minutes,” which are 1/60th of a degree; simply put, this is the angular size that a pixel covers from the viewing position.  Consumers in general don’t think in arc-minutes, so many companies have in the past talked in terms of a certain size and resolution display viewed from a given distance, for example a 60-inch diagonal 1080p display viewed at 6 feet.  But since the size of the display, the resolution, and the viewing distance are all variables, it is hard to compare displays or to know what such a spec even means for a near eye device.

A common “standard” for good resolution is 300 pixels per inch viewed at 12 inches (considered reading distance), which translates to about one arc-minute per pixel.  People with very good vision can actually distinguish about twice this resolution, down to about 1/2 an arc-minute in their central vision, but for most purposes one arc-minute per pixel is a reasonable goal.

One nice thing about the one-arc-minute-per-pixel goal is that the math is very simple: multiply the degrees in the FOV horizontally (or vertically) by 60 and you have the number of pixels required to meet the goal.  If you stray much below the goal, you are into 1970s-era “chunky pixels.”
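
As a minimal sketch of this arithmetic (the function name is mine, purely for illustration):

```python
import math

def pixels_needed(fov_deg, arcmin_per_pixel=1.0):
    """Pixels required across a FOV to hit a given angular resolution."""
    return fov_deg * 60 / arcmin_per_pixel

print(pixels_needed(150))  # 9000.0 pixels for 150 degrees horizontal
print(pixels_needed(135))  # 8100.0 pixels for 135 degrees vertical

# Sanity check on the 300 ppi at 12 inches "standard" (small-angle approximation):
arcmin = math.degrees((1 / 300) / 12) * 60
print(round(arcmin, 2))    # ~0.95 arc-minutes, i.e. about one per pixel
```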

Field of View (FOV) and Resolution – Why 9,000 by 8,100 pixels per eye are needed for a 150 degree horizontal FOV

As you probably know, the human eye’s retina has variable resolution.  The human eye has a roughly elliptical FOV of about 150 to 170 degrees horizontally by 135 to 150 degrees vertically, but the generally good discriminating FOV is only about 40 degrees (+/-20 degrees) wide; the region of reasonably sharp vision, the macula, covers about 17-20 degrees, and the fovea, with the very best resolution, covers only about 3 degrees of the eye’s visual field.   The eye/brain processing is very complex, however, and the eye moves to aim the higher-resolving part of the retina at a subject of interest.  One would therefore want something on the order of the one-arc-minute goal in the central part of the display, and since building a variable resolution display would be a very complex matter, it ends up being the goal for the whole display.

Going back to our 60-inch, 1080p display viewed from 6 feet: the horizontal field of view works out to about 40 degrees, just about covering the generally good resolution part of the eye’s retina, and the pixel size in this example is about 1.25 arc-minutes.
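
For those who want to check such claims, here is the geometry as a short sketch (assuming a 16:9 panel viewed straight-on; the function name is mine):

```python
import math

def horizontal_fov_deg(diag_inches, distance_inches, aspect=(16, 9)):
    """Horizontal FOV of a flat panel viewed straight-on from a given distance."""
    width = diag_inches * aspect[0] / math.hypot(*aspect)
    return 2 * math.degrees(math.atan((width / 2) / distance_inches))

fov = horizontal_fov_deg(60, 6 * 12)  # 60-inch 16:9 panel at 6 feet
print(round(fov, 1))                  # ~39.9 degrees
print(round(fov * 60 / 1920, 2))      # ~1.25 arc-minutes per 1080p pixel
```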

(Image from Extreme Tech)

Now let’s consider the latest Oculus Rift VR display.  It specs 1200 x 1080 pixels over about a 94 degree horizontal by 93 degree vertical FOV per eye, or a very chunky ~4.7 arc-minutes per pixel; in terms of angular resolution this is roughly like looking at an iPhone 6 or 7 from 5 feet away (or conversely, like your iPhone’s pixels being 5X as big).   To get to the 1 arc-minute per pixel goal of viewing today’s iPhones at reading distance (say you want to virtually simulate your iPhone), they would need a 5,640 by 5,580 display per eye, or a single OLED display with about 12,000 by 7,000 pixels (allowing for a gap between the eyes for the optics)!!!  If they wanted to cover the 150 by 135 degree FOV, we are talking 9,000 by 8,100 per eye, or about a 20,000 by 9,000 flat panel requirement.

Not as apparent but equally important is that the optical quality needed to support these kinds of resolutions would be, if possible at all, exceedingly expensive.   You need extremely high precision optics to bring the image into focus from such a short range.   You can forget about the lower cost and weight Fresnel optics (with their “God ray” issues) used in the Oculus Rift.

We are into what I call “silly number territory” that will not be affordable for well beyond 10 years.  There are even questions as to whether any known technology could achieve these resolutions in a size that could fit on a person’s head, as there are a number of physical limits on pixel size.

People in gaming are apparently living with this appallingly low (1970s-era TV game) angular resolution for games and videos (although the God rays can be very annoying depending on the content), but clearly it is not a replacement for a good high resolution display.

Now let’s consider Microsoft’s Hololens.  Its most criticized issue is its smaller FOV (relative to VR headsets such as Oculus) of about 30 by 17.5 degrees.  It has a 1268 by 720 pixel display per eye, which translates into about 1.41 arc-minutes per pixel; while not horrible, this falls short of the goal above.   If they had used the 1920x1080 (full HD) microdisplay devices which are becoming available, they would have been very near the 1 arc-minute goal at this FOV.
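
Putting the headsets above on the same footing, the arc-minutes-per-pixel arithmetic is just as simple (numbers taken from the discussion above):

```python
def arcmin_per_pixel(fov_deg, pixels):
    """Angular size of one pixel across a given field of view."""
    return fov_deg * 60 / pixels

print(round(arcmin_per_pixel(94, 1200), 2))  # Oculus Rift: ~4.7
print(round(arcmin_per_pixel(30, 1268), 2))  # Hololens: ~1.42
print(round(arcmin_per_pixel(30, 1920), 2))  # Hololens with a 1080p device: ~0.94
```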

Let’s understand here that it is not as simple as changing out the display; they would also have to upgrade the “light guide” that they use as a combiner to support the higher resolution.   Still, this is all reasonably possible within the next few years.   Microsoft might even choose to grow the FOV to around 40 degrees horizontally and keep the lower angular resolution with a 1080p display.  Most people will not seriously notice a 1.4X angular resolution difference (but they will at about 2X).

Commentary on FOV

I know people want everything, but I really don’t understand the criticism of the FOV of Hololens.  What we see here is a bit of “choose your poison.”  With existing affordable (or even not so affordable) technology, you can’t support a wide field of view and simultaneously good angular resolution; it is simply not realistic.   One can imagine optics that would let you zoom between a wide FOV with lower angular resolution and a smaller FOV with higher angular resolution.  This zooming function could perhaps be controlled by the content or by feedback from the user’s eyes and/or brain activity.

Lumens versus Candelas per Meter Squared (cd/m2 or nits)

With an HMD or HUD, what we care about is the light that reaches the eye.   In a typical front projector system, only an extremely small percentage of the light that leaves the projector reflects off the screen and makes it back to any person’s eye; the vast majority of the light goes to illuminating the room.

Projector lumens, or luminous flux, simply put, are a measure of total light output, usually measured with the projector outputting a solid white image.   To get the light that makes it to the eye, we have to account for the light that hits the screen and is then absorbed, scattered, and reflected back at an angle that will get to the eye.  Only an exceedingly small percentage (a small fraction of 1%) of the projected light will make it into the eye in a typical front projector setup.
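
As a rough illustration of just how small that fraction is, here is a back-of-envelope sketch assuming an ideal diffuse (Lambertian) screen; the viewing numbers are illustrative assumptions of mine:

```python
import math

pupil_radius_m = 0.002        # assumed 4 mm diameter eye pupil
viewing_distance_m = 3.0      # assumed viewer 3 m from the screen
screen_reflectance = 0.8      # assumed diffuse screen reflectance

# Solid angle the pupil subtends as seen from a point on the screen.
pupil_solid_angle = math.pi * pupil_radius_m**2 / viewing_distance_m**2

# A Lambertian screen spreads reflected light so that the fraction going
# into a small on-axis solid angle is (solid angle / pi).
fraction_into_eye = screen_reflectance * pupil_solid_angle / math.pi

print(f"{fraction_into_eye:.1e}")  # ~3.6e-07, far below even 1%
```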

With HMDs and HUDs we talk about brightness in terms of candelas per meter squared (cd/m2), also referred to as “nits” (while considered an obsolete term, it is still often used because it is easier to write and say).  Cd/m2, or luminance, is a measure of brightness in a given direction, which tells us how bright the light appears to the eye looking in a particular direction.   For a good quick explanation of lumens and cd/m2, I recommend a Compuphase article.


Hololens appears to be “luminosity challenged” (lacking in cd/m2), and Microsoft has resorted to putting a sunglasses-like outer shield on it, even for indoor use.  The light blocking shield is clearly a crutch to make up for a lack of brightness in the display.   Even with the shield, it can’t compete with bright light outdoors, which is 10 to 50 times brighter than a well lit indoor room.

This of course is not an issue for VR headsets typified by the Oculus Rift, which totally block the outside light.  But it is a serious issue for AR type headsets; people don’t normally wear sunglasses indoors.

Now let’s consider a HUD display.  A common automotive spec for a HUD in sunlight is 15,000 cd/m2, whereas a typical smartphone is between 500 and 600 cd/m2, or about 1/30th of what is needed.  When you are driving a car down the road, you may be driving in the direction of the sun, so you need a very bright display in order to see it.

The way HUDs work, you have a “combiner” (which may be the car’s windshield) that combines the generated image with the light from the real world.  A combiner typically only reflects about 20% to 30% of the light, which means that the display before the combiner needs to have on the order of 30,000 to 50,000 cd/m2 to support the 15,000 cd/m2 seen in the combiner.  When you consider that your smartphone or computer monitor has only about 400 to 600 cd/m2, it gives you some idea of the optical tricks that must be played to get a display image that is bright enough.
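
The relation is a simple division; a tiny sketch spanning the figures above (reflectance values illustrative):

```python
def required_display_nits(target_nits_at_eye, combiner_reflectance):
    """Luminance the display must produce ahead of the combiner."""
    return target_nits_at_eye / combiner_reflectance

print(required_display_nits(15_000, 0.50))  # 30,000 cd/m2 (a very reflective combiner)
print(required_display_nits(15_000, 0.30))  # 50,000 cd/m2
# A less reflective (more see-through) combiner pushes the requirement even higher.
```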

You will see many “smartphone HUDs” that simply have a holder for a smartphone and a combiner (semi-mirror), such as ones sold on Amazon or on crowdfunding sites, but rest assured they will NOT work in bright sunlight and are only marginal in typical daylight conditions.  Even with combiners that block more than 50% of the daylight (not really much of a see-through display at that point), they don’t work in daylight.   There is a reason why companies are making purpose-built HUDs.

The cd/m2 is also a big issue for outdoor head mounted display use.  Depending on the application, they may need 10,000 cd/m2 or more, and this can become very challenging with some types of displays while keeping within the power and cooling budgets.

At the other extreme, at night or in dark indoor settings, you might want the display to have less than 100 cd/m2 to avoid blinding the user to their surroundings.  Note the SMPTE spec for movie theaters is only about 50 cd/m2, so even at 100 cd/m2 you would be about 2X the brightness of a movie theater.  If the device must go from bright sunlight to night use, you could be talking over a 1,500 to 1 dynamic range, which turns out to be a non-trivial challenge to do well with today’s LEDs or lasers.
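
For a feel for the numbers (the 10 cd/m2 night-time level below is my illustrative assumption; a comfortable dark-adapted setting could be even lower):

```python
day_nits = 15_000   # sunlight-readable HUD target from above
night_nits = 10     # assumed comfortable night-time level (illustrative)
print(f"{day_nits / night_nits:,.0f} : 1 dynamic range")  # 1,500 : 1
```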

Eye-Box and Exit Pupil

Since AR HMDs and HUDs generate images for a user’s eye in a particular place, yet need to compete with the ambient light, the optical system is designed to concentrate light in the direction of the eye.  As a consequence, the image will only be visible within a limited region, called the “eye-box” (with HUDs) or “exit pupil” (with near eye displays).   There is a trade-off between eye-box/pupil size and ease of use: the bigger the eye-box or pupil, the easier the device will be to use, but the more light is required.

With HUD systems there is a fairly direct trade-off between eye-box size, cd/m2, and the lumens that must be generated.   Some optical tricks can help keep from needing an extremely bright and power hungry light source.   Conceptually, a HUD is in some ways like a head mounted display but with very long eye relief.  With such large eye relief and the ability of the person to move their whole head, the eye-box for a HUD is significantly larger than the exit pupil of near eye optics.  Because the eye-box is so much larger, a HUD is going to need much more light to work with.
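
As a very rough sketch of why eye-box size drives the light budget: the flux that must be delivered scales with eye-box area times the solid angle of the virtual image (roughly Φ ≈ L·A·Ω for small angles).  All the numbers below are illustrative assumptions, not measured values:

```python
import math

def flux_into_eyebox_lumens(nits, eyebox_area_m2, image_solid_angle_sr):
    """Approximate luminous flux needed to fill an eye-box at a given luminance."""
    return nits * eyebox_area_m2 * image_solid_angle_sr

eyebox_area = 0.13 * 0.05                     # assumed 13 cm x 5 cm automotive eye-box
solid_angle = (10 * 5) * math.radians(1)**2   # assumed 10 x 5 degree virtual image
print(round(flux_into_eyebox_lumens(15_000, eyebox_area, solid_angle), 1))
# ~1.5 lumens must actually reach the eye-box; optical losses along the way
# mean the light source must generate far more than this.
```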

For near eye optical design, getting a large exit pupil is a more complex issue as it comes with trade-offs in cost, brightness, optical complexity, size, weight, and eye-relief (how far the optics are from the viewer’s eye).

Too small a pupil and/or too much eye-relief, and a near eye device is difficult to use, as any small movement of the device causes you to lose part of the image.  Most people’s first encounter with an exit pupil is with binoculars or a telescope, where the image cuts off unless the optics are centered well on the user’s eye.

Conclusions

While I can see that people are excited about the possibilities of AR and VR technologies, I still have a hard time seeing how the numbers add up, so to speak, for what I would consider to be a mass market product.  I see people being critical of Hololens’ lower FOV without being realistic about how it could go higher without drastically sacrificing angular resolution.

Clearly there can be product niches where the device could serve, but I think people have unrealistic expectations for how fast the field of view can grow for a product like Hololens.   For “real work,” I think the lower field of view and higher angular resolution approach (as with Hololens) makes more sense for more applications.   Maybe game players in the VR space are more willing to accept 1970s-type angular resolution, but I wonder for how long.

I don’t see any technology that will be practical in high volume (or even at a very expensive low volume) that is going to simultaneously deliver the angular resolution and FOV that some people want.  AR displays are often brightness challenged, particularly for outdoor use.  Layered on top of these issues are size, weight, cost, and power consumption, which we will have to save for another day.


28 Comments

  1. Karl,

    ODG’s Project Horizon uses an OLED microdisplay and claims to have a FOV equivalent to a 120″ screen @ 8′. This equates to approximately a 64 degree FOV. ODG claims it as +50 degrees.

    “It also has low-persistence OLED displays with up to 120 fps and high contrast.”

    http://www.businesswire.com/news/home/20160601006256/en/ODG-OTOY-Join-Forces-Create-Uncompromising-Platform

    Emagin now has up to 4,500 nits full color, with a timeline as follows:

    “We are currently shipping our OLED-ULT displays as engineering samples to a few key customers that are interested in ultra-high brightness microdisplays for applications as diverse as aviation head mounted displays to AR and VR applications. Our current expectation is that we will begin shipping additional engineering samples to other customers in the fourth quarter of 2016.”

    http://www.businesswire.com/news/home/20160602005730/en/eMagin-Announces-Public-Demonstration-Ultra-High-Brightness-Direct

    It appears to me that ODG is leapfrogging Microsoft in regard to display & optics. MSFT is also working on a WFOV solution. Is there a problem in using LCOS when enlarging the FOV? Do the smear and rainbow effects become more problematic with the sudden movements of AR & VR use?

    Furthermore, Emagin has developed a 2K x 2K OLED microdisplay for VR applications. This has been demonstrated in their VR prototype with a +110 degree FOV. Bear in mind this article is over a year old, so advancements have been made.

    http://www.volantidisplays.com/blog/hands-on-with-emagins-virtual-reality-headset-and-interview-with-vp-dan-cui/

    The Emagin 2K x 2K display will be generally available in Q1 2017 according to their 2Q 10-Q.

    “Our product development efforts on the 2K x 2K display project that was initiated in the fourth quarter of 2015 continued in the current quarter. This project is on track to produce engineering samples for select customers beginning in the fourth quarter including the customer with whom we entered into the license agreement in December 2015. General availability is scheduled for the first quarter of 2017.”

    https://www.sec.gov/Archives/edgar/data/1046995/000156276216000424/c995-20160630x10q.htm

    Emagin’s 2K x 2K is initially likely to be used in higher end products, such as those used in Location Based applications like the ones Roy Taylor of AMD has discussed, until volumes increase & prices decline.

    Roy Taylor mentions AMD has been working with Emagin for 18 months @ 16:42 in this video.

    https://www.youtube.com/watch?v=BfOyFnqOdyk&feature=youtu.be&t=1002

    If you’re not aware, AMD has been aggressively pursuing the VR market.

    Finally, Emagin has a pathway to even higher resolution. They are currently beginning work on a 35mm x 35mm, 4K x 4K (per eye) display.

    Andrew Sculley, 1Q conference call:

    We’re also looking to build the prototype display that is 35 millimeters by 35 millimeters, which we could design with a resolution up to 4,000 by 4,000. We’re discussing this display with potential foundry partners and we’ll bring the proposal to a half dozen likely customers who are looking for displays of this type. The customers with whom we’ve discussed this display are very interested in our proposal.

    Emagin has already hit the +110 degree FOV mark with their optics using a 2K x 2K microdisplay (center column, Figure 2 of their Microdisplay Immersive Headset patent). Their 35mm x 35mm display will fall somewhere in the right hand column with a corresponding FOV of 210 degrees.

    http://pdfpiw.uspto.gov/.piw?PageNum=0&docid=09366871&IDKey=E6736DE900D2&HomeUrl=http%3A%2F%2Fpatft.uspto.gov%2Fnetacgi%2Fnph-Parser%3FSect1%3DPTO2%2526Sect2%3DHITOFF%2526p%3D1%2526u%3D%25252Fnetahtml%25252FPTO%25252Fsearch-adv.htm%2526r%3D1%2526f%3DG%2526l%3D50%2526d%3DPTXT%2526S1%3Demagin%2526OS%3Demagin%2526RS%3Demagin

    In the past you have been quite dismissive of OLED microdisplays. It may be time for you to take another look.

    BTW, the new Epson Moverio BT-300 will use OLED microdisplays as well.

    http://www.epson.com/cgi-bin/Store/jsp/Landing/moverio-augmented-reality-smart-glasses.do

    My question to you is: why would ODG transition from LCOS in their R-7 to OLED?

    • Actually, I think the 60″ diagonal at 8 feet works out to about a 48 degree horizontal FOV and a 51.3 degree diagonal FOV. Nima Shams in the video from ODG is all over the place with the resolution of the display; first he says 720P and then he says 1080P. The article http://vrworld.com/2016/06/03/project-horizon-ultimate-mixed-reality-headset/ says it is 720p per eye (the camera is 1080p). Using the 50 degrees, which is 3,000 arc-minutes, divided by 1280 horizontal pixels gives about 2.34 arc-minutes per pixel; not horrible, but not particularly great. It has a wider FOV with the same number of pixels as the Hololens.

      Everyone I have known who has looked at using eMagin devices has come back saying they are ridiculously expensive (many times the price of the same resolution LCOS panel). I would expect the 2K by 2K device, of which they only have samples, to be extremely expensive. I also wonder what the lifetime of the panels will be at 4,500 nits. eMagin has been a “startup” for 23 years. I didn’t see a pixel pitch for this display, but if it follows their current pixel pitch of about 9.74 microns, this is going to be a very big and expensive device.

      I’m not sure where you think ODG is ahead of Microsoft. I know that Microsoft paid ODG $150M to license technology from ODG (see https://techcrunch.com/2014/03/27/microsoft-paid-up-to-150m-to-buy-wearable-computing-ip-from-the-osterhout-design-group/) so I guess they had something Microsoft wanted badly.

      Interesting on the Volanti display using the eMagin panel. My guess is that this display will cost more than a middle to high end BMW sports car. Still, the first step in getting to high volume production is making one of them.

      Sure, AMD (with their ATI acquisition of a number of years back) and Nvidia are excited about AR and VR.

      BUT did eMagin say what it would COST? Even their 720p device is ridiculously expensive by every report I have heard. A 35mm by 35mm OLED device will likely cost more than an expensive car for some time. Show me any of eMagin’s displays shipping in any volume and give a price.

      The Epson BT-300 is interesting, with dual 720p OLED displays that they are pre-selling at $799. This looks like a technology that Epson has developed according to their video: https://www.youtube.com/watch?v=j1ytXjEhw_o I can’t make total sense of their video directly, and it looks like some kind of “hybrid” technology. They talk about using white OLEDs and then some filtering process. But you can’t get an OLED to natively produce white; you can only get “white” by, say, phosphor converting a blue or UV native OLED. But then why don’t they go straight to R, G, and B phosphors? The video is probably glossing over exactly what they are doing; maybe they are really generating blue or UV, then generating a “loose” R, G, and B, and then using color filters to make it more saturated (just a wild guess). I would be interested in seeing any papers on what they are doing. I would have a lot more belief in Seiko-Epson pulling this kind of thing off than eMagin (just based on their manufacturing capability).

      As for the R-7, it is being sold for $2,750. This puts it out of the realm of a volume consumer device already. I would think that an OLED model will sell for more.

      I think that hits all your questions (there were a lot).

    • I see I missed your question about FOV and LCOS. No, there are no limitations on the FOV with LCOS. The field sequential color breakup (“rainbow effect”) can be an issue if a person’s head jerks around a lot; some of the technologies have higher field sequencing rates, which helps.

      Also, if you want an interesting article on something Nvidia has been working on in the AR/VR space, you might want to look at: https://research.nvidia.com/sites/default/files/publications/NVIDIA-NELD_0.pdf . The article says they were using a Sony ECX332A OLED display; note this display only has 200 nits, which is probably OK for a non-see-through display but weak for a see-through display and way too dim for outdoors. The Sony panel came out in 2011, but they have not made a lot of noise since (only a smaller but lower resolution panel in 2014).

  2. Karl,

    Thanks for the feedback.

    With regards to ODG: if you watch the video carefully, first the current R-7 is discussed with a 720P display, then Project Horizon is discussed with the 1080p (2K) resolution.
    With regard to the article you cited, again the R-7 is referenced with 720p, then Project Horizon is mentioned with a 2K (1080p) resolution.

    Again, it is clear to me that ODG is migrating from a 720p LCOS to a 1080p OLED. Perhaps they are using another manufacturer than Emagin, but whether Emagin has been able to make their WUXGA display (or others) cost effective enough to be put in a consumer device should be known soon as CES 2017 approaches in January. Emagin will have a booth there, as will ODG.

    • Thanks for following up. In trying to answer all your questions I skipped through the video a bit and missed the transition where he started talking about the 1080p.

      From what he said in the video, they are going from 720p with a “55-inch display at 8 feet” to 1080p with a “120-inch display at 8 feet,” or a jump from roughly 50 degrees to more than double that, at ~110 degrees horizontally, BUT they are only adding 1.5X more pixels linearly. This means their angular resolution is getting worse, dropping to about 3 arc-minutes per pixel. This would mean the user will be able to see individual pixels if the optics are as good as they claim (or they will try to blur them out, which is not a good solution). Not to be too critical, but I am just translating what he is saying into what it means in terms of what the user will see. It would seem they would be going from on the edge to over the edge in terms of having clearly noticeable pixels, which is something you don’t want when watching a video.

      Everything in the video suggests that they are just stepping things up a notch, and I would assume this means they are just switching to a 1080p LCOS device, of which I think there are several available. This is not to say that they couldn’t switch display technologies, but I did not see an indication of that in this content.

      Do you have some other source that suggests they will be switching to OLED? I know that http://www.oled-info.com is speculating that they will some day: http://www.oled-info.com/odgs-project-horizon-emagins-oled-microdisplay-first-consumer-vr-customer. But that article says the Horizon display will be 2K by 2K, when the video says each eye has 1080P (~2K by 1K), so it makes me doubt their report. Additionally, it would require a much more significant design change to go from a 720p 16:9 form factor display to a 2K by 2K 1:1 one.

      Something else to note in the Horizon prototype is that it still has the light shading to block ambient light. This is an indication of it not having enough nits (cd/m2). This would say they are not using a 4,500 nit display device; probably more like something in the 200 nit range.

      The big thing that the R7 and Horizon seem to have over the Hololens is that they are much more compact, fitting into a “sunglasses” (albeit very big and bulky sunglasses) form factor as opposed to the semi-helmet of Hololens. But we don’t have the information to compare all the features one for one.

      The BT-300 at $800 is still a bit pricey for the common person, but at least it is getting into the ballpark, whereas the Hololens and the R7 at 720p are about $3,000. Its optics are rather crude, with a prism in front of the person’s eye which has to distort their vision of the real world. The BT-300 does not have the level of sensors and the like of Hololens or the R7, and it has a cable to a remote pack. I would certainly like to know more about their Si-OLED technology and its affordability and roadmap. I currently have no serious skin in the game for any of the technologies, and I see pros and cons in all of them.

      • ODG using OLED in Project Horizon came from their own press release that I originally posted.

        “It also has low-persistence OLED displays with up to 120 fps and high contrast.”

        http://www.businesswire.com/news/home/20160601006256/en/ODG-OTOY-Join-Forces-Create-Uncompromising-Platform

        OLED was also cited here:

        “The glasses work by using a pair of micro OLED displays to reflect images into your eyes at 120 frames-per-second. And the quality blew Urbach away, he tells Business Insider.”

        http://www.businessinsider.com/odg-releases-new-augmented-reality-glasses-2016-6

        I believe ODG is using a somewhat loose definition of 2K (1920×1080). I believe some, including OLED-Info, are confusing that with the 2K x 2K (per eye) resolution of the Emagin display.

        Therefore, the ODG Project Horizon is using a 2K (1080p) display per eye.

        The R-7 is a 720P LCOS display per eye with a 30 degree FOV. Project Horizon is a 1080p OLED display per eye with a 50 degree FOV. Where does a 110 degree FOV enter the picture?

        Also, where does fill factor enter the equation? My understanding is that OLED microdisplays have a fill factor around 80% as opposed to cell phone type displays at around 30%. The Rift needs to blur its pixels to reduce the noticeable pixelation.

        This certainly would affect the image quality as well as the arc-minutes per pixel, no?

      • Thanks for the OLED reference in the Businesswire article. I went through your long comments too fast late last night.

        I was working the FOV back from the comments in the ODG video: http://www.youtube.com/watch?v=PMH5I05unZE, but I stand corrected based on other information, including this article: http://newatlas.com/odg-smartglasses-r7-review-hands-on/41395/. I also screwed up my math: a 55-inch diagonal at 8 feet works out to about a 37 degree horizontal FOV, and a 120-inch diagonal at 8 feet works out to about 58 degrees.

        Going with the article’s 50 degrees with 1080p works out to about 1.56 arc-minutes per pixel, which is probably just enough to keep most people from noticing the individual pixels. Basically it is about as far as they should go with a 1080p device in terms of FOV before the pixels start becoming apparent. There is no absolute threshold, just as there is no exact number of hairs a man has to have on his head before he is not bald, but at 1 arc-minute per pixel you are good for most people; by 2 the pixels are becoming noticeable (particularly when moving), and by 3 they are definitely noticeable. Anti-aliasing can only help so much before you start seeing zipper effects on moving images (many people only consider still images, but people will notice the wriggling on a slow moving image when they might better accept a still image). Oculus Rift is off the charts at 4.7 arc-minutes per pixel for their better model, approaching 1970s video game territory.

        Fill factor definitely helps with whether you notice pixels. I don’t know how to quantify it (I have not seen studies), but I would guess it might be as much as a 30% to 50% advantage in terms of noticing the individual pixels. Having black around the pixels gives a contrast edge that the eye is more likely to pick up.

    • The Microvision patent (I assume you are talking about US Patent 8,721,092, “Wide field of view substrate guided relay”) appears to me to be worthless. There are MANY light guide technologies out there, and this one looks overly complex and expensive, requiring per claim 1 a plurality of output couplers and electrically operated light valves:

      “1.A substrate guided relay comprising: a substrate to relay light; a plurality of output couplers; and a plurality of electrically operated light valves positioned between the substrate and the plurality of output couplers”

      This patent also seems to only be applicable to laser beam scanning (it apparently switches ahead of the scanning beam) which has been a total failure in near eye displays.

      Then we have the issue of the angular resolution necessary to support a 120 degree FOV. To support a 120 degree FOV, you would want on the order of 120*60 = 7,200 pixels in the horizontal direction. This is about 10X the measurable resolution that Microvision has achieved to date, and based on their historical rate of progress, they will not achieve it until long after the term of the patent (20 years) expires. So from a business perspective it is worthless as I see it.

  3. Hello Karl, I enjoy reading your posts about AR and HUDs. In your discussion about the nits required for a properly bright HUD for use in sunlight, you mention that a HUD display should have 30k-50k nits. What kind of display can produce these amounts of nits, or is there a trick to using displays with lower nits?

    • Note that the 30k to 50k nits is BEFORE the combiner/windshield, which reduces the nits by 3x to 5x depending on the amount that gets reflected back toward a person’s eyes. You don’t want that much going into the eyes. Typically the amount going to the eyes in bright sunlight needs to be about 15k to compete.

      Nits (candelas per meter squared) are a measure of light per unit area per solid angle, which tells us how bright something appears in a given direction. The trick to getting high nits without needing an extremely bright light source is to concentrate the light over a small angle. With an automotive HUD, you only want/need the light to get to the driver’s eyes, viewed in the windshield or a separate combiner. The problem with using a normal flat panel such as a phone display is that it outputs only about 500 nits spread out over a wide angle; you can’t get the light concentrated enough over a smaller angle.

      Most automotive HUDs use a small LCD with a very bright LED light source that has its light focused over a small angle. A more efficient way is to use a pico projector, which is what Navdy (for which I designed the original optics) and other aftermarket HUDs use. The light coming out of the projector is already concentrated. A “normal” projector screen will highly diffuse the light so people can watch it over a wide angle, but for a HUD they use a very high gain screen (either transmissive/backlit or reflective) that only slightly diffuses the light. The result is an image on the screen that has high nits but is very directional (like looking at a very old LCD panel where you have to view it straight-on). Depending on how large the image is and a number of other factors, you can get to 30k to 50k nits off the “screen” with only about a 25 to 80 lumen projector.
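
      As a rough back-of-envelope of how a high gain screen gets there (all numbers below are illustrative assumptions, not the specs of any actual product):

      ```python
      import math

      def onaxis_nits(projector_lumens, image_area_m2, screen_gain):
          """On-axis luminance of a projected image on a gain screen.

          An ideal diffuse (Lambertian) screen gives L = flux / (pi * area);
          a gain screen multiplies the on-axis luminance by its gain at the
          cost of a narrower viewing angle.
          """
          return projector_lumens * screen_gain / (math.pi * image_area_m2)

      # Assumptions: 50 lumen projector, 10 cm x 10 cm image, on-axis gain of 20.
      print(round(onaxis_nits(50, 0.10 * 0.10, 20)))  # ~31,831 cd/m2
      ```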

      • Indeed – I have an engineer friend who works for Navdy and he said they had many challenges getting the extremely high dynamic range requirement for the DLP; the ends of their ranges are night driving and “driving snowy roads on a sunny day”. He also mentioned the difficulties of producing a good looking image when you can only add light. One thing about the magic leap demos (esp Weta’s videogame one) was that they clearly implied a display capable of subtractive combining (i.e. they draw black over an area that is in reality illuminated) – e.g. the holes in the ceiling that the robots drop through, the smoke/scorch marks left by the raygun when it’s first fired, and so on. This seems impossible to do in practice unless there’s an additional display layer that performs darkening (e.g. an LCD), however some ability to selectively subtract natural light seems important if you want to display good looking images. The hololens simply darkens the whole FOV with its ‘sunglasses’. This whole field is quite interesting..

      • Heh I didn’t realize you actually used to be the CTO of Navdy.. preaching to the choir then.. Wow you worked on the TMS9918 too? I remember that chip well 🙂

      • Per your next message, I am very well aware of the dynamic range. I don’t know if they used the trick that another engineer and I came up with (we both left Navdy) for LED dimming. There is an extreme range compared to any other display I have ever seen. I don’t think you can directly get that range out of say lasers (you would need a secondary dimmer such as an LCD).

        Also correct, you can’t draw dark or cancel out non-coherent light (laser speckle is an example of coherent light canceling and adding to itself).

        There have been lots of proposals for “pixelated shutters,” but I have not seen one that is successful yet. Maybe someday. Note that even if you had the device, the shutter would be “real” and out of focus relative to the image. It could act more as local dimming. There is also a serious parallax issue, which I guess could be dealt with via eye tracking.

      • Reading (many) more of your blog posts has been absolutely fascinating; to me as an EE/software geek it’s a whole new world; optics+displays looks like an area where developing mass-market products is very complex and potentially wildly expensive (especially if one goes down the wrong path for any number of reasons). Thanks very much for your time posting here; I trust your demonstrated expertise in the area brings you a fair amount of consulting gigs.

      • Thanks, I apologize in advance for my writing and typos. This is a one person operation and I write like I am an engineer with ADD. After I write something, I would rather get onto the next topic than endlessly review my writing, knowing I will still get it wrong without an editor. So if something is confusing, it could be my fingers having a mind of their own or a cut and paste that went awry.

        In many ways this is a learning experience for me. I learn and invent by explaining concepts to others. It forces me to “fill in the blanks” and really think about the issues.

  4. Glad I found this site with someone that is so knowledgeable on this subject.
    Karl, can you explain: if someone looks at a small Near Eye Display (NED) mounted in a HUD for VR purposes, will he see:
    1. a display surrounded by a dark rectangle, OR
    2. a display that fills his entire view?

    I want to create a VR solution used as an eyepiece in astronomy. The VR will receive an AV signal from a video camera and render the image on the Near Eye Display for an immersive experience. However I want the image to cover the entire view (as with the Samsung head gear).

    Can a small NED do this OR does one need to use a large LCD such as a smartphone?

    Thanks
    Tim

    • I’m not totally clear about your question and what you are trying to do.

      If you have a “combiner” and the combiner does not fill the entire FOV, then you will see the edges of the combiner as an out of focus image, even if the combiner is nominally clear. If the combiner is in a frame, then you will see the frame even though it is out of focus.

      With a display on a see-through device you can only add light. A complaint can be that you see a “gray” or colored rectangle in the “black/clear” area of the image. For example, Google Glass had a magenta colored black/clear rectangle.

      The Samsung Gear is just optics in front of a flat panel LCD that totally blocks the real world. Are you going to have a camera looking through the telescope and then just want to merge the video from the camera with some other information?

      There are other issues such as “angular resolution.” Most of the flat panel solutions are horrible in this regard, with angular resolutions of about 4 to 5 arcminutes/pixel when you want more like 1 to 1.6. They are “immersive” but give terrible resolution/detail.

      If you are really trying for an eyepiece, then you likely want a microdisplay, but I don’t totally understand what you want to do.

      • Hi Karl
        I want to use an astronomy camera (with an AV output) for viewing deep space objects (DSO). The reason for this is that these cameras have high gain and allow one to see nebulae and galaxies that would be impossible to see through a telescope directly (even with a very large aperture). Now these cameras typically output the images/video via PAL/NTSC/HDMI to a large LCD screen for viewing. The problem with this is that a large LCD screen is not terribly immersive, whereas VR such as the Samsung Gear is.

        I then want to take the amplified output of the camera and feed it into an eyepiece-like device that encloses a NED and lenses (for now I won’t do any AR overlaying of the signal, nor do I want to see around the NED; it should just be dark). This will then allow the user to view deep space objects and see images that one normally only sees in magazines, as well as give him the feel that he is viewing it through an eyepiece.

        The issue you mention with angular resolution is a huge problem because viewing galaxies as huge pixels will be such a turn off to any user.
        I guess it implies I will need to make a trade-off between FOV and angular resolution? That is, if I want the user to view at 1 arcminute/pixel and assuming my NED has a resolution of 640 x 480, it would imply that the horizontal FOV = 10.6 deg and the vertical FOV = 8 deg?

        Thanks
        Tim

      • “Are you going to have a camera looking through the telescope and then just want to merge the video from the camera with some other information?”

        Yes that is correct. The camera will be used in the place of a regular eyepiece.

      • Ok, but it is still not clear to me what you are wanting to do. You could combine the camera image with anything such as a computer monitor.

        How does the NED fit into what you want to do? How do you see the person using this?

        Karl

  5. Hi Karl
    Instead of outputting the camera image to a monitor, it is streamed to an “electronic” eyepiece that houses the NED. The person then views the image through the “electronic” eyepiece instead of looking at it on a computer monitor.

    As stated previously, I want the user to have an immersive experience. A computer monitor does not provide this.
    Hope this is clear?

    If you want the entire NED to fill the user’s view, how would one calculate the FOV required? For argument’s sake, let’s forget about angular resolution.

    • If you forget about angular resolution, then there is not much to talk about.

      You probably don’t want to fill the FOV with this type of display. The FOV of an eye is considered to be about 110 degrees horizontally, but only the center 30 degrees is very good, and only then because the eye jitters. As you get beyond 50 degrees, you are into the peripheral vision, which has very low resolution (see for example: https://en.wikipedia.org/wiki/Fovea_centralis). It is in the central region that you want about 1 arcminute/pixel. The human visual system does not work like a camera and monitor; the eye/brain jitters around and brings the aim of the fovea to bear.

      Being “immersive” is about filling the peripheral vision with moving low resolution content. Your peripheral vision is all about detecting MOTION (as in, there is a lion coming to eat you that you want to sense coming :-), at least that is how people theorize it evolved). For “immersion” to work you need moving stuff in your periphery. If you are looking through a telescope at a star field, it would appear black to your peripheral vision, as it could not resolve the stars.

      • Damn, at 110 deg it will require 6,600 pixels of horizontal resolution!

        If I then decide to stick with my 640 x 480 LCoS display (price unfortunately is a consideration for now) and I design for a 1.5 arc-minute angular resolution (thus my horizontal FOV = 16 deg and vertical FOV = 12 deg), how does one calculate the type of lenses needed for this?

      • I’m not an optics designer, but I might suggest you look at some various loupe magnifiers (see for example https://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=hand+lens+loupe&rh=i%3Aaps%2Ck%3Ahand+lens+loupe). Calling them “magnifiers” in a way is misleading, what they really let you do is focus at closer and closer distances. You might try some cheap 10X, 20X and 30X and see what works best for you (it will cost you a lot more to get a “real” design done) and then buy a high quality one (with glass element(s)) based on the one you like best. If you are just doing a one-off this will likely be the cheapest. If you want something better (and willing to pay a lot more) you could look at Edmunds (http://www.edmundoptics.com/microscopy/magnifiers/) or ThorLabs (https://www.thorlabs.com/newgrouppage9.cfm?objectgroup_id=1427) but it is going to cost 10X more.

        If you only want 640×480 resolution, I would suggest using a transmissive LCD (such as Kopin’s) or an OLED. LCOS gets complicated for a one-off design because you need a beam splitter, unless you can get your hands on the newer Himax “Front-Lit” LCOS. Then you have all the electronics challenges of connecting up the controller. Even a “simple basic” design is a LOT of work. Something to consider would be cannibalizing/hacking an existing product; if you count your time as worth anything, it will be a LOT cheaper. There is a lot of work to do in both the optics and electronics, which is why they have teams of specialists to do a real product.
