Avegant “Light Field” Display – Magic Leap at 1/100th the Investment?

Surprised at CES 2017 – Avegant Focus Planes (“Light Field”)

While at CES 2017 I was invited to Avegant's suite, expecting to see a new and improved and/or lower-cost version of the Avegant Glyph. The Glyph was hardly revolutionary; it is a DLP-based, non-see-through near-eye display built into a set of headphones, with reasonably good image quality. Based on what I was expecting, it seemed like a bit much to be signing an NDA just to see what they were doing next.

But what Avegant showed was essentially what Magic Leap (ML) has been claiming to do in terms of focus planes/"light fields" with vergence and accommodation. Yet Avegant accomplished this with likely less than 1/100th of the money ML is reported to have raised (about $1.4 billion to date). In one stroke they made ML more believable and at the same time raised the question of why ML needed so much money.

What I saw – Technology Demonstrator

I was shown a headset with two HDMI cables for video and a USB cable for power and sensor data, all bundled together and running to an external desktop computer. A big plus for me was that there was enough eye relief that I could wear my own glasses (I have severe astigmatism, so diopter adjustments alone don't work for me). The picture at left is the same or a similar prototype to the one I wore. The headset was a bit bulkier than, say, Hololens, plus there was the bundle of cables coming out of it. Avegant made it clear that this was an engineering prototype and nowhere near a finished product.

The mixed reality/see-through headset merges the virtual world with the see-through real world. I was shown three mixed reality (MR) demos: a moving solar system complete with asteroids, a fish tank complete with fish swimming around objects in the room, and a robot/avatar woman.

Avegant makes the point that the content was easily ported from Unity into their system, with the fish tank model coming from the Monterey Bay Aquarium and the woman and solar system downloaded from the Unity community open-source library. The 3-D images were locked to the "real world," taking this from simple AR into MR. The tracking was not at all perfect, nor did I care; the point of the demo was the focal planes, and lots of companies are working on tracking.

It is easy to believe that by "turning the crank" they can eliminate the bulky cables, and that the tracking and locking between the virtual and real world will improve. It was a technology capability demonstrator, and on that basis it succeeded.

What Made It Special – Multiple Focal Planes / “Light Fields”

What ups the game from, say, Hololens and takes it into the realm of Magic Leap is that it supported simultaneous focal planes, what Avegant calls "Light Fields" (a bit different from true "light fields" as I see it). The user could change where they were focusing within the depth of the image and bring things that were close or far into focus; in other words, multiple focuses are presented to the eye simultaneously. You could also, by shifting your eyes, see behind objects a bit. This is clearly something optically well beyond Hololens, which does simple stereoscopic 3-D and in no way presents multiple focus points to the eye at the same time.

In short, what I was seeing in terms of vergence and accommodation was everything Magic Leap has been claiming to do. But Avegant has clearly spent only a very small fraction of the development cost, the system was at least portable enough that they had it set up in a hotel room, and the optics look to be economical to make.

Now, it was not perfect, nor was Avegant claiming it to be at this stage. I could see some artifacts, in particular lots of what looked like faint diagonal lines. I'm not sure whether these were a result of the multiple focal planes or some other issue such as a bug.

Unfortunately, the only "through the lens" video currently available is at about 1:01 in Avegant's "Introducing Avegant Light Field" Vimeo video. It lasts only a few seconds and does not really demonstrate the focusing effects well.

Why Show Me?

So why were they showing it to me, an engineer known to be skeptical of demos? They knew of my blog, and that is why I was invited to see the demo. Avegant was in some ways surprisingly open about what they were doing and answered most, but not all, of my technical questions. They appeared to be making an effort to make sure people understand that it really works. It seems clear they wanted someone who would understand what they had done and could verify that it is something different.

What They Are Doing With the Display

While Avegant calls their technology "Light Fields," it is implemented with (directly quoting them) "a number of fixed digital focal planes, and then interpolate the planes in-between them." Multiple focus planes have many of the same characteristics as classical light fields, but require much less image data to be simultaneously presented to the eye, saving the power of generating and displaying image data much of which the eye would never "see"/use.
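As a rough illustration of the data savings (these are my own numbers, not Avegant's: 720p images, an assumed six focal planes, and an assumed 5x5 grid of light-field views), the sketch below compares how many pixels each approach would have to generate per frame:

```python
# Rough, illustrative comparison of focal planes vs. a classical light field.
# All counts are assumptions for illustration, not Avegant's specifications.

def pixels_per_frame(width, height, images):
    """Total pixels that must be generated and displayed for one frame."""
    return width * height * images

WIDTH, HEIGHT = 1280, 720        # 720p per eye
FOCAL_PLANES = 6                 # assumed number of fixed focal planes
LIGHT_FIELD_VIEWS = 5 * 5        # assumed 5x5 grid of light-field views

planes_px = pixels_per_frame(WIDTH, HEIGHT, FOCAL_PLANES)
light_field_px = pixels_per_frame(WIDTH, HEIGHT, LIGHT_FIELD_VIEWS)

print(f"Focal planes: {planes_px / 1e6:.1f} Mpixels per frame")
print(f"Light field : {light_field_px / 1e6:.1f} Mpixels per frame")
print(f"The light field needs {light_field_px / planes_px:.1f}x more data")
```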

They are currently using a 720p DLP per eye for the display engine, but they said they thought they could support other display technologies in the future. As per my discussion of Magic Leap from November 2016, DLP has a high enough field rate that it could support displaying multiple images with the focus changing between images, if you can change the focus fast enough. If you are willing to play with (reduce) color depth, DLP could support a number of focus planes. Avegant would not confirm whether they use time-sequential focus planes, but I think it likely.
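To see why trading color depth for focus planes works, here is a back-of-the-envelope time budget. The binary pattern rate below is an assumed, illustrative figure (Avegant has not disclosed their numbers); the point is simply that bit depth and plane count compete for the same 1/60th of a second:

```python
# Back-of-the-envelope DLP time budget (illustrative numbers only; Avegant
# has not disclosed their binary-pattern rate, bit depth, or plane count).

BINARY_PATTERN_RATE = 2880     # assumed DMD binary bit-plane rate, patterns/s
FRAME_RATE = 60                # output frames per second
COLORS = 3                     # field-sequential R, G, B

patterns_per_frame = BINARY_PATTERN_RATE / FRAME_RATE  # bit-planes per 1/60 s

for bits_per_color in (8, 6, 4, 3):
    # Bit-planes consumed by one focal plane at this color depth
    cost_per_focal_plane = bits_per_color * COLORS
    focal_planes = int(patterns_per_frame // cost_per_focal_plane)
    print(f"{bits_per_color} bits/color -> about {focal_planes} focal planes per frame")
```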

They are using "birdbath optics," per my prior article, with a beam splitter and a spherical semi-mirror combiner (see picture at left). With a DLP illuminated by LEDs, they can afford the higher light losses of the birdbath design and still support a reasonable amount of transparency to the real world. Note that waveguides also tend to lose/waste a large amount of light. Avegant said the current system was 50% transparent to the real world but that they could make it more transparent (by wasting more light).

Very importantly, a birdbath optical design can be very cheap (on the order of only a few dollars) whereas waveguides can cost many tens of dollars (reportedly Hololens' waveguides cost over $100 each). The birdbath optics can also support a very wide field of view (FOV), something generally very difficult/expensive to do with waveguides. The optical quality of a birdbath is generally much better than the best waveguides. The downside of the birdbath compared to waveguides is that it is bulkier and does not look as much like ordinary glasses.

What they would not say – Exactly How It Works

The one key thing they would not say is how they are supporting the change in focus between focal planes. The obvious way to do it would be with some kind of electromechanical device such as a moving focus element or a liquid-filled lens (the obvious suspects). In a recent interview, they repeatedly said that there were no moving parts and that it was "economical to make."

What They are NOT Doing (exactly) – Mechanical Focus and Eye/Pupil Tracking

After meeting with Avegant at CES, I decided to check out their recent patent activity and found US 2016/0295202 ('202). It shows a birdbath optics system (but with a non-see-through curved mirror). This configuration, with a semi-mirrored curved element, would seem to do what I saw. In fact, it is very similar to what Magic Leap showed in their US application 2015/0346495.

Avegant's '202 application uses a "tuning assembly 700" (some form of electromechanical focus).

It also uses eye tracking (500) to know where the pupil is aimed. Knowing where the pupil is aimed would, at least in theory, allow them to generate a focus plane for where the eye is looking and an out-of-focus plane for everything else. At least in theory that is how it would work, but it might be problematic (no fear, this is not what they are doing, remember).

I specifically asked Avegant about the '202 application, and they said categorically that they were not using it and that the applications related to what they are using have not yet been published (I suspect they will be published soon, perhaps part of the reason they are announcing now). They categorically stated that there were "no moving parts" and that they "did not eye track" for the focal planes. They stated that the focusing effect would even work with, say, a camera (rather than an eye) and was in no way dependent on pupil tracking.

A lesson here is that even small companies file patents on concepts they don't use. Still, this application gives insight into what Avegant was interested in doing and some clues as to how they might be doing it. Eliminate the eye tracking and substitute a non-mechanical focus mechanism that is rapid enough to support 3 to 6 focus planes, and it might be close to what they are doing (my guess).

A Caution About “Demoware”

A big word of warning here about demoware. When seeing a demo, remember that you are being shown what makes the product look best; examples that might make it look not so good are not shown.

I was shown three short demos that they picked; I had no choice and could not pick my own test cases. I also don't know exactly the mechanism by which it works, which makes it hard to predict the failure modes, as in what type of content might cause artifacts. For example, everything I was shown was very slow moving. If they are using sequential focus planes, I would expect to see problems/artifacts with fast motion.

Avegant’s Plan for Further Development

Avegant is in the process of migrating away from requiring a big PC and onto mobile platforms such as smartphones. Part of this is continuing to address the computing requirement.

Clearly they are going to continue refining the mechanical design of the headset and will either get rid of or slim down the cables and have them go to a mobile computer. They say that all the components are easily manufacturable, and this I would tend to believe. I do wonder how much image data they have to send, but it appears they are able to do it with just two HDMI cables (one per eye). It seems they will be wire-tethered to a (mobile) computing system. I'm more concerned about how the image quality might degrade with, say, fast-moving content.

They say they are going to be looking at other (than the birdbath) combiner technologies; one would assume a waveguide of some sort to make the optics thinner and lighter. But going to waveguides could hurt image quality, add cost, and may further limit the FOV.

Avegant is leveraging the openness of Unity to get a lot of content generated for their platform. They plan on a Unity SDK to support this migration.

They said they will be looking into alternatives to the DLP display; I would expect LCOS and OLED to be considered. They said they had also thought about laser beam scanning, but their engineers objected for eye safety reasons; engineers are usually the first Guinea pigs for their own designs, and a bug could be catastrophic. If they are using time-sequential focal planes, which is likely, then other technologies such as OLED, LCOS, or laser beam scanning cannot generate sequential planes fast enough to support more than a few (1 to 3) focal planes per 1/60th of a second on a single device at maximum resolution.

How Important is Vergence/Accommodation (V/A)?

The simple answer is that it appears that Magic Leap raised $1.4B by demoing it. But as they say, “all that glitters is not gold.” The V/A conflict issue is real, but it mostly affects content that virtually appears “close”, say inside about 2 meters/6 feet.

It's not clear that for "everyday use" there aren't simpler, less expensive, and/or lower-power ways to deal with the V/A conflict, such as pupil tracking. Maybe (I don't know) it would be enough to simply change the focus point when the user is doing close-up work rather than presenting multiple focal planes to the eye simultaneously.
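To put rough numbers on why the conflict is mostly a close-range problem, the sketch below uses a typical 63 mm interpupillary distance, an assumed fixed display focus of 2 meters, and an assumed comfort tolerance of about half a diopter (my illustrative figures, not anything from Avegant or Magic Leap):

```python
import math

# Rough illustration of why the V/A conflict matters mostly for "close" content.
# Assumptions: 63 mm IPD, display focus fixed at 2 m, ~0.5 diopter tolerance.

IPD_M = 0.063            # interpupillary distance, meters (typical adult)
DISPLAY_FOCUS_M = 2.0    # fixed apparent focus distance of the headset
TOLERANCE_D = 0.5        # assumed comfortable accommodation error, diopters

for virtual_dist in (0.3, 0.5, 1.0, 2.0, 10.0):
    vergence_deg = math.degrees(2 * math.atan(IPD_M / (2 * virtual_dist)))
    accom_error_d = abs(1 / virtual_dist - 1 / DISPLAY_FOCUS_M)
    flag = "conflict" if accom_error_d >= TOLERANCE_D else "ok"
    print(f"{virtual_dist:4.1f} m: vergence {vergence_deg:4.1f} deg, "
          f"focus error {accom_error_d:4.2f} D -> {flag}")
```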

The business question is whether solving V/A alone will make AR/MR take off. I think the answer is clearly no; this is not the last puzzle piece to be solved before AR/MR takes off. It is one of a large number of issues yet to be solved. Additionally, while Avegant says they have solved it economically, what is economical is relative. It still has added weight, power, processing, and cost associated with it, and it has negative impacts on image quality; the classic "squeezing the balloon" problem.

Even if V/A added nothing and cost nothing extra, there are still many other human-factor issues that severely limit the size of the market. At times like this, I like to remind people of the Artificial Intelligence boom in the 1980s (over 35 years ago), which it seemed all the big companies and many small ones were chasing as the next era of computing. There were lots of "breakthroughs" back then too, but the problem was bigger than all the smart people and money could solve.

BTW, if you want to know more about V/A and related issues, I highly recommend reading papers and watching videos by Gordon Wetzstein of Stanford. In particular, note his work on "compressive light field displays," which he started while at MIT. He does an excellent job of taking complex issues and making them understandable.

Generally Skeptical About The Near Term Market for AR/MR

I'm skeptical that, with or without Avegant's technology, the Mixed Reality (MR) market is really set to take off for at least 5 years (and likely more). I've participated in a lot of revolutionary markets (early video game chips, home/personal computers, graphics accelerators, Synchronous DRAMs, as well as various display devices) and I'm not a Luddite/flat-earther; I simply understand the challenges still left unsolved, and there are many major ones.

Most of the market forecasts for huge volumes in the next 5 years are written by people who don't have a clue as to what is required; they are more science fiction writers than technologists. You can already see companies like Microsoft with Hololens, and before them Google with Google Glass, retrenching/regrouping.

Where Does Avegant Go Business Wise With this Technology?

Avegant is not a big company. They were founded in 2012. My sources tell me that they have raised about $25M, and I have heard that they have sold only about $5M to $10M worth of their first product, the Avegant Glyph. I don't see the Glyph ever being a high-volume product with enough profit to support R&D.

A related aside: I have yet to see a Glyph "in the wild" being used, say, on an airplane (where they would make the most sense). Even though the Glyph and other headsets exist, people given a choice still, by vast percentages, prefer larger smartphones and tablets for watching media on the go. The Glyph sells for about $500 now and is very bulky to store, whereas a tablet easily slips into a backpack or other bag and its display is "free"/built in.

But then, here you have this perhaps "key technology" that works and that does something Magic Leap has raised over $1.4 billion to try to do. It is possible (having not thoroughly tested either one) that Avegant's is better than ML's. Avegant's technology is likely much more cost-effective to make than ML's, particularly if ML's depends on using their complex waveguide.

Having not seen the details of either Avegant's or ML's method, I can't say which is "best," both image-wise and in terms of cost, nor whether, from a patent perspective, Avegant's is different from ML's.

So Avegant could try and raise money to do it on their own, but they would have to raise a huge amount to last until the market matures and compete with much bigger companies working in the area. At best they have solved one (of many) interesting puzzle pieces.

It seems obvious (at least to me) that the more likely good outcome for them would be as a takeover target by someone with the deep pockets to invest in mixed reality for the long haul.

But this should certainly make the Magic Leap folks and their investors take notice. With less fanfare, and a heck of a lot less money, Avegant has a solution to the vergence/accommodation problem that ML has made such a big deal about.


39 Comments

  1. Karl, what's your conclusion? Do we need multiple focal planes or not?

    When will you publish the article about scenarios and use cases for augmented reality? You said you would present that article soon.

  2. How does IMMY handle the A/V conflict? How does this compare to their approach that you covered last week?

    • It is solving a different problem, that of the beam splitter relative to a birdbath design as used by Avegant. I don't know exactly how Avegant's focal planes work, but in theory you might be able to combine IMMY with Avegant's approach to get about a 4X brighter image. But there may be space/configuration or optical issues; I just don't know without knowing exactly how Avegant's works.

      • Thanks Karl

        IMMY is also claiming that there is no A/V conflict in their design, so I was wondering how they are doing it. They seem to imply that, due to their unique approach, this issue doesn't arise, just as it doesn't exist in the real world.

      • I have not seen or heard that IMMY was claiming to solve the vergence/accommodation conflict. I don't see anything in their optical design that would address it. They talk about a "relaxed" eye, which would suggest that they focus somewhere in the "far" vision. The vast majority of headsets make the apparent focus in a person's far vision (say about 2+ meters).

        The V/A conflict occurs when you provide 3-D stereo images that appear to be near the eye but the focus is still far away. The eye/brain gets conflicting information: the stereo vision is saying to focus close, but the image only gets sharper when the eye focuses far away. For a headset to "fix" this problem, there needs to be some way to change the focus (at least with non-holographic displays). I'm wondering if the "relaxed eye" talk by IMMY is being confused with solving the A/V conflict. Do you have a source/link?

        BTW, there is a "totally different" approach to the A/V conflict using holograms, such as by Real View (http://realviewimaging.com/company/), and an interesting article about them (http://www.roadtovr.com/realview-holoscope-ar-headset-hologram-display/). IMMY is mentioned in the comments of that article (the comments say IMMY does NOT address the A/V conflict).

  3. I’m glad they invited you to check out the display – I really needed a coherent view on it after seeing so many “Avegant unveils Hololens Killer!” articles.

    Thanks for the writeup. I’m hoping Avegant have the good sense to partner with companies that can provide assistance with tracking, manufacture and ecosystem development rather than being yet another company with one solid idea and a hard-to-justify $billion valuation.

    • Comparing Avegant to Hololens is very misguided IMO. The comparison should be to Magic Leap, as they are both addressing the vergence/accommodation issue. Yes, it is a very deep-pockets business issue. Avegant at best has one puzzle piece of a very big and expensive problem that I think could take over a decade to solve, if ever.

      • Hi Karl,
        Agree that comparing Avegant (or ML) with HoloLens is like comparing Star Wars and Star Trek :). Still, do you think that it is likely that HL (with its waveguide-based approach) will land up in the next 2 years and that Microsoft will establish a beachhead so as to capture large marketshare/mindshare? I have learned from your posts that even with V/A victory, the mainstreaming of Avegant/ML (with the “Light Field” multiple focus points) is at least 5-10 years away. Markets tend to be irrational (Beta vs VHS), hype-driven (x86 vs 6400) and impatient. I wonder if Google’s Cardboard affected Oculus sales in VR and similarly what if HL gets consumer traction just as ML investors lose interest.
        Thanks and regards,

      • Frankly, I don't think anyone, not even Apple or Microsoft, is going to have a dominant market position in head-mounted displays. The whole market is being WAY over-hyped with forecasts by "analysts" (and I use the term VERY loosely) who don't have a clue. I don't think ANYONE is going to make a product that reaches the "take-off threshold" (a product that is "good enough" to make the market take off). I lived through the PC revolution and had chips in some of the early home computers and video games; I know what it looks like and how technology changes with time. The head-mounted display market is NOT ready for take-off. I think Hololens is highly likely to end up on the trash heap of other hardware projects Microsoft has tried through the years. I don't see how they ever get it to be a high-volume product that will keep a multi-billion-dollar company interested.

        VR (non-see-through), while it appears similar, is a different case than AR/MR. It is a market for a very dedicated audience. They are leveraging very cheap cell-phone-sized flat panels. The angular resolution SUCKS (typically 4 to 5 arcminutes per pixel) and there really is no way to improve it short of using EXTREMELY expensive (for the next decade or two) microdisplays with enough resolution to support the FOV while providing enough angular resolution.

        There is what I call a "band gap" in the ability to make small, affordable pixels that support both a wide FOV and good angular resolution. Note there are 60 arcminutes in a degree and you want 1 to 1.5 arcminutes per pixel; it doesn't take much math to see that to support a wide FOV with good angular resolution you need an 8K or larger display per eye (something not happening anytime soon, and which will cost a small fortune each). Flat panels can make pixels relatively cheaply, but they make BIG pixels (on the order of 80 microns or bigger). Microdisplays on silicon have pixels from 4 microns to about 10 microns. So you either get a wide FOV and LOW angular resolution with flat panels, or you get a narrow FOV with good angular resolution with microdisplays.
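To make that arithmetic explicit, here is a quick sketch of the horizontal pixel counts needed at a few illustrative FOVs and angular resolutions (the FOV values are my own choices for the example):

```python
# Quick check of the "8K or more per eye" arithmetic (illustrative FOV values).

ARCMIN_PER_DEG = 60

def pixels_needed(fov_deg, arcmin_per_pixel):
    """Horizontal pixels needed across a given FOV at a given angular resolution."""
    return round(fov_deg * ARCMIN_PER_DEG / arcmin_per_pixel)

for fov in (40, 100, 140):          # narrow, wide, and very wide FOV in degrees
    for res in (1.0, 1.5):          # "good" angular resolution, arcminutes/pixel
        print(f"{fov:3d} deg FOV at {res} arcmin/pixel -> "
              f"{pixels_needed(fov, res):,} pixels across")
```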

  4. As someone who has developed "Mixed Reality" software for Google Tango and Hololens, I have been coming around to being more excited by "UI AR" for 'data snacking' than actual MR applications – everything that would allow me to not look down at my phone, but be aware of what is happening in my digital world while I'm just doing whatever.

    I wrote a (very non-technical, somewhat rambling) article arguing that MR applications aren't that useful for everyday mobile-style use, https://haptic.al/putting-the-mixed-reality-cart-before-the-augmented-reality-horse-fb5582c38c1f so it is strange that we are chasing this before just having simple, good AR sunglasses.

    In my opinion, this is where somebody like Apple could shine: not needing tracking, or occlusion, or seeing things at different depths; it can have higher latency (Bluetooth vs tether) between the viewer and the main CPU (receiving similar information as an Apple Watch), etc. It is your phone without looking down.

    The tough part is that the MR promise is what is sexy to investors and the public; seeing virtual things interact with the real world makes for an amazing video, but it isn't how you will use the device daily. It is like when Apple would demo a high-res 3D game on an iPhone to sell the new phone's power, but rarely does anyone do anything with real-time 3D graphics on a phone; it is mostly text, internet, camera.

    Which leads me more to the Vuzix Blade 3000 and Lumus DK-50. Karl, I know you've mentioned these devices or their tech before; in your opinion, could they work for more of an "in the corner of your eye, notification that somebody is calling" use, etc.?

    • You can go round and round in circles with simplicity versus value versus size, weight, and cost, but it always spirals away from a solution. There is so much overhead cost in just making a near-eye display that it is very difficult to compete with any flat-panel technology. If you go really cheap with the optics and still want it very see-through, you are still talking about $150 to $200 per eye (retail) before you put any computing into it. If you then want something "fancy" like a waveguide, it can be $400+ per eye.

      Just going from non-see-through to see-through roughly doubles the cost, as you need more expensive optics and about 10X the starting brightness with its associated power/heat management. Then you have to consider what you do with it when you take it off; you will need to put it in a case to keep from breaking it; it is not like a smartphone you can slip into your pocket.

      Go to the store and you can buy a 55″ 4K TV (3840×2160 pixels) for less than $500. If you go buy a true 4K projector it will cost you about $10,000 before you factor in the screen (a good one will be several hundred dollars more). It is kind of the same issue in terms of cost with near-eye displays. You can get a tablet computer for less than $100, while a monocular see-through display system is going to start at $500. There have been on the order of 100 attempts from both big and small companies.

      The Vuzix Blade 3000 "consumer version" is supposed to cost $1,000 later this year, or about the same as a loaded iPhone Plus. I can see this type of product for "enterprise" or factory-floor "data snacking," but I don't see it as a high-volume consumer product.

  5. Could you comment on Avegant's resolution, based on your experience?
    720p is not enough for a 50-degree FOV; that is 2.34 arcminutes/pixel.

    • I was really concentrating on the focusing effects at the time. The Glyph reportedly has about a 40-degree FOV and I would suspect this was the same, which gets you into the 1.9 arcminute/pixel range. The key thing I was looking at was the ability to do "focus planes."

      If they wanted to support 50 degrees with good resolution, then in theory they could substitute a 1080p device, getting them to about 1.6 arcminutes/pixel. It should not be any problem for the birdbath optical design (ODG does this with their R9) but would be more challenging for a waveguide, where wide angles are more problematic.

      But going to 1080p would of course double the processing and double the bandwidth. It does lead to the curious issue of the HDMI lines going to the headset. Normally you send standard video formats over HDMI, with RGB sent simultaneously and with standard horizontal and vertical timing. I didn't look carefully, but I think they had standard ports on the graphics cards generating the image. That would not be conducive to doing the DLP "plane splitting" tricks on the computer side, so they would have to be done locally in the headset. I would thus suspect they are sending either high-frame-rate (say 120Hz) focal-plane images and/or higher-resolution images (with multiple focal planes per frame) and splitting and buffering them in the headset (this is just an educated guess). There is quite a bit of headroom in HDMI (particularly with 2.0 cables) when you consider it can go up to 4K resolution at 60FPS.
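As a rough sanity check of that headroom (my own illustrative formats, counting only active pixels and ignoring blanking and encoding overhead), the sketch below compares a few candidate per-eye signals against the 4K60 that HDMI 2.0 is rated to carry:

```python
# Rough HDMI headroom check: raw active-pixel rates only (ignores blanking,
# audio, and encoding overhead); the formats are illustrative guesses.

def pixel_rate(width, height, fps):
    """Active pixels per second for a given video format."""
    return width * height * fps

reference_4k60 = pixel_rate(3840, 2160, 60)   # roughly what HDMI 2.0 carries

candidates = {
    "720p @ 60 Hz (single plane)":      pixel_rate(1280, 720, 60),
    "720p @ 120 Hz (2 planes/frame)":   pixel_rate(1280, 720, 120),
    "720p @ 360 Hz (6 planes/frame)":   pixel_rate(1280, 720, 360),
    "1080p @ 120 Hz (2 planes/frame)":  pixel_rate(1920, 1080, 120),
}

for name, rate in candidates.items():
    print(f"{name:35s} {rate / reference_4k60:5.1%} of a 4K60 signal")
```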

  6. I agree that both products use similar technology. But Avegant is probably like ML was 1.5 or 2 years ago. ML now probably has a much better and smaller product, better tracking capability, better FoV and better image quality (using waveguides, which make the product look more acceptable).
    Also, I think that most of the engineering at the company is not on the product itself, but on building the environment for its use (I would guess that only 20% of the ML team is working on the product by now).

    • I can believe that with $1.4B to play with, Magic Leap is working in more areas. But there are companies focused on each element, such as tracking, that likely have more people, more experience, and more money in that specific area than Magic Leap. That is usually the folly of trying to invent everything, as ML is doing. They can only make so much progress in each area.

      Magic Leap's big claim to fame is their "light field" (not really, but that is their perversion of the name) technology. In this case Avegant appears to have solved the problem for a fraction of the R&D cost.

      In the case of FOV, Avegant is in a better position because they are not using waveguides. They can go essentially arbitrarily wide, whereas there are "physics problems" for waveguides.

      In terms of image quality, I have yet to see a waveguide with image quality anywhere near that of simple reflective and refractive optics. Waveguides sacrifice image quality to get flatness.

      Magic Leap appears to be taking a hundred steps in different directions and not getting very far very fast in any of them. They may only have 20% of their company working on the display itself, in which case they are likely building the other 80% on a faulty foundation.

      • They are very different things, but in some ways related. It is important to understand that DAQRI's Software Defined Light came from their acquisition of Two Trees Photonics back in March 2016. Unlike Hololens, which uses "marketing ONLY holograms," Two Trees' technology actually generates holograms using laser light. This all came from research work out of Cambridge (UK) that led to Two Trees and another company called Light Blue Optics. Both Two Trees and Light Blue used lasers with LCOS to phase modulate the laser light to create holograms.

        I'm very far from an expert on holograms, but I understand there are "2-D" and "3-D" volumetric holograms. Likely the DAQRI HUD is using layers of 2-D holograms to create depth and not a single 3-D hologram, which my understanding is would be extremely difficult to do (at least computationally); this is why you see in the article you linked to the idea of layers of images rather than a single 3-D image. Even here, generating even a single 2-D laser hologram is very complicated compared to spatial light modulator techniques.

        I'm guessing that the DAQRI/Two Trees display takes advantage of the fact that most HUDs have "sparse content," as in simple images on clear/black backgrounds. I would assume this helps with the computational load. The reason Two Trees could generate a bright enough image with relatively low-power lasers is that using holograms means they only need laser light based on the content of the display; since HUD images are sparse, they only need to generate light for the average pixel value; conceptually, the hologram "steers" the light to where it is needed.

        The computational requirements would seem to me to keep true holograms out of head-mounted displays for a long time. But both Avegant and DAQRI are using planes of depth rather than "light fields" (in the case of Avegant) or 3-D holograms (in the case of DAQRI).

        For reference, below are some papers on the Cambridge/Two Trees/Light Blue Optics holograms:

        https://www.researchgate.net/profile/Jamieson_Christmas/publication/273047366_Holographic_Automotive_Head_Up_Displays/links/54f598860cf2ba615066f45f.pdf
        http://repositories.vnu.edu.vn/jspui/bitstream/123456789/15664/1/InTech-Computer_generated_phase_only_holograms_for_real_time_image_display.pdf
        http://lightblueoptics.rloc.eu/wp-content/uploads/2008/10/paper_final.pdf

      • Thanks for the reply!

        You don’t seem to think solving the V/A conflict problem is a game changer for AR/MR. In your opinion what would make AR/MR as compelling as smart phones?

        Also, would solving the V/A conflict problem have a significant impact on the quality of VR?

  7. What could the "no moving parts" varifocal element they use be? The only static VFEs I can think of are polarizing beam splitters (by changing polarization you change the optical path) or non-linear crystals. Both types seem to be very limiting on the FOV. Any ideas?

    • I have some ideas. Note that DLP and LCOS reflect light and that the character of the light rays coming in is the same as they are going out. This is key to laser-illuminated LCOS (and DLP) being focus free.

      You could modify the characteristic of the light illuminating the DLP or LCOS. You could say have multiple light sources that illuminate the display device differently. You could also have some form of electrically alterable diffraction grating.

      • What is your impression of the Avegant focal fields, did they produce a smooth defocused image, or did it have some peculiar bokeh? Maybe it is indeed some multi-point illumination as you suggest.

  8. Karl, your blog is awesome; even though I do not understand much of the technical aspects, it adds an expert's perspective I would not have gotten elsewhere. I have been invested in Himax since 2011 and, one thing leading to another, I ended up on your blog. I remembered you bought some eMagin shares and that scared me: an expert buying shares in a competitor company in a field I have no idea about. But when I assess the reasons I bought Himax, it wasn't because of Google Glass or the rest. They have diversified their customers and products, and they have good management and a great track record. That should speak for itself, but I guess I don't want eMagin, Microvision, Kopin or the hundreds of other companies to overtake Himax in this field of AR/VR; there's no winner now, but who knows. I believe in the company I invested in, and hopefully it truly pays off some day for this medical student stumbling on his path to become a doctor. Thank you Karl, your blog is very insightful and way better than many news articles. "A lesson here is that even small companies file patents on concepts that they don't use." sucks for all the inventors that actually make a product that collides with patented ideas.

    • Thanks,

      Hopefully you noted I put a lot of caveats around that eMagin stock purchase. There certainly is increasing use of OLED microdisplays in near-eye designs, but all but a very few can be traced to using Sony devices. OLEDs are still a poor choice for see-through (AR/MR) displays due to the lack of nits available. OLEDs put out diffuse light, and even with the same light output (lumens) they can't come close to LCOS and DLP, which can deliver 100x the nits if needed.

      Hopefully you have also read my warnings that I think the whole AR/MR market is being over-hyped. There are a number of serious challenges yet to be solved, and I don't see them being solved for 10+ years. The combination of weight, resolution, power, cost, user interface, and a dozen more serious problems has yet to be solved SIMULTANEOUSLY. They can sometimes solve one problem, but only at the expense of many others. You will likely see a bunch of retreats from the mass market like happened with Google Glass and more recently Hololens.

      You should also understand that LCOS is a very small part of Himax. Their big business is flat-panel display drivers. Also, while LCOS can deliver the nits/brightness and is less expensive and lower power than DLP, Himax has not demonstrated what I would consider very good image quality (particularly contrast). They are the go-to manufacturer of LCOS these days, but that could be fleeting.

      • Karl, in which type of optical engine will contrast and image quality be better: one based on DLP or on Himax LCOS panels?

      • It's a pretty complicated question, as it turns out, because it also depends on the optical design and not just the panel. DLP engines cost more and consume more power, and they typically have more expensive optical designs that are bigger. The Himax panels I have seen typically have lower contrast and lower image quality. So the simple answer for the headsets I have seen is that DLP will have better image quality for the same number of "real" pixels; but it is possible to do better than the Himax designs I have seen. It is possible to have better image quality with LCOS (witness some of the Sony SXRD, their name for LCOS, and JVC-Kenwood LCOS projectors), but it is tough to get that great image quality when you are on a budget and cramming it into a headset.

  9. Karl, thank you for your interesting articles.

    Could you prepare an article about Lumus?
    I think you had a demo of the latest Lumus prototype, the Dev-Kit-50, at CES:
    https://lumusvision.com/products/dev-kit-50/
    https://www.androidheadlines.com/2017/01/lumus-unveils-dk-50-ar-development-kit-at-ces-2017.html

    Also, please review Meta Helmet.

    It would be very interesting to read an article from you about the latest Lumus dev kit, with a conclusion and a comparison table of modern products on the market: ODG R9, HoloLens, Avegant, Daqri, Meta.

    • Thanks, but I don’t have time for a full article on Lumus right now. I also did not have the ability to fully analyze their technology. Below are some of the pros and cons based on what I know.

      Pros:
      1. Good real world transmissivity (about 80%)
      2. Flat and reasonably thin
      3. Expected (Lumus claims, but seems reasonable) much less expensive than other waveguide technologies
      4. Good image quality for a transmissive display (ALL transmissive technologies degrade image quality).
      5. Reasonably wide field of view (generally better than other waveguide technology).

      Cons.
      1. Very inefficient – a lot of light is lost through the layers of the prisms and with the high transmissivity. They require a very bright light source.
      2. The image has (very thin) lines at the segment boundaries. On the wide FOV there are diagonal lines.
      3. There is still “waveguide glow” due to stray/error light.

      Overall to me, it is the best of all the “flat”/waveguide technologies I have seen for AR use. Per your list:
      1. The ODG R9 is not really a see through display with only about 5% of the real world light getting through. This eliminates it as being really an AR display.
      2. Hololens – I think it will be less expensive with better FOV. I think the image quality overall is better BUT I have not been able to seriously compare them.
      3. Avegant – The Glyph is not see through nor flat. The “Light Field” was just a prototype and very big and bulky and hardly flat. It was also only about 50% transmissive.
      4. I have not evaluated Daqri, but it is very expensive.
      5. Meta’s design is very primitive and huge. It is not very see through. Really in a whole different class.
      6. I have not seen the Digilens image – I think their deal with BMW was only for a one-off prototype. I don’t know of anyone using it in production which suggests it may be expensive or have other practical issues.

  10. Dear Karl:

    There is a company called SD Optics; they use a MEMS reflective mirror to control the focal depth. Do you think Avegant applies such technology to control the various focal planes?

    • SD Optics' technology is certainly aimed at doing this (and some of their videos show it). It looks like it would be VERY expensive. I also wonder about issues of resolution/segmenting/diffraction of the image. Effectively it is a variable Fresnel mirror with discontinuities that should cause image problems. It could be used in the illumination system to vary the angle of light, but I would worry about it on the image side of the optics.

      I would tend to doubt Avegant would be using this in their prototype but I don’t have any evidence.

      • Thanks for the suggestion. I am also keeping an eye on another company, "Deep Optics," which uses liquid crystal to control the focus. However, the response time is a bit slow, 100ms per depth change. Have you seen any other approaches to solving the VAC issue?

        Recently I saw an article about an electrically controlled membrane, but it ends up with a large HMD form factor.

        I really wonder how Avegant solves it…

      • You can imagine that they would need something on the order of 100 microseconds (about 1,000 times faster than Deep Optics), and preferably faster, as any time taken to switch the focus needs to be completed and "settled" before the focal-plane image can start being displayed. This pretty much rules out many potential electro-mechanical/electro-optical focus-changing methods.

        Avegant has said that their method for changing focus is a fully electrical effect and does not involve any physical movement to change focus. My guess would be that they are doing something in the illumination path, as LEDs (direct, not phosphor-converted ones) have switching times on the order of just a few nanoseconds; more than fast enough.
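To put rough numbers on that timing budget, the sketch below assumes a 60Hz frame rate, the 3-to-6 plane range guessed at earlier in the article, and an arbitrary allowance that the focus switch may consume about 5% of each plane's time slice:

```python
# Rough timing budget for time-sequential focal planes (assumed numbers).

FRAME_RATE = 60                   # frames per second
SWITCH_BUDGET_FRACTION = 0.05     # assume the switch may eat ~5% of each slice

frame_time_us = 1e6 / FRAME_RATE  # one frame in microseconds

for planes in (3, 6):
    slice_us = frame_time_us / planes               # time available per plane
    max_switch_us = slice_us * SWITCH_BUDGET_FRACTION
    print(f"{planes} planes: {slice_us:7.0f} us per plane, "
          f"switch should settle in ~{max_switch_us:4.0f} us")
```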

  11. Dear Karl:

    I did have a chance to talk to Avegant about the device. You are correct; they confirmed that the multiple images forming the discrete depth information are produced by a time-division multiplexing method. They did not say exactly how they dynamically reassign the image to different focal planes. However, they mentioned that there is an electrically switchable device running at very high speed, which can dynamically present the images at different depths. So I think they probably use a Deformable Membrane Mirror Device (DMMD) to control it.
