Magic Leap: Focus Planes (Too) Are a Dead End

What Magic Leap Appears to be Doing

For this article I would like to dig into the most likely display and optics Magic Leap (ML) is developing for their Product Equivalent (PEQ). The PEQ was discussed in “The Information” story “The Reality Behind Magic Leap.” As I explained in my November 20, 2016 article Separating Magic and Reality (before the Dec. 8th “The Information” story), the ML patent application US 2016/0327789 best fits the available evidence, and if anything the “The Information” article reinforces that conclusion. Recapping the evidence:

  1. ML uses a “spatial light modulator” as stated in “The Information”
  2. Most likely it is an LCOS spatial light modulator; an Oct. 27th, 2016 Business Insider article, citing “KGI Securities analyst Ming-Chi Kuo, who has a reputation for being tapped into the Asian consumer electronics supply chain,” claims ML is using a Himax LCOS device.
  3. Focus planes to support vergence/accommodation per many ML presentations and their patent applications
  4. Uses waveguides which fit the description and pictures of what ML calls a “Photonics Chip”
  5. Does not have a separate focus mechanism as reported in the “The Information” article.
  6. Could fit the form factor as suggested in “The Information”
  7. It’s the only patent that shows serious optical design and that also uses what could be considered a “Photonics chip.”

I can’t say with certainty that the optical path is that of application 2016/0327789. It is just the only optical path in the ML patent applications that fits all the available evidence and has a chance of working.

Field of View (FOV)

Rony Abovitz, ML CEO, is claiming a larger FOV. I would think ML would not want to have lower angular resolution than Hololens. Keeping the same 1.7 arc minutes per pixel angular resolution as Hololens and ODG’s Horizon, a 1920-pixel-wide (1080p) display would give a horizontal FOV of about 54.4 degrees.

Note, there are rumors that Hololens is going to be moving to a 1080p device next year, so ML may still not have an advantage by the time they actually have a product. There is a chance that ML will just use a 720p device, at least at first, and accept a lower angular resolution of say 2.5 arc minutes or greater to get into the 54+ degree FOV range. Supporting a larger FOV is no small trick with waveguides and is one thing that ML might have over Hololens; but then again, Hololens is not standing still.
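To make the FOV arithmetic explicit, here is a minimal sketch; the 1920- and 1280-pixel display widths are my assumptions for 1080p and 720p devices, not confirmed specifications from ML or Microsoft.

```python
# Horizontal FOV implied by a per-pixel angular resolution (a sketch; the
# 1080p and 720p display widths below are assumptions, not confirmed specs).

def horizontal_fov_deg(h_pixels, arcmin_per_pixel):
    return h_pixels * arcmin_per_pixel / 60.0  # 60 arc minutes per degree

print(horizontal_fov_deg(1920, 1.7))   # 1080p-wide display -> 54.4 degrees
print(horizontal_fov_deg(1280, 2.55))  # 720p-wide display  -> ~54.4 degrees
```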

Sequential Focus Planes Domino Effect

The support of vergence/accommodation appears to be a paramount issue with ML. Light fields are woefully impractical at any reasonable resolution, so ML in their patent applications and some of their demo videos show the concept of “focus planes.” But for every focus plane, an image has to be generated and displayed.

Having more than one display per eye, including the optics to combine the multiple displays, would be both very costly and physically large. So the only rational way ML could support focus planes is to use a single display device and sequentially display the focus planes. But as I will outline below, using sequential focus planes to address vergence/accommodation comes at the cost of hurting other visual comfort issues.

Expect Field Sequential Color Breakup If Magic Leap Supports “Focus Planes”

Both high resolution LCOS and DLP displays use “field sequential color,” where a single set of mirrors displays one color plane at a time. To get the colors to fuse together in the eye, they repeat the same colors multiple times per frame of an image. Where I have serious problems with ML using Himax LCOS is that instead of repeating colors to reduce color breakup, they will instead be showing different images to support sequential focus planes. Even with just two focus planes, as suggested in “The Information,” the rate at which colors are repeated to help them fuse in the eye is cut in half.

On Hololens, which also uses a field sequential color LCOS device, one can already detect breakup. Cutting the color update rate by a factor of 2 or more will make this problem significantly worse.
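For readers who want to run the numbers, here is a small sketch of how sequential focus planes cut into the color repeat budget; the 360 color fields per second figure matches the Himax rate discussed in the comments below, and the rest is simple arithmetic.

```python
# How many times each color can repeat per 1/60th of a second (a sketch).
# field_rate_hz counts color fields per second (one field = one of R, G, B).

def color_repeats_per_60th(field_rate_hz, num_focus_planes=1):
    fields_per_60th = field_rate_hz / 60.0
    # Each focus plane needs its own set of R, G, and B fields.
    return fields_per_60th / (3 * num_focus_planes)

print(color_repeats_per_60th(360, 1))  # 2.0 repeats per color (already marginal)
print(color_repeats_per_60th(360, 2))  # 1.0 repeat per color per focus plane
```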

Another interesting factor is that field sequential color breakup tends to be more noticeable in people’s peripheral vision, which is more motion/change sensitive. This means the problem will tend to get worse as the FOV increases.

I have worked many years with field sequential display devices, specifically LCOS. Based on this experience, I expect that the human vision system will do a poor job of “fusing” the colors at such slow color field update rates, and I expect people will see a lot of field sequential color breakup, particularly when objects move.

In short, I expect a lot of color breakup to be noticeable if ML supports focus planes with a field sequential color device (LCOS or DLP).

Focus Planes Hurt Latency/Lag and Will Cause Double Images

An important factor in human comfort is the latency/lag between any head movement and the display reacting; this lag can cause user discomfort. A web search will turn up thousands of references on this problem.

To support focus planes, ML must use a display fast enough to support at least 120 frames per second. But to support just two focus planes, it will take 1/60th of a second to sequentially display both. Thus they increase the total latency/lag from the time they sense movement until the display is fully updated by ~8.33 milliseconds, and this is on top of any other processing latency. So really, focus planes trade off one discomfort issue, vergence/accommodation, for another, latency/lag.
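As a sanity check on the ~8.33 millisecond figure, here is a minimal sketch of the added latency, assuming the focus planes are displayed back to back at a fixed per-plane rate:

```python
# Added display latency from showing focus planes sequentially (a sketch).
# plane_rate_hz is how many focus-plane images the display shows per second
# (120 here, i.e., two planes within each 1/60th of a second).

def added_latency_ms(num_planes, plane_rate_hz=120.0):
    # The last plane finishes (num_planes - 1) plane-times after the first.
    return (num_planes - 1) * 1000.0 / plane_rate_hz

print(added_latency_ms(2))  # ~8.33 ms on top of any other processing latency
```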

Another issue which concerns me is how well sequential focus planes are going to fuse in the eye. With fast movement, the eye/brain visual system takes its own asynchronous “snapshots” and tries to assemble the information and line it up. But as with field sequential color, it can put together time-sequential information wrong, particularly if some objects in the image move and others don’t. The result will be double images; with sequential focus planes, double images would be unavoidable with fast movement, whether in the virtual world or when a person moves their eyes. These problems will be compounded by field sequential color breakup.

Focus Planes Are a Dead End – Might Magic Leap Have Given Up On Them?

I don’t know all the behind-the-scenes issues with what ML told investors, and maybe ML has been hemmed in by their own words and demos to investors. But as an engineer with most of my 37 years in the industry spent working on image generation and display, it looks to me like focus planes cause bigger problems than they solve.

What gets me is that they should have figured out that focus planes were hopeless in the first few months (much sooner if someone who knew what they were doing was there). Maybe they were ego driven, and/or they built too much around the impression they made with their “Beast” demo system (a big system using DLPs). Then maybe they hand-waved away the problems sequential focus planes cause, thinking they could fix them somehow, or hoped that people wouldn’t notice. It would certainly not be the first time that a company committed to a direction and then felt it had gone too far to change course. Then there is always the hope that “dumb consumers” won’t see the problems (in this case, I think they will).

It is clear to me that, like Fiber Scan Displays (FSD), focus planes are a dead end, period, full stop. Vergence/accommodation is a real issue, but only for objects that get reasonably close to the user. I think a much more rational way to address the issue is to use sensors to track the eyes/pupils and adjust the image accordingly; since the eye’s focus changes relatively slowly, it should be possible to keep up. In short, move the problem from the physical display and optics domain (which will remain costly and problematic) to the sensor and processing domain (which will come down in cost much more rapidly).
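To make the idea concrete, here is a rough sketch of the sensing-and-software approach; the blur model and every name in it are my own illustration, not anything taken from ML’s or a competitor’s design.

```python
# Sketch of the eye-tracking alternative: render a single focus plane and
# digitally blur objects away from the depth the eyes are converged on.
# The linear blur model and its strength constant are hypothetical.

def blur_radius_px(obj_depth_m, gaze_depth_m, strength_px_per_diopter=4.0):
    # Defocus grows with the difference in diopters (1/distance) between
    # the object and the depth the viewer is accommodated to.
    diopter_error = abs(1.0 / obj_depth_m - 1.0 / gaze_depth_m)
    return strength_px_per_diopter * diopter_error

# Viewer converged at 1 m: a 3 m object is ~0.67 diopters out of focus.
print(blur_radius_px(3.0, 1.0))  # ~2.67 px of synthetic blur
print(blur_radius_px(1.0, 1.0))  # 0.0 px (rendered sharp)
```

Since the eye takes a significant fraction of a second to re-accommodate (see the comments below), a tracker-driven update along these lines has far more timing margin than sequentially displayed focus planes.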

If I were at Hololens, ODG, or any other company working on AR/MR systems and accepted that vergence/accommodation is a problem that needs to be solved, I would solve it with eye/pupil sensing and processing, not by screwing up everything else by doing it with optics and displays. ML’s competitors have had enough warning to already be well into developing solutions, if they weren’t before ML made such a big deal about this already well-known issue.

The question I’m left with is if and when Magic Leap figured this out, and whether they were too committed to focus planes, by ego or by what they told investors, to change course at that point. I have not found evidence so far in their patent applications that they tried to change course, but patent applications run about 18 months or more behind what a company decides to do. But if they don’t use focus planes, they would have to admit that they are much closer to Hololens and the other competitors than they would like the market to think.

Karl Guttag

33 Comments

  1. Hi Karl,

    Great analysis, as usual!

    Years ago I built a simple two focal planes display using two LCoS panels, positioned on adjoining sides of a polarizing beam-splitter cube such that LED illumination with one polarization state passed through onto one panel and the other polarization state was reflected onto the other panel. No additional optics were required to combine them.

    One focal plane was presented at 1 m optical distance, the other at 3 m. The focus difference between 3 m and infinity is 0.333 Diopters which is barely noticeable to most people. So two planes were enough to present objects at “arm’s length” (1 m) and “far away” (>= 3 m.)

    If ML can tolerate the cost of a second LCoS panel per eye then I suppose they could take this approach too to avoid the color breakup and latency problems you mention. Am I missing something?

    Regards,
    Fergal.

    • Fergal, certainly you can do this for two focus planes. It does get expensive, both in terms of the cost of two panels and the alignment; it is extremely important that the two panels be optically parallel or you will get focus run-out. It also can get a little big.

      I was looking for this in the ML patent literature but didn’t see them doing it.

      With only two focus planes, it gets tricky to support the arbitrary real world. Very quickly you find yourself needing eye tracking anyway, and then generating in-focus and out-of-focus pixels accordingly. By the time you do all this, why not just have one focus plane and do everything with sensing and software?

      I understand where it can cause interesting effects in a limited demo, but I don’t see how it supports the totally arbitrary case.

      • Hi Karl,

        Thanks. It’s difficult to argue against the simplicity of using eye tracking and a single focus plane.

        Maybe the variable focus mechanisms aren’t fast enough to keep up with eyes, and that introduces another kind of latency/lag? That used to be the case, which is why I vibrated my focus mechanism at a resonant frequency of around 1 kHz, which was good for fast switching between multiple focus planes but not for a single stationary one.

        Fergal.

      • I’m not 100% sure, but I know from my work in automotive HUD that the human focus “system” is relatively slow. That is one of the advantages of HUDs: they move the image into your far vision so you don’t have to refocus. I have read it takes on the order of 1/3rd of a second or more for the eye to go from infinity to near focus or back. That would be about twenty 1/60th-of-a-second time periods, which would seem to give a lot of time to get the focus close to right.

        I have not seen any studies on how perfect the focus would have to be to significantly reduce/eliminate the vergence/accommodation issue, but it would seem you could keep there from being a great difference. A question I have is how well “digital blurring” would fool the whole brain/eye system into thinking that the focus had changed. Does the eye have a more subtle mechanism for determining focus?

      • Apologies in advance if I mislead you or your readers with these half-remembered details from nearly 20 years ago when I was really interested in this stuff:

        * Does focus have to be perfect to avoid conflict in the vergence-accommodation system? No. Optometrists say that most people tolerate 0.25 to 0.5 diopter of error without problems.

        * Does the eye have subtle mechanisms for determining focus? Yes. Optical blur might actually be necessary. And I vaguely remember something about chromatic aberration being a stimulus too.

      • I think it is clear that you don’t have to be perfectly accurate. The problem comes when you have a big difference. The depth of focus of the eye is going to change with illumination as the eye stops down with more light (a big reason why people can see more clearly with more light).

  2. Karl,

    You wrote :

    “The Hololens which also uses a field sequential color LCOS one can already detect breakup. ……

    …. Another interesting factor is that field sequential color breakup tends to be more noticeable by people’s peripheral vision which is more motion/change sensitive. This means the problem will tend to get worse as the FOV increases.”

    It appears to me that Hololens pursuing a wider FOV in their next generation will only exacerbate the existing “color breakup” problem using LCOS with their current optics.

    Furthermore, using diffractive waveguides adds nothing to a sleek form factor when a protective shield is used.

    What do you think are the chances that the 2nd generation Hololens follows a similar route as ODG in using OLED displays and similar optics?

    Microsoft looks like they could use a little help figuring things out:

    https://careers.microsoft.com/jobdetails.aspx?jid=223660&memid=1210520467&utm_source=Indeed

    • Frankenberry, I can count on you to make the case for micro-OLEDs :-).

      To some degree, Microsoft appears to have, so to speak, “fallen in love with the shape of a radiator grill and built an entire car around it, no matter if it resulted in a lousy overall car.” They got the waveguide technology from Nokia, and it seems to be the reason for doing Hololens. They want the “sunglasses look” even if the sunglasses are inside a motorcycle helmet (so to speak). But maybe they have realized the error of their ways (I don’t know); they did pay ODG $150M for an I.P. license, and interestingly ODG looks to be using this money to compete with Hololens (and just raised another $58M).

      I certainly have my ideas for what they should do. They certainly should not have been messing around with waveguides on their prototypes, but rather getting more systems out there.

      I like to think I could help them, but I’m not ready to move from the Austin area to Mountain View (where that job posting is located). I love visiting California but do not want to live there.

  3. Microsoft Paid Up To $150M To Buy Wearable Computing IP From The Osterhout Design Group

    MSFT owns the IP from ODG and granted ODG unlimited use of the original patents. ODG has new patents that they didn’t sell.

    https://techcrunch.com/2014/03/27/microsoft-paid-up-to-150m-to-buy-wearable-computing-ip-from-the-osterhout-design-group/

    Nokia didn’t sell their waveguide IP, MSFT paid $1.6B to license the IP.

    https://news.microsoft.com/2013/09/03/microsoft-to-acquire-nokias-devices-services-business-license-nokias-patents-and-mapping-services/#sm.000013gdrucfiqcwnqoibz0i739h2

    If MSFT bought the waveguide patents, why is Nokia still paying fees?

    https://www.google.com/patents/US7903921?dq=manufacturing+optical+waveguide+nokia&hl=en&sa=X&ved=0ahUKEwiF_vKVy_rQAhXLrFQKHdTzBnwQ6AEIHDAA

    Take a look at this Himax patent and see what you think.

    http://pdfaiw.uspto.gov/.aiw?docid=20160327722&PageNum=1&&IDKey=D89D0397CC56&HomeUrl=http://appft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1%2526Sect2=HITOFF%2526d=PG01%2526p=1%2526u=/netahtml/PTO/srchnum.html%2526r=1%2526f=G%2526l=50%2526s1=20160327722.PGNR.%2526OS=%2526RS=

    • Thanks for the clarification on the ODG/Microsoft and Nokia/Microsoft IP arrangements.

      Thanks for the patent about Himax. It is an interesting take on making a “flat LCOS” module that would be similar in functionality to OLED. It would certainly simplify the rest of the optical design and would be more compact, but the module itself appears to be somewhat complex. It is not clear what the optical/image quality and performance/efficiency would be. I would also wonder about the cost. Very importantly, it might be the kind of thing one would use to make a very small near-eye device, including at Magic Leap.

      • The Himax technology in this patent can be applied to any of their CS (color sequential) LCoS devices, and Himax has a 1080p device they aren’t talking about. Lumus Optical has created a new waveguide optical engine with this tech.

        Further, Jordan Wu confirmed the expansion of their LCoS production line on their last earnings call for the next generation of technology they have already developed. Cost is a fraction of the cost of OLED, and end products with this chip are in testing.

      • Thanks for all the information. I assume Himax has a 1080p device; heck, it has been about 6 years since they had samples of their 720p (circa 2010 or 2011).

        Clearly this reduces the optical path, which would be good for Lumus, Hololens, and Magic Leap. Do you know anything about the image quality, in particular illumination uniformity and contrast? Also, do you know how bright they can get?

        While a lot will depend on how much Micro-OLED can come down in cost, there are other differentiating factors between LCOS and Micro-OLED. A lot of people will like that OLEDs are not FSC; they are generally high in contrast and likely more power efficient. Micro-OLEDs are generally at a disadvantage on cost, max brightness, and lifetime; they don’t work with diffractive/holographic optics; and they are bigger than FSC devices for the same resolution (generally much bigger). So even if Micro-OLEDs are more expensive, they will still find their way into some systems; price is only one significant factor.

        BTW – Please understand that I’m not just writing back to you with my responses, as you probably know much of the above. I’m not trying to be pejorative; rather, I’m trying to add information for the other readers of these comments.

      • Karl,

        I always appreciate the insight that you provide for readers to help everyone better understand the technology.

        Here’s the link to the white paper from SID 2015 for color sequential front-lit LCOS. (You will need an account for the full article with testing results.) Himax applied for the provisional patent during product testing and fast-tracked approval. They are now preparing for commercial release.

        One other point I’d like to make is Himax’s ability to drive the display at 360Hz.

        Here are some older articles from Himax –

        LCOS Panel Using Novel Color Sequential Technology – From 2007

        http://onlinelibrary.wiley.com/doi/10.1889/1.2785249/full

        Temperature effects on viscosity were an issue in the past for 360Hz.

        Liquid-Crystal-on-Silicon Backplane for Color-Sequential and Color-Filter Projector Applications

        http://www.cdr.ust.hk/publications/research_pub/2009/SID09_P172.pdf

        Here’s a good white paper that addresses the 360Hz solution. Himax was directly connected to this study.

        Fast-response liquid crystals for high image quality wearable displays

        https://www.osapublishing.org/DirectPDFAccess/D195FA79-ECE7-BC2B-3ADE5555B6EF5A1A_312060/ome-5-3-603.pdf?da=1&id=312060&seq=0&mobile=no

        “We have explored two LC mixtures with ultra-low rotational viscosity. These new LCs exhibit several attractive features for wearable displays based on field sequential color LCOS: (1) Submillisecond response time at room temperature while keeping vivid colors at −20°C. (2) Low power consumption by avoiding the need of a heating device. (3) High brightness and excellent ambient contrast ratio. (4) Suppressed color breakup with higher frame rate and fast LC response time. (5) Standard LCOS cell gap, which is easy for mass production. This fast-response LCOS is promising for next generation wearable displays.”

        Here’s where you need to look for 360Hz. Noted at the bottom: “The authors would like to thank Dr. Simon Fan-Chiang of Himax Display and Fenglin Peng for helpful discussion, and AFOSR for partial financial support.”

        https://www.osapublishing.org/ome/fulltext.cfm?uri=ome-7-1-195&id=356157

      • Thanks for all the references. But I think there is a confusion of terms between “fields” and “frames.” A field is only one color; a “frame” requires all 3 color fields. The 2007 Himax paper gets the field rate up to 360, which would only support 360/3 = 120 full frames per second. Each of the R, G, and B fields would only be repeated twice per 1/60th of a second, which is marginal for reducing FSC breakup. If ML uses it for two focus planes, then each focus plane will only get one each of R, G, and B per 1/60th of a second.

        Looking briefly at the Himax 2015 “flat LCOS” paper, it looks like uniformity might be an issue. Still it is an interesting development.

        The U of Florida papers appear to be about a fast-switching VA, which is good news because VA is usually much slower than twisted nematic (Tn). VA has other desirable characteristics, including being more durable and having higher contrast (this is what Sony calls SXRD). But this looks to only get VA into the same range of speed as Tn liquid crystal; they are getting it into the 360-field range, which is fast for VA but not for Tn.

      • Karl, sorry, I forgot to add the post.

        Color Sequential Front-Lit LCOS for Wearable Displays

        http://onlinelibrary.wiley.com/doi/10.1002/sdtp.10215/epdf?r3_referer=wol&tracking_action=preview_click&show_checkout=1&purchase_referrer=www.google.com&purchase_site_license=LICENSE_DENIED_NO_CUSTOMER\

        Abstract: A color sequential front-lit (CS-FL) LCOS optical design, which brings advantages of wide color gamut, high resolution, and high contrast for wearable applications, is presented. On the basis of our recent color filter front-lit (CF-FL) LCOS experimental result, which delivers 100 nits/mW (i.e., 17,000 nits at 30 mA driving current of a dual-chip LED), we demonstrated a new CS-FL design of 116 nits/mW using RGB LED and converted green phosphor. The color non-uniformity problem caused by separated individual RGB LED chips was solved by a hollow integrator rod and a sheet of diffuser. Furthermore, together with a mixer light guide (MLG) with grating, spatially averaged RGB light can have a much more collimated angular intensity and uniform illuminance distribution to the PBS light guide (PBSLG). Besides, the color luminance non-uniformity (standard deviation of color difference) of this newly designed CS-FL can be reduced 11% compared to that of the original CF-FL.

  4. Why can’t they make the LCOS way higher speed to offset the color breakup effect?

    Presumably if you could reach 240 fps then you could have two focal planes at 120 Hz each and then there would be no issue.

    • Good question. Understand that everyone making LCOS would WANT to go to a faster frame (RGB) rate if they could. What limits them:

      1. The Liquid Crystal (LC) takes time to switch/change. There is a range of LC “blends/types” that switch faster and have birefringence high enough to work in a thin gap, and they are already using the faster-switching LCs.
      2. Generally you trade off some other desirable characteristics, such as contrast, as you go to faster LC blends/types. The highest contrast LCs I have seen are the Vertically Aligned Nematics (VAN), but these are about 3x slower.
      3. You have to get the data in and out of the I.C. that controls the LC and get the LC to settle. When you have 2 million (1080p) pixels, this is a lot of data. Some LCOS devices are all digital and some are “hybrid” with digital input and analog pixel drive, and getting these to work at high speed is a big issue. LCOS generally runs in process technologies 3 to 5 generations behind “state of the art” CPUs, both because LC requires relatively high voltages that need big transistors and to keep display costs down.
      4. The power consumption of the display device and its associated control chips is roughly proportional to the frame rate, so the power goes up.

      Even if they could run at 240 fps (3×240 = 720 color fields per second), this would not totally guarantee that there would be no issues with FSC. Even Hololens, which does not do focus planes, has issues.

      Besides, I don’t see where just 2 focus planes really solve much other than making some novel demos. Two are not enough to give enough depth illusion, so they will have to use other information and tricks to get the depth they need. Why not just do all of it with sensors and software?

      • Per my just-posted response: that is 360Hz of color fields (one each of R, G, or B), not 360Hz of frames. They would only repeat R, G, and B twice per 1/60th of a second, which is marginal for reducing color field breakup.

  5. Karl,

    Hearing that MSFT put a hold on ordering more LCOS devices from Himax because they want to re-think things about the 2nd gen Hololens. Do you think it could be because they want to see improvements to the LCOS or because they want to use a different technology?

    • Where did you hear the order was on hold? I have not read this publicly. What I am seeing reported is that financial analysts are reducing their forecasts for Hololens sales.
      If you want to respond privately, please send an email to info@kguttag.com.

      Assuming it is true, there could be many reasons (not all of which are bad), and of course it is impossible to know without other information. These would include (note, these are just speculation and none of them may be true):

      1. Finding a better LCOS solution (resolution, frame rate, and/or contrast).
      2. Issues with Himax’s LCOS and they want to wait until there is improvement.
      3. Himax has a new “flat” illumination module that eliminates the need for a beam splitter and they could want to change to that solution.
      4. Deciding that waveguides are too expensive and changing out the waveguides and the displays
      5. Delaying the Hololens program while they collect more information from the first generation
      6. Scaling back the Hololens program because of management’s perception of the market and/or the time frame for when the technology will be ready to be a larger volume/consumer product.

      So changing technologies would be only one of many possibilities assuming your information is true.

    • Richie – you are wrong regarding Hololens. Himax has advanced LCoS technology that you need to read about; see my post above.

      • Hopefully this is all cleared up now. People get confused between “frame rate” and “field rate” all the time.

        Note they don’t have to repeat the colors an equal number of times. For example, with 480 fields per second they could do three each of red and green and only two of blue (3+3+2 = 8 fields per 1/60th-of-a-second frame), as sketched below.
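        To illustrate the arithmetic (my sketch, not from any Himax document):

```python
# Frame rate implied by an unequal color-field allocation (a sketch).
fields_per_frame = {"R": 3, "G": 3, "B": 2}        # 8 fields per frame
field_rate_hz = 480
frame_rate_hz = field_rate_hz / sum(fields_per_frame.values())
print(frame_rate_hz)  # 60.0 full frames per second
```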

      • Let me propose one more question for you….

        If Himax is able to achieve a 360Hz “frame rate” and uses two sets of RGB LEDs in their new front-lit design by using DIC-LC2, doesn’t that essentially eliminate color breakup in just about every normal use case?

        We demonstrate two ultra-low viscosity liquid crystal mixtures to enable field-sequential-color wearable displays for low temperature operation, while keeping a wide color gamut. Our mixtures offer ~4X faster response time than a commercial material at 20°C and ~8X faster at −20°C. Other major attractive features include: (1) submillisecond response time at room temperature and vivid color even at −20°C without a heating device, (2) high brightness and excellent ambient contrast ratio, and (3) suppressed color breakup with 360Hz frame rate.

        We have explored two LC mixtures with ultra-low rotational viscosity. These new LCs exhibit several attractive features for wearable displays based on field sequential color LCOS: (1) Submillisecond response time at room temperature while keeping vivid colors at −20°C. (2) Low power consumption by avoiding the need of a heating device. (3) High brightness and excellent ambient contrast ratio. (4) Suppressed color breakup with higher frame rate and fast LC response time. (5) Standard LCOS cell gap, which is easy for mass production. This fast-response LCOS is promising for next generation wearable displays.

        We first calculate the case for 180Hz frame rate (T = 5.55ms) and 100% LED turn-on duty (m = 1). Figure 3(c) shows the color gamut at different temperatures. For JC-1041, the color gamut is ~80% at 20°C, but quickly shrinks to 17% at 0°C and 0% at −20°C. On the other hand, DIC-LC2 shows 85% color gamut at 20°C and still maintains 73% color gamut at −20°C. Therefore, its LCOS image quality can still be well preserved even at low temperatures.

        Reducing the LED turn-on duty m is another way to suppress color mixing as it increases the temporal separation of each color field [22]. Figure 3(d) plots the color gamut ratio for 180Hz frame rate and 90% LED duty. The temporal separation between the color frames is 0.56ms. At 20°C, the decay time of all three LC material is less than 0.56ms, so they can all obtain ~100% color gamut. …..DIC-LC2 can still maintain 97% color gamut coverage at −20°C. Usually LED color gamut can cover >110% AdobeRGB color gamut, therefore the FSC LCOS can deliver 110%*97% = 106.7% AdobeRGB color gamut even at such a low temperature.

        https://www.osapublishing.org/ome/fulltext.cfm?uri=ome-5-3-603&id=312060

      • Thanks for the article.

        First, note that what they call a “frame rate” I would call a “color field rate.” So the 360 they refer to is the “field rate,” which gives 360/3 = 120 frames of RGB per second, or two full RGB frames per 1/60th of a second. What they seem to have is a low-viscosity LC that works at very low temperatures, down to −20C. That is usually not the problem I have encountered; very quickly the LEDs will heat up the LC. What I would worry about in practical designs is the contrast at, say, 30C to 50C. In my experience, LCs that work well at low temperatures often have contrast issues at higher temperatures.

        A 360Hz field rate is not fast enough to eliminate color field breakup, particularly for a headset. With all the head movement and vibration, you are much more susceptible to color field breakup. We were able to get over 540 fields per second at Syndiant when I left in 2011, though this assumed a minimum temperature of about 20C. Syndiant used a digital drive (time based with an SRAM bit) where Himax uses an “analog” drive (voltage based with a capacitive DRAM bit). Syndiant could also vary the field times between colors, as we were not waiting for the LC to “settle.”

    • Certainly a fast VAN would be interesting in the LCOS market. Typically VAN is about 1/2 to 1/3 the speed of Tn. But you have to look at all the various characteristics; there are many “knobs” to turn on LC mixtures, and usually you get something you want at the expense of hurting something else. VAN is also trickier to drive than Tn in my experience; it is much more susceptible to “tailing” and lateral field effects.
