Archive for Technology History

Mira Prism and Dreamworld AR – (What Disney Should Have Done?)

That Was Fast – Two “Bug-Eye” Headsets

A few days ago I published a story on the Disney-Lenovo optics and wondered why they didn’t use much simpler “bug-eye” combiner optics similar to the Meta-2 (below right), which currently sells in a development kit version for $949. It turns out that the very same day, Mira announced their Prism Headset, which is a totally passive headset with a mount for a phone and bug-eye combiners, at a “presale price” of $99 (proposed retail $150). Furthermore, in looking into what Mira was doing, I discovered that back on May 9th, 2017, DreamWorld announced their “DreamGlass” headset using bug-eye combiners that also includes tracking electronics and is supposed to cost “under $350” (see the Appendix for a note on a lawsuit between DreamWorld and Meta).

The way both of these work (Mira’s is shown on the left) is that the cell phone produces two small images, one for each eye, that reflect off the two curved semi-mirror combiners that are joined together. The combiners reflect part of the phone’s light and move the focus of the image out in space (because otherwise a human could not focus so close).

Real or Not?: Yes Mira, Not Yet DreamWorld

Mira has definitely built production quality headsets, as there are multiple reports of people trying them on and independent pictures of the headset, which looks to be close to, if not already, a finished product.

DreamWorld did not, at least as of their May 9th announcement, have a fully functional prototype per Upload’s article. What may appear to be “pictures” of the headset are 3-D renderings. Quoting Upload:

“Dreamworld’s inaugural AR headset is being called the Dreamworld Glass. UploadVR recently had the chance to try it out at the company’s offices but we were not allowed to take photos, nor did representatives provide us with photographs of the unit for this story.

The Glass we demoed came in two form factors. The first was a smaller, lighter model that was used primarily to show off the headset’s large field of view and basic head tracking. The second was significantly larger and was outfitted with “over the counter” depth sensors and cameras to achieve basic positional tracking. “

The bottom line here is that Mira’s headset appears nearly ready to ship, whereas DreamWorld still has a lot of work left to do and at this point is more of a concept than a product.

DreamWorld’s “Shot Directly From DreamWorld’s AR Glass” videos were shot through a combiner, but it may or may not have been their production combiner configured with the phone in the same place as in the production design.

I believe the views shown in the Mira videos are real, but they are, of course, shot separately: the people in the videos wearing the headset, and what the image looks like through the headset. I will get into one significant problem I found with Mira’s videos/design later (see the “Mira Prism’s Mechanical Interference” section below).

DreamWorld Versus Mira Optical Comparison

While both DreamWorld and Mira have similar optical designs, on closer inspection it is clear that there is a very different angle between the cell phone display and the combiners (see left). DreamWorld has the cell phone display nearly perpendicular to the combiner, whereas Mira has it nearly parallel. This difference in angle means that there will be more inherent optical distortion in the DreamWorld design, whereas the Mira design has the phone more in the way of the person’s vision, particularly if they wear glasses (once again, see the “Mira Prism’s Mechanical Interference” section below).

See-Through Trade-offs of AR

Almost all see-through designs waste most of the display’s light in combining the image with the real world light. Most designs lose 80% to 95% (sometimes more) of the display’s light. This in turn means you want to start with a display 20 to as much as 100 times (for outdoor use) the brightness of a cell phone. So even an “efficient” optical design has serious brightness problems when starting with a cell phone display (sorry, this is just a fact). There are some tricks to avoid these losses, but not if you are starting with the light from a cell phone’s display (broad spectrum and very diffuse).
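
To put rough numbers on this trade-off, here is a minimal Python sketch; the phone brightness, optical efficiency, and ambient light levels are illustrative assumptions, not measurements of any particular headset:

```python
# Rough see-through AR brightness budget (all numbers are illustrative assumptions).
phone_nits = 550              # assumed peak brightness of a typical cell phone display
optical_efficiency = 0.15     # assume the combiner optics deliver ~5% to 20%; use 15%
combiner_transmission = 0.70  # ~70% transparent combiner (the Mira estimate below)

virtual_image_nits = phone_nits * optical_efficiency
print(f"Virtual image brightness: ~{virtual_image_nits:.0f} nits")

# How much the virtual image stands out against what leaks through from the room.
for scene, ambient_nits in [("dim room", 50), ("typical room", 250), ("outdoors", 5000)]:
    background = ambient_nits * combiner_transmission
    contrast = (virtual_image_nits + background) / background
    print(f"{scene:12s}: image-to-background contrast ~{contrast:.2f}:1")
```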

One thing I was very critical of last time with the Disney-Lenovo headset was that it appeared to be blocking about 75% to 80% of the ambient/real-world light, which is equivalent to dark sunglasses. I don’t think any reasonable person would find blocking this much light acceptable for something claiming to be a “see-through” display.

From several pictures I have of Mira’s prototype, I very roughly calculated that they are about 70% transparent (light to medium dark sunglasses), which means they in turn are throwing away 70+% of the cell phone’s light. One of the images from Mira’s videos is shown below. I have outlined with a dashed line the approximate active FOV (the picture cuts it off on the bottom), which Mira claims covers about 60 degrees, and you can see the edge of the combiner lens (indicated by the arrows).

What is important to notice is that the images are somewhat faded and do not “dominate”/block out the real world. This appears true of all the through-the-optics images in Mira’s videos. The room, while not dark, is also not overly brightly lit. This is going to be a problem for any AR device using a cell phone as its display. With AR optics you are going to throw away a lot of the display’s light to support seeing through to the real world, and you have to compete with the light that is in the real world. You could turn the room lights out and/or look at black walls and tables, but then what is the point of being “see through.”

I also captured a through-the-optics image from DreamWorld’s DreamGlass video (below). The first thing that jumps out at me is how dark the room looks and that they have a very dark table. So while the images may look more “solid” than in the Mira video, most of this is due to the lighting of the room.

Because the DreamWorld background is darker, we can also see some of the optical issues with the design. In particular you should notice the “glow” around the various large objects (indicated by red arrows). There is also a bit of a double image of the word “home” (indicated by the green arrow). I don’t have an equivalent dark scene from Mira so I can’t tell if they have similar issues.

Mira Prism’s Resolution

Mira (only) supports the iPhone 6/6s/7 size display and not the larger “Plus” iPhones, which won’t fit. This gives them 1334 by 750 pixels to start with. The horizontal resolution first has to be split in half, and then about 20% of the center is used to separate the two images and center the left and right views with respect to the person’s eyes (this roughly 20% gap can be seen in Mira’s video). This nets about (1334/2) × 80% = ~534 pixels horizontally. Vertically they may have a slightly higher resolution of about 600 pixels.
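
Here is the same per-eye pixel arithmetic as a small Python sketch (the ~20% losses are my rough estimates from the video, not Mira’s numbers):

```python
# Effective per-eye resolution from splitting the phone display into left/right views.
phone_h, phone_v = 1334, 750      # iPhone 6/6s/7 in landscape
center_gap_loss = 0.20            # ~20% of each half lost to the center gap (estimate)
vertical_loss = 0.20              # assume a similar loss vertically (estimate)

h_per_eye = (phone_h / 2) * (1 - center_gap_loss)
v_per_eye = phone_v * (1 - vertical_loss)
print(f"~{h_per_eye:.0f} x {v_per_eye:.0f} pixels per eye")   # ~534 x 600
```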

Mira claims a FOV of “60 degrees,” and generally when a company does not specify whether it is horizontal, vertical, or diagonal, they mean diagonal because it is the bigger number. This would suggest that the horizontal FOV is about 40 degrees and the vertical is about 45 degrees. This nets out to a rather chunky 4.5 arcminutes/pixel (about the same as the Oculus Rift CV1 but with a narrower FOV). The “screen door effect” of seeing the boundaries between pixels is evident in Mira’s videos and should be noticeable when wearing the headset.
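
A quick check of those numbers, using a simple linear (small-angle) approximation of arcminutes per pixel:

```python
import math

h_fov_deg, v_fov_deg = 40.0, 45.0   # rough horizontal/vertical split of the 60-degree claim
h_pixels, v_pixels = 534, 600       # per-eye pixel estimate from above

print(f"Implied diagonal FOV: ~{math.hypot(h_fov_deg, v_fov_deg):.0f} degrees")   # ~60

# Linear approximation: FOV in arcminutes divided by pixel count.
print(f"~{h_fov_deg * 60 / h_pixels:.1f} arcmin/pixel horizontally, "
      f"~{v_fov_deg * 60 / v_pixels:.1f} arcmin/pixel vertically")                # ~4.5
```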

I’m not sure that supporting a bigger iPhone, as in the Plus size models, would help. This design requires that the left and right images be centered over the eyes, which limits where the pixels in the display can be located. Additionally, a larger phone would cause more mechanical interference issues (such as with glasses, covered in the next section).

Mira Prism’s Mechanical Interference

A big problem with a simple bug-eye combiner design is the location of the display device. For the best image quality you want the phone right in front of the eye and as parallel as possible to the combiners. You can’t see through the phone so they have to move it above the eye and tilt it from parallel. The more they move the phone up and tilt it, the more it will distort the image.

If you look at the upper right (“A”) still frame from Mira’s video below, you will see that the phone is just slightly above the eyes. The bottom of the phone holder is touching the top of the person’s glasses (large arrow in frame A). The video suggests (see frames “B” and “C”) that the person is looking down at something in their hand. But as indicated by the red sight line I have drawn in frames A and B, the person would have to be looking largely below the combiner and thus the image would at best be cut off (and not look like the image in frame C).

In fact, for the person with glasses in the video to see the whole image they would have to be looking up as indicated by the blue sight lines in frames A and B above. The still frame “D” shows how a person would look through the headset when not wearing glasses.

I can’t say whether this would be a problem for all types of glasses and head shapes, but it is certainly a problem that is demonstrated in Mira’s own video.

Mira’s design may be a bit too simple. I don’t see any adjustments other than the headband size. I don’t see any way to work around, say, the headset running into a person’s glasses as happens above.

Cost To Build Mira’s Prism

Mira’s design is very simple. The combiner technology is well known and can be sourced readily. Theoretically, Mira’s Prism should cost about the same to make as a number of so-called “HUD” displays that use a cell phone as the display device and a (single) curved combiner, which sell for between $20 and $50 (example on right). BTW, these “HUDs” are useless in daylight as a cell phone is just not bright enough. Mira needs a somewhat more complex combiner, hopefully of better quality than some of the so-called “HUDs,” so $99 is not totally out of line, and they should be able to make them at a profit for $99.

Conclusions On Simple Bug-Eye Combiner Optics With A Phone

First let me say I have discussed Mira’s Prism more than DreamWorld’s DreamGlass above because there is frankly more solid information on the Prism. DreamGlass seems to be more of a concept without tangible information.

The Mira headset is about as simple and inexpensive as one could make an AR see-through headset, assuming you can use a person’s smartphone. It does the minimum needed to let a person focus on a phone that is so close and to combine the image with the real world. Compared to, say, the Disney-Lenovo birdbath, it is going to make both the display and the real world more than 2X brighter. As Mira’s videos demonstrate, the images are still going to be ghostly and not very solid unless the room and/or background is pretty dark.

Simplicity has its downsides. The resolution is low, and the image is going to be a bit distorted (which can be corrected somewhat by software at the expense of some resolution). The current design appears to have mechanical interference problems with wearing glasses. It’s not clear if the design can be adapted to accommodate glasses, as that would seem to move the whole optical design around and might necessitate a bigger headset and combiners. Fundamentally, a phone is not bright enough to support a good see-through display in even moderately lit environments.

I don’t mean to be overly critical of Mira’s Prism as I think it is an interesting low cost entry product, sort of the “Google Cardboard” of AR (it certainly makes more sense than the Disney-Lenovo headset that was just announced). I would think a lot of people would want to play around with the Mira Prism and find uses for it at the $99 price point. I would expect to see others copying its basic design. Still, the Mira Prism demonstrates many of the issues with making a low cost see-through design.

DreamWorld’s DreamGlass on the surface makes much less sense to me. It should have all the optical limitations of the much less expensive Mira Prism. It is adding a lot of cost on top of a very limited display foundation, a smartphone’s display.

Appendix

Some History of Bug-Eye Optics

It should be noted that what I refer to as bug-eye combiner optics is an old concept. Per the picture on the left, taken from a 2005 Link/L3 paper, the concept goes back to at least 1988 using two CRTs as the displays. This paper includes a very interesting chart plotting the history of Link/L3 headsets (see below). Link’s legacy goes all the way back to airplane training simulators (famously used in World War II).

A major point of L3/Link’s later designs is that they used corrective optics between the display and the combiner to correct for the distortion caused by the off-axis relationship between the display and the combiner.

Meta and DreamWorld Lawsuit

The basic concept of dual large combiners in a headset is obviously an old idea (see above), but apparently Meta thinks that DreamWorld may have borrowed without asking a bit too much from the Meta-2. As reported in TechCrunch, “The lawsuit alleges that Zhong [Meta’s former Senior Optical Engineer] “shamelessly leveraged” his time at the company to “misappropriate confidential and trade secret information relating to Meta’s technologies”.”

Addendum

Holokit AR

Aryzon AR

There are at least two other contenders for the title of “Google Cardboard of AR,” namely the Aryzon and the Holokit, which both separate the job of the combiner from the focusing. Both put a Fresnel lens in between the phone and a flat semitransparent combiner. These designs are one step simpler/cheaper (and use cardboard for the structure) than Mira’s design, but are more bulky with the phone hanging out. An advantage of these designs is that everything is “on-axis,” which means lower distortion, but they have chromatic aberration (color separation) issues with the inexpensive Fresnel lenses that Mira’s mirror design won’t have. There may also be some Fresnel lens artifact issues with these designs.

Texas Instruments 99/4A and TMS9918 History

A little break from displays today to go back into my deep dark history. For my first 20 years in the industry, I was an I.C. designer and led the architecture of a number of CPUs and graphics devices.

I got a “shout out” of sorts in an IEEE article on the 99/4 computer by Wally Rhines, CEO of Mentor, about my work on the TMS9918 graphics unit, which was my first design (started in 1977). Contrary to what the article states, I was NOT the only designer; back then it took 7 “whole engineers” (quite a few fewer than today) to design a graphics chip, and I was the youngest person on the program. I think the 9918 took less than 1 year from raw concept to chip. Wally gave things from his perspective as a high level manager and he may be off in some details.

The 9918 coined the word “Sprites” and was used in the TI 99/4A, ColecoVision, and the MSX computer in Japan. It was the first consumer chip to directly interface to DRAMs (I came up with the drive scheme). Pete Macourek and I figured out how to make the sprites work, and then I did all the sprite logic and control design.

A “Z80-like” register-file-compatible superset clone of the 9918 was used in both the Nintendo (Nintendo was a software developer for Coleco) and Sega game systems, among others.

After working on the TMS9918, I led the architecture and early logic design of the TMS9995 (which resulted in my spending 6 months in Bedford, England), which is also mentioned in Wally’s article. If the TI Home Computer had not been cancelled, I would have had a major part in the design of both the CPU and the graphics chip on the 99/8 and 99/2.

Back in 1992, in the days of BBS bulletin boards, I was interviewed about the home computer. This was only about 10 years after the events, so they were fresher in my mind. At the time of the 1992 interview, I was working on the first fully programmable media processor (and alluded to it in the interview) that integrated 4 DSP CPUs and a RISC processor on a single device (called the TMS320C80 or MVP). Another “little thing” that came out of that program was the Synchronous DRAM. You see, I had designed the DRAM interface on the 9918 and the TMS340 graphics processor family and had worked on the Video DRAM (predecessor of today’s graphics DRAMs) and was tired of screwing with the analog interface of DRAMs; so in a nutshell, I worked with TI’s memory group to define the first SDRAM (one of the patents can be found here). The 320C80 was the first processor to directly interface with SDRAM because it was co-designed with them.

For anyone interested, I wrote some more about my TI Home Computer and 9918 history back in the early days of this blog in 2011.

Near Eye Displays (NEDs): Gaps In Pixel Sizes

I get a lot of questions to the effect of “what is the best technology for a near eye display (NED)?” There really is no “best,” as every technology has its strengths and weaknesses. I plan to write a few articles on this subject as it is way too big for a single article.

Update 2017-06-09: I added the Sony Z5 Premium 4K cell phone size LCD to the table. Its “pixel” is about 71% the linear dimension of the Samsung S8’s, or about half the area, but still much larger than any of the microdisplay pixels. But one thing I should add is that most cell phone makers are “cheating” on what they call a pixel. The Sony Z5 Premium’s “pixel” really only has 2/3rds of an R, G, and B per pixel it counts. It also has them in a strange 4-pixel zigzag that causes beat frequency artifacts when displaying full resolution 4K content (GSMARENA’s close-up pictures of the Z5 Premium fail to show the full resolution in both directions). Similarly, Samsung goes with RGBG-type patterns that only have 2/3rds of the full pixels in the way they count resolution as well. These “tricks” in counting are OK when viewed with the naked eye at beyond 300 “pixels” per inch, but become more problematical/dubious when used with optics to support VR.

Today I want to start with the issue of pixel size as shown in the table at the top (you may want to pop the table out into a separate window as you follow this article). To give some context, I have also included a few major direct view categories of displays as well. I have grouped the technologies into the colored bands in the table. I have given the pixel pitch (distance between pixel centers) as well as the pixel area (the square of the pixel pitch, assuming square pixels). Then to give some context for comparison, I have compared the pitch and area relative to a 4.27-micron (µm) pixel pitch, which is about the smallest being made in large volume. There are also columns showing how big the pixel would be in arcminutes when viewed from 25cm (250mm ≈ 9.84 inches), which is the commonly accepted near focus point. Finally there is a column showing how much the pixel would have to be magnified to equal 1 arcminute at 25cm, which gives some idea of the optics required.
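
For anyone who wants to reproduce the derived columns, here is a small Python sketch; the pixel pitches are the approximate values quoted in this article, and the 50µm “direct view” entry is an assumed stand-in for a VR-class phone panel:

```python
import math

pitches_um = {
    "LCOS (Syndiant 1080p)": 4.27,
    "DLP pico": 5.4,
    "OLED microdisplay (eMagin)": 9.3,
    "Direct view VR panel (approx.)": 50.0,
}
base_pitch_um = 4.27       # baseline: about the smallest high-volume pixel
view_distance_mm = 250.0   # 25cm standard near focus point

for name, pitch in pitches_um.items():
    rel_area = (pitch / base_pitch_um) ** 2
    # Angle one pixel subtends at 25cm, in arcminutes.
    arcmin = math.degrees(math.atan((pitch / 1000.0) / view_distance_mm)) * 60
    magnification_to_1arcmin = 1.0 / arcmin
    print(f"{name:32s} {pitch:5.2f}um  {rel_area:7.1f}x area  "
          f"{arcmin:6.3f} arcmin @25cm  {magnification_to_1arcmin:5.1f}x to 1 arcmin")
```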

In the table, I tried to use the smallest available pixel in a given technology that was being produced, with the exception of “micro-iLED” for which I could not get solid information (thus the “?”). In the case of LCOS, the smallest field sequential color (FSC) pixel I know of is the 4.27µm one by my old company Syndiant, used in their new 1080p device. For the OLED, I used the eMagin 9.3µm pixel, and for the DLP, their 5.4-micron pico pixel. I used the LCOS/smallest pixel as the baseline to give some relative comparisons.

One thing that jumps out in the table are the fairly large gaps in pixel sizes between the microdisplays and the other technologies. For example, you can fit over 100 4.27µm LCOS pixels in the area of a single Samsung S8 OLED pixel, or 170 LCOS pixels in the area of the pixel used in the Oculus CV1. Or to be more extreme, you can fit over 5,500 LCOS pixels in one pixel of a 55-inch TV.

Big Gap In Near Eye Displays (NEDs)

The main comparison for today is between the microdisplay pixels, which range from about 4.27µm to about 9.6µm in pitch, and the direct view OLED and LCD displays at 40µm to 60µm that have been adapted with optics to be used in VR headsets (NEDs). Roughly we are looking at one order of magnitude in pixel pitch and two orders of magnitude in area. Perhaps the most direct comparison is the microdisplay OLED pixel at 9.3 microns versus the Samsung S8, a 4.8X linear and 23X area difference.

So why is there this huge gap? It comes down to making the active matrix array circuitry to drive the technology. Microdisplays are made on semiconductor integrated circuits, while direct view displays are made on glass and plastic substrates using comparatively huge and not very good transistors. The table below is based on one in a 2006 article by Mingxia Gu while at Kent State University (it is a little out of date, but lists the various transistors used in display devices).

The difference in transistors largely explains the gap. Microdisplays use transistors made in I.C. fabs, whereas direct view displays fabricate their larger and less conductive transistors on top of glass or plastic substrates at much lower temperatures.

Microdisplays

Within the world of I.C.’s, microdisplays use very old/large transistors, often made in nearly obsolete semiconductor processes. This is both an effort to keep the cost down and because most display technologies need higher voltages than would be supported by smaller transistor sizes.

There are both display physics and optical diffraction reasons which limit making microdisplay pixels much smaller than 4µm. Additionally, as the pixel size gets below about 6 microns, the optical cost of enlarging the pixel to be seen by the human eye starts to escalate, so headset optics makers want 6+ micron pixels, which are much more expensive to make. To a first order, microdisplay costs in volume are a function of the area of the display, so smaller pixels mean less expensive devices for the same resolution.

The problem for microdisplays is that even using old I.C. fabs, the cost per square millimeter is extremely high compared to TFT on glass/plastic, and yields drop as the size of the device grows, so doubling the pixel pitch could result in an 8X or more increase in cost. While it sounds good to be using old/depreciated I.C. fabs, it may also mean they don’t have the best/newest/highest yielding equipment or, worse yet, the facilities get closed down as obsolete.
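
As a toy illustration of why cost can grow roughly 8X when the pitch doubles (4X the area), here is a sketch using a simple exponential defect-yield model; the cost-per-mm² and defect density numbers are made up for illustration, not real fab data:

```python
import math

COST_PER_MM2 = 0.10       # assumed cost of processed wafer area, $/mm^2 (illustrative)
DEFECTS_PER_MM2 = 0.005   # assumed defect density (illustrative)

def die_cost(area_mm2):
    """Die cost = area cost divided by yield, with yield falling exponentially with area."""
    yield_fraction = math.exp(-DEFECTS_PER_MM2 * area_mm2)
    return COST_PER_MM2 * area_mm2 / yield_fraction

small = die_cost(50.0)    # e.g., a microdisplay with a small pixel pitch
large = die_cost(200.0)   # same pixel count at double the pitch -> 4x the area
print(f"4x the area costs ~{large / small:.1f}x as much")   # ~8.5x with these numbers
```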

The net result is that microdisplays are nowhere near cost competitive with “re-purposed” cell phone technology for VR if you don’t care about size and weight. They are the only way to do small, lightweight headsets and really the only way to do AR/see-through displays (save the huge Meta 2 bug-eye bubble).

I hope to pick up this subject more in some future articles (as each display type could be a long article in and of itself). But for now, I want to get on to the VR systems with larger flat panels.

Direct View Displays Adapted for VR

Direct view VR headsets (e.g., Oculus, HTC Vive, and Google Cardboard) have leveraged direct view display technologies developed for cell phones. They then put simple optics in front of the display so that people can focus the image when the display is put so near the eye.

The accepted standard for human “near vision” is 25cm/250mm/9.84 inches. This is about as close as a person can focus and is used for comparing effective magnification. With simple (single/few lens) optics you are not so much making the image bigger per se, but rather moving the display closer to the eye and then using the optics to enable the eye to focus. A typical headset uses a roughly 40mm focal length lens and then puts the display at the focal length or less (e.g., 40mm or less) from the lens. Putting the display at the focal length of the lens makes the image focus at infinity/far away.

Without getting into all the math (which can be found on the web), the result is that a 40mm focal length nets an angular magnification (relative to viewing at 25cm) of about 6X. So for example, looking back at the table at the top, the Oculus pixel (similar in size to the HTC Vive’s), which would be about 0.77 arcminutes at 25cm, ends up appearing to cover about 4.7 arcminutes (which are VERY large/chunky pixels) and about a 95 degree FOV (this depends on how close the eye gets to the lens — for a great explanation of this subject and other optical issues with the Oculus CV1 and HTC Vive see this Doc-Ok.org article).
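
The arithmetic behind those numbers, using the standard simple-magnifier approximation M ≈ 250mm / focal length:

```python
# Simple-magnifier math for a typical VR headset lens.
near_point_mm = 250.0      # standard near-vision reference distance
focal_length_mm = 40.0     # roughly what current headsets use

magnification = near_point_mm / focal_length_mm
print(f"Angular magnification: ~{magnification:.2f}x")          # ~6.25x

oculus_pixel_arcmin_at_25cm = 0.77   # from the table at the top
apparent_arcmin = oculus_pixel_arcmin_at_25cm * magnification
print(f"Apparent pixel size: ~{apparent_arcmin:.1f} arcmin")    # ~4.8, i.e. the ~4.7 above
```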

Improving VR Resolution – Series of Roadblocks

For reference, 1 arcminute per pixel is considered near the limit of human vision, and most “good resolution” devices try to be under 2 arcminutes per pixel and preferably under 1.5. So let’s say we want to keep the ~95 degree FOV but improve the angular resolution by 3X linearly to about 1.5 arcminutes; we have several (bad) options:

  1. Get someone to make a pixel that is 3X smaller linearly or 9X smaller in area. But nobody makes a pixel this size that can support about 3,000 pixels on a side. A microdisplay (I.C. based) will cost a fortune (like over $10,000/eye, if it could be made at all), and nobody makes transistors that are cheap, compatible with displays, and small enough. But let’s for a second assume someone figures out a cost effective display; then you have the problem that you need optics that can support this resolution and not the cheap low resolution optics with terrible chroma aberrations, god rays, and astigmatism that you can get away with at 4.7 arcminute pixels.
  2. Use say the Samsung S8 pixel size (a little smaller) and make two 3K by 3K displays (one for each eye). Each display will be about 134mm, or about 5.26 inches, on a side, and the width of the two displays plus the gap between them will end up at about 12 inches (see the sketch after this list). So think in terms of strapping a large iPad Pro in front of your face, only now it has to be about 100mm (~4 inches) in front of the optics (or about 2.5X as far away as on current headsets). Hopefully you are starting to get the picture: this thing is going to be huge and unwieldy, and you will probably need shoulder bracing in addition to head straps. Not to mention that the displays will cost a small fortune, along with the optics to go with them.
  3. Some combination of 1 and 2 above.
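
Here is the rough sizing behind option 2; the ~44.6µm pitch (a bit under the Samsung S8’s) and the gap between the panels are assumptions used to illustrate the scale:

```python
# Size of two 3K x 3K panels at roughly a Samsung-S8-class pixel pitch.
pixel_pitch_mm = 0.0446        # ~44.6 microns, assumed "a little smaller" than the S8 pixel
pixels_per_side = 3000
gap_between_panels_mm = 35.0   # assumed gap/bezel between the left and right panels

panel_side_mm = pixel_pitch_mm * pixels_per_side
total_width_mm = 2 * panel_side_mm + gap_between_panels_mm
print(f"Each panel: ~{panel_side_mm:.0f}mm (~{panel_side_mm / 25.4:.2f} in) per side")
print(f"Combined width: ~{total_width_mm:.0f}mm (~{total_width_mm / 25.4:.1f} in)")
```
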
The Future Does Not Follow a Straight Path

I’m trying to outline the top level issues above (there are many more). Even if/when you solve the display cost/resolution problem, lurking behind that is a massive optical problem to sustain that resolution. These are the problems “straight line futurists” just don’t get; they assume everything will just keep improving at the same rate it has in the past, not realizing they are starting to bump up against some very non-linear problems.

When I hear about “Moore’s Law” being applied to displays, I just roll my eyes and say that they obviously don’t understand Moore’s Law and the issues behind it (and why it kept slowing down over time). Back in November 2016, Oculus Chief Scientist Michael Abrash made some “bold predictions” that by 2021 we would have 4K (by 4K) per eye and a 140 degree FOV with 2 arcminutes per pixel. He upped my example above by 1.33X more pixels and upped the FOV by almost 1.5X, which introduces some serious optical challenges.

At times like this I like to point out the Super Sonic Transport or SST of the 1960’s. The SST seemed inevitable for passenger travel; after all, in less than 50 years passenger aircraft went from nothing to the jet age. Yet today, over 50 years later, passenger aircraft still fly at about the same speed. Oh, by the way, in the 1960’s they were predicting that we would be vacationing on the moon by now and having regular flights to Mars (heck, we made it to the moon in less than 10 years). We certainly could have 4K by 4K displays per eye and a 140 degree FOV by 2021 in a head mounted display (it could be done today if you don’t care how big it is), but expect it to be more like the cost of flying supersonic and not a consumer product.

It is easy to play armchair futurist and assume “things will just happen because I want them to happen.” The vastly harder part is to figure out how it can happen. I lived through I.C. development in the late 1970’s through the mid 1990’s, so I “get” learning curves and rates of progress.

One More Thing – Micro-iLED

I included in the table at the top Micro Inorganic LEDs, also known as just Micro-LEDs (I’m using iLED to make it clear these are not OLEDs). They are getting a lot of attention lately, particularly after Apple bought LuxVue and Oculus bought InfiniLED. These essentially use very small “normal/conventional” LEDs that are mounted (essentially printed) on a substrate. The fundamental issue is that red requires a very different crystal from blue and green (and even they have different levels of impurities). So they have to make individual LEDs and then combine them (or maybe someday grow the dissimilar crystals on the common substrate).

The allure is that iLEDs have some optical properties that are superior to OLEDs. They have a tighter color spectrum, are more power efficient, can be driven much brighter, have fewer issues with burn-in, and in some cases have less diffuse (better collimated) light.

These Micro-iLEDs are being used in two ways: to make very large displays by companies such as Sony, Samsung, and NanoLumens, or supposedly very small displays (LuxVue and InfiniLED). I understand how the big display approach works; there is lots of room for the LEDs and these displays are very expensive per pixel.

With the small display approach, they seem to have the double issue of being able to cut very small LEDs and effectively “print” the LEDs on a TFT substrate similar to, say, OLEDs. What I don’t understand is how these are supposed to be smaller than, say, OLEDs, which would seem to be at least as easy to make on similar TFT or similar transistor substrates. They don’t seem to “fit” in near eye, but maybe there is something I am missing at this point in time.

Everything VR & AR Podcast Interview with Karl Guttag About Magic Leap

With all the buzz surrounding Magic Leap and this blog’s technical findings about Magic Leap, I was asked to do an interview by the “Everything VR & AR Podcast” hosted by Kevin Harvell. The podcast is available on iTunes and by direct link to the interview here.

The interview starts with about 25 minutes of my background, starting with my early days at Texas Instruments. So if you just want to hear about Magic Leap and AR you might want to skip ahead a bit. In the second part of the interview (about 40 minutes) we get into discussing how I went about figuring out what Magic Leap was doing. This includes discussing how the changes in the U.S. patent system signed into law in 2011 with the America Invents Act helped make the information available for me to study.

There should be no great surprises for anyone that has followed this blog. It puts in words and summarizes a lot that I have written about in the last 2 months.

Update: I listened to the podcast and noticed that I misspoke a few times; it happens in live interviews. An unfathomable mistake is that I talked about graduating college in 1972, but that was high school; I graduated from Bradley University with a B.S. in Electrical Engineering in 1976 and then received an MSEE from The University of Michigan in 1977 (and joined TI in 1977).

I also think I greatly oversimplified the contribution of Mark Harward as a co-founder at Syndiant. Mark did much more than just hire designers; he was the CEO, an investor, and ran the company while I “played” with the technology, but I think Mark’s best skill was in hiring great people. Also, Josh Lund, Tupper Patnode, and Craig Waller were co-founders.

 

Magic Leap – Separating Magic and Reality

The Goal – Explain What Magic Leap is Doing

Magic Leap has a way of talking about what they hope to do someday and not necessarily what they can do anytime soon.  Their patent applications are full of things that are totally impossible or impractical to implement.  I’ve been reading well over a thousand pages of Magic Leap (ML) patents/applications, various articles about the company, watching ML’s “through the optics” videos frame by frame, and then applying my own knowledge of display devices and the technology business to develop a picture of what Magic Leap might produce.

Some warnings in advance

If you want all happiness and butterflies, as well as elephants in your hand and whales jumping in auditoriums, or some tall tale of 50 megapixel displays and of how great it will be someday, you have come to the wrong place.  I’m putting the puzzle together based on the evidence and filling in with what is likely to be possible in both the next few years and for the next decade.

Separating Fact From Fiction

There have been other well meaning evaluations such as “Demystifying Magic Leap: What Is It and How Does It Work?“, “GPU of the Brain“, and the videos by “Vance Vids”, but these tend to start from the point of believing the promotion/marketing surrounding ML and finding support in the patent applications rather than critically evaluating them. Wired Magazine has a series of articles, and Forbes and others have also covered ML, but these have been personality and business pieces that make no attempt to seriously understand or evaluate the technology.

Among the biggest fantasies surrounding Magic Leap is the Arrayed Fiber Scanning Display (FSD); many people think this is real. ML Co-founder and Chief Scientist Brian Schowengerdt developed this display concept at the University of Washington based off an innovative endoscope technology, and it features prominently in a number of ML assigned patent applications. There are giant issues in scaling up FSD technology to high resolution and in what it would require.

In order to get on with what ML is most likely doing, I have moved to the Appendix the discussion of why FSDs, light fields, and very complex waveguides are not what Magic Leap is doing. Once you get rid of all the “noise” of the impossible things in the ML patents, you are left with a much better picture of what they actually could be doing.

What’s left is enough to make impressive demos, and it may be possible to produce at a price that at least some people could afford in the next two years. But ML still has to live by what is possible to manufacture.

Magic Leap’s Optical “Magic” – Focus Planes

Fm: Journal of Vision 2009

At the heart of all of ML’s optical related patents is the concept of eye vergence-accommodation, where the focus of the various parts of a 3-D image should agree with their distances or it will cause eye/brain discomfort. For more details about this subject, see this information about Stanford’s work in this area and their approach of using quantized (only 2 level) time sequential light fields.

There are some key similarities between the Stanford and Magic Leap approaches. They both quantize to a few levels to make them possible to implement, they both present their images time sequentially, and they both rely on the eye/brain to fill in between the quantized levels and integrate a series of time sequential images. Stanford’s approach is decidedly not “see through,” using an Oculus-like setup with two LCD flat panel displays in series, whereas Magic Leap’s goal is to merge the 3-D images with the real world in Mixed Reality (MR).

Magic Leap uses the concept of “focus planes,” where they conceptually break up a 3-D image into quantized focus planes based on the distance of the virtual image. While they show 6 virtual planes in Fig. 4 from the ML application above, that is probably what they would like to do, but they are doing fewer planes (2 to 4) due to practical concerns.

Magic Leap then renders the parts of an image into the various planes based on the virtual distance. The ML optics make the planes appear to the eye as if they are focused at their corresponding virtual distances. These planes are optically stacked on top of each other to give the final image, and they rely on the person’s eye/brain to fill in for the quantization.
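
As a conceptual sketch only (not Magic Leap’s actual rendering pipeline), quantizing a rendered depth buffer into a small number of focus planes might look like this, working in diopters (1/distance) so the planes are spaced by optical focus:

```python
import numpy as np

def assign_focus_planes(depth_m, plane_distances_m):
    """Assign each pixel to the focus plane closest to it in diopters (1/distance).

    depth_m: 2-D array of virtual distances (meters) for each rendered pixel.
    plane_distances_m: distances the optics can actually present.
    Returns an integer plane index per pixel.
    """
    pixel_diopters = 1.0 / np.maximum(depth_m, 1e-3)           # avoid divide-by-zero
    plane_diopters = 1.0 / np.asarray(plane_distances_m)
    diff = np.abs(pixel_diopters[..., None] - plane_diopters)  # pixel-to-plane distance
    return np.argmin(diff, axis=-1)

# Example: three planes at 0.5m, 1.5m, and (effectively) infinity.
depth = np.array([[0.4, 1.0], [2.0, 100.0]])
print(assign_focus_planes(depth, [0.5, 1.5, 1e6]))   # -> [[0 1] [1 2]]
```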

Frame Sequential Focus Planes With SLMs

Magic Leap’s patents/applications show various ways to generate these focus planes. The most fully formed concepts use a single display per eye and present the focus planes time sequentially in rapid succession, what ML refers to as “frame-sequential,” where there is one focus plane per “frame.”

Due to both the cost and size of multiple displays per eye and their associated optics, including those to align and overlay them, the only possible way ML could build a product for even a modest volume market is by using frame sequential methods with a high speed spatial light modulator (SLM) such as a DLP, LCOS, or OLED microdisplay.

Waveguides and Focus Planes

Light rays coming from a far away point that make it into the eye are essentially parallel (collimated), while light rays from a near point have a wider set of angles. These differences in angles are what make them focus differently, but at the same time they create problems for existing waveguide optics, such as what Hololens is using.

The very flat and thin optical structures called “waveguides” will only work with collimated light entering them because of the way light totally internally reflects to stay in the waveguide and the way the diffraction works to make the light exit. So a simple waveguide would not work for ML.

Some of ML’s concepts use one or more beam-splitting-mirror type optics rather than waveguides for this reason. Various ML patent applications show using a single large beam splitter or multiple smaller ones (such as at left), but these will be substantially thicker than a typical waveguide.

What Magic Leap calls a “Photonics Chip” looks to be at least one layer of diffractive waveguide. There is no evidence of mirror structures, and because it bends the wood in the background (if it were just a simple plate of glass, the wood in the background would not be bent), it appears to be a diffractive optical structure.

Because ML is doing focus planes, they need to have not one, but a stack of waveguides, one per focus plane. The waveguides in ML’s patent applications show collimated light entering each waveguide in the stack like a normal waveguide, but then the exit diffraction gratings both cause the light to exit and impart the appropriate focus plane angle to the light.

To be complete, Magic Leap has shown in several patent applications some very thick “freeform optics” concepts, but none of these would look anything like the “Photonics Chip” that ML shows. ML’s patent applications show many different optical configurations, and they have demoed a variety of different designs. What we don’t know is if the Photonics Chip they are showing is what they hope to use in the future or if this will be in their first products.

Magic Leap’s Fully Formed Designs In Their Recent Patent Applications

Most of Magic Leap’s patent applications showing optics contain more like fragments of ideas. There are lots of loose ends and incomplete concepts.

More recently (one published just last week), there are patent applications assigned to Magic Leap with more “fully formed designs” that look much more like they actually tried to design and/or build them. Interestingly, these applications don’t include as inventors founder and CEO Rony Abovitz, nor even Brian T. Schowengerdt, Chief Scientist, though they may use ideas from those prior “founders” patent applications.

While the earlier ML applications mention Spatial Light Modulators (SLMs) using DLP, LCOS, and OLED microdisplays and talk about Variable Focus Elements (VFEs) for time sequentially generating focus planes, they don’t really show how to put them together to make anything (a lot is left to the reader).

Patent Applications 2016/0011419 (left) and 2015/0346495 (below) show straightforward ways to achieve field sequential focus planes using a Spatial Light Modulator (SLM) such as a DLP, LCOS, or OLED microdisplay.

A focus plane is created by setting the variable focus element (VFE) to one focus point and then generating the image with the SLM. The VFE focus is then changed and a second focus plane is displayed by the SLM. This process can be repeated to generate more focus planes, limited by how fast the SLM can generate images and by the level of motion artifacts that can be tolerated.
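
A minimal sketch of that frame-sequential loop; `vfe` and `slm` here are hypothetical placeholder objects, not any real device API:

```python
def display_frame(plane_images, plane_diopters, vfe, slm, settle_ms=1.0):
    """Show one frame as a rapid sequence of focus planes (conceptual only).

    plane_images: one pre-rendered sub-image per focus plane.
    plane_diopters: the focus (1/distance) the VFE should produce for each plane.
    """
    for image, diopters in zip(plane_images, plane_diopters):
        vfe.set_focus(diopters)            # move the variable focus element to this plane
        vfe.wait_until_settled(settle_ms)
        slm.show(image)                    # flash this plane's image on the SLM
    # Repeating this fast enough for every plane of every frame is what limits
    # the number of planes and creates the motion-artifact trade-off noted above.
```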

These are clearly among the simplest ways to generate focus planes. All that is added over a “conventional” design is the VFE. When I first heard about Magic Leap many months ago, I heard they were using DLPs with multiple focus depths, but a more recent Business Insider article reports ML is using Himax LCOS. Both of these designs could easily be adapted to support OLED microdisplays.

The big issues I have with the straightforward optical approaches are the optical artifacts I have seen in the videos and the big deal ML makes out of their Photonics Chip (waveguide). Certainly their first generation might use a more straightforward optical design and then save the Photonics Chip for the next generation.

Magic Leap’s Videos Show Evidence of Waveguide Optics

As I wrote last time, there is a lot of evidence from the videos ML has put out that they are using a waveguide, at least for the video demos. The problem when you bend light in a short distance using diffraction gratings or holograms is that some of the light does not get bent correctly, and this shows up as colors not lining up (chroma aberrations) as well as what I have come to call the “waveguide glow.” If you look at R2D2 below (you may have to click on the image to see it clearly), you should see a blue/white glow around R2D2. I have seen this kind of glow in every diffractive and holographic waveguide I have seen. I have heard that the glow might be eliminated someday with laser/very narrow bandwidth colors and holographic optics.

The point here is that there is a lot of artifact evidence that ML was at least using some kind of waveguide in their videos. This makes it more likely that their final product will also use waveguides and at the same time may have some or all of the same artifacts.

Best Fit Magic Leap Application with Waveguides

If you drew a Venn diagram of all existing information, the one patent application that best fits it all is the very recent US 2016/0327789. This is no guarantee that it is what they are doing, but it fits the current evidence best. It combines a focus-plane-sequential LCOS SLM (although it shows it could also support DLP, but not OLED) with waveguide optics.

The way this works is that for every focus plane there are 3 waveguides (red, green, and blue) and a spatially separate set of LEDs. Because they are spatially separate, they will illuminate the LCOS device at a different angle, and after going through the beam splitter the waveguide “injection optics” will cause the light from the different spatially separated LEDs to be aimed at a different waveguide of the same color. Not shown in the figure below is that there is an exit grating that both causes the light to exit the waveguide and imparts an angle to the light based on the focus associated with that given focus plane. I have colored in the “a” and “b” spatially separated red paths below (there are similar pairs for blue and green).

With this optical configuration, the LCOS SLM is driven with the image data for a given color for a given focus plane, and then the associated color LED for that plane is illuminated. This process then continues with a different color and/or focus plane until all 6 waveguides for the 3 colors by 2 planes have been illuminated.
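
A back-of-the-envelope sketch of why the plane count gets expensive quickly in this color- and plane-sequential scheme (the 60 Hz frame rate is an assumed target, not a Magic Leap specification):

```python
# Waveguide layers and required SLM field rate for a color- and plane-sequential design.
colors = 3             # one red, green, and blue waveguide per focus plane
frame_rate_hz = 60     # assumed target frame rate per eye

for focus_planes in (2, 3):
    layers = colors * focus_planes
    field_rate = frame_rate_hz * colors * focus_planes
    print(f"{focus_planes} focus planes: {layers} waveguide layers, "
          f"~{field_rate} SLM fields/second")
# -> 2 planes: 6 layers at ~360 fields/s; 3 planes: 9 layers at ~540 fields/s
```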

The obvious drawbacks with this approach:

  1. There are a lot of layers of waveguide with exit diffraction gratings that the user will be looking through, and the number of layers grows by 3 with each added focus plane. That is a lot of stuff to be looking through, and it is bound to degrade the forward view.
  2. There are a lot of optical devices that all the light is passing through, and even small errors and light leaks build up. This can’t be good for the overall optical quality. These errors have their effect on resolution/blurring, chroma aberrations, and glowing/halo effects.
  3. Being able to switch through all the colors and focus planes fast enough to avoid motion artifacts where the colors and/or the focus planes break up is a challenge. Note this issue exists with any approach that does both field sequential color and focus plane sequential display. Obviously this issue becomes worse with more focus planes.

The ‘789 application shows an alternative implementation using a DLP SLM. Interestingly, this arrangement would not work for OLED microdisplays, as they generate their own illumination so you would not be able to get the spatially separated illumination.

So what are they doing?  

Magic Leap is almost certainly using some form of spatial light modulator with field sequential focus planes (I know I will get push-back from the ML fans that want to believe in the FSD — see the Appendix below); this is the only way I could see them going to production in the next few years. Based on the Business Insider information, it could very well be an LCOS device in the production unit.

The 2015/0346495 design with the simple beam splitter is what I would have chosen for a first design, provided there is an appropriate variable focus element (VFE) available. It is by far the simplest design and would seem to have the lowest risk. The downside is that the angled large beam splitter will make it thicker, but I doubt by that much. Not only is it lower risk (if the VFE works), but the image quality will likely be better using a simple beam splitter and spherical mirror-combiner than many layers of diffractive waveguide.

The 2016/0327789 application touches all the bases based on available information. The downside is that they need 3 waveguides per focus plane. So if they are going to support, say, just 3 focus planes (say infinity, medium, and short focus), they are going to have 9 (3×3) waveguide layers to manufacture and pay for and 9 layers to look through to see the real world. Even if each layer is extremely good quality, the errors will build up over so many layers of optics. I have heard that the waveguide in Hololens has been a major yield/cost item, and what ML would have to build would seem to be much more complex.

While Magic Leap certainly could have something totally different, they can’t be pushing on all fronts at once. They pretty much have to go with a working SLM technology and generate their focus planes time sequentially to build an affordable product.

I’m fond of repeating the 90/90 rule that “it takes 90% of the effort to get 90% of the way there, then it takes the other 90% to do the last 10%,” and someone quipped back that it can also be 90/90/90. The point is that you can have something that looks pretty good and impresses people, but solving the niggling problems and making it manufacturable and cost effective almost always takes more time, effort, and money than people want to think. These problems tend to become multiplicative if you take on too many challenges at the same time.

Comments on Display Technologies

As far as display technologies go, each of the spatial light modulator technologies has its pros and cons.

  1. LCOS seems to be finding the widest acceptance due to cost. It is generally lower power in near eye displays than DLP. The downside is that it has a more modest field rate, which could limit the number of focus planes. It could also be used in any of the 3 prime candidate optical systems. Because the LEDs are separate from the display, they can support essentially any level of brightness.
  2. DLP has the fastest potential field rate, which will support more focus planes. With DLPs they could trade color depth for focus planes. DLPs will also tend to have higher contrast. Like LCOS, brightness will not be an issue as the LEDs can provide more than enough light. DLP tends to be higher in cost and power and, due to the off axis illumination, tends to have a slightly bigger optical system than LCOS in near eye applications.
  3. OLED – It has a lot of advantages in that it does not have to sequentially change the color fields, but the current devices still have a slower frame rate than DLP and LCOS can support. What I don’t know is how much the field rate is limited by the OLED designs to date versus what they could support if pressed. Another issue is the lack of control of the angle of illumination, such as is used in the ‘789 application. OLEDs put out rather diffuse light with little angle control, and this could limit their usefulness with respect to focus planes, where you need to control the angles of light.
  4. FSD – Per my other comments and the Appendix below, don’t hold your breath waiting for FSDs.
Image Quality Concerns

I would be very concerned about Magic Leap’s image quality and resolution beyond gaming applications. Forget all those magazine writers and bloggers getting all geeked out over a demo with a new toy; at some point reality must set in.

Looking at what Magic Leap is doing and what I have seen in the videos, the effective resolution and image quality are going to be low compared to what you get even on a larger cell phone. They are taking a display device that could produce a good image (either 720p or maybe 1080p) under normal/simple optics and putting it through a torture test of optical waveguides and whatever optics are used to generate their focus planes at a rational cost; something has to give.

I fully expect to see a significant resolution loss no matter what they do, plus chroma aberrations and waveguide halos, provided they use waveguides. Another big issue for me will be the “real world view” through whatever it takes to create the focus planes and how it will affect, say, seeing your TV or computer monitor through the combiner/waveguide optics.

I would also be concerned about field sequential artifacts and focus plane sequential artifacts.  Perhaps these are why there are so many double images in the videos.

Not to be all doom and gloom. Based on casual comments from people that have seen it and the fact that some really smart people invested in Magic Leap, it must provide an interesting experience, and image quality is not everything for many applications. It certainly could be fun to play with, at least for a while. After all, the Oculus Rift has a big following and its angular resolution is so bad that they cover it up by blurring, and it has optical problems like “god rays.”

I’m more trying to level out the expectations. I expect it to be a long way from replacing your computer monitor, as one reporter suggested, or even your cell phone, at least for a very long time. Remember that this has so much stuff in it that in addition to the head worn optics and display, you are going to have a cable down to the processor and battery pack (a subject I have only barely touched on above).

Yes, yes, I know Magic Leap has a lot of smart people and a lot of money (and you could say the same for Hololens), but sometimes the problem is bigger than all the smart people and money can solve.

Appendix: 

The Big Things Magic Leap is NOT Going To Make in Production Anytime Soon

The first step in understanding Magic Leap is to remove all the clutter/noise that ML has generated. As my father often used to say, “there are two ways to hide information: you can remove it from view or you can bury it.” Below is a list of the big things that are discussed by ML themselves and/or in their patents that are either infeasible or impossible any time soon.

It would take a long article on each of these to give all the reasons why they are not happening, but hopefully the comments below will at least outline the why:


A) Laser Fiber Scanning Display (FSD) 

A number of people have picked up on this, particularly because the co-founder and Chief Scientist, Brian Schowengerdt, developed this at the University of Washington. The FSD comes in two “flavors”: the low resolution single FSD and the arrayed FSD.

1) First, you are pretty limited on the resolution of a single mechanically scanning fiber (even more so than mirror scanners). You can only make them spiral so fast, and they have their own inherent resonance. They make an imperfectly spaced circular spiral that you then have to map a rectangular grid of pixels onto. You can only move the fiber so fast, and you can trade frame rate for resolution a bit, but you can’t just make the fiber move faster with good control and scale up the resolution. So maybe you get 600 spirals, but it only yields maybe 300 x 300 effective pixels in a square.

2) When you array them, you then have to overlap the spirals quite a bit. According to ML patent US 9,389,424, it will take about 72 fiber scanners to make a 2560×2048 array (about 284×284 effective pixels per fiber scanner) at 72 Hz.

3) Let’s say we only want 1920×1080, which is where the better microdisplays are today, or about 1/2.5 of the 72 fiber scanners, or about 28 of them (see the sketch after this numbered list). This means we need 28 x 3 (Red, Green, Blue) = 84 lasers. A near eye display typically outputs between 0.2 and 1 lumen of light, and you then divide this by 28. So you need a very large number of really tiny lasers that nobody I know of makes (or may even know how to make). You have to have individual very fast switching lasers so you can control them totally independently and at very high speed (on-off in the time of a “spiral pixel”).

4) So now you need to convince somebody to spend hundreds of millions of dollars in R&D to develop very small and very inexpensive direct green (particularly) lasers (those cheap green lasers you find in laser pointers won’t work because they switch WAY too slowly and are very unstable). Then after they spend all that R&D money, they have to sell them to you very cheap.

5) Laser combining into each fiber. You then have the other nasty problem of getting the light from 3 lasers into a single fiber; it can be done with dichroic mirrors and the like, but it has to be VERY precise or you miss the fiber. To give you some idea of the “combining” process, you might want to look at my article on how Sony combined 5 lasers (2 Red, 2 Green, and 1 Blue for brightness) for a laser mirror scanning projector http://www.kguttag.com/2015/07/13/celluonsonymicrovision-optical-path/. Only now you don’t do this just once but 28 times. This problem is not impossible, but it requires precision, and precision costs money. Maybe if you put enough R&D money into it you can make it on a single substrate. BTW, in the photo you see of the Magic Leap prototype (https://www.wired.com/wp-content/uploads/2016/04/ff_magic_leap-eric_browy-929×697.jpg) it looks like they didn’t bother combining the lasers into single fibers.

6) Next, to get the light injected into a waveguide, you need to collimate the arrays of cone shaped light rays. I don’t know of any way, even with holographic optics, that you can collimate this light because you have overlapping rays of light going in different directions. You can’t collimate the individual cones of light rays, or there is no way to get them to overlap to make a single image without gaps in it. I have been looking through the ML patent applications and they never seem to say how they will get this array of FSDs injected into a waveguide. You might be able to build one in a lab by diffusing the light first, but it would be horribly inefficient.

7) Now you have the issue of how you are going to support multiple focus planes. 72Hz is not fast enough to do it field sequentially, so you have to put in parallel ones and multiply by the number of focus planes. The question at this point is how much more than a Tesla Model S (starting at $66K) it will cost in production.
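
Pulling the numbers from points 2) and 3) together in a small sketch (the lumen figures are the rough near-eye range quoted above):

```python
# Back-of-the-envelope fiber-scanner and laser counts for a 1080p arrayed FSD.
full_array_scanners = 72              # per ML patent US 9,389,424 for 2560x2048
full_array_pixels = 2560 * 2048
target_pixels = 1920 * 1080           # a 1080p-class image

scanners = round(full_array_scanners * target_pixels / full_array_pixels)
lasers = scanners * 3                 # one R, G, and B laser per fiber
print(f"~{scanners} fiber scanners, ~{lasers} individually modulated lasers")  # ~28, ~84

# Splitting a typical 0.2 to 1 lumen near-eye output across the scanners:
for total_lumens in (0.2, 1.0):
    print(f"~{total_lumens / scanners * 1000:.0f} millilumens per scanner "
          f"at {total_lumens} lumen total")
```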

I think this is a big ask when you can buy an LCOS engine at 720p (and probably soon 1080p) for about $35 per eye. The theoretical FSD advantage is that it might be scaled up to higher resolutions, but you are several miracles away from that today.

B) Light Fields, Light Waves, etc.

There is no way to support any decent resolution with light fields that is going to fit on anyone’s head. It takes about 50 to 100 times the simultaneous image information to support the same resolution with a light field. Not only can’t you afford to display all the information to support good resolution, it would take an insane level of computer processing. What ML is doing is a “shortcut” of multiple focus planes, which is at least possible. The “light wave display” is insane-squared; it requires the array of fibers to be in perfect sync, among other issues.

C) Multiple Displays Driving the Waveguides

ML patents show passive waveguides with multiple displays (fiber scanning or conventional) driving them. It quickly becomes cost prohibitive to support multiple displays (2 to 6 as the patents show) all with the resolution required.

D) Variable Focus Optics on either side of the Waveguides

Several of their figures show electrically controlled variable focus element (VFE) optics on either side of the waveguides, with one set changing the focus of a frame-sequential image plane while a second set of VFEs compensates so the “real world” view remains in focus. There is zero probability of this working without horribly distorting the real-world view.

What Magic Leap Is Highly Unlikely to Produce

Active Switching Waveguides – ML's patent applications show many variations that have drawn attention in other articles. The complexity of making them and the resultant cost is one big issue. There would likely be serious degradation of the real-world view from looking through all the layers and optical structures. Then you have the cost, both in terms of displays and optics, to get images routed to the various planes of the waveguide. ML’s patent applications don’t really say how the switching would work, other than saying they might use liquid crystal or lithium niobate, but there is nothing to show they have really thought it through. I put this in the “unlikely” rather than the “will not happen” category because companies such as DigiLens have built switchable Bragg gratings.

Magic Leap Video – Optical Issues and a Resolution Estimate

As per my previous post on Magic Leap's display technology, what Magic Leap is using in their YouTube through-the-lens demos may or may not be what they will use in the final product. I’m making an assessment based on their publicly available videos and patents. There is also the possibility that Magic Leap is putting out deliberately misleading videos to throw off competitors and whoever else is watching.

Optical Issues: Blurry, Chroma Aberrations, and Double Images

I have been looking at a lot of still frames from ML’s “A New Morning” video, which according to ML was “Shot directly through Magic Leap technology on April 8, 2016 without use of special effects or compositing.” I chose this video because it has features like text and lines (known shapes) that can better reveal issues with the optics. The overall impression looking at the images is that they are all somewhat blurry, with a number of other optical issues.

Blurry

The crop of a frame at 0:58 on the left shows details that include real-world stitching of a desk organizer with 3 red 1080p pixel dots added on top of two of the stitches. The two insets show 4X pixel-replicated blow-ups so you can see the details.

Looking at the “real world” stitches, the camera has enough resolution to capture the cross of the “t” in “Summit” and the center of the “a” in “Miura” if they were not blurred out by the optics.

Chroma Aberrations

If you look at the letter “a” in the top box, you should notice the blue blur on the right side that extends out a number of 1080p pixels. These chroma aberrations are noticeable throughout the frame, particularly at the edges of white objects. These aberrations indicate that the R, G, and B colors are not all in focus together, and they add to the blurring.

The next question is whether the chroma aberration is caused by the camera or the ML optics. With common camera optics, chroma aberrations get worse the farther you get from the center.

In the picture on the left, taken from the same 0:53 frame, the name “Hillary” (no relation to the former presidential candidate) is near the top of the screen and “Wielicki” is near the middle. Clearly the name “Wielicki” has significantly worse chroma aberration even though it is near the center of the image. This tends to rule out the camera as the source of the aberration, since the aberration gets worse going from the top (outside) toward the center, the opposite of what typical camera optics would do. Based on this, it appears that the chroma aberrations are caused by the ML optics.
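For readers who want to try this kind of check themselves, below is a minimal sketch of how one might estimate the red-to-blue shift along an image row by cross-correlating the color channels. The frame file name and the row positions are hypothetical, and this is only a rough, automated stand-in for what I did by eye in Photoshop.

```python
import numpy as np
from PIL import Image

def chroma_shift(row_red, row_blue):
    """Estimate the horizontal offset (in pixels) between the red and blue
    channels of one image row via cross-correlation of the intensity profiles."""
    r = row_red.astype(float) - row_red.mean()
    b = row_blue.astype(float) - row_blue.mean()
    corr = np.correlate(r, b, mode="full")
    return corr.argmax() - (len(r) - 1)   # lag of the best alignment

# "frame_0053.png" and the row numbers are hypothetical examples.
frame = np.asarray(Image.open("frame_0053.png"))
for label, y in (("near top (Hillary)", 120), ("near center (Wielicki)", 540)):
    shift = chroma_shift(frame[y, :, 0], frame[y, :, 2])
    print(f"{label}: red/blue shift of about {shift} pixels")
```

If the camera were the culprit, the measured shift should grow toward the edge of the frame rather than toward the center.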

For those that want to see the whole frame, click on the image at the right.

Double Images

Consistently throughout the entire video there are double images that get worse the farther down and farther left you look in the image. These are different from the frame-update double images from last time, as they appear even when there is no movement, and they depend on location.

Below I have gone through a sequence of different frames to capture similar content in the upper left, center, and right (UL, UC, UR), as well as the middle (M) and lower (L) left, center, and right, and put them side by side. I did my best to find the best image in each region (using different content for the lower left). I have done this over a number of frames, checking for focus issues and motion blur, and the results are the same: the double image is always worse at the bottom and far left.

The issues seen are not focus or movement problems. In particular, notice in the lower left (LL) image how the “D” has a double image displaced slightly higher and to the right. A focus problem would blur it concentrically and not in a single direction.

Usually double images of the same size are the result of reflections off of flat plates. Reflections off a curved surface, such as a camera lens or curved mirror, would magnify or reduce the reflection. So this suggests that the problem has something to do with flat or nearly flat plates, which could be a flat waveguide or a flat, tilted plate combiner.

The fact that the image gets worse the farther down and left would suggest (this is somewhat speculative) that the image is coming in from near the top right corner. Generally an image will degrade more the farther it has to travel through a waveguide or other optics.

One more thing to notice, particularly in the three images on the right side, is the “jaggies” in the horizontal line below the text.

What, there are Jaggies? A clue to the resolution which appears to be about 720p

Something I was not expecting to see was the stair-step effect of a diagonally drawn line, particularly through the blurry optics. Almost all modern graphics rendering does “antialiasing”/smooth-edge rendering with gray-scale values that smooth out these steps, and after the losses due to the optics and camera I was not expecting to see any jaggies. There are no visible jaggies on any of the lines and text in the image, with the notable exception of the lines under the text of “TODAY” and “YESTERDAY” associated with the notification icons.

In watching the video play, it is hard to miss these lines, as the jaggies move about, drawing your eye to them. The jaggies’ movement is also a clue that the drawn image is being moved as the camera moves slightly.

Below I have taken one of those lines with jaggies, and underneath it I have simulated the effect in Photoshop with 4 additional lines. The results have been magnified by 2X, and you may want to click on the image below to see the detail. One thing you may notice in the ML video line is that, in addition to the jaggies, it appears to have thick spots in it. These thick spots between jaggies are caused by the line being drawn both at an angle and with slight perspective distortion; the top and bottom edges of a line more than one pixel thick are rendered at slightly different angles, so the jaggies occur in different places on the top and bottom, which results in the thick sections. In the ML video line there are 3 steps on the top (pointed to by the green tick marks) and 4 on the bottom (indicated by red tick marks).

Below the red line, I simulated the effect using Photoshop on the 1080p image and copied the background color to be the background for the simulation. I started with a thin rectangle that was 4 pixels high, scaled it to be very slightly trapezoidal (about a 1 degree difference between the top and bottom edge), and then rotated it to the same angle as the line in the video using “nearest neighbor” (no smoothing/antialiasing) scaling; this produced the 3rd line, “Rendered w/ jaggies”. I then applied a Gaussian blur with a 2.0-pixel radius to simulate the blur from the optics, producing the “2.0 Gaussian of Jaggies” line that matches the effect seen in the ML video. I did not bother simulating the chroma aberrations (the color separation above and below the white line) that would further soften/blur the image.

Looking at the result, you will see the thick and thin spots just like in the ML video. But note there are about 7 steps (at different places) on the top and bottom. Since the angle of my simulated line and the angle of the line in the ML video are the same, and making the reasonable assumption that the jaggies in the video are 1 pixel high, the resolutions should differ by the ratio of the jaggies, or about 4/7 (the ML steps versus the 1080p steps).

Taking 1080 (lines) times 4/7 gives about 617 lines, which is about what you would expect if they slightly cropped a 720p image. This method is very rough and assumes they have not severely cropped the image with the camera (which would make them look worse than they are).
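Below is a rough Python analog of the Photoshop experiment and the resolution arithmetic, using Pillow's nearest-neighbor rotation and a 2.0-pixel Gaussian blur. The canvas size, line length, angle, and colors are illustrative assumptions; the step counts (4 versus about 7) come from the comparison above.

```python
from PIL import Image, ImageDraw, ImageFilter

# Rough analog of the Photoshop jaggies experiment (illustrative values only).
canvas = Image.new("RGB", (400, 60), (40, 40, 40))        # dark background (assumed color)
draw = ImageDraw.Draw(canvas)
draw.rectangle([20, 28, 380, 31], fill=(255, 255, 255))   # thin white bar, 4 pixels high

# Rotate with nearest-neighbor resampling (no antialiasing) to create jaggies,
# then blur with a 2.0-pixel-radius Gaussian to mimic the optics.
jaggy = canvas.rotate(2.0, resample=Image.NEAREST)
blurred = jaggy.filter(ImageFilter.GaussianBlur(radius=2.0))
blurred.save("simulated_jaggies.png")

# Resolution estimate from the step-count ratio discussed above:
steps_ml_video = 4        # steps counted on the ML video line
steps_1080p_sim = 7       # steps counted on the 1080p simulated line
estimate = 1080 * steps_ml_video / steps_1080p_sim
print(f"Estimated vertical resolution: about {estimate:.0f} lines")  # ~617
```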

For completeness, to show what would happen if the line were rendered with antialiasing, I produced the “AA rendered” version and then applied the same Gaussian blur to it. The result is similar to all the other lines in the video, where there are no detectable jaggies nor any change in the apparent thickness of the line.

OK, I can hear people saying, “But the Magazine Writers Said It Looked Good/Great”

I have often said that for a video demo, “If I can control the product or control the demo content, I choose controlling the content.” This translates to “choose demo content that looks good on your product and eliminate content that will expose its weaknesses.”

If you show videos with a lot of flashy graphics and action, with no need to look for detail and with smooth rendering, only imaging experts might notice that the resolution is low and/or there are issues with the optics. If you put up text in a large enough font that it is easily readable, most people will think you have resolution sufficient for reading documents; in the demo you simply don’t give them a page of high-resolution text to read if you don’t have high resolution.

I have been working with graphics and display devices for about 38 years and have seen a LOT of demos. Take it from me, the vast majority of people can’t tell anything about resolution, but almost everyone thinks they can. For this reason, I highly discount reports from non-display experts who have not had a chance to seriously evaluate a display. Even an imaging expert can be fooled by a quick, well-done demo or by a direct or indirect financial motive.

Now, I have not seen what the article writers and the people that invested money (and their experts) have seen. But what I hopefully have proven to you is that what Magic Leap has shown in their YouTube videos is of pretty poor image quality by today’s standards.

Magic Leap Focus Effects
[Frame captures from the “Climbing Everest” sequence showing the focus rake: 0:41 Out of Focus, 0:47 Becoming In Focus, 1:00 Sharpest Focus, and 1:05 Back Out of Focus]

Magic Leap makes a big point of the importance of “vergence” (more precisely, vergence-accommodation), which means that the apparent focus agrees with the apparent distance in 3-D space. This is the key difference between Magic Leap and, say, Microsoft’s Hololens.

With only one lens/eye (the camera) you can’t tell the 3-D stereo depth, so they have to rely on how the camera focuses. You will need to click on the thumbnails above to see the focus effects in the various still captures.

They demonstrate the focus effects with the “Climbing Everest” sequence in the video. ML was nice enough to put some Post-It (TM) type tabs curled up in the foreground (in particular, watch the yellow smiley face in the lower left) and a water bottle and desk organizer (with small stitches) in the background.

Toward the end of the sequence (click on the 1:05 still) you can see that the Mount Everest information, which is at an angle relative to the camera, is highly out of focus on the left-hand side and gets better toward the right-hand side, while the “Notices” information, which appears to be farther away, is comparatively in focus. Also notice how the real-world stitches in the desk organizer, which are at roughly the same angle as the Everest information, go from out of focus on the left to more in focus on the right, agreeing with what is seen in the projected image.

This focus rake appears to be conclusive proof that there is focus depth in the optical system in this video. Just to be complete, it would be possible to fake the effect just for the video by having the computer blur the image synchronously with the focus rake. But I doubt they “cheated” in this way, as outsiders have reported seeing the focusing effect in live demos.

In the 1:05 frame capture, the “15,000 ft” in the lower left is both out of focus and has a double image, which makes it hard to tell which effects are deliberate/controllable focusing and which are just double images due to poor optics. Due to the staging/setup, the worst part of the optics matches what should be the most out-of-focus part of the image. This could be a coincidence, or they may have staged it that way.

Seeing the Real World Through the Display

Overall, seeing the real world through the display looks very good and without significant distortion. I didn’t get any hints as to the waveguide/combiner structure. It would be interesting to see what, say, a computer monitor or another light source would look like shining through the display.

The lighting in the video is very dark; the white walls are dark gray due to a lack of light, except where some lamps act as spotlights on them. The furniture and most of the other things on the desk are black or dark (I guess the future is going to be dark and have a lot of black furniture and other things in it). This setup helps the generated graphics stand out. In a normally lit room with white walls, the graphics will have to be a lot brighter to stand out, and there are limits to how much you can crank up the brightness without hurting people’s eyes, or there will have to be darkening shades as seen with Hololens.

Conclusion

The resolution appears to be about 720p, and the optics are not up to showing even that resolution. I have been quite critical of the display quality because it really is not good. There are image problems that are many pixels wide.

On the plus side, they are able to demonstrate the instantaneous depth of field with their optical solution, and the view of the real world looks good as far as they have shown. There may be issues with the see-through viewing that are not visible in these videos, which were shot in a fairly dark environment.

I also wonder how the resolution translates into the FOV versus angular resolution, and how they will ever support multiple simultaneous focus planes.  If you discount a total miracle from their fiber scanned display happening anytime soon (to be covered next time), 720p to at most 1080p is about all that is affordable in a microdisplay today, particularly when you need one for each eye, in any production technology (LCOS, DLP, or Micro-OLED) that will be appropriate for a light guide.  And this is before you consider that to support multiple simultaneous focus planes, they will need multiple displays or a higher resolution display that they cut down. To me as a technical person who studied displays for about 18 years, this is a huge ask.
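To give a feel for what 720p-class resolution means in angular terms, here is a small sketch computing pixels per degree for a few assumed horizontal fields of view. Magic Leap has not published an FOV, so the FOV values below are purely illustrative assumptions on my part.

```python
# Pixels per degree for a ~720p-wide image at several assumed horizontal FOVs.
# Magic Leap has not published an FOV; the values below are illustrative only.
horizontal_pixels = 1280

for fov_degrees in (30, 40, 50):
    ppd = horizontal_pixels / fov_degrees
    print(f"{fov_degrees} degree FOV: about {ppd:.0f} pixels/degree "
          f"(the human eye resolves roughly 60 pixels/degree)")
```

The wider you make the FOV with a fixed pixel count, the coarser the image looks, which is the tension at the heart of the FOV-versus-angular-resolution question.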

Certainly Magic Leap must have shown something that impressed some very big-name investors enough to invest $1.4B. Hopefully it is something Magic Leap has not shown publicly yet.

Next Time: Magic Leap’s Fiber Scanned Display

I have been studying the much-promoted Magic Leap Fiber Scan Display (FSD). It turns out their patents suggest two ways of using this technology:

  1. A more conventional display that can be used in combination with a waveguide with multiple focus layers.
  2. To directly generate a light field from an array of FSDs

I plan to discuss the issues with both approaches next time. To say the least, I’m highly doubtful that either method is going to be in volume production any time soon, and I will try to outline my reasons why.

Asides: Cracking the Code
Enigma

I was wondering whether the jaggies were left in as an “image generation joke” for insiders or just sloppy rendering. They are a big clue as to the native resolution of the display device that came through the optical blur and the camera’s resolving power.

It is a little like when the British were breaking the Enigma code in WWII. A big help in breaking Enigma was sloppy transmitting operators giving them what they called “cribs,” or predictable words or phrases. On a further aside, Bletchley Park, where they cracked the Enigma code, is near Bedford, England, where I worked and occasionally lived over a 16-year period. Bletchley Park is a great place to visit if you are interested in computer history (there is also a computer museum at the same location). BTW, “The Imitation Game” is an enjoyable movie but lousy history.

Solving the Display Puzzles

Also, I am not claiming to be infallible in trying to puzzle out what is going on with the various technologies. I have changed my mind/interpretation of what I am seeing in the videos a number of times, and some of my current conclusions may have alternative explanations. I definitely appreciate readers offering their alternative explanations, and I will try to see if I think they fit the facts better.

Magic Leap’s work is particularly interesting because they have made such big claims, raised so much money, are doing something different, and have released tantalizingly little solid information. It also seems that a good number of people are expecting Magic Leap to do a lot more with their product than may be feasible at a volume price point, or even possible at any cost, at least for a number of years.

Highway1 Incubator

Those that follow my blog are probably wondering what has happened to me these past months. I have been away from home for most of the last 4 months at an “incubator” program for start-ups called Highway1. Navdy, for which I recently became CTO, was selected as one of 11 companies from over 100 applicants for the very first class of the Highway1 program sponsored by PCH International.

What makes Highway1 different from almost all other incubator programs these days is that it is totally focused on helping hardware start-ups. Highway1 recognizes that hardware start-ups have special needs, are more difficult to get started, and, unlike software companies, have to deliver a physical product.

The Highway1 office is in the Mission District of San Francisco, where most of the time is spent, but the program also includes spending two weeks in Shenzhen, China, where many of the electronic products used around the world are made. During the program, companies are introduced to mentors from other companies and experts in the field, as well as helped with introductions to angel and venture investment firms.

While in Shenzhen, the companies were introduced to manufacturers who could eventually be making their products.   Additionally our company received some very crucial support from PCH in Shenzhen in locating a company that could manufacture a critical component of our system.

Along the way, the people at the 11 companies became friends and helped each other out. Respecting each other was particularly important as the companies were cranking out prototypes, sharing first one and later two 3-D printers (as demo day neared, those 3-D printers were pretty much running non-stop). There was some incredible technical, marketing, and business talent at these companies.

At the end of the program was “Demo Day,” where more than 200 venture capitalists, investors, press, and technologists packed a large room at PCH’s U.S. headquarters in San Francisco. It was a chance for investors and the press to see what the companies had developed. While Navdy presented, details of our product and plans were not released to the press because we are planning on launching our product later this year. Navdy did receive serious interest from a number of VCs after our demo following the formal presentations.

The whole Highway1 program was the dream of Liam Casey, the founder and CEO of PCH, a company with over $700M in revenue. You may not know the PCH name, but it is very likely that you have brand-name products that they helped get to your home or office (be it anywhere in the world). Liam was personally there to greet us at the beginning of the program and at key points along the way, and he told some great business stories. The whole of the PCH team, be it the people from San Francisco, China, or Ireland, were always awesome to work with and incredibly nice, reflecting PCH’s founder.

Comment: I don’t usually use the word “awesome” but the word was ubiquitous in San Francisco and it seemed to fit the people at PCH.

“If you haven’t tested it, it doesn’t work”

Derek Roskell (circa 1994) of TI MOS Design, Bedford, UK (formal photo – not how I remember him)

When I started this blog, I intended to write about more than displays and include some of my personal IC history. Today’s story is about Derek Roskell of Texas Instruments, who led the UK-based design teams I worked with between 1979 and 1997 on a number of the most complex I.C.s done up to that point, including the 9995 16-bit CPU, the 34010 and 34020 Graphics CPUs, and the extremely complex 320C80 and 320C82 image processors with a 32-bit RISC CPU and 4 (C80) or 2 (C82) advanced DSP processors on one chip. Every one of these designs quickly went from first silicon to product.

Having one successful design after another may not seem so special in today’s era of logic synthesis and all the other computer tools, but back in 1979 we drew logic on paper and transistors on sheets of frosted Mylar plastic with colored pencils that were then digitized by hand. We then printed out large “composite” plots on giant flat-bed pen plotters (with each layer of the I.C. in a different color) and verified all the circuitry by hand and eye (thank goodness by the mid-1980s we got computer schematic verification).

In those days it all could go very wrong, and it did for a 16-bit CPU called the 9940 and a spinoff version, the 9985, that were designed in Houston, Texas, in 1977-1978. It went so badly that both the 9940 and 9985 were never fully functional, causing the designer to be discredited (whether at fault or not) and many people to leave.

In the wake of the 9940/9985 disaster, in 1979 management picked me, a young hotshot only 1.5 years out of college, to lead the architecture and logic design of a new CPU, the TMS9995, to replace the failed TMS9985. There was one hitch: they wanted to use a TI design group in Bedford, England. So after some preliminary work, I packed up for a 6-month assignment in Bedford, where I first met Derek Roskell.

Derek more “in character,” in a photo taken years later (circa 2010)

To say Derek is self-deprecating is a gross understatement. The U.S. managers at TI at the time were more the self-assertive, aggressive, “shoot from the hip,” corner-cutting type (which is what resulted in the 9940/9985 debacle) and generally didn’t take well to Derek’s “English working class” (said with great affection) style, with its all-too-frequent laugh at the “wrong” time.

When I first met Derek he was this “funny old guy” who had worked on “ancient” TTL technology. He was around 40 and seemed like an old man in a world of engineers in their 20s and early 30s whom he led. As it turned out, Derek was the steady hand that guided a number of brilliant people who worked under him. He made sure my “brilliant” architecture and logic design actually worked. You don’t have one successful design after another, particularly back then, by accident.

Upper management was always pressuring to get things done faster, which could only be accomplished by cutting corners. They called Bedford a “country club” for resisting the pressure. Derek was willing to take the heat and do things the “right way” because he understood the consequences of cutting corners.

For most engineers, the fun part of engineering is doing the original design work. That is the “creative stuff” and the stuff that gets you noticed. Also, most engineers have big egos and think, “of course what I designed works.” But when you are designing these massive I.C.s with hundreds of thousands, and later millions, of transistors, even if 99.99% of the design is correct, there will be a hopeless number of errors to debug and correct. Most of what it takes to make sure a design works is the tedious process of “verification.”

A couple of months back I had a small reunion in Bedford with some friends from the old days, including Derek. Everyone remembered Derek for the one thing he constantly chided the designers with: “If you haven’t tested it, it doesn’t work.” Pretty good advice.

Epilog

TI, like most companies today, in their search for “shareholder value,” closed the large Bedford UK site around 1995 but kept the Bedford MOS designers who had so many proven successes, moving them to a rental building in Northampton. Through the years TI kept “consolidating/downsizing,” and finally in 2011 it shut down the last vestiges of its design operation in England, losing a number of extremely talented (and by then senior) people.

Below is a picture taken of the design team in Bedford that worked with me on the 320C80.

320C80 Bedford Design Team (1994)

Kopin Displays and Near Eye (Followup to Seeking Alpha Article)

Kopin’s smallest transmissive color-filter pixel is bigger than nine of the smallest field-sequential-color LCOS pixels

After posting my discovery of a Himax LCOS panel on a Google Glass prototype, I received a number of inquiries about Kopin, including a request from Mark Gomes of SeekingAlpha to give my thoughts about Kopin, which were published in “Will Kopin Benefit From the Glass Wars?” In this post I am adding more information to supplement what I wrote for the Seeking Alpha article.

First, a little background on their CyberDisplay® technology would be helpful. Back in the 1990s Kopin developed a unique “lift-off” process to transfer transistors and other circuitry from a semiconductor I.C. onto a glass plate to make a transmissive panel, which they call the CyberDisplay®. Kopin’s “lift-off” technology was amazing for that era. This technology allowed Kopin to put very (for its day) small transistors on glass to enable small transmissive devices that were used predominantly in video and still camera viewfinders. The transmissive panel has 3 color dots (red, green, blue) that produce a single color pixel, similar to a large LCD screen only much smaller. In the late 1990s Kopin could offer a simple optical design with the transmissive color panel that was smaller than existing black-and-white displays using small CRTs. This product was very successful for them, but it has become a commoditized (cheap) device these many years later.

CyberDisplay pixel is large and blocks 98.5% of the light

While the CyberDisplay let Kopin address the market for what are now considered low-resolution displays cost effectively, the Achilles’ heel of the technology is that it does not scale well to higher resolution because the pixels are so large relative to other microdisplay technologies. For example, Kopin’s typical transmissive panel pixel is 15 by 15 microns and is made up of three 5 by 15 micron color “dots” (as Kopin calls them). What makes matters worse: even these very-large-pixel devices have an extremely poor light throughput of 1.5% (blocking 98.5% of the light), and scaling the pixel down will block even more light!

While not listed on the website (but included in a news release), Kopin has an 8.7 x 8.7 micron color-filter pixel (that I suspect is used in their Golden-i head-mounted display), but it blocks even more light than the 15×15 pixel because the pixel is smaller. Also, to be fair, there are CyberDisplay pixels that block “only” 93.5% of the light, but they give up contrast and color purity in exchange for light throughput, which is not usually a desirable trade.

There are many reasons why the transmissive color-filter panel's light throughput is so poor. To begin with, there are the color filters themselves, which block more than 2/3rds of the light (each dot's filter blocks the other two primary colors, plus there are other losses). Because the panel is transmissive, the circuitry and the transistor that control each pixel also block light, which becomes significant as the pixel becomes small.

But perhaps the biggest factor (and the most complex to understand, so I will only touch on it here) is that the electric field controlling the liquid crystal for a given color dot extends into the neighboring color dots, causing the colors to bleed together and lose color saturation/control. To reduce this problem they can use liquid crystal materials that have lower light throughput but are less susceptible to the neighboring electric fields, and use black masks (which block light) surrounding each color dot to hide the area where the colors bleed together.

Field Sequential Color – Small Pixels and 80+% light throughput

With reflective LCOS, all the wires and circuitry are hidden behind the pixel mirror, so none of the transistors and other circuitry block the light. Furthermore, the liquid crystal layer is usually less than half as thick, which limits the electric field spreading and allows pixels to be closer together without significantly affecting each other. And of course there are no color filters wasting more than 2/3rds of the light. The downside to field sequential color is color-field breakup: when the display moves quickly relative to the eye, the colors may not line up for a split second. The color breakup effects can be reduced by going to higher field-sequential rates.

Kopin’s pixels are huge when compared to those of field sequential LCOS devices (from companies such as Himax, Syndiant, Compound Photonics, and Citizen Finetech Miyota), which today can easily have 5 by 5 micron pixels, with some smaller than 3 by 3 microns. Therefore FSC LCOS can have about 9 times the number of pixels for roughly the same size device! And the light throughput of the LCOS devices is typically more than 80%, which becomes particularly important for outdoor use.
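The pixel-count and light-throughput comparison can be summarized with a quick sketch using only the numbers already quoted in this post (15×15 micron versus 5×5 micron pixels, and 1.5% versus roughly 80% throughput).

```python
# Compare Kopin's transmissive color-filter pixel to a small field-sequential
# color (FSC) LCOS pixel, using the numbers quoted in this post.

kopin_pixel_um = 15.0        # 15 x 15 micron color pixel (three 5 x 15 dots)
lcos_pixel_um = 5.0          # 5 x 5 micron FSC LCOS pixel
kopin_throughput = 0.015     # 1.5% light throughput
lcos_throughput = 0.80       # ~80% light throughput

area_ratio = (kopin_pixel_um / lcos_pixel_um) ** 2
print(f"FSC LCOS pixels per Kopin pixel area : about {area_ratio:.0f}x")   # 9x
print(f"Light throughput advantage of LCOS   : about "
      f"{lcos_throughput / kopin_throughput:.0f}x")                        # ~53x
```

Nine times the pixels in the same area and tens of times the light throughput is the core of the argument for why FSC LCOS scales where the color-filter transmissive panel does not.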

So while a low-resolution Kopin CyberDisplay might be able to produce an image in a headset as small as Google Glass, it would limit the device to low resolution in the future – not a good long-term plan. I’m guessing that the ability to scale to higher resolutions was at least one reason why Google went with a field sequential color device rather than starting with a transmissive panel that would have, at least initially, been easier to design with. Another important factor weighing in favor of LCOS over a transmissive panel is the light throughput, so that the display is bright enough for outdoor use.

I don’t want to be accused of ignoring Kopin’s 2011 acquisition of Forth Dimension Displays (FDD), which makes a form of LCOS. This is clearly a move by Kopin into reflective FSC LCOS. It so happens that back in 1998 and 1999 I did some cooperative work with CRL Opto (which later became FDD), and they even used a design I worked on for the silicon backplane in their first product. However, the FSC LCOS that FDD makes is considerably different, in both device design and the required manufacturing process, from what a high-volume product needs.

Through FDD’s many years of history (and several name changes), the company has drifted to a high-end, specialized display technology with large 8+ micron pixels. For the low-volume niche applications FDD is servicing, there was no need to develop more advanced silicon to support a very small device and drive electronics. Other companies aiming more at consumer products (such as Syndiant, where I was CTO) have put years of effort into building “smarter” silicon that minimized not only the size of the display but also the number of connection wires between the display and the controller, and reduced the controller to one small ASIC.

Manufacturing Challenge for Kopin

Cost-effectively assembling small-pixel LCOS devices requires manufacturing equipment and methods that are almost totally different from what Kopin does with their CyberDisplay, or FDD with their large-pixel LCOS. Almost every step in the process is done with an eye to high-volume manufacturing cost. And it is not like they can just buy the equipment and be up and running; it usually takes over a year from the time the equipment is installed to get yields up to an acceptable level. Companies such as Himax have reportedly spent around $300M developing their LCOS devices, and I know of multiple other companies having spent over $100M and many years of effort in the past.

Conclusion

For at least the reasons given above, I don’t see Kopin as currently well positioned to build competitive, high-volume head-mounted displays that meet the future needs of the market, as I think all roads lead to higher-resolution yet small devices. It would seem to me that they would need a lot of time, effort, and money to field a long-term competitive product.

Augmented Reality / Head Mounted Displays (Part 1 Real or Not?)

Augmented Reality (AR) Head Mounted Displays (HMDs) [aka Near Eye Displays, Wearable Computing, and many other names] have gotten a big boost in the public mindset with the Feb. 22, 2012 New York Times (NYT) article/leak about Google Glasses with Android. The NYT article resulted in a flurry of commentary on the web and television (Google search “Google Glasses Augmented Reality“). Reportedly Google Glasses will be available in a very limited/test-market release later in 2012 with a price between $250 and $600 (US).

Augmented Reality (AR) is the concept of combining computer information with the real world. A Head Mounted Display (HMD) is any display device that is in some way attached to the head (or to the eye, such as a contact lens). You can have AR on, say, a cell phone with a camera, where computer information is put on top of the video you see, without an HMD. Similarly, you can have an HMD that is only a display device without any AR capability. But often AR and HMD are combined, and this series of articles is mostly going to be talking about the combined use of AR and HMDs.

Some History

Augmented reality/HMDs have found their way into many films as a plot element, and this has to some degree already primed the public’s interest. It turns out it is much easier to make it work in the movies than in real life. Attempts at augmented reality go back at least as far as the 1960s. Below is a montage of just a few of the over 100 attempts at making a head-mounted display, which range from lab experiments to many failed products in the market (they failed so badly that most people don’t even know they existed).

The Airplane Test

So far HMDs have failed what I call the “I don’t see them on airplanes” test. If there is any place you should see HMDs today, it would be on people sitting on airplanes, but have you ever seen someone using one on an airplane? Why I consider this a “metric” is that the people who regularly fly are typically middle to upper-middle class, are more into small electronic gadgets (just look at what they sell in the on-board catalogs), and the environment sitting on an airplane is one that you would think would be ideal for an HMD.

Back when the iPad came out, you could tell that it was taking off just by the number of iPads you saw people using on airplanes (mostly to watch movies). Interestingly, I have seen HMDs sold in Best Buy vending machines at airports, but I have never seen one “in the wild” on an airplane. The other place I would have expected to see HMDs is on the Tokyo subways and trains, but I have not seen them there either. One has to conclude that the “use model” for the iPad works in a way that it does not for an HMD.

Augmented Reality (AR) Topics

There are so many topics/issues with Augmented Reality glasses (or whichever name you prefer) that there is too much to cover in just one blog post. In terms of implementation, there are the technical issues with the display devices and optics, the physical human-factor issues like size and weight (and whether it causes nausea), and the user interface or use-model issues and feature set (including wireless connectivity). Then there are a whole number of social/political/legal issues such as privacy, safety (distracted driving/walking), user tracking, advertisements, etc. AR is a very BIG topic.

Making a practical Augmented Reality HMD is deceptively difficult. It is a daunting task to make an HMD device that fits, is light enough to wear, small enough to go with you, produces an acceptable image, and doesn’t cost too much. And making a display that works and is cost effective is really only the starting point; the rest of the problem is making one that is useful for a variety of applications, from watching movies to real-time head-up displays.

There are a number of user interface problems related to the fact that an HMD is in some way strapped to your head/eye that make it “not work right” (act unnaturally) in terms of human interfaces. These human interface issues are probably going to be a bigger problem than the physical design of the display devices themselves. Making an HMD that “works” and is cost effective is only the starting point.

Will Google Glasses succeed where others failed?

The answer is likely that they will not have a big success with their first device, even if it is a big improvement on past efforts. Even the rumors state it is a “test market” type device, meaning that even Google is looking more to learn from the experience than to sell a lot of units. I’m sure Google has many smart people working on the device, but sometimes the problem is bigger than even the smartest people can solve.

The idea of a display device that can appear/disappear at will is compelling to many people, which is why it keeps being tried, both in movies and television as a plot element and by companies trying to build products. My sense is that we are still at least a few more technology turns of the screw away from the concept becoming an everyday device. In future articles I plan on discussing both the technical and user-interface challenges with HMDs.