Archive for Startups

Everything VR & AR Podcast Interview with Karl Guttag About Magic Leap

With all the buzz surrounding Magic Leap and this blog’s technical findings about Magic Leap, I was asked to do an interview by the “Everything VR & AR Podcast” hosted by Kevin Harvell. The podcast is available on iTunes and by direct link to the interview here.

The interview starts with about 25 minutes of my background, beginning with my early days at Texas Instruments. So if you just want to hear about Magic Leap and AR, you might want to skip ahead a bit. In the second part of the interview (about 40 minutes) we get into discussing how I went about figuring out what Magic Leap was doing. This includes discussing how the changes in the U.S. patent system signed into law in 2011 with the America Invents Act helped make the information available for me to study.

There should be no great surprises for anyone that has followed this blog. It puts in words and summarizes a lot that I have written about in the last 2 months.

Update: I listened to the podcast and noticed that I misspoke a few times; it happens in live interviews. An unfathomable mistake is that I talked about graduating college in 1972, but that was high school; I graduated from Bradley University with a B.S. in Electrical Engineering in 1976 and then received an MSEE from The University of Michigan in 1977 (and joined TI in 1977).

I also think I greatly oversimplified the contribution of Mark Harward as a co-founder at Syndiant. Mark did much more than just hire designers; he was the CEO, an investor, and ran the company while I "played" with the technology, but I think Mark's best skill was in hiring great people. Also, Josh Lund, Tupper Patnode, and Craig Waller were co-founders.

 

Evergaze: Helping People See the Real World

Real World AR

Today I would like to forget about all the hyped and glamorous near-eye products for having fun in a virtual world. Instead I'm going to talk about a near-eye device aimed at helping people see and live in the real world. The product is called the "seeBoost®" and it is made by the startup Evergaze in Richardson, Texas. I happen to know the founder and CEO Pat Antaki from working together on a near-eye display back in 1998, long before it was fashionable. I've watched Pat bootstrap this company from its earliest days and asked him if I could be the first to write about seeBoost on my blog.

The Problem

Imagine you get Age-Related Macular Degeneration (AMD) or Diabetic Retinopathy. All your high-resolution vision and best color vision from the macula (where the high-resolution fovea resides) is gone and you see something like the picture on the right. All you can use is your peripheral vision, which is low in resolution, contrast, and color sensitivity. There are over 2 million people in the U.S. who can still see but have worse than 20/60 vision in their better eye.

What would you pay to be able to read a book again and do other normal activities that require "functional vision"? So not only is Evergaze aiming to help a large number of people, they are going after a sizable and growing market.

seeBoost Overview

seeBoost has 3 key parts: a lightweight near-to-eye display, a camera with high-speed autofocus, and proprietary processing in an ASIC that remaps what the camera sees onto the functioning part of the user's vision. They put the proprietary algorithms in hardware so the image remapping and contrast enhancement are performed with extremely low latency, so that there is no perceptible delay when a person moves their head. As anyone who has used VR headsets will know, this is important for wearing the device for long periods of time without headaches and nausea.
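To make the idea of remapping concrete, below is a generic toy sketch in Python. seeBoost's actual algorithms are proprietary and unpublished, so the radial warp, the scotoma size, and the frame dimensions here are all my own illustrative assumptions, not anything Evergaze has described.

```python
import numpy as np

# Illustration only: a toy radial warp of the kind a low-vision aid might use.
# It resamples the camera frame so that content that would land on a damaged
# central region (a scotoma) is pushed outward onto still-functional retina.
def remap_around_scotoma(frame, scotoma_radius=0.25):
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    u = (xs - w / 2) / (w / 2)            # normalized output coordinates in [-1, 1]
    v = (ys - h / 2) / (h / 2)
    r = np.hypot(u, v) + 1e-9
    # Map output radius to a source radius: the full source image is compressed
    # into the annulus outside the scotoma.
    r_src = np.clip((r - scotoma_radius) / (1.0 - scotoma_radius), 0.0, 1.0)
    src_x = np.clip(((u / r) * r_src + 1.0) * w / 2, 0, w - 1).astype(int)
    src_y = np.clip(((v / r) * r_src + 1.0) * h / 2, 0, h - 1).astype(int)
    return frame[src_y, src_x]

camera_frame = np.random.randint(0, 255, (480, 640), dtype=np.uint8)  # stand-in frame
enhanced = remap_around_scotoma(camera_frame)
```

The point of doing this kind of warp in an ASIC rather than in software is exactly the latency argument above: the result has to track head motion with no perceptible lag.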

A perhaps subtle but important point is that the camera and display are perfectly coaxial, so there is no parallax error as you move an object closer to your eye. The importance of centering the camera with the user's eye for long-term comfort was a major point made by AR headset user and advocate Steve Mann in his March 2013 IEEE Spectrum article, "What I've learned from 35 years of wearing computerized eyewear". Quoting from the article, "The slight misalignment seemed unimportant at the time, but it produced some strange and unpleasant result." And in commenting on Google Glass Mr. Mann said, "The current prototypes of Google Glass position the camera well to the right side of the wearer's right eye. Were that system to overlay live video imagery from the camera on top of the user's view, the very same problems would surely crop up."
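To put a rough number on why the coaxial arrangement matters, here is a quick parallax check; the 25 mm offset is my own illustrative figure, not a measurement of any particular headset.

```python
import math

# Parallax error between the eye's line of sight and a side-mounted camera.
# The error grows rapidly as the object gets close, which is exactly the
# near-distance use case for a magnifier.
def parallax_deg(camera_offset_mm, object_distance_mm):
    return math.degrees(math.atan2(camera_offset_mm, object_distance_mm))

for distance_mm in (50, 150, 300, 1000):
    print(f"object at {distance_mm:4d} mm: "
          f"25 mm offset camera -> {parallax_deg(25, distance_mm):4.1f} deg error, "
          f"coaxial camera -> {parallax_deg(0, distance_mm):.1f} deg")
```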

Unlike traditional magnifying optics such as a magnifying glass, and in addition to remapping the camera image onto the parts of the eye that can see, the depth of field and magnification are decoupled: you can get any magnification (from 1x to 8x) at any distance (2 inches to infinity). It also has digital image color reversal (black-to-white reversal, useful for reading pages with a lot of white). The device is very lightweight at 0.9 oz. including the cable. The battery pack supports 6 hours of continual use on a single charge.

Use Case

Imagine this use scenario: playing bridge with your friends. To look at the cards in your hand you may need 2x magnification at a 12-inch distance. The autofocus allows you to simply move the cards as close to your face as you like, the way a person would naturally make something look larger. Having the camera coaxial with the display makes this all seem natural versus, say, having a camera above the eye. Looking at the table to see what cards are placed there, maybe you need 6x magnification at 2 feet. To see other people's eyes and facial expressions around the table, you need 1-2x at 3-4 feet.

seeBoost is designed to help people see so they can better take part in the simple joys of normal life. The lightweight design mounts on top of a user’s prescription glasses and can help while walking, reading signs and literature, shopping, watching television, recognizing faces, cooking, and even playing sports like golf.

Another major design consideration was keeping the device narrow so that it does not cover up the lateral and downward peripheral vision of the eye. This turns out to be important for people who don't want to further lose peripheral vision. In this application, a monocular (single-eye) design gives better situational awareness and peripheral vision.

seeBoost is a vision enhancement device rather than essentially a computer (or cell phone) monitor that you must plug into something. The user simply looks through seeBoost, and it improves their vision for whatever they're looking at, be it an electronic display or their grandchildren's faces.

Assembled in the USA and Starting to Ship

This is not just some Kickstarter concept either. Evergaze has been testing prototypes with vision-impaired patients for over a year and has already finished a number of studies. What's more, they recently started shipping product. To the left is an image that was taken through the seeBoost camera, display, and optics.

What's more, this product is manufactured in the US on a production line Evergaze set up in Richardson, TX. If you want to find out more about the company you can go to their YouTube Channel, or if you know someone that needs a seeBoost, you can contact Pat Antaki via email: pantaki@evergaze.com

Magic Leap & Hololens: Waveguide Ego Trip?

The Dark Side of Waveguides

Flat and thin waveguides are certainly impressive optical devices. It is almost magical how you can put light into what looks a lot like a thin plate of glass: a small image goes in on one side and then, with total internal reflection (TIR) inside the glass, the image comes out in a different place. They are coveted by R&D people for their scientific sophistication and loved by industrial designers because they look so much like ordinary glass.

But there is a "dark side" to waveguides, at least every one that I have seen. To make them work, the light follows a tortuous path: it often has to be bent by about 45 degrees to couple into the waveguide and then by roughly 45 degrees to couple out, in addition to rattling off the two surfaces while it TIRs. The image is just never the same quality after it goes through all this torture. Some of the light does not make all the turns and bends correctly and comes out in the wrong places, which degrades the image quality. A major effect I have seen in every diffractive/holographic waveguide is one I have come to call "waveguide glow."

Part of the problem is that when you bend light, whether by refraction or by using diffraction or holograms, the various colors of light bend slightly differently based on wavelength. The diffraction gratings/holograms are tuned for each color, but invariably they have some effect on the other colors; this is particularly a problem if the colors don't have a narrow spectrum that is exactly matched by the waveguide. Even microscopic defects cause some light to follow the wrong path, and invariably a grating/hologram meant to bend, say, green will also affect the direction of, say, blue. Worse yet, some of the light gets scattered, and that causes the waveguide glow.
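As a rough illustration of how wavelength-sensitive a diffractive coupler is, here is the basic grating equation worked through with a pitch I picked arbitrarily so that green lands near a typical TIR angle; it is not any vendor's actual design.

```python
import numpy as np

# Grating equation: sin(theta_out) = sin(theta_in) + m * lambda / d.
# Illustrative numbers only, to show how strongly the bend angle depends on
# wavelength -- one root of the color fringing and scattered "glow".
def diffracted_angle_deg(wavelength_nm, pitch_nm, incidence_deg=0.0, order=1):
    s = np.sin(np.radians(incidence_deg)) + order * wavelength_nm / pitch_nm
    return np.degrees(np.arcsin(s))

pitch_nm = 685.0  # assumed pitch, chosen so ~525 nm green couples near 50 degrees
for name, wl in [("blue  460 nm", 460), ("green 525 nm", 525),
                 ("green 535 nm", 535), ("red   630 nm", 630)]:
    print(f"{name}: bent to {diffracted_angle_deg(wl, pitch_nm):5.1f} deg")
# Even a 10 nm shift within "green" moves the angle by over a degree, and the
# other colors land at very different angles, which is why each color needs its
# own tuned grating and why broad-spectrum sources are a problem.
```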

[Image: HoloLens through-the-lens frame showing waveguide glow]
To the right is a still frame from a "through the lens" video taken through a Hololens headset. Note, this is actually through the optics and NOT the video feed that Microsoft and most other people show. What you should notice is a violet-colored "glow" beneath the white circle. There is usually also a tendency to have a glow or halo around any high-contrast object/text, but it is most noticeable when there is a large bright area.

For these waveguides to work at all, they require very high quality manufacturing which tends to make them expensive. I have heard several reports that Hololens has very low yields of their waveguide.

I haven't, nor have most people that have visited Magic Leap (ML), seen through ML's waveguide. What ML shows most if not all of their visitors are prototype systems that use non-waveguide optics, as I discussed last time. Maybe ML has solved all the problems with waveguides; if they have, they will be the first.

I have nothing personally against waveguides. They are marvels of optical science that take very intelligent people to design and very high precision manufacturing to make. It is just that they always seem to hurt image quality and they tend to be expensive.

Hololens – How Did Waveguides Reduce the Size?

Microsoft acquired their waveguide technology from Nokia. It looks almost like they found this great bit of technology that Nokia had developed and decided to build a product around it. But then when you look at Hololens (left) there is the shield to protect the lenses (often tinted, but I picked a clear shield so you could see the waveguides). On top of this there are all the other electronics and the frame to mount it on the user's head.

The space savings from using waveguides over a much simpler flat combiner are a drop in the bucket.

ODG Same Basic Design for LCOS and OLED

I'm picking Osterhout Design Group (ODG) for comparison below because they demonstrate a simpler, more flexible, and better image quality alternative to using a waveguide. I think it makes a point. Most people probably have not heard of them, but I have known of them for about 8 or 9 years (I have no relationship with them at this time). They have done mostly military headsets in the past and burst onto the public scene when Microsoft paid them about $150 million for a license to their I.P. Beyond this they just raised another $58 million from V.C.'s. Still, this is chump change compared to what Hololens and Magic Leap are spending.

Below are the ODG R7 LCOS-based glasses (with one of the protective covers removed). Note the very simple flat combiner. It is extremely low tech and much lower cost compared to the Hololens waveguide. To be fair, the R7 does not have as much in the way of sensors and processing as Hololens.

[Image: ODG R7 with a cover removed]

The point here is that by the time you put the shield on the Hololens, what difference does having a flat waveguide make to the overall size? What's more, the image quality from the simple combiner is much better.

Next, below are ODG's next generation Horizon glasses that use a 1080p Micro-OLED display. They appear to have a somewhat larger combiner (I can't tell if it is flat or slightly curved from the available pictures) to support the wider FOV and a larger outer cover, but pretty much the same design. The remarkable thing is that they can use a similar optical design with OLEDs and the whole thing is about the same size, whereas the Hololens waveguide won't work at all with OLEDs due to the broad-bandwidth colors OLEDs generate.

[Image: ODG Horizon with ~50-degree FOV]

ODG put up a short video clip through the optics of the Micro-OLED-based Horizon (they don't come out and say that it is, but the frame is from the Horizon and the image motion artifacts are from an OLED). The image quality appears to be (you can't be too quantitative from a YouTube video) much better than anything I have seen from waveguide optics. There is none of the "waveguide glow".
[Image: ODG OLED through-the-optics frame]

They even were willing to show a text image with both clear and white backgrounds that looks reasonably good (see below). It looks more like a monitor image except for the fact that it is translucent. This is hard content to display because you know what it is supposed to look like, so you know when something is wrong. Also, that large white area would glow like mad on any waveguide optics I have seen.
[Image: ODG OLED text screen through the optics]

The text on the clear background is a little hard to read at small sizes because it is translucent, but that is a fundamental issue with all see-through displays. The "black" is whatever is in the background and the "white" is the combination of the light from the image and the real-world background. See-through displays are never going to be as good as opaque displays in this regard.
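A small worked example makes the point; the brightness numbers below are illustrative assumptions, not measurements of any particular headset.

```python
# Why "black" washes out on any see-through display: the display can only add
# light on top of whatever the real world already provides.
ambient_background_nits = 100.0   # real-world light coming through the combiner (assumed)
display_white_nits = 200.0        # light the display adds for a "white" pixel (assumed)

black_seen = ambient_background_nits                       # display adds nothing for black
white_seen = ambient_background_nits + display_white_nits  # background still shows through
print(f"effective contrast: {white_seen / black_seen:.1f}:1 "
      "(an opaque monitor is typically 1000:1 or better)")
```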

Hololens and Magic Leap – Cart Before the Horse

It looks to me like Hololens and Magic Leap both started with a waveguide display as a given and then built everything else around it. They overlooked that they were building a system. Additionally, they needed to get it into many developers' hands as soon as possible to work out the myriad other sensor, software, and human factors issues. The waveguide became a bottleneck and, from what I can see of Hololens, an unnecessary burden. As my fellow TI Fellow Gene Frantz and I used to say when we were on TI's patent committee, "it is often the great new invention that causes the product to fail."

I have not (and few if any outside of Magic Leap have) seen an image through ML's production combiner; maybe they will be the first to make one that looks as good as a simpler combiner solution (I tend to doubt it, but it is not impossible). But what has leaked out is that they have had problems getting systems to their own internal developers. According to Business Insider's Oct. 24th article (with my added highlighting):

“Court filings reveal new secrets about the company, including a west coast software team in disarray, insufficient hardware for testing, and a secret skunkworks team devoted to getting patents and designing new prototypes — before its first product has even hit the market.”

From what I can tell of what Magic Leap is trying to do, namely focus planes to support vergence/accommodation, they could have achieved this faster with more conventional optics. It might not have been as sleek or "magical" as the final product, but it would have done the job, shown the advantage (assuming it is compelling), and gotten their internal developers up and running sooner.

It is even more obvious for Hololens. Using a simple combiner would have added trivially to the design size while reducing the cost and getting the SDKs into more developers' hands sooner.

Summary

It looks to me that both Hololens and likely Magic Leap put too much emphasis on using waveguides, which had a domino effect on other decisions, rather than making a holistic system decision. The way I see it:

  1. The waveguide did not dramatically make Hololens smaller (the jury is still out for Magic Leap – maybe they will pull a rabbit out of the hat). Look at ODG's designs; they are every bit as small.
  2. The image quality is worse with waveguides than with simpler combiner designs.
  3. Using waveguides boxed them in to using only display devices that were compatible with their waveguides. Most notably, they can't use OLEDs or other display technologies that emit broader-spectrum light.
  4. Even if it were smaller, it is more important to get SDKs into developers' hands (internal and/or external) sooner rather than later.

Hololens and Magic Leap appear to be banking on getting waveguides into volume production in order to solve all the image quality and cost problems with them. But it will depend on a lot of factors, some of which are not in their control, namely, how hard it is to make them well and at a price that people can afford. Even if they solve all the issues with waveguides, it is only a small piece of their puzzle.

Right now ODG seems to be taking more of the original Apple/Wozniak approach; they are finding elegance in a simpler design. I still have issues with what they are doing, but in the area of combining the light and image quality, they seem to be way ahead.

Magic Leap – No Fiber Scan Display (FSD)

Sorry, No Fiber Scan Displays

For those that only want my conclusion, I will cut to the chase. Anyone that believes Magic Leap (ML) is going to have a Laser Fiber Scanned Display (FSD) anytime soon (as in the next decade) is going to be sorely disappointed. The FSD is one of those concepts that sounds like it would work until you look at it carefully. Developed at the University of Washington in the mid to late 2000s, they were able to generate some very poor quality images in 2009 and, as best I can find, nothing better since.

The fundamental problem with this technology is that a wiggling fiber is very hard to control accurately enough to make a quality display. This is particularly true when the scanning fiber has to come to near rest in the center of the image. It is next to impossible (and impossible at a rational cost) to have the wiggling fiber tip, with finite mass and its own resonant frequency, follow a highly accurate and totally repeatable path.

Magic Leap has patent applications related to FSDs showing two different ways to try and increase the resolution, provided they could ever make a decent low-resolution display in the first place. Effectively, they have applications that double down on the FSD: one is the "array of FSDs," which I discussed in the Appendix of my last article and which would be insanely expensive and would not work optically in a near-eye system, and the other doubles down on a single FSD with what ML calls "Dynamic Region Resolution" (DRR), which I will discuss below after covering the FSD basics.

The ML patent applications on the subject of FSDs read more like technical fairy tales of what they wish they could do, with a bit of technical detail and drawings scattered in to make it sound plausible. But the really tough problems of making it work are never even discussed, much less solutions proposed.

Fiber Scanning Display (FSD) Basics

The concept of the Fiber Scanning Display (FSD) is simple enough: two piezoelectric vibrators connected to one side of an optical fiber cause the fiber tip to follow a spiral path starting from the center and working its way out. The amplitude of the vibration starts at zero and then gradually increases, causing the fiber tip to both speed up and move outward radially as it spirals. The spacing of each orbit is a function of the increase in speed.

[Image: Fiber scanning display basics]

Red, Green, and Blue (RGB) lasers are combined and coupled into the fiber at the stationary end. As the fiber moves, the lasers turn on and off to create "pixels" that come out the spiraling end of the fiber. At the end of a scan, the lasers are turned off and the drive is gradually reduced to bring the fiber tip back to the starting point under control (if they just stopped the vibration, it would wiggle uncontrollably). This retrace period, while faster than the scan, takes a significant amount of time since it is a mechanical process.

An obvious issue is how well they can control a wiggling optical fiber. As the documents point out, the fiber will want to oscillate at its resonant frequency, which can be stimulated by the piezoelectric vibrators. Still, one would expect that the motion will not be perfectly stable, particularly at the beginning when it is moving slowly and has no momentum. Then there is the issue of how well it will follow exactly the same path from frame to frame when the image is supposed to be still.

One major complication I did not see covered in any of the ML or University of Washington (which originated the concept) documents or applications is what it takes to control the lasers accurately enough. The fiber tip speeds up from near zero at the center to maximum speed at the end of the scan. If the lasers were turned on for the same amount of time and at the same brightness at the center as at the edge, the pixels would be many times closer together and brighter at the center than at the periphery. The ML applications even recognize that increasing the resolution of a single electromechanical FSD is impossible for all practical purposes.
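To get a feel for the scale of the problem, here is a rough sketch with my own assumed numbers (the resonant frequency, revolutions per frame, and scan radius are not taken from the ML or UW papers) of how much the tip speed changes over one spiral, and therefore how much the laser drive would have to be modulated just to keep the brightness uniform.

```python
import numpy as np

# Assumed spiral-scan parameters: one revolution per resonance cycle, a linearly
# growing radius, and a frame made of revs_per_frame revolutions.
f_res_hz = 15_000          # assumed fiber resonant frequency (Hz)
revs_per_frame = 250       # assumed revolutions per frame (roughly the line count)
r_max_m = 0.5e-3           # assumed spiral radius at the edge of the scan (m)

frame_time = revs_per_frame / f_res_hz            # ~1/60 s with the numbers above
t = np.linspace(0.0, frame_time, 100_000)
omega = 2.0 * np.pi * f_res_hz                    # angular rate (rad/s)
r = r_max_m * t / frame_time                      # radius grows ~linearly over the frame
tip_speed = np.hypot(r * omega, r_max_m / frame_time)  # tangential + radial components

first_ring = np.searchsorted(t, 1.0 / f_res_hz)   # end of the first revolution
print(f"tip speed on the last ring vs the first: "
      f"{tip_speed[-1] / tip_speed[first_ring]:.0f}x")
# With equal pixel clocking, pixels near the center would be packed together and
# hundreds of times brighter than at the edge, so the laser drive must be
# modulated over a huge, precisely calibrated dynamic range.
```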

Remember that they are electromechanically vibrating one end of the fiber to cause the tip to move in a spiral that covers the area of a circle. There is a limit to how fast they can move the fiber and how well they can control it, and since they want to fill a wide rectangular area, a lot of the circular scan area will be cut off.
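A quick bit of geometry shows how much of the scanned circle a 16:9 image could actually use.

```python
import math

# The largest 16:9 rectangle inscribed in a circle of diameter d has sides
# d*16/sqrt(16^2 + 9^2) and d*9/sqrt(16^2 + 9^2); compare its area to the circle's.
w_ratio, h_ratio = 16.0, 9.0
diag = math.hypot(w_ratio, h_ratio)
rect_area = (w_ratio / diag) * (h_ratio / diag)   # for a unit-diameter circle
circle_area = math.pi / 4.0                       # area of a unit-diameter circle
print(f"usable fraction of the scanned circle: {rect_area / circle_area:.0%}")  # ~54%
```

In other words, roughly half of the time the fiber spends spiraling is spent outside the displayed image before any of the other losses are considered.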

Looking through everything I could find that was published on the FSD, including Schowengerdt (ML co-founder and Chief Scientist) et al.'s SID 2009 paper "1-mm Diameter, Full-color Scanning Fiber Pico Projector" and SID 2010 paper "Near-to-Eye Display using Scanning Fiber Display Engine," only low-resolution still images are available and no videos. Below are two images from the SID 2009 paper along with the "Lenna" standard image reproduced in one of them; perhaps sadly, these are the best FSD images I could find anywhere. What's more, there has never been a public demonstration of it producing video, which I believe would show additional temporal and motion problems.
[Image: FSD images from the SID 2009 paper]

What you can see in both of the actual FSD images is that the center is much brighter than the periphery. From the Lenna FSD image you can see how distorted the image is, particularly in the center (look at Lenna's eye in the center and the brim of the hat, for example). Even the outer parts of the image are pretty distorted. They don't even have decent brightness control of the pixels and didn't even attempt to show color reproduction (which requires extremely precise laser control). Yes, the images are old, but there are a series of extremely hard problems outlined above that are likely not solvable, which is probably why we have not seen any better pictures of an FSD from ANYONE (ML or others) in the last 7 years.

While ML may have improved upon the earlier University of Washington work, there is obviously nothing they are proud enough to publish, much less a video of it working. It is obvious that none of the released ML videos use an FSD.

Maybe ML had improved it enough to show some promise to get investors to believe it was possible (just speculating). But even if they could perfect the basic FSD, by their own admission in the patent applications, the resolution would be too low to support a high resolution near eye display. They would need to come up with a plausible way to further increase the effective resolution to meet the Magic Leap hype of “50 Mega Pixels.”

Dynamic Region Resolution (DRR) – 50 Megapixels???

Magic Leap has on more than one occasion talked about the need for 50 megapixels to support the field of view (FOV) they want at the angular resolution of 1 arcminute/pixel that they say is desirable. Suspending disbelief that they could even make a good low-resolution FSD, they doubled down with what they call "Dynamic Region Resolution" (DRR).

US 2016/0328884 (‘884) “VIRTUAL/AUGMENTED REALITY SYSTEM HAVING DYNAMIC REGION RESOLUTION” shows the concept. This would appear to answer the question of how ML convinced investors that having a 50 megapixel equivalent display could be plausible (but not possible).

The application shows what could be considered a "foveated display," where various areas of the display vary in pixel density based on where they will be projected onto the human retina. The idea is to have high pixel density where the image will project onto the highest-resolution part of the eye, the fovea, and not to waste resolution on the parts of the eye that can't resolve it.

The concept is simple enough, as shown in '884's figures 17a and 17b (left). The idea is to track the pupil to see where the eye is looking (indicated by the red "X" in the figures) and then adjust the scan speed, line density, and sequential pixel density based on where the eye is looking. Fig. 17a shows the pattern for when the eye is looking at the center of the image, where they would accelerate more slowly in the center of the scan. In Fig. 17b they show the scanning density being higher where the eye is looking at some point in the middle of the image. They increase the line density in a ring that covers where the eye is looking.

Starting at the center, the fiber tip is always accelerating. For denser lines they just accelerate less, and for less dense areas they accelerate at a higher rate, so this sounds plausible. The devil is in the details of how the fiber tip behaves as its acceleration rate changes.
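To see the trade DRR is making, here is a small sketch with made-up numbers; the ring count, band size, and density boost are my assumptions, not values from the '884 application.

```python
# With a fixed number of spiral revolutions (lines) per frame, packing lines more
# densely into a "foveal" band means thinning them out everywhere else.
total_rings = 600        # assumed spiral revolutions the fiber completes per frame
fovea_fraction = 0.2     # assumed fraction of the radius covered by the high-res band
density_boost = 3.0      # desired line density in the band relative to the periphery

# Solve rings_in + rings_out = total_rings with
#   (rings_in / fovea_fraction) = density_boost * (rings_out / (1 - fovea_fraction))
rings_out = total_rings / (1.0 + density_boost * fovea_fraction / (1.0 - fovea_fraction))
rings_in = total_rings - rings_out
print(f"rings in the foveal band: {rings_in:.0f}, rings everywhere else: {rings_out:.0f}")
```

The bookkeeping is the easy part; the hard part, as the list below points out, is making a wiggling fiber actually follow these changing acceleration profiles repeatably.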

Tracking the pupil accurately enough seems very possible with today's technology. The patent application discusses how wide the band of high resolution needs to be to cover a reasonable range of eye movement from frame to frame, which makes it sound plausible. Some of the obvious fallacies with this approach include:

  1. Controlling a wiggling fiber with enough precision to meet the high resolution, and doing it repeatably from scan to scan. They can't even do it at low resolution with constant acceleration.
  2. Stability/tracking of the fiber as it increases and decreases its acceleration.
  3. Controlling the laser brightness accurately at both the highest and lowest resolution regions. This will be particularly tricky as the fiber increases or decreases its acceleration rate.
  4. The rest of the optics, including any lenses and waveguides, must support the highest resolution possible for the user to be able to see it. This means that the other optics need to be extremely high precision (and expensive).

What about Focus Planes?

Beyond the above is the need to support ML's whole focus plane ("poor person's light field") concept. To support focus planes they need 2 to 6 or more images per eye per frame time (say 1/60th of a second). The fiber scanning process is so slow that even producing a single low-resolution and highly distorted image in 1/60th of a second is barely possible, much less multiple images per 1/60th of a second to support the focus plane concept. So to support focus planes they would need an FSD per focus plane, with all its associated lasers and control circuitry; the size and cost to produce would become astronomical.
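The time budget is easy to work out; the plane counts below come from ML's own descriptions and the rest is simple arithmetic.

```python
# If a single scanner had to produce every focus plane itself, each plane's
# complete image would have to fit inside a shrinking slice of the 1/60 s frame.
frame_time_ms = 1000.0 / 60.0
for planes in (1, 2, 6):
    print(f"{planes} focus plane(s): {frame_time_ms / planes:5.2f} ms per complete image")
```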

Conclusion – A Way to Convince the Gullible

The whole FSD appears to me to be a dead end, other than to convince the gullible that it is plausible. Even getting an FSD to produce a single low-resolution image would take more than one miracle. The idea of DRR just doubles down on a concept that cannot produce a decent low-resolution image.

The overall impression I get from the ML patent applications is that they were written to impress people (investors?) who didn't look at the details too carefully. I can see how one could get sucked into the whole DRR concept, as the applications give numbers and graphs that try to show it is plausible; but they ignore the huge issues that they have not figured out.

Magic Leap Video – Optical Issues and a Resolution Estimate

As per my previous post on Magic Leap's display technology, what Magic Leap is using in their YouTube through-the-lens demos may or may not be what they will use in the final product. I'm making an assessment of their publicly available videos and patents. There is also the possibility that Magic Leap is putting out deliberately misleading videos to throw off competitors and whoever else is watching.

Optical Issues: Blurry, Chroma Aberrations, and Double Images

I have been looking at a lot of still frames from ML's "A New Morning" video, which according to ML is "Shot directly through Magic Leap technology on April 8, 2016 without use of special effects or compositing." I chose this video because it has features like text and lines (known shapes) that can better reveal issues with the optics. The overall impression looking at the images is that they are all somewhat blurry, with a number of other optical issues.

Blurry

The crop of a frame at 0:58 on the left shows details that include real-world stitching of a desk organizer with 3 red 1080p pixel dots added on top of two of the stitches. The two insets show 4X pixel-replicated blow-ups so you can see the details.

Looking at the "real world" stitches, the camera has enough resolution to capture the cross in the "t" in "Summit" and the center of the "a" in "Miura" if they were not blurred out by the optics.

Chroma Aberrations

If you look at the letter "a" in the top box, you should notice the blue blur on the right side that extends out a number of 1080p pixels. These chroma aberrations are noticeable throughout the frame, particularly at the edges of white objects. These aberrations indicate that the R, G, and B colors are not all focused together and add to the blurring.

The next question is whether the chroma aberration is caused by the camera or the ML optics. With common camera optics, chroma aberrations get worse the further you get away from the center.

In the picture on the left, taken from the same 0:53 frame, the name "Hillary" (no relation to the former presidential candidate) is near the top of the screen and "Wielicki" is near the middle. Clearly the name "Wielicki" has significantly worse chroma aberration even though it is near the center of the image. This tends to rule out the camera as the source of the aberration, as it gets worse going from the top (outside) toward the center. Based on this, it appears that the chroma aberrations are caused by the ML optics.

For those that want to see the whole frame, click on the image at the right.

Double Images

Consistently throughout the entire video there are double images that get worse the further down and further left you look in the image. These are different from the frame-update double images from last time, as they appear when there is no movement and they depend on location.

Below I have gone through a sequence of different frames to capture similar content in the upper left, center, and right (UL, UC, UR), as well as the middle (M) and lower (L) left, center, and right, and put them side by side. I did the best I could to get the best image I could find in each region (using different content for the lower left). I have done this over a number of frames, checking for focus issues and motion blur, and the results are the same: the double image is always worse at the bottom and far left.
[Image: Crops from the upper, middle, and lower regions of the frame]

The issues seen are not focus or movement problems. In particular, notice in the lower left (LL) image how the "D" has a double image displaced slightly higher and to the right. A focus problem would blur it concentrically and not in a single direction.

Usually double images of the same size are the result of reflections off flat plates. Reflections off a curved surface, such as a camera lens or curved mirror, would magnify or reduce the reflection. So this suggests that the problem has something to do with flat or nearly flat plates, which could be a flat waveguide or a flat tilted-plate combiner.

The fact that the image gets worse the further down and left you look would suggest (this is somewhat speculative) that the image is coming from near the top right corner. Generally an image will degrade more the further it has to go through a waveguide or other optics.

One more thing to notice, particularly in the three images on the right side, is the "jaggies" in the horizontal line below the text.

What, there are Jaggies? A clue to the resolution which appears to be about 720p

Something I was not expecting to see was the stair-step effect of a diagonally drawn line, particularly through the blurry optics. Almost all modern graphics rendering does "antialiasing"/smooth-edge rendering with gray-scale values that smooth out these steps, and after the losses due to the optics and camera I was not expecting to see any jaggies. There are no visible jaggies in any of the lines and text in the image, with the notable exception of the lines under the text of "TODAY" and "YESTERDAY" associated with the notification icons.

In watching the video play, it is hard to miss these lines as the jaggies move about, drawing your eye to them. The jaggies' movement is also a clue that they are moving the drawn image as the camera moves slightly.

Below I have taken one of those lines with jaggies, and below it I have simulated the effect in Photoshop with 4 lines. The results have been magnified by 2X and you may want to click on the image below to see the detail. One thing you may notice in the ML video line is that in addition to the jaggies, it appears to have thick spots in it. These thick spots between jaggies are caused by the line being both at an angle and rendered with slight perspective distortion, which causes the top and bottom of a wider-than-one-pixel line to be rendered at slightly different angles; that makes the jaggies occur in different places on the top and bottom and results in the thick sections. In the ML video line there are 3 steps on the top (pointed to by the green tick marks) and 4 on the bottom (indicated by red tick marks).
[Image: Jaggies in the ML video line compared with simulated lines]

Below the red line, I simulated the effect using Photoshop on the 1080p image and copied the color of the background to be the background for the simulation. I started with a thin rectangle that was 4 pixels high, scaled it to be very slightly trapezoidal (about a 1-degree difference between the top and bottom edges), and then rotated it to the same angle as the line in the video using "nearest neighbor" (no smoothing/antialiasing) scaling; this produced the 3rd line, "Rendered w/ jaggies." I then applied a Gaussian blur with a 2.0 pixel radius to simulate the blur from the optics, producing the "2.0 Gaussian of Jaggies" line that matches the effect seen in the ML video. I did not bother simulating the chroma aberrations (the color separation above and below the white line) that would further soften/blur the image.

Looking at the result, you will see the thick and thin spots just like the ML video. But note there are about 7 steps (at different places) on the top and bottom. Since the angle of my simulated line and the angle of the line in the ML video are the same, and making the reasonable assumption that the jaggies in the video are 1 pixel high, the resolutions should differ by the ratio of the jaggies, or about 4/7 (the ratio of the ML versus the 1080p jaggies).

Taking 1080 (lines) times 4/7 gives about 617 lines, which is what you would expect if they slightly cropped a 720p image. This method is very rough and assumes they have not severely cropped the image with the camera (which would only make them look worse).
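For those who want to reproduce the experiment without Photoshop, here is a rough Python analog; it is simplified (no perspective skew), and the angle and blur radius are my own choices rather than measurements.

```python
import numpy as np
from scipy.ndimage import rotate, gaussian_filter

# Render a thin bar rotated with nearest-neighbor resampling so it has jaggies,
# render the same bar with interpolation (antialiased), then blur both to stand
# in for the optics. The jaggies survive the blur as thick/thin lumps.
canvas = np.zeros((60, 400), dtype=float)
canvas[28:32, 20:380] = 1.0                                  # 4-pixel-tall white bar

jaggy  = rotate(canvas, angle=2.5, order=0, reshape=False)   # nearest neighbor -> jaggies
smooth = rotate(canvas, angle=2.5, order=1, reshape=False)   # interpolated -> antialiased

jaggy_blurred  = gaussian_filter(jaggy,  sigma=2.0)          # lumps remain visible
smooth_blurred = gaussian_filter(smooth, sigma=2.0)          # stays uniform in thickness

# The resolution estimate itself is just the ratio of jaggy counts:
print(round(1080 * 4 / 7))   # ~617 lines, consistent with a slightly cropped 720p image
```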

For completeness, to show the difference with what would happen if the line were rendered with antialiasing, I produced the "AA rendered" version and then did the same Gaussian blur on it. The result is similar to all the other lines in the video, where there are no detectable jaggies nor any change in the apparent thickness of the line.

OK, I can hear people saying, "But the Magazine Writers Said It Looked Good/Great"

I have often said that for a video demo, “If I can control the product or control the demo content, I choose controlling the content.” This translates to “choose demo content that looks good on your product and eliminate content that will expose its weaknesses.”

If you show videos with a lot of flashy graphics and action, with no need to look for detail and with smooth rendering, only imaging experts might notice that the resolution is low and/or that there are issues with the optics. If you put up text, use a large font so that it is easily readable, and most people will think you have resolution sufficient for reading documents; in the demo you simply don't give them a page of high-resolution text to read if you don't have high resolution.

I have been working with graphics and display devices for about 38 years and have seen a LOT of demos. Take it from me, the vast majority of people can't tell anything about resolution, but almost everyone thinks they can. For this reason, I highly discount reports from non-display experts who have not had a chance to seriously evaluate a display. Even an imaging expert can be fooled by a quick, well-done demo or a direct or indirect financial motive.

Now, I have not seen what the article writers and the people that invested money (and their experts) have seen. But what I hopefully have proven to you is that what Magic Leap has shown in their YouTube videos is of pretty poor image quality by today's standards.

Magic Leap Focus Effects
[Image: 0:41 Out of Focus]

[Image: 0:47 Becoming In Focus]

[Image: 1:00 Sharpest Focus]

[Image: 1:05 Back Out Of Focus]

Magic Leap makes a big point of the importance of “vergence,” which means that the apparent focus agrees with the apparent distance in 3-D space. This is the key difference between Magic Leap and say Microsoft’s Hololens.

With only one lens/eye you can't tell the 3-D stereo depth, so you have to rely on how the camera focuses. You will need to click on the thumbnails above to see the focus effects in the various still captures.

They demonstrate the focus effects with the "Climbing Everest" sequence in the video. ML was nice enough to put some curled-up Post-It (TM) type tabs in the foreground (in particular, watch the yellow smiley face in the lower left) and a water bottle and desk organizer (with small stitches) in the background.

Toward the end of the sequence (click on the 1:05 still) you can see that the Mount Everest information, which is at an angle relative to the camera, is highly out of focus on the left-hand side and gets better toward the right-hand side, while the "Notices" information, which appears to be further away, is comparatively in focus. Also notice how the stitches in the desk organizer in the real world, which appear to be at roughly the same angle as the Everest information, go from out of focus on the left to more in focus on the right, agreeing with what is seen in the projected image.

This focus rake appears to be conclusive proof that there is focus depth in the optical system in this video. Just to be complete, it would be possible to fake the effect just for the video by having the computer blur the image synchronously with the focus rake. But I doubt they "cheated" in this way, as outsiders have reported seeing the focusing effect in live demos.

In the 1:05 frame capture the “15,000 ft” in the lower left is both out of focus and has a double image which makes it hard to tell which are deliberate/controllable focusing effects and which are just double images due to poor optics. Due to the staging/setup, the worst part of the optics matches what should be the most out of focus part of the image.  This could be a coincidence or they may have staged it that way.

Seeing the Real World Through Display

Overall, seeing the real world through the display looks very good and without significant distortion. I didn't get any hints as to the waveguide/combiner structure. It would be interesting to see what, say, a computer monitor or another light source would look like shining through the display.

The lighting in the video is very dark; the white walls are dark gray due to a lack of light except where some lamps act as spotlights on them. The furniture and most of the other things on the desk are black or dark (I guess the future is going to be dark and have a lot of black furniture and other things in it). This setup helps the generated graphics stand out. In a normally lit room with white walls, the graphics will have to be a lot brighter to stand out; there are limits to how much you can crank up the brightness without hurting people's eyes, or there will have to be darkening shades as seen with Hololens.

Conclusion

The resolution appears to be about 720p, and the optics are not up to showing even that resolution. I have been quite critical of the display quality because it really is not good. There are image problems that are many pixels wide.

On the plus side, they are able to demonstrate the instantaneous depth of field with their optical solution and the view of the real world looks good so far as they have shown.  There may be issues with the see-through viewing that are not visible in these videos in a fairly dark environment.

I also wonder how the resolution translates into the FOV versus angular resolution, and how they will ever support multiple simultaneous focus planes.  If you discount a total miracle from their fiber scanned display happening anytime soon (to be covered next time), 720p to at most 1080p is about all that is affordable in a microdisplay today, particularly when you need one for each eye, in any production technology (LCOS, DLP, or Micro-OLED) that will be appropriate for a light guide.  And this is before you consider that to support multiple simultaneous focus planes, they will need multiple displays or a higher resolution display that they cut down. To me as a technical person who studied displays for about 18 years, this is a huge ask.

Certainly Magic Leap must have shown something that impressed some very big-name investors enough to invest $1.4B. Hopefully it is something Magic Leap has not shown yet.

Next Time: Magic Leap’s Fiber Scanned Display

I have been studying the much-promoted Magic Leap Fiber Scan Display (FSD). It turns out their patents suggest two ways of using this technology:

  1. A more conventional display that can be used in combination with a waveguide with multiple focus layers.
  2. To directly generate a light field from an array of FSDs

I plan to discuss the issues with both approaches next time. To say the least, I'm highly doubtful that either method is going to be in volume production any time soon, and I will try to outline my reasons why.

Asides: Cracking the Code
Enigma

I was wondering whether the jaggies were left in as an “image generation joke” for insiders or just sloppy rendering. They are a big clue as to the native resolution of the display device that came through the optical blur and the camera’s resolving power.

It is a little like when the British were breaking the Enigma code in WWII. A big help in breaking Enigma was sloppy transmitting operators giving them what they called "cribs," or predictable words or phrases. On a further aside, Bletchley Park, where they cracked the Enigma code, is near Bedford, England, where I worked and occasionally lived over a 16-year period. Bletchley Park is a great place to visit if you are interested in computer history (there is also a computer museum at the same location). BTW, the movie "The Imitation Game" is enjoyable but lousy history.

Solving the Display Puzzles

Also, I am not claiming to be infallible in trying to puzzle out what is going on with the various technologies. I have changed my mind/interpretation of what I am seeing in the videos a number of times, and some of my current conclusions may have alternative explanations. I definitely appreciate readers offering their alternative explanations, and I will try to see if I think they fit the facts better.

Magic Leap's work is particularly interesting because they have made such big claims, raised so much money, are doing something different, and have released tantalizingly little solid information. It also seems that a good number of people are expecting Magic Leap to do a lot more with their product than may be feasible at a volume price point, or even possible at any cost, at least for a number of years.

Magic Leap – The Display Technology Used in their Videos

So, what display technology is Magic Leap (ML) using, at least in their posted videos? I believe the videos rule out a number of the possible display devices, and by a process of elimination that leaves only one likely technology. Hint: it is NOT the laser fiber scanning prominently shown in a number of ML patents and articles about ML.

Qualifiers

Magic Leap could be posting deliberately misleading videos and/or deliberately bad videos to throw off people analyzing them, but I doubt it. It is certainly possible that the display technology shown in the videos is a prototype that uses different technology from what they are going to use in their products. I am hearing that ML has a number of different levels of systems. So what is being shown in the videos may or may not be what they go to production with.

A “Smoking Gun Frame” 

So with all the qualifiers out of the way, below is a frame capture from Magic Leap's "A New Morning" taken while they are panning the headset and camera. The panning action causes a temporal (time-based) frame/shutter artifact in the form of partial ghost images, a result of the camera and the display running asynchronously and/or at different frame rates. This one frame, along with other artifacts you don't see when playing the video, tells a lot about the display technology used to generate the image.

If you look at the left red oval you will see at the green arrow a double/ghost image starting and continuing below that point. This is where the camera caught the display in its update process. Also, if you look at the right side of the image, you will notice that the lower 3 circular icons (in the red oval) have double images where the top one does not (the 2nd from the top has a faint ghost as it is at the top of the field transition). By comparison, there is not a double image of the real world's lamp arm (see center red oval), verifying that the roll bar is from the ML image generation.

Update 2016-11-10: I have uploaded the whole frame for those that would want to look at it. Click on the thumbnail at left to see the whole 1920×1080 frame capture (I left in the highlighting ovals that I overlaid).

Update 2016-11-14: I found a better "smoking gun" frame, below, at 1:23 in the video. In this frame you can see the transition from one frame to the next. In playing the video, the frame transition slowly moves up from frame to frame, indicating that the display and camera are asynchronous but at almost the same frame rate (or an integer multiple thereof, like 1/60th or 1/30th).

[Image: "Smoking gun" frame at 1:23 showing a frame transition]
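The slow upward drift is exactly what a small rate mismatch predicts; the 59.94 Hz camera rate below is an assumption for illustration, not a measured value.

```python
# If the display and camera run asynchronously at nearly (but not exactly) the
# same rate, the tear position shifts by the fractional rate difference on every
# captured frame, producing a slowly creeping transition line.
display_hz = 60.0
camera_hz = 59.94   # a common camera rate; assumed for illustration
drift_per_frame = abs(display_hz - camera_hz) / camera_hz   # fraction of frame height
print(f"tear moves {drift_per_frame:.4%} of the image height per captured frame")
print(f"~{1 / drift_per_frame:.0f} frames (~{1 / drift_per_frame / camera_hz:.0f} s) "
      "for it to sweep the whole image")
```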

In addition to the "smoking gun" frame above, I have looked at the "A New Morning" video as well as the "ILMxLAB and 'Lost Droids' Mixed Reality Test" and the early "Magic Leap Demo," which are stated to be "Shot directly through Magic Leap technology . . . without use of special effects or compositing." I was looking for any other artifacts that would be indicative of the various possible technologies.

Display Technologies it Can’t Be

Based on the image above and other video evidence, I think it is safe to rule out the following display technologies:

  1. Laser Fiber Scanning Display – either a single fiber scanning display or an array of them, as shown in Magic Leap's patents and articles (and which their CTO is famous for working on prior to joining ML). A fiber scan display scans in a spiral (or, if they are arrayed, an array of spirals) with a "retrace/blanking" time to get back to the starting point. This blanking would show up as diagonal black line(s) and/or flicker in the video (sort of like the horizontal black retrace line an old CRT would show). Also, if it were laser fiber scanning, I would expect to see evidence of laser speckle, which is not there. Laser speckle will come through even if the image is out of focus. There is nothing in this image and its video to suggest that there is a scanning process with blanking or that lasers are being used at all. Through my study of laser beam scanning (and I am old enough to have photographed CRTs) there is nothing in the still frame nor the videos that is indicative of a scanning process that has a retrace.
  2. Field Sequential DLP or LCOS – There is absolutely no field sequential color rolling, flashing, or flickering in the video or in any still captures I have made. Field sequential displays show only one color at a time, very rapidly. When these rapid color field changes beat against the camera's scanning/shutter process, they show up as color variances and/or flicker, and not as a simple double image. This is particularly important because it has been reported that Himax, which makes field sequential LCOS devices, is making projector engines for Magic Leap. So either they are not using Himax or they are changing technology for the actual product. I have seen many years of DLP and LCOS displays, both live and through many types of video and still cameras, and I see nothing that suggests field sequential color is being used.
  3. Laser Beam Scanning with a mirror – As with CRTs and fiber scanning, there has to be a blanking/retrace period between frames that would show up in the videos as a roll bar (dark and/or light) that would roll/move over time. I'm including this just to be complete, as this was never suggested anywhere with respect to ML.
UPDATE Nov 17, 2016

Based on other evidence that has recently come in, even though I have not found video evidence of field sequential color artifacts in any of the Magic Leap videos, I'm more open to thinking that it could be LCOS or (less likely) DLP, and maybe the camera sensor is doing more to average out the color fields than other cameras I have used in the past.

Display Technologies That it Could Be 

Below are a list of possible technologies that could generate video images consistent with what has been shown by Magic Leap to date including the still frame above:

  1. Micro-OLED (about 10 known companies) – Very small OLEDs on silicon or similar substrates. A list of some of the known makers is given at OLED-info (Epson has recently joined this list, and I would bet that Samsung and others are working on them internally). Micro-OLEDs both A) are small enough to inject an image into a waveguide for a small headset and B) have display characteristics that behave the way the image in the video is behaving.
  2. Transmissive Color Filter HTPS (Epson) – While Epson was making transmissive color filter HTPS devices, their most recent headset has switched to a Micro-OLED panel, suggesting they themselves are moving away. Additionally, while Meta's first generation used Epson's HTPS, they moved to a large OLED (with a very large spherical reflective combiner). This technology is challenged in going to high resolution and small size.
  3. Transmissive Color Filter LCOS (Kopin) – Kopin is the only company making color filter transmissive LCOS, but they have not been very active lately as a component supplier and they have serious issues with a roadmap to higher resolution and smaller size.
  4. Color Filter Reflective LCOS – I'm putting this in here more for completeness, as it is less likely. While in theory it could produce the images, it generally has lower contrast (which would translate into a lack of transparency and a milkiness to the image) and lower color saturation. This would fit with Himax as a supplier, as they have color filter LCOS devices.
  5. Large Panel LCD or OLED – This would suggest a large headset that is doing something similar to the Meta 2. I would tend to rule this out because it would go against everything else Magic Leap shows in their patents and what they have said publicly. It's just that it could have generated the image in the video.
And the “Winner” is I believe . . . Micro-OLED (see update above) 

By a process of elimination, including getting rid of the "possible but unlikely" ones from above, it strongly points to a Micro-OLED display device. Let me say, I have no personal reason to favor it being Micro-OLED; one could argue it might be to my advantage, based on my experience, for it to be LCOS if anything.

Before I started any serious analysis, I didn't have an opinion. I started out doubtful that it was a field sequential or scanning (fiber/beam) device due to the lack of any indicative artifacts in the video, but it was the "smoking gun frame" that convinced me: if the camera was catching temporal artifacts, it should have been catching those other artifacts as well.

I'm basing this conclusion on the facts as I see them. Period, full stop. I would be happy to discuss this conclusion (if asked rationally) in the comments section.

Disclosure . . . I Just Bought Some Stock Based on My Conclusion and My Reasoning for Doing So

The last time I played this game of "what's inside," I was the first to identify that a Himax LCOS panel was inside Google Glass, which resulted in their market cap going up almost $100M in a couple of hours. I had zero shares of Himax when this happened; my technical conclusion now, as it was then, is based on what I saw.

Unlike my call on Himax in Google Glass, I have no idea which company makes the device Magic Leap appears to be using, nor whether Magic Leap will change technologies for their production device. I have zero inside information and am basing this entirely on the information I have given above (you have been warned). Not only is the information public, but it is based on videos that are many months old.

I looked at the companies on the OLED Microdisplay List by www.oled-info.com (who has followed OLED for a long time). It turned out all the companies were either part of a very large company or were private, except for one, namely eMagin.

I have known of eMagin since 1998, and they have been around since 1993. They essentially mirror Microvision, which does laser beam scanning and was also founded in 1993, a time when you could go public without revenue. eMagin has spent/lost a lot of shareholder money and is worth about 1/100th of their peak in March 2000.

I have NOT done any serious technical, due diligence, or other stock analysis of eMagin and I am not a stock expert. 

I'm NOT saying that eMagin is in Magic Leap. I'm NOT saying that Micro-OLED is necessarily better than any other technology. All I am saying is that I think someone's Micro-OLED technology is being used in the Magic Leap prototype, and that Magic Leap is such a hotly followed company that it might (or might not) affect the stock price of companies making Micro-OLEDs.

So, unlike the Google Glass and Himax case above, I decided to place a small "stock bet" (for me) on my ability to identify the technology (but not the company) by buying some eMagin stock on the open market at $2.40 this morning, 2016-11-09 (symbol EMAN). I'm just putting my money where my mouth is, so to speak (and NOT, once again, giving stock advice), and playing a hunch. I'm just making a full disclosure in letting you know what I have done.

My Plans for Next Time

I have some other significant conclusions I have drawn from looking at Magic Leap’s video about the waveguide/display technology that I plan to show and discuss next time.

Wrist Projector Scams – Ritot, Cicret, the new eyeHand

Wrist projectors are the crowdfunding scam that keeps on giving, with new ones cropping up every 6 months to a year. When I say scam, I mean that there is zero chance that they will ever deliver anything even remotely close to what they are promising. They have obviously "Photoshopped"/fake pictures to "show" projected images that are not even close to possible in the real world and that violate the laws of physics (they are forever impossible). While I have pointed out in this blog where I believe that Microvision has lied to and misled investors and shown very fake images with their laser beam scanning technology, even they are not total scammers like Ritot, Cicret, and eyeHand.

According to Ritot’s Indiegogo campaign, they have taken in $1,401,510 from 8,917 suckers (they call them “backers”).   Cicret, according to their website, has a haul of $625,000 from 10,618 gullible people.

Just when you thought Ritot and Cicret had found all the suckers for wrist projectors, CrowdFunder reports that eyeHand has raised $585,000 from individuals and claims to have raised another $2,500,000 in equity from “investors” (if they are real, then they are fools; if not, then it is just part of the scam). A million here, $500K there, and pretty soon you are talking real money.

Apparently Dell’s marketing is believing these scams (I would hope their technical people know better) and has shown video ads with a similarly impossible projector.  One thing I will give them is that they did a more convincing “simulation” (no projecting of “black”), and they say in the ads that these are “concepts” and not real products. See for example the following stills from Dell’s videos (click to see larger image).  It looks to me like they combined a real projected image (with the projector off camera and perpendicular to the arm/hand) and then added fake projector rays to try to suggest it came from the dummy device on the arm:

dell-ritots-three

Ritot was the first of these scams I was alerted to, and I helped contribute some technical content to the DropKicker article http://drop-kicker.com/2014/08/ritot-projection-watch/. I am the “Reader K” that they thanked in the author’s note at the beginning of the article.  A number of others have called out Ritot and Cicret as scams, but that did not keep them from continuing to raise money, nor has it stopped the new copycat eyeHand scam.

Some of the key problems with wrist projectors:

  1. Very shallow angle of projection.  Projectors normally project on a surface that is perpendicular to the direction of projection, but the wrist projectors have to project onto a surface that is nearly parallel to the direction of projection.  Their concepts show a projector that is only a few (2 to 4) millimeters above the surface. When these scammers later show “prototypes” they radically change the projection distance and projection angle.
  2. Extremely short projection distance.  The near side of the projection is only a few millimeters away while the far side of the image could be 10X or 50X further away.  No optics or laser scanning technology on earth can do this.  There is no way to get such a wide image at such a short distance from the projector.  Since light falls off with the square of distance, this results in an impossible illumination problem: the far side would be over 100X dimmer than the near side (a rough numerical sketch follows this list).
  3. Projecting in ambient light – All three of the scammers show concept images where the projected image is darker than the surrounding skin.  This is absolutely impossible and violates the laws of physics.   The “black” of the image is set by the ambient light and the skin; the projector can only add light, it cannot remove it.  This shows ignorance of, and/or a callous disregard for, the truth by the scammers.
  4. The blocking of the image by hairs, veins, and muscles.  At such a shallow angle (per #1 above) everything is in the way.
  5. There is no projector small enough.  The projector engines (with their electronics) that exist today are more than 20X bigger in volume than what would be required to fit.
  6. The size of the orifice through which the light emerges is too small to support the size of the image they want to project.
  7.  The battery required to make them daylight readable would be bigger than the whole projector that they show.  These scammers would have you believe that a projector could work off a trivially small battery.
  8. Cicret and eyeHand show “touch interfaces” that won’t work due to the shallow angle.  The shadows cast by fingers working the touch interface would block the light to the rest of the image and make “multi-touch” impossible.   This also goes back to the shallow-angle issue #1 above.
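
To put rough numbers on issues #1 and #2, here is a small sketch using assumed dimensions of my own choosing (they are not measurements of any actual device). For a point a distance x along the arm and an emitter at height h above the skin, the range is r = sqrt(x² + h²) and the light lands at a grazing angle, so the irradiance on the skin scales as cos(incidence)/r² = h/r³.

```python
# Minimal sketch of the illumination fall-off across a wrist-projected image.
# All dimensions below are assumptions for illustration, not measured values.
import math

def relative_irradiance(x_mm: float, h_mm: float) -> float:
    """Irradiance (arbitrary units) at a point x_mm along the arm from an
    emitter h_mm above the skin: cos(incidence)/r^2 = h/r^3."""
    r = math.hypot(x_mm, h_mm)
    return h_mm / r**3

h = 4.0      # assumed emitter height above the skin, mm
near = 10.0  # assumed near edge of the image, mm from the projector
far = 100.0  # assumed far edge of the image, mm from the projector

ratio = relative_irradiance(near, h) / relative_irradiance(far, h)
print(f"Near edge is roughly {ratio:.0f}x brighter than the far edge")
# With these numbers the near edge comes out on the order of 800x brighter
# than the far edge, consistent with the "over 100X" point above.
```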

The issues above hold true whether the projection technology uses DLP, LCOS, or Laser Beam Scanning.

Cicret and Ritot have both made “progress reports” showing stills and videos using projectors more than 20 times bigger, and positioned much higher and farther away (to reduce the projection angle), than the sleek wristwatch models they show in their 3-D CAD renderings.   Even then, they keep off-camera much/most of the electronics and battery/power supply needed to drive the optics that they show.

The image below is from a Cicret “prototype” video from February 2015 in which they simply strapped a Microvision ShowWX+ HDMI upside down to a person’s wrist (I wonder how many thousands of dollars they spent engineering this prototype). They goofed in the video and showed enough of the projector that I could identify (red oval) the underside of the Microvision projector (the video also shows the distinctive diagonal roll bar of a Microvision LBS projector).  I have shown the rest of the projector, roughly to scale, in the image below; they cropped it off when shooting the video.  What you can’t tell in this video is that the projector is also a couple of inches above the surface of the arm in order to project a reasonable image.

cicret-001b

So you might think Cicret was going to use laser beam scanning, but no, their October 2016 “prototype” shows a panel (DLP or LCOS) projector.  Basically, it looks like they are just clamping whatever projector they find to a person’s wrist; there is no technology of their own being developed.  In this latest case, it looks like they found a small production projector, took its guts out, and put them in a 3-D printed case.  Note that the top of the case is going to be approximately 2 inches above the person’s wrist, and note how far away the image is from the projector.

cicret-002e

Ritot has also made updates to keep their suckers on the hook.   Apparently Indiegogo’s only rule is that you must keep lying to your “backers” (for more on the subject of how Indiegogo condones fraud, click here).  These updates at best show how little these scammers understand projection technology.   I guess one could argue that they were too incompetent to know they were lying.

ritot-demo-2014

On the left is a “demo” Ritot showed in 2014 after raising over $1M.  It is simply an off-the-shelf development-system projector, and note that there is no power supply.  Note also that they are showing it straight on/perpendicular to the wrist from several inches away.

ritot-2015

By 2015 Ritot had their own development system and some basic optics.  Notice how big the electronics board is relative to the optics, and that even this does not show the power source.

By April 2016 they showed an optical engine (ONLY) strapped to a person’s wrist.

ritot-2016-04-20-at-25s

Cut off in the picture is all the video drive electronics (see the flex cable in the red oval), which is off camera and likely a driver board similar to the one in the 2015 update, along with the power supply/battery.

In the April 2016 update, notice how the person’s wrist is bent to make it more perpendicular to the direction of the projected image.  Also note that the image is distorted and about the size of an Apple Watch’s screen.   I will also guarantee that you will not have a decent viewable image when used outdoors in daylight.

The eyeHand scam has not shown anything like a prototype, just a poorly faked (projecting “black”) image.  From the low angle they show in their fake image, the projected image would be blocked by the base of the thumb even if the person holds their hand flat.  To make it work at all, they would have to move the projector well up the person’s arm and then bend the wrist, but then the person could not view it very well unless they hold their arm at an uncomfortable angle.  Then you have the problem of keeping the person from moving/relaxing their wrist and losing the projection surface.   And of course it would not be viewable outdoors in daylight.

It is not as if others haven’t tried to point out that these projectors are scams.  Google “Ritot scam” or “Cicret scam” and you will find a number of references.  As best I can find, though, this blog is the first to call out the eyeHand scam:

  • The most technically in depth article was by Drop-Kicker on the Ritot scam
  • Captain Delusional has a comic take on the Cicret scam on YouTube – he has some good insights on the issue of touch control but also makes some technical mistakes, such as his comments on laser beam scanning (you can’t remove the laser-scanning roll bar by syncing the camera, and laser scanning has the same fall-off in brightness due to the scanning process).
  • Geek Forever had an article on the Ritot Scam 
  • A video about the Ritot Scam on Youtube
  • KickScammed about Ritot from 2014

The problem with scam startups is that they tarnish all the other startups trying to find a way to get started.  Unfortunately, the best liars/swindlers often do the best with crowdfunding.  The more they are willing to lie/exaggerate, the better it makes their product sound.

Indiegogo has proven time and again to have extremely low standards (basically, if the company keeps posting lies, they are good to go – MANY people tried to tell Indiegogo about the Ritot scam, but to no avail, before Ritot got the funds). Kickstarter has some standards, but the bar is not that high; at least I have not seen a wrist projector on Kickstarter yet. Since the crowdfunding sites get a cut of the action whether the project delivers or not, their financial incentives are on the side of the companies rather than the people doing the funding. There is no bar at all for companies that go with direct websites; it is purely caveat emptor.

I suspect that since the wrist projector scam has worked at least three (3) times so far, we will see others using it.   At least with eyeHand you have a good idea of what it will look like in two years (hint – like Ritot and Cicret).

Desperately Seeking the Next Big Thing – Head Mounted Displays (HMDs) — Part 1

With Microsoft’s big announcement of HoloLens and a reported $150 million spent just for HMD IP from the small Osterhout Design Group, reports of Facebook spending about $2 billion for Oculus Rift, and the mega publicity surrounding Google Glass and the hundreds of millions they have spent, Head Mounted Displays (HMDs) are certainly making big news these days.

Most of the articles I have seen pretty much just parrot the company press releases and hype these up as the next big thing.   Many of the articles have, to say the least, dubious technical content and at worst give misinformation.   My goal is to analyze the technology, and much of what I am seeing and hearing does not add up.

The question is whether these are big-budget lab experiments by companies jumping the gun and chasing each other, or whether HMDs really are going to be big in the sense of everyone using them.    Or are the companies just running scared that they might miss the next big thing after cell phones and tablets?   Will HMDs reach numbers rivaling cell phones (or at least a significant fraction)?    Or perhaps is there a “consolation prize market,” which for HMDs would be taking a significant share of the game market?

Let me get this out of the way:  yes, I know there is a lot of big money and many smart people working on the problem.   The question is whether the problem is bigger than what is solvable.  I know I will hear from all the people with 20/20 hindsight citing successful analogies (often Apple), but for every success there are many more that failed to catch on in a big way or had minor success and then dived.   As examples, consider the investment in artificial intelligence (AI) and related computing in the 1980s, and the Intel iAPX 432 (once upon a time Intel was betting the farm on the 432 to be the replacement for the 8086, until the IBM PC took off).    More recently, and more directly related, 3-D TV has largely failed.  My point here is that big companies and lots of smart people make the wrong call on future markets all the time; sometimes the problem is bigger than all the smart people and money can solve.

Let me be clear: I am not talking about HMDs used in niche/dedicated markets.  I definitely see uses for HMDs in applications where hands-free operation is a must.  A classic example is military applications, where a soldier has to keep his hands free, is already wearing a helmet that messes up his hair, doesn’t care what he looks like, and spends many hours in training.   There are also uses for HMDs in the medical field, as a visual aid for doctors and for helping people with impaired vision.  What I am talking about is whether we are on the verge of mass adoption.

Pardon me for being a bit skeptical, but on the technical side I still see some tremendous obstacles to HMDs.    As I pointed out on this blog soon after Google Glass was announced (http://www.kguttag.com/2012/03/03/augmented-reality-head-mounted-displays-part-1-real-or-not/), HMDs have a very long history of not living up to expectations.

I personally started working on an HMD in 1998 and learned about many of the issues and problems associated with them.    There are the obvious measurable issues like size, weight, fit/comfort, whether you can wear them with your glasses, display resolution, brightness, ruggedness, storage, and battery life.   Then there are what I call the “social issues,” like how geeky it looks, whether it messes up a person’s hair, and taking video (a particularly hot topic with Google Glass).   But perhaps the most insidious problems are what I lump into the “user interface” category, which includes input/control, distraction/safety, nausea/disorientation, and what I loosely refer to as “it just doesn’t work right.”   These issues only touch on what I sometimes jokingly refer to as “the 101 problems with HMDs.”

A lot is made of the display device itself, be it a transmissive LCD, liquid crystal on silicon (LCOS), OLED, or TI’s DLP.    I have about 16 years of history working on display devices, particularly LCOS, and I know the pros and cons of each one in some detail.   But as it turns out, the display device and its performance are among the least of the issues with HMDs; I had a very good LCOS device way back in 1998.   As with icebergs, the biggest problems are the ones below the surface.

This first article just sets up the series.  My plan is to go into the various aspects of and issues with HMDs, trying to be as objective as I can, with a bit of technical analysis.    My next article will be on the subject of “One eye, two eyes, transparent or not.”

Navdy Launches Pre-Sale Campaign Today

Bring Jet-Fighter Tech to Your Car with Navdy

It’s LAUNCH day for Navdy as our pre-sale campaign starts today. You can go to the Navdy site to see the video.  It was a little over a year ago that Doug Simpson contacted me via this blog asking about how to make an aftermarket heads-up display (HUD) for automobiles.     We went through an incubator program called Highway1, sponsored by PCH International, that I discussed in my last blog entry.

The picture above is a “fancy marketing image” that tries to simulate what the eye sees (which, as it turns out, is impossible to do with a camera).   We figured out how to do some pretty interesting stuff, and the optics work better than I thought was possible when we started.    The image focuses beyond the “combiner/lens” to help the driver see it in their far vision, and it is about 40 times brighter than an iPhone (for use in bright sunlight) while being very efficient.
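
As a back-of-the-envelope check on that brightness figure, here is a tiny sketch. The iPhone luminance number below is my assumption (roughly the peak luminance of phones of that era), not a figure from Navdy.

```python
# Rough arithmetic only; the iPhone peak luminance is an assumed value.
iphone_nits = 550        # assumed iPhone peak luminance, cd/m^2
hud_multiplier = 40      # "about 40 times brighter" from the text above
hud_nits = iphone_nits * hud_multiplier
print(f"Implied HUD image luminance: about {hud_nits:,} cd/m^2")
# ~22,000 cd/m^2, the sort of luminance needed for an image to remain
# readable against a sunlit road scene seen through the combiner.
```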

Navdy Office

Being CTO at a new start-up has kept me away from this blog (a start-up is very time consuming).  We have raised some significant initial venture capital to get the program off the ground, and the pre-sale campaign takes it to the next level to get products to market.  In the early days it was just me and Doug, but now we have about a dozen people and growing.

Karl

Highway1 Incubator

Those that follow my blog are probably wondering what has happened to me these past months.   I have been away from home for most of the last 4 months at an “incubator” program for start-ups called Highway1.   Navdy, for which I recently became CTO, was selected as one of 11 companies from over 100 applicants for the very first class of the Highway1 program sponsored by PCH International.

What makes Highway1 different from almost all other incubator programs these days is that it is totally focused on helping hardware start-ups.   Highway1 recognizes that hardware start-ups have special needs, are more difficult to get started, and, unlike software companies, have to deliver a physical product.

The Highway1 office is in the Mission District of San Francisco, where most of the time is spent, but the program also includes spending two weeks in Shenzhen, China, where many of the electronic products used around the world are made.   During the program, companies are introduced to mentors from other companies and experts in the field, as well as helped with introductions to angel and venture investment firms.

While in Shenzhen, the companies were introduced to manufacturers who could eventually make their products.   Additionally, our company received some very crucial support from PCH in Shenzhen in locating a company that could manufacture a critical component of our system.

Along the way, the people at the 11 companies became friends and helped each other out.  Respecting each other was particularly important as the companies cranked out prototypes, sharing first one and later two 3-D printers (as demo day neared, those 3-D printers were pretty much running non-stop).   There was some incredible technical, marketing, and business talent at these companies.

At the end of the program was “Demo Day,” where more than 200 venture capitalists, investors, press, and technologists packed a large room at PCH’s U.S. headquarters in San Francisco.  It was a chance for investors and the press to see what the companies had developed.   While Navdy presented, details of our product and plans were not released to the press because we are planning on launching our product later this year.  Navdy did receive serious interest from a number of VCs with our demo after the formal presentations.

The whole Highway1 program was the dream of Liam Casey, the founder and CEO of PCH, a company with over $700M in revenue.  You may not know the PCH name, but it is very likely that you have brand-name products that they helped get to your home or office (anywhere in the world).   Liam was personally there to greet us at the beginning of the program and at key points along the way, and he told some great business stories.  The whole of the PCH team, be it the people from San Francisco, China, or Ireland, was always awesome to work with and incredibly nice, reflecting PCH’s founder.

Comment: I don’t usually use the word “awesome” but the word was ubiquitous in San Francisco and it seemed to fit the people at PCH.