Magic Leap – How Rigged Was The Rolling Stone Demo?

Today I have a brief article based on a tidbit I picked up at CES last week.

As told to me by a trustworthy source, Magic Leap’s demo room has picture frames rigged with “markers.” Looking at the sigur rós / Magic Leap video linked in the December 20th, 2017 Rolling Stone article, the room is very dark, but you can just make out the gold frames on the wall in the background (see the still image from 0:05 on the left). Enhancing the exposure of the still image makes four picture frames visible (red arrows).

As reported to me, the markers are there to lock pictures to the frames. While these picture frames are not used directly in the sigur rós video, their presence indicates that the video was shot in Magic Leap’s demo room as described.

Making the room unnaturally dark is a common trick to make virtual images appear more solid, as well as to hide other things you don’t want the user to see. Adding markers to the frames is a crutch to make up for poor registration and SLAM (simultaneous localization and mapping). It also makes me suspect that there are other sensors hidden around the room and under the table.
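To illustrate why markers are such a crutch, here is a toy sketch (my own illustration, not Magic Leap’s actual method): once a known fiducial marker’s corners are detected in the camera image, anchoring virtual content to it is trivial geometry, with no SLAM map required. The `anchor_to_marker` function and the corner values below are hypothetical.

```python
# Toy 2D illustration of marker-based registration. Real trackers
# (e.g. ArUco-style systems) solve a full 3D pose from the marker's
# corners; this sketch only shows why a known marker makes "locking"
# a virtual picture to a frame far easier than markerless SLAM.

def anchor_to_marker(corners):
    """corners: four (x, y) pixel positions of a detected marker.
    Returns (center, width, height) for placing a virtual overlay."""
    xs = [x for x, _ in corners]
    ys = [y for _, y in corners]
    center = (sum(xs) / 4.0, sum(ys) / 4.0)
    return center, max(xs) - min(xs), max(ys) - min(ys)

# Hypothetical corner detection for one picture-frame marker:
corners = [(100, 80), (180, 80), (180, 140), (100, 140)]
center, w, h = anchor_to_marker(corners)
# Redrawing the virtual picture at `center` scaled to (w, h) every
# frame keeps it glued to the frame, regardless of SLAM quality.
```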

Magic Leap is practicing classic “demoware,” something that will only work in contrived conditions with limited content in tightly-controlled setups. To find out if it works as “Augmented Reality,” you need to take it to a typically lit room that Magic Leap has not had a chance to rig with sensors/markers. Otherwise, this is just a variation of VR with a limited FOV for people who live in dark rooms shut off from the real world.

If Magic Leap is close to having a product, one would expect them to have demos that are not highly rigged.

24 comments

  1. I don’t know, man. I have a sibling who works in the prototype lab at Magic Leap. He works with the device, and while he can’t tell me details (big-time NDA), I do not get the sense that their device relies on crutches to work. They have a working developer device like the press release shows. Not sure how well the tracking works or whether they are ironing out technical problems, but from what I have heard they are excited to show the world, not insanely stressed that their device won’t perform.

    That said, I get very few details, but I’ll try to get some info out of him and get a sense of how they feel about it living up to expectations.

    The markers on the frames may be there; I don’t know the reason. But I would wait for the release of the product. I’m telling you, Magic Leap is in very high spirits and excited to show the world what the hype is about. 😎

    1. Working on and having perfected are two different things. Magic Leap back in around 2013 was telling people they would have a product in 2015 and here we are in 2017.

      The bigger point is why are they demoing in a dark room with “crutches?” It shows the technology is at best “fragile” and not ready for prime time. If AR is going to be a big thing, then it needs to work in the real world without specially prepared and constructed rooms. Otherwise, it will just be a small part of the VR market.

      I’ve been in the industry about 40 years. People are always excited to see what they are working on get attention. It often has little correlation with the product being successful (although if they are not excited, that is a really bad sign).

      I don’t want you to get your sibling in any trouble so I would not press him/her. Let them keep getting paid until the money runs out (Ok, a bit of a jab).

      1. It’s not glasses; it’s still an ugly headset tied to a computer you have to carry (imagine the cords hanging), which is a downgrade from the Hololens design.

      2. eMagin proponent here…. Two years ago, when eMagin wanted to rush out the new
        2 x 2, they used a few still photos of a room for the demo. The objective was to show
        what can be done at a point when the heavy lifting has been done and the final
        improvements are work, not science.
        That being said, I think the best strategy for any of these companies is to get demos
        set up all over, like at every Best Buy and a couple of other chains, and work out a deal
        with AMC to reserve one theater at each location as a VR experience: no big
        screen, just a room with comfy loungers, snacks, and superior headsets.
        For most people, the move to AR, VR, and MR will not be cheap, but it will be worth it
        with the proper equipment. I would definitely want to test drive a rig before buying it.
        We can already see how the descriptive terms have become bastardized, and we know
        how marketing folks work, so people can become very confused. Let me try on a few
        headsets and I can make up my mind. garce

        1. The biggest problem is the “use model” and how people feel when wearing near-to-eye displays. People seem to prefer looking at a display to wearing one. Also, the cost per pixel is much higher for microdisplays than for direct-view displays. At CES, a company named LUCI had a very nice OLED headset with dual 1080p displays in it (they would not say which brand, other than that it was not eMagin or Sony), but it was supposed to cost $1,200.

          In spite of eMagin’s claims of brighter OLEDs, they do not have the bright ones in production. Even the OLED microdisplays they have in production are very expensive. Any see-through AR display (where truly see-through is defined as greater than 80% transparent) is going to waste/lose about 90% of the display light, so you need to start with a very bright display; thus they use LCOS or DLP, and there are no color OLEDs available that are bright enough.

          Everyone in the industry knows about all the technology and marketing tricks. The problem is that the technologies are either too expensive or users don’t like the compromises of wearing a headset.
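          To put that ~90% light loss in perspective, here is a back-of-envelope brightness budget. The efficiency and nit figures are illustrative assumptions based on the loss estimate above, not measured values for any specific product.

```python
# Back-of-envelope brightness budget for a "truly see-through" combiner.
# If the optics deliver only a fraction of the display's light to the
# eye, the microdisplay must be correspondingly brighter.

def required_display_nits(target_nits_at_eye, optics_efficiency):
    """Display brightness needed to hit a target virtual-image brightness."""
    return target_nits_at_eye / optics_efficiency

# For a 500-nit virtual image through optics that pass only 10% of
# the display light, the display itself must emit 5,000 nits:
nits = required_display_nits(500, 0.10)  # -> 5000.0
```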

  2. Hi Karl,
    Based on everything you have seen in the last year or so (including your observations at CES 2018), would you say that the mainstream roll-out and adoption of AR in everyday life is at least 5 years away?
    I am sure that we will have some cute apps even in a few months. But for AR to become a “must have” (for AR to solve important problems that cannot be tackled as well any other way), some major challenges remain, regardless of whether the user engages via an HMD or a handheld tablet/phone. What do you think? Appreciate your insights, as always, and thanks for your work.

    1. Yes, it is going to be at least 5 years before AR lives up to anywhere near its expectations and becomes mainstream. There are just too many problems to solve, from image quality, to form factor, to user interface. There may be some interesting and useful products in the meantime, and certainly there will be “Pokémon Go”-type things that happen (perhaps what you mean by “cute apps”) that leverage existing technology.

      AR has a lot of major challenges. They all start out talking about “Ray-Ban sunglasses” and end up looking like Hololens or the Magic Leap One, which is both big and ugly and has a cable to a computer and battery pack. Nothing is portable in the way you can slip even a plus-size phone into your pocket. If you take off a Hololens or Magic Leap One, you will need a backpack with a foam case to carry it in. The image quality by any objective measure is terrible compared to a cheap TV. And I could (and do) go on.

      1. You really think you will be going out and about on a subway with this big cable coming out of the headset and winding down to the computer and battery? Tell me, what happens WHEN you snag the cable between the headset and the computer: Does it A) rip the headset off your head and break it, B) choke the user, or C) do the cables have breakaways that keep disconnecting as you move around?

        Magic Leap traded making the headset “all in one” which becomes big and bulky for a larger overall system broken into several parts. They also sacrificed allowing people to use their own glasses to make the headset smaller. They also severely limited the user’s peripheral vision which is not safe.

        They didn’t “solve” the AR glasses problem, they just moved the problems around. And that assumes that it works as they intended.

    2. Apple says 2 years … LOL.
      As a student of industrial history, I can tell you that the paths for AR, VR, and MR will
      mirror any other product that has changed the course of history:
      cars, phonographs, PCs, and anything else.
      Some of these start out with great opposition, some with “who needs it?”, and most
      wind up finding their ultimate uses years after the things have become mainstream.
      AR/VR/MR will all follow the same patterns.
      Forget what the major companies are predicting the utility of these things will be; once
      people get their hands on them, the uses will grow into things that we cannot
      currently comprehend.
      It’s gonna be a fun ride… garce

      1. There is a danger in predicting the future with wishful thinking. In the 1960s they expected that everyone would be flying supersonic by the end of the decade; only the Concorde made it, and it lost a lot of money for its government backers. With some problems, the physics is very hard. Optics does not have a “Moore’s Law.”

        1. >>Apple says 2 years …<<

          To me the major issue is more what the function will be than whether it is possible. It certainly seems possible Apple could deliver a better Vuzix Blade-style device that is really the "Apple Watch" of smartglasses, and that device would enhance your everyday life:
          – Can see an important text as it comes in.
          – Can see walking directions in a new city without looking down and feeling like a tourist.
          – Can see your cab status while scanning the horizon.
          – Can see social media updates as you eat lunch.
          – Can see what is next in your calendar while working with your hands.
          – Can see who is calling without breaking away from a conversation.

          Apple can then add hardware features as time goes on: better displays, 3D scanning, etc. It is exactly the same strategy they used with the iPhone. These simpler glasses are a Trojan horse that lets them keep adding on (again, the iPhone) and owning the market/ecosystem.

          Magic Leap and Hololens are tackling the incredibly complicated problem of trying to get artificial things to feel real in the actual world, and it is not even clear there is mass-market demand for that (at least for a while). It is doubtful there will be a version of Magic Leap/Hololens in the next couple of years that is as capable at AR as the Vive/Oculus is at VR, and think of the problem VR has had getting an audience.

          Vuzix right now seems to be the only one chasing the right market/fit for where the technology is (and the hands-on press from CES seems pretty pleased with it), but it really needs a major player (Apple, Google, Amazon, Facebook) to prove this direction out.

          1. Even Apple can’t change physics (you can quote me on that :-)). So far there is a huge loss in image quality to make the optics small and light. Diffractive waveguides are never (due to physics) going to have great image quality. Lumus’s image quality seems significantly better (but I have not had the chance to do a detailed evaluation), yet you still get gray bands across your eyes.

            Then you get into all the other issues of how to make it work for a person with glasses, and how you make the virtual image and the real world both focus and correct for astigmatism. Then you have other realities like battery life and what you will put in the headset for computing and/or communication. Every one of these things adds weight, which can’t be supported comfortably by the nose and ears (just a few ounces becomes unbearable over time).

            Vuzix is doing a whole waveguide-based product, but it is not for everyone. Vuzix recognizes that there is a market for a more basic industrial product. Lumus, WaveOptics, and DigiLens are building thin waveguides for use in other companies’ end products. RealWear and Daqri have products that don’t use waveguides but are aimed at similar “enterprise” markets. These things are still expensive and the image quality is far from perfect; they serve near-term enterprise markets but not the broad consumer market. The Googles, Amazons, and Facebooks want a broad consumer product, and the technology is not up to it, at least not yet.

  3. Slightly off-topic: If I’ve read your previous articles correctly, the light entering the waveguide must be collimated, so changing the angle of the light in order to achieve focus planes must happen as it exits the guide. This means the angle leaving each guide is physically fixed, meaning an individual guide for each focus plane, correct?

    1. That is generally correct. You can’t impart “focus” until the exit of a waveguide. This is all through the Magic Leap patents. Note that for a color diffraction waveguide (as opposed to a partial-mirror waveguide like Lumus’s), what is often considered a “single guide” is really 3 guides, one for each color (sometimes two colors are handled by a single diffraction grating, at the cost of some color aberration).

      Something like Deepoptics works by changing the focus of the light at the exit. In theory, something electronically controlled could be built into the exit.
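      The layer count this implies can be made concrete with a little arithmetic. This is a sketch of the counting logic described above, not a teardown of any actual device.

```python
# Each focus plane needs its own fixed exit angle, and a diffractive
# color stack is really one guide per color, so layers multiply.

def guide_layers(focus_planes, gratings_per_plane=3):
    """Physical waveguide layers for a diffractive color stack."""
    return focus_planes * gratings_per_plane

# Two focus planes with separate R, G, B gratings -> 6 physical layers:
full = guide_layers(2)  # -> 6
# Sharing one grating between two colors (at the cost of color
# aberration, as noted above) drops this to 2 x 2 = 4 layers:
shared = guide_layers(2, gratings_per_plane=2)  # -> 4
```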

      1. Using something like deep optics, we would only be able to get a single discrete focus plane at a time, but ideally we need a continuous curved focus surface (only attainable with holograms or light fields?). A Facebook/Oculus paper in 2017 used some trick for that.

        1. Yes, they can only support a single focus point at a time. It’s not clear that something like Magic Leap’s or Avegant’s focus planes won’t have problems if they try to display more than one plane at a time. The 2017 published Magic Leap patents suggest that displaying more than one focus plane at the same time causes problems. This would imply that Oculus could really only support one focus surface at a time without issues. The focus surfaces, at least the Oculus ones, have low depth resolution and so would seem to have a problem with the edges of objects.

          Light fields, the way, say, Nvidia did them, seem many decades away from being practical. They give up far too much resolution to gain some depth. The same goes for true holograms.

  4. Hmm, sure… but there is no movement/tracking in the 6-second clip 😛
    …BTW, their tracking in the jellyfish demo was a bit wobbly.

    1. I studied the Leia/Red Hydrogen One a little when it was announced but I was busy with some contract work at the time and didn’t get an article up.

      It is interesting how Leia uses what I call “illumination side” control of the switching between the two modes. By controlling the angle of the light with the illumination, they can switch between the two modes.

      I would be curious to compare if this causes some compromises of the main display’s image quality. For example, it might make the display more “directional” limiting the viewing angle (just a possibility).

      The Leia, based on their information, appears to be a true light field. I could not find the number of sub-images but typically people use about 64 (8 rows and columns) to 120 (8 x 15 was used in Nvidia’s Light Field R&D prototype). With the number of sub-images, you are trading resolution for a larger sweet spot and/or less hopping and blurring as your eye moves between sub-images.

      While it is technically interesting, I think it is pretty much a gimmick. You have to sacrifice so much resolution to get the auto-3D effect that I don’t see much practical use beyond the initial wow. I would also wonder how well the 3-D effect works, what the sweet-spot size is, and whether there is hopping between sub-images. The illumination-side switching might be more broadly useful. If you really need auto-stereo 3D, then you would be better off with a purpose-built system.
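      The resolution sacrifice is easy to quantify. A sketch using the sub-image counts mentioned above; the 3840 x 2160 panel size is a hypothetical example for illustration, not Leia’s actual spec.

```python
# In a light-field display the panel is divided into a grid of
# sub-images, so each viewpoint sees only a fraction of the pixels.

def per_view_resolution(panel_w, panel_h, views_x, views_y):
    """Pixels available to each individual view (sub-image)."""
    return panel_w // views_x, panel_h // views_y

# An 8 x 8 grid of sub-images (64 views) on a 3840x2160 panel leaves
# just 480 x 270 pixels per view:
w, h = per_view_resolution(3840, 2160, 8, 8)  # -> (480, 270)
# A 15 x 8 layout (120 views, like Nvidia's prototype count) trades
# even more horizontal resolution for additional views:
w2, h2 = per_view_resolution(3840, 2160, 15, 8)  # -> (256, 270)
```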

  5. For me, the sigur rós demo was an obvious demoware job. Making the room very dark is a very effective way to cover a series of undesirable effects. But I also think that using the very emotive music would help cover for imperfections/discontinuities in the visual field, effectively by giving the user another sense to focus on.

    I also have a nagging suspicion that the Gimble robot demo (as demoed by the senior systems engineer Sam Miller) was in some way a contrivance. At the far end of a large room, Miller steps into the projected image of Gimble while the user stays still. It does seem like they would have had an opportunity to use some “Wizard of Oz”-type tricks. Like the sigur rós demo (which was in yet another room), I think the Gimble room could similarly be set up with any number of tricks to lead the user to believe that the ML One somehow solved the problem of occlusion (“your eyes simply ignored Miller,” or so says the CEO). I doubt it has. I think it just did a demo under ideal circumstances. If they demoed Gimble outdoors, then this would be a different story.

    1. Yep, this is “demoware 101.” You always should be wary of tightly controlled demos.

      We used to joke back in the 1980’s, “good audio improves the quality of a display.”

      I see zero evidence that Magic Leap has done anything to solve “hard-edge occlusion” (in-focus, pixel-accurate occlusion of the real world). They have acquired the patent rights to the University of Arizona concept, but that is both totally impractical and only “works” if the real world is almost 2D (flat), which is not realistic. Nothing in their patent trail suggests they know anything more than anyone else about hard-edge occlusion. Hard-edge occlusion is a much more difficult problem than vergence-accommodation conflict (VAC), and from what it looks like Magic Leap is doing, they are not going to do a good job of VAC either.

  6. KarlG, most of your criticism of AR/MR headsets seems to be based on the fact that waveguide tech has failed us, falling way short of what’s needed for proper MR. And rightly so – it seems to be lots of hot air and billions wasted. But that being the case, why is passthrough VR being ignored? I do understand there are resolution issues and it doesn’t solve the VAC problem, but in the interim, till we get retinal implants or something, why not make use of the rapidly evolving camera tech and combine a stereo camera module (ZED/OvrVision Pro etc.), an IR depth-sensor camera (PMD), and a good OLED-panel-based headset, and be able to do proper MR with fully opaque objects + depth occlusion? Now these are off-the-shelf products, but a hardware company could source even better camera modules and custom processors for depth/merge etc. and take this one step forward. And we basically skip the waveguide era altogether?

    1. Some things that are hard with “Optical AR” do become easier with passthrough AR. Unfortunately, the reverse is also true.

      Merging the real and virtual worlds digitally with “hard-edge occlusion” becomes almost trivial with pass-through AR. You don’t have all the image losses associated with the combiner.

      But passthrough AR has severe drawbacks particularly if you are going to venture forth in the real world, and you are not just using it for gaming in the relative safety of a room. In addition to the resolution issues you mentioned, some of the issues with pass-through AR include:
      – Blocks the user’s peripheral vision (a big safety issue) and other isolation issues
      – Lag relative to the real world (can cause disorientation and motion sickness, among other things)
      – Others can’t see the user’s eyes, and there is the general issue of how you look wearing it.

      In short, you get very far away from the “vision” of AR.

      1. I’ve already got googley eyes on the outside of my HTC VIVE – and have the front facing camera come on as I approach the tracked volume boundaries…

        Replace that with a full face LCD, add eye tracking and display an avatar that mimics my eyes and face. Would be like wearing an animated mask. DaftPunk helmet style.

        I also really like what car manufacturers are doing with 360-degree cameras for backup and parking displays – pass through a reconstructed image that INCREASES my vision’s FOV and I’d love that. It would probably make me go crazy to have augmented vision like that, or cause some serious motion sickness, but if you could adjust, it would be like having a super power.
