Tag Archive for Laser Beam Scanning

Collimation, Etendue, Nits (Background for Understanding Brightness)


I’m getting ready to write a much-requested set of articles on the pros and cons of various types of microdisplays (LCOS, DLP, and OLED in particular, with some discussion of other display types). As a prerequisite, I feel I should give some key information on the character of light as it pertains to what people generally refer to as “brightness.” For some of my readers, this discussion will be very elementary/crude/imprecise, but it is important to have at least a rudimentary understanding of nits, collimation, and etendue to understand some of the key characteristics of the various types of displays.

Light Measures – Lumens versus Nits

The figure on the left, from an Autodesk Workshop page, illustrates some key light measurements. Lumens are a measure of the total light emitted. Candelas (cd) are a measure of the light emitted into a solid angle. Lux measures the light per square meter that hits a surface. Nits (cd/m2) measure the light emitted per unit of surface area into a solid angle. The key point for a near-eye display is that we only care about the light traveling in the direction that makes it to the eye’s pupil.

We could get more nits by cranking up the light source’s brightness, but that would mean wasting a lot of light. More efficiently, we could use optics to steer a higher percentage of the total light (lumens) toward the eye. In this example, we could add lenses and reflectors to aim the light at the surface, and we could make the surface more reflective and more directional (known as the “gain” of a screen). Very simply put, lumens measure the total light output from a light source, while nits measure the light in a specific direction.
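To make the lumens-versus-nits distinction concrete, below is a rough back-of-the-envelope sketch in Python. It uses the standard approximation for an ideal matte (Lambertian) projection screen, where luminance = lumens × gain / (π × area); the specific numbers are hypothetical, chosen only for illustration.

```python
import math

def screen_nits(lumens, screen_area_m2, gain=1.0):
    """Approximate luminance (nits, cd/m^2) of a projection screen.

    For an ideal matte (Lambertian) screen, the projector's lumens spread
    over the screen area and re-emit into a hemisphere; the pi factor
    converts lux (lm/m^2) into cd/m^2. A "gain" > 1 models a screen that
    redirects more light toward the viewer at the expense of viewing angle.
    """
    return lumens * gain / (math.pi * screen_area_m2)

# Hypothetical numbers: a 1,000-lumen projector on a 2 m^2 matte screen
print(round(screen_nits(1000, 2.0)))          # ~159 nits
# The same lumens on a gain-2 screen double the on-axis nits
print(round(screen_nits(1000, 2.0, gain=2)))  # ~318 nits
```

Note how the nits went up without adding a single lumen: the gain screen simply redirects the same total light into a narrower range of directions.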


The casual observer might think you could just put a lens in front of, or a mirror behind and around, the light source (like a car’s headlight) and concentrate the light. And yes, this will help, but only within limits. The absolute limit is set by a law of physics that cannot be violated, known as “etendue.”

There are more detailed definitions, but one of the simplest (and for our purpose practical) principles is given in a presentation by Gaggione on collimating LED light stating that “the beam diameter multiplied by the beam angle is a constant value” [for an ideal element]. In simpler terms, if we put an optical element that concentrates/focuses the light, the angles of the light will increase. This has profound implications in terms of collimating light. Another good presentation, but a bit more technical, on etendue and collimation is given by LPI.

Another law of physics is that etendue can only increase. This means that once the light is generated, the light rays can only become more random; every real optical element will hurt (increase) etendue. In this sense, etendue is analogous to the second law of thermodynamics, which states that entropy can only increase.
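The diameter-times-angle rule can be turned into a small calculation. The sketch below uses the more precise form of the 2-D invariant, D × sin(θ), rather than the small-angle D × θ approximation; the 1 mm source and 10 mm output aperture are hypothetical illustration values.

```python
import math

def collimated_half_angle(src_diameter_mm, src_half_angle_deg, out_diameter_mm):
    """Etendue-limited output half-angle after ideal collimation.

    Conserves the 2-D invariant D * sin(theta): expanding the beam
    diameter is the only way to shrink the beam angle, and vice versa.
    """
    invariant = src_diameter_mm * math.sin(math.radians(src_half_angle_deg))
    s = invariant / out_diameter_mm
    if s > 1:
        raise ValueError("Output aperture too small: etendue cannot decrease")
    return math.degrees(math.asin(s))

# Hypothetical 1 mm LED emitting into +/-90 degrees, collimated into a 10 mm beam:
print(round(collimated_half_angle(1.0, 90.0, 10.0), 2))  # ~5.74 degrees
```

Even with a perfect (lossless) optic, the residual angle never reaches zero for a finite-sized source; making the beam narrower than 1 mm here would be a physical impossibility, not an engineering problem.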

Lambertian Emitters (Typical of LEDs/OLEDs)

LEDs and OLEDs used in displays tend to be “Lambertian emitters,” where the nits are proportional to the cosine of the angle from the surface normal. The figure on the right shows this for a single emitting point on the surface. A real LED/OLED will not be a single point but an area, so one can imagine a large set of these emitting points spread two-dimensionally.
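A useful consequence of the cosine law: integrating it over a cone gives the well-known result that the fraction of a Lambertian emitter's total flux within a cone of half-angle θ is sin²(θ). A minimal sketch:

```python
import math

def lambertian_fraction(half_angle_deg):
    """Fraction of a Lambertian emitter's total flux within a cone.

    Integrating I0*cos(theta) over the solid angle of a cone of
    half-angle theta gives sin^2(theta) of the hemispherical total.
    """
    return math.sin(math.radians(half_angle_deg)) ** 2

# Only a small slice of a bare LED's light heads in any one direction:
print(round(lambertian_fraction(10.0) * 100, 1))  # ~3.0% within +/-10 degrees
print(round(lambertian_fraction(30.0) * 100, 1))  # 25.0% within +/-30 degrees
```

This is why an uncollimated Lambertian source wastes most of its light as far as a distant pupil is concerned: the overwhelming majority of the flux is heading somewhere else.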

Square Law and Concentrating Light

It is very important to note that the diagram above shows only a side view. The light rays spread as a sphere, and nits are a measure of light per unit area on the surface of a sphere. If the linear spread is reduced by a factor of X, the nits will increase by X-squared.

Since for a near-eye display the only light that “counts” is the light that makes it into a person’s eye, there is a big potential gain in brightness that comes not from making the light source brighter but from reducing the angles of the light rays via collimation.

Collimating Light

Collimation is the process of getting light rays to be as parallel to each other as possible (within the laws of etendue). Collimation is required for projecting light (as with a projector), for making very high luminance (nits) near-eye displays, and for getting light to work properly with a waveguide (waveguides require highly collimated light to work at all).

Shown below is the classic issue with collimating light. A light source is shown with the center point “2” and the two extreme points “1” and “3” at the left and right edges of a Lambertian emitter. A lens (in blue) trying to collimate the light is located at a distance equal to the focal length of the lens. Also shown is a reflector (in dashed blue) that is often used to capture and redirect the outermost rays that would otherwise bypass the lens.

The “B” figure shows what happens when 3 light rays (1a, 2a, and 3a) from the 3 points enter the lens at roughly the same place (indicated by the green circle). The lens can only perfectly collimate the center ray 2a to become 2a’ (dashed line), which exits, along with all other rays from point 2, perfectly parallel/collimated. Rays 1a and 3a have their angles reduced (consistent with the laws of etendue, since the output area is larger than the source’s light-emitting area) to become 1a’ and 3a’, but they are not perfectly parallel to ray 2a’ or to each other.

If the light source were larger, such that points 1 and 3 were farther apart, the angles of rays 1a’ and 3a’ would be more severe and the light less collimated. Conversely, if the light source were smaller, the light would be more highly collimated. This illustrates how, under the laws of etendue, emitting area can be traded against angular diversity.
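The residual divergence in the figure can be estimated with simple geometry: for a thin lens at its focal distance, a point at the edge of a source of width s exits tilted by roughly arctan((s/2)/f). A sketch with hypothetical numbers (not taken from any specific engine):

```python
import math

def residual_half_angle_deg(source_width_mm, focal_length_mm):
    """Residual divergence after 'collimating' a finite-size source.

    A thin lens at its focal distance turns each source point into a
    parallel bundle, but a point at the source's edge exits tilted by
    about arctan((s/2)/f). Only a true point source (s -> 0) would
    collimate perfectly.
    """
    return math.degrees(math.atan(source_width_mm / (2.0 * focal_length_mm)))

# Hypothetical 1 mm emitter behind a 20 mm focal-length lens:
print(round(residual_half_angle_deg(1.0, 20.0), 2))  # ~1.43 degrees
# Halving the source size halves the residual angle (better collimation):
print(round(residual_half_angle_deg(0.5, 20.0), 2))  # ~0.72 degrees
```

This is the same area-versus-angle trade the etendue law demands: a smaller emitter (or a longer focal length, i.e., a physically larger optic) buys better collimation.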

Illuminating a Microdisplay (DLP or LCOS) Versus Self Emitting Display (OLED)

Very simply put, what we get conceptually by collimating a small light source (such as a set of small RGB LEDs) is a bundle of individually highly collimated light sources illuminating each pixel of a reflective microdisplay like DLP or LCOS. The DLP or LCOS pixel mirrors then simply reflect light with the same characteristics, with some losses and scattering due to imperfections in the mirrors.

The big advantage in terms of intensity/nits for reflective microdisplays is that they separate the illumination process from the light modulation. They can take very bright, small LEDs and then highly collimate the light to further increase the nits. It is possible to get many tens of thousands of nits illuminating a reflective microdisplay.

An OLED microdisplay is self-emitting, and the light is Lambertian, which, as shown above, is somewhat diffuse. Typically an OLED microdisplay can emit only about 200 to at most 400 nits for long periods of time (some lab prototypes have claimed up to 5,000 nits, but this is unlikely to be sustainable for long periods). Going brighter for long periods of time will cause the OLED materials to degrade/burn up.

With an OLED you are somewhat stuck with the type of light, Lambertian, as well as the amount of light. The optics have to preserve the image quality of the individual pixels. If you wanted to, say, collimate the Lambertian light, it would have to be done on the individual pixels with miniature optics directly on top of each pixel (say, a microlens-like array) so that there is a small spot size (pixel) to collimate. I have heard several people theorize that this might be possible, but I have not seen it done.

Next Time: Optical Flow

Next time I plan to build on these concepts to lay out the “optical flow” for a see-through (AR) microdisplay headset. I will also discuss some of the issues/requirements.


Varjo Foveated Display (Part 1)


The startup Varjo recently announced its Foveated Display (FD) technology and did a large number of interviews with the technical press about it. I’m going to break this article into multiple parts; as currently planned, the first part will discuss the concept and the need for it, and part 2 will discuss how well I think it will work.

How It Is Supposed to Work

Varjo’s basic concept is relatively simple (see figure at left; click on it to pop it out). Varjo optically combines an OLED microdisplay with small pixels, giving high angular resolution over a small area (what they call the “foveated display“), with a larger OLED display giving low angular resolution over a large area (what they call the “context display“). With eye tracking (not done in the current prototype), the foveated display is optically moved to the center of the person’s vision by tilting the beam splitter. Varjo says they have thought of, and are patenting, other ways of optically combining and moving the foveated image besides a beam splitter.

The beam splitter is likely just a partially silvered mirror. It could be 50/50 or some other ratio to match the brightness of the large OLED and the microdisplay OLED. This type of combining is very old and well understood. They will likely blend/fade the image in the rectangular border where the two display images meet.

The figure above is based on a sketch by Urho Konttori, CEO of Varjo, in a video interview with Robert Scoble, combined with pictures of the prototype in Ubergizmo (see below), plus answers to some questions I posed to Varjo. It is roughly drawn to scale based on the available information. The only thing I am not sure about is the “microdisplay lens,” which was shown but not described in the Scoble interview. This lens (or lenses) may or may not be necessary depending on the distance of the microdisplay from the beam combiner, and could be used to help make the microdisplay pixels appear smaller or larger. If the optical path through the beam combiner to the large OLED (in the prototype, from an Oculus headset) equaled the path to the microdisplay via reflection off the combiner, then the microdisplay lens would not be necessary. Based on my scale drawing and the prototype photographs, it is close to not needing the lens.

Varjo is likely using either an eMagin OLED microdisplay with a 9.3-micron pixel pitch or a Sony OLED microdisplay with an 8.7-micron pixel pitch. The Oculus headset OLED has a ~55.7-micron pixel pitch. From the configuration, it does not look like the microdisplay image will be magnified or shrunk significantly relative to the larger OLED. Making this assumption, the microdisplay’s pixels are about 55.7/9 = ~6.2 times smaller linearly, or effectively ~38 times the pixels per unit area. This means effectively ~38 times the pixel density of the large OLED alone over the foveated region.
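The arithmetic above can be checked in a couple of lines (the pitches are from public spec sheets; the ~9-micron figure splits the difference between the eMagin and Sony pitches):

```python
# Rough pixel-density comparison (pixel pitches in microns)
context_pitch = 55.7   # Oculus large OLED panel
foveated_pitch = 9.0   # ~9.3 (eMagin) or ~8.7 (Sony) OLED microdisplay

linear_ratio = context_pitch / foveated_pitch
area_ratio = linear_ratio ** 2
print(round(linear_ratio, 1))  # ~6.2x smaller pixels linearly
print(round(area_ratio))       # ~38x the pixels per unit area
```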

The good thing about this configuration is that it is very simple and straightforward, a classically simple way to combine two images, at least on the surface. But the devil is often in the details, particularly in what the prototype is not doing.

Current Varjo Prototype Does Not Track the Eye

The Varjo “prototype” (picture at left from Ubergizmo) is more of a concept demonstrator in that it does not demonstrate moving the high-resolution image with eye tracking. The current unit is based on a modified Oculus headset (obvious from the picture; see the red oval I added). They are using the two larger Oculus OLED displays for the context (wide FOV) image and have added an OLED microdisplay per eye for the foveated display. In this prototype, a static beam splitter combines the two images, so the location of the high-resolution part of the image is fixed and the user must look straight ahead to get the foveated effect. While eye tracking is well understood, it is not clear how successfully they can make the high-resolution inset image track the eye, and whether a human will notice the boundary (I will save the rest of this discussion for part 2).

Foveated Displays Raison D’être

Near-eye display resolution is improving at a very slow rate and is unlikely to dramatically improve. People who quote “Moore’s Law” as applying to display devices either are being dishonest or don’t understand the problems. Microdisplays (on ICs) are already limited by the physics of diffraction as their pixels (or color sub-pixels) get within about 5 times the wavelength of visible light. Making microdisplays bigger to support more pixels drives the cost up dramatically, and this is not rapidly improving; thus, high-resolution microdisplays are and will remain very expensive.

While direct-view display technologies have become very good at making large, high-resolution displays, they can’t be made small enough for lightweight head-mounted displays with high angular resolution. As I discussed in “The Gap in Pixel Sizes” (for reference, I have included the chart from that article), which I published before I heard of Varjo, microdisplays enable high angular resolution but a small FOV, while adapted direct-view displays support low angular resolution with a wide FOV. I was already planning to explain why foveated displays are the only way in the foreseeable future to support high angular resolution with a wide FOV, so from my perspective, Varjo’s announcement was timely.

Foveated Displays In Theory Should Work

It is well known that the human eye’s resolution falls off considerably from the high-resolution fovea/center vision to the peripheral vision (see the typical graph at right). I should caution that this is for a still image and that the human visual system is not this simple; in particular, it has a sensitivity to motion that this graph can’t capture.

It has been well proven by many research groups that if you can track the eye and provide variable resolution accordingly, the eye cannot tell the difference from a uniformly high-resolution display (a search for “foveated” will turn up many references and videos). The primary use today is foveated rendering to greatly reduce the computational requirements of VR environments.

Varjo is trying to exploit the same foveated effect to get effectively very high resolution from two (per eye) much lower resolution displays. In theory it could work, but will it in practice? In fact, the idea of a “foveated display” is not new. Magic Leap discussed it in their patents with a fiber scanning display. Personally, the idea seems to come up a lot in “casual discussions” on the limits of display resolution. The key question becomes: Is Varjo’s approach going to be practical, and will it work well?

Obvious Issues With Varjo’s Foveated Display

The main lens (nearest the eye) is designed to bring the large OLED into focus, as in most of today’s VR headsets. The first obvious issue is that the lens would need to resolve pixels more than 6 times smaller than a typical VR headset lens is designed for. Typical VR headset lenses are, well . . ., cheap crap with horrible image quality. To some degree, they are deliberately blurry/bad to try to hide the screen-door effect of the highly magnified large display. The Varjo headset would need vastly better, much more expensive, and likely larger and heavier optics for the foveated display; for example, instead of a simple cheap plastic lens, they may need multiple elements (multiple lenses), perhaps made of glass.

The next issue is the tilting combiner and the way it moves the image. A simple up/down movement of the foveated display’s image will follow a simple up/down path, but if the 45-degree mirror tilts side to side, the center of the image will follow an elliptical path and rotate, making it more difficult to align with the context image.

I would also be very concerned about the focus of the image as the mirror tilts through its range, since the path length from the microdisplay to the main optics changes both at the center (which might be fixable by a complex movement of the beam splitter) and at the corners (which may be much more difficult to solve).

Then there is the general issue of whether the user will be able to detect the blend point between the foveated and context displays. They have to remap the rotated foveated image to match the context display, which will lose (per Nyquist re-sampling) about 1/2 the resolution of the foveated image. While they will likely try to cross-fade between the foveated and context displays, I am concerned (to be addressed in more detail in part 2) that the boundary will be visible/human-detectable, particularly when things move (the eye is very sensitive to movement).
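To illustrate the Nyquist re-sampling point, here is a toy NumPy sketch (my own construction, not Varjo’s actual pipeline): a one-pixel on/off grille sits exactly at the display’s Nyquist limit, and re-sampling it onto a grid misaligned by half a pixel wipes out its contrast entirely.

```python
import numpy as np

# A 1-pixel on/off grille: the finest detail the foveated display can show.
grille = np.tile([1.0, 0.0], 8)

# Re-sample it with a half-pixel misalignment (simple linear interpolation
# between neighbors), as happens when remapping onto another display's grid:
shifted = 0.5 * (grille[:-1] + grille[1:])

print(grille.max() - grille.min())    # 1.0 -- full contrast before remapping
print(shifted.max() - shifted.min())  # 0.0 -- the detail is gone after
```

Real remapping filters and sub-pixel alignments fall between these extremes, which is why roughly half the resolution is lost on average rather than all of it.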

What About Vergence/Accommodation (VAC)?

The optical configuration of Varjo’s foveated display is somewhat similar to that of Oculus’s VAC research display. Both leverage a beam splitter; but then, how would you also do VAC with a foveated display?

In my opinion, solving resolution with a wide field of view is a more important/fundamentally necessary problem to solve than VAC at the moment. It is not that VAC is not a real issue, but if you don’t have resolution with a wide FOV, then addressing VAC doesn’t really matter.

At the same time, this points out how far headsets that “solve all the world’s problems” are from production. If you are waiting for high resolution with a wide field of view that also addresses VAC, you may be in for a wait of many decades.

Does Varjo Have a Practical Foveated Display Solution?

So the problem with display resolution/FOV growth is real, and in theory a foveated display could address it. But has Varjo solved it? At this point, I am not convinced, and I will try to work through some numbers and more detailed reasoning in part 2.

Microvision Laser Beam Scanning: Everything Old Is New Again

Reintroducing a 5 Year Old Design?

Microvision, the 23-year-old “startup” in Laser Beam Scanning (LBS), has been a fun topic on this blog since 2011. They are a classic example of a company that tries to make big news out of what other companies would not consider newsworthy.

Microvision has been through a lot of “business models” in their 23 years. They have tried selling “engines,” building whole products (the ShowWX), and a licensing model with Sony selling engines; now, with their latest announcement, “MicroVision Begins Shipping Samples to Customers of Its Small Form Factor Display Engine,” they are back to selling “engines.”

The funny thing is that this “new” engine doesn’t look much different from the “old” engine they were peddling about 5 years ago. Below I have shown 3 laser engines from 2017, 2012, and 2013, roughly to the same scale, and they all look remarkably similar. The 2012 and 2017 engines are from Microvision, and the 2013 engine was inside the 2013 Pioneer aftermarket HUD. The Pioneer HUD appears to use a nearly identical engine, within 3mm of the length of the “new” engine.

The “new” engine is smaller than the 2014 Sony engine (shown at left) that used 5 lasers (two red, two green, and one blue) to support higher brightness and higher power with lower laser speckle. It appears that the “new” Microvision engine is really, at best, a slightly modified 2012 model, with maybe some minor modifications and newer laser diodes.

What is missing from Microvision’s announcement is any measurable/quantifiable performance information, such as brightness (lumens) and power consumption (Watts). In my past studies of Microvision engines, they have proven to have much worse lumens per Watt than other (DLP and LCOS) technologies. I have also found their measurable resolution to be considerably less (about half, both horizontally and vertically) than their claimed resolution.

While Microvision says, “The sleek form factor and thinness of the engine make it an ideal choice for products such as smartphones,” one needs to understand that the size of the optical engine with its drive electronics is about equal to the entire contents of a typical smartphone. The projector also generally consumes more power than the rest of the phone, which makes it both a battery-size and a heat issue.

Magic Leap – The Display Technology Used in their Videos

So, what display technology is Magic Leap (ML) using, at least in their posted videos? I believe the videos rule out a number of the possible display devices, and a process of elimination leaves only one likely technology. Hint: it is NOT the laser fiber scanning prominently shown in a number of ML patents and articles about ML.


Magic Leap could be posting deliberately misleading videos that show different technology, and/or deliberately bad videos to throw off people analyzing them, but I doubt it. It is certainly possible that the display technology shown in the videos is a prototype that uses different technology from what they are going to use in their products. I am hearing that ML has a number of different levels of systems, so what is shown in the videos may or may not be what they go to production with.

A “Smoking Gun Frame” 

So, with all the qualifiers out of the way, below is a frame capture from Magic Leap’s “A New Morning” video taken while the headset and camera were panning. The panning causes temporal (time-based) frame/shutter artifacts in the form of partial ghost images, the result of the camera and the display running asynchronously and/or at different frame rates. This one frame, along with other artifacts you don’t see when playing the video, tells a lot about the display technology used to generate the image.

If you look at the left red oval, you will see at the green arrow a double/ghost image starting and continuing below that point. This is where the camera caught the display in its update process. Also, if you look at the right side of the image, you will notice that the lower 3 circular icons (in the red oval) have double images where the top one does not (the 2nd from the top has a faint ghost, as it is at the top of the field transition). By comparison, there is no double image of the real-world lamp arm (see center red oval), verifying that the roll bar comes from the ML image generation.

Update 2016-11-10: I have uploaded the full frame for those who would want to look at it. Click on the thumbnail at left to see the whole 1920×1080 frame capture (I left in the highlighting ovals that I overlaid).

Update 2016-11-14: I found a better “smoking gun” frame, below, at 1:23 in the video. In this frame you can see the transition from one frame to the next. When playing the video, the frame transition slowly moves up from frame to frame, indicating that the display and camera are asynchronous but running at almost the same frame rate (or an integer multiple thereof, like 1/60th or 1/30th of a second).
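As a rough sanity check on the “almost the same frame rate” observation, the speed at which the transition line crawls is set by the beat between the two free-running rates. The numbers below are hypothetical (the actual rates of ML’s display and the camera are unknown); they simply show how close rates produce a slowly drifting bar.

```python
def rollbar_traverse_seconds(display_hz, camera_hz):
    """Time for the frame-transition 'roll bar' to drift one full frame.

    Two free-running (asynchronous) refresh rates beat against each other
    at the difference frequency; the closer the rates, the more slowly
    the transition line crawls through the captured video.
    """
    beat_hz = abs(display_hz - camera_hz)
    if beat_hz == 0:
        return float("inf")  # genlocked: the bar never moves
    return 1.0 / beat_hz

# Hypothetical: a 60 Hz display filmed by a 59.94 Hz camera
print(round(rollbar_traverse_seconds(60.0, 59.94), 1))  # ~16.7 s per sweep
```

A bar that takes many seconds to cross the frame, as in the video, is exactly what nearly matched but unsynchronized rates would produce.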


In addition to the “smoking gun” frame above, I have looked at the “A New Morning” video as well as the “ILMxLAB and ‘Lost Droids’ Mixed Reality Test” and the early “Magic Leap Demo,” which are stated to be “Shot directly through Magic Leap technology . . . without use of special effects or compositing.” I was looking for any other artifacts that would be indicative of the various possible technologies.

Display Technologies it Can’t Be

Based on the image above and other video evidence, I think it is safe to rule out the following display technologies:

  1. Laser Fiber Scanning Display – either a single or multiple fiber scanning display as shown in Magic Leap’s patents and articles (and which their CTO is famous for working on prior to joining ML). A fiber scan display scans in a spiral (or, if arrayed, an array of spirals) with a “retrace/blanking” time to get back to the starting point. This blanking would show up as diagonal black line(s) and/or flicker in the video (much as an old CRT shows a horizontal black retrace line). Also, if it were laser fiber scanning, I would expect to see evidence of laser speckle, which is not there; laser speckle comes through even if the image is out of focus. There is nothing in this image or its video to suggest a scanning process with blanking, or that lasers are being used at all. Through my study of laser beam scanning (and I am old enough to have photographed CRTs), there is nothing in the still frame or the videos indicative of a scanning process with a retrace.
  2. Field Sequential DLP or LCOS – There is absolutely no field sequential color rolling, flashing, or flickering in the video or in any still captures I have made. Field sequential displays show only one color at a time, very rapidly. When these rapid color field changes beat against the camera’s scanning/shutter process, they show up as color variances and/or flicker, not as a simple double image. This is particularly important because it has been reported that Himax, which makes field sequential LCOS devices, is making projector engines for Magic Leap. So either they are not using Himax, or they are changing technology for the actual product. I have seen many years of DLP and LCOS displays, both live and through many types of video and still cameras, and I see nothing to suggest field sequential color is being used.
  3. Laser Beam Scanning with a mirror – As with CRTs and fiber scanning, there has to be a blanking/retrace period between frames, which would show up in the videos as a roll bar (dark and/or light) that would roll/move over time. I’m including this just to be complete, as it was never suggested anywhere with respect to ML.
UPDATE Nov 17, 2016

Based on other evidence that has recently come in, even though I have not found video evidence of field sequential color artifacts in any of the Magic Leap videos, I’m more open to thinking that it could be LCOS or (less likely) DLP, and maybe the camera sensor is doing more to average out the color fields than other cameras I have used in the past.

Display Technologies That it Could Be 

Below is a list of possible technologies that could generate video images consistent with what Magic Leap has shown to date, including the still frame above:

  1. Micro-OLED (about 10 known companies) – Very small OLEDs on silicon or similar substrates. A list of some of the known makers is given here at OLED-info (Epson has recently joined this list, and I would bet that Samsung and others are working on them internally). Micro-OLEDs both A) are small enough to inject an image into a waveguide for a small headset and B) have display characteristics that behave the way the image in the video is behaving.
  2. Transmissive Color Filter HTPS (Epson) – While Epson was making transmissive color filter HTPS devices, their most recent headset has switched to a Micro-OLED panel, suggesting they themselves are moving away. Additionally, while Meta’s first generation used Epson’s HTPS, they moved to a large OLED (with a very large spherical reflective combiner). This technology is challenged in going to high resolution and small size.
  3. Transmissive Color Filter LCOS (Kopin) – Kopin is the only company making color filter transmissive LCOS, but they have not been very active of late as a component supplier, and they have serious issues with a roadmap to higher resolution and smaller size.
  4. Color Filter Reflective LCOS – I’m putting this here more for completeness, as it is less likely. While in theory it could produce the images, it generally has lower contrast (which would translate into a lack of transparency and a milkiness in the image) and lower color saturation. This would fit with Himax as a supplier, as they have color filter LCOS devices.
  5. Large Panel LCD or OLED – This would suggest a large headset doing something similar to the Meta 2. I would tend to rule this out because it goes against everything else Magic Leap shows in their patents and what they have said publicly. It’s just that it could have generated the image in the video.
And the “Winner” is I believe . . . Micro-OLED (see update above) 

By a process of elimination, including getting rid of the “possible but unlikely” ones from above, it strongly points to a Micro-OLED display device. Let me say, I have no personal reason to favor Micro-OLED; one could argue it might be to my advantage, based on my experience, for it to be LCOS if anything.

Before I started any serious analysis, I didn’t have an opinion. I started out doubtful that it was a field sequential or scanning (fiber/beam) device due to the lack of any indicative artifacts in the video, but it was the “smoking gun” frame that convinced me: if the camera was catching temporal artifacts, it should have been catching those other artifacts as well.

I’m basing this conclusion on the facts as I see them. Period, full stop. I would be happy to discuss this conclusion (if asked rationally) in the comments section.

Disclosure . . . I Just Bought Some Stock Based on My Conclusion and My Reasoning for Doing So

The last time I played this game of “what’s inside,” I was the first to identify that a Himax LCOS panel was inside Google Glass, which resulted in their market cap going up almost $100M in a couple of hours. I had zero shares of Himax when this happened; my technical conclusion now, as it was then, is based on what I saw.

Unlike my call on Himax in Google Glass, I have no idea which company makes the device Magic Leap appears to be using, nor whether Magic Leap will change technologies for their production device. I have zero inside information and am basing this entirely on the information I have given above (you have been warned). Not only is the information public, but it is based on videos that are many months old.

I looked at the companies on the OLED Microdisplay List by www.oled-info.com (who have followed OLED for a long time). It turned out all the companies were either part of a very large company or were private, except for one, namely eMagin.

I have known of eMagin since 1998, and they have been around since 1993. They essentially mirror Microvision, which does laser beam scanning and was also founded in 1993, a time when you could go public without revenue. eMagin has spent/lost a lot of shareholder money and is worth about 1/100th of its peak in March 2000.

I have NOT done any serious technical, due diligence, or other stock analysis of eMagin and I am not a stock expert. 

I’m NOT saying that eMagin is in Magic Leap. I’m NOT saying that Micro-OLED is necessarily better than any other technology. All I am saying is that I think someone’s Micro-OLED technology is being used in the Magic Leap prototype, and that Magic Leap is such a hotly followed company that it might (or might not) affect the stock price of companies making Micro-OLEDs.

So, unlike the Google Glass and Himax case above, I decided to place a small “stock bet” (for me) on my ability to identify the technology (but not the company) by buying some eMagin stock on the open market at $2.40 this morning, 2016-11-09 (symbol EMAN). I’m just putting my money where my mouth is, so to speak (and NOT, once again, giving stock advice), and playing a hunch. I’m making a full disclosure in letting you know what I have done.

My Plans for Next Time

I have some other significant conclusions I have drawn from looking at Magic Leap’s video about the waveguide/display technology that I plan to show and discuss next time.