When Nits are Not Enough: Quantifying Brightness in Wide Gamut Displays


Finding Better Ways to Compare Perceived Brightness 

As I covered in my last post, Better Together: Combining HDR and Quantum Dots, manufacturers are eager to transition their products to both high dynamic range (HDR) and wide color gamut (WCG) to achieve the optimum level of color and visual performance.  With the accelerating adoption of WCG display technologies, the challenge of finding a “fair” brightness metric to compare displays is becoming increasingly important; even more so as the HDR trend pushes nit counts ever higher.

At this year’s CES (the largest consumer electronics show in the world), the combination of HDR and WCG was arguably one of the hottest topics, with nearly every major manufacturer demonstrating HDR/WCG in their next-generation displays.

HDR typically requires higher peak luminance levels, which means higher maximum nit counts.  Samsung was touting 1000 nit displays in their booth, while Sony had a 4000 nit technology demonstration.  Obviously, manufacturers want consumers to associate higher nit counts with higher brightness and better image quality.

Whether this is actually the case will depend on the color performance of the display, specifically on its spectral profile.  Why is that?  Two reasons.  First, there is a fundamental tradeoff between nits and color gamut – the wider the gamut the lower the nit level.  Second, wider color gamuts result in higher perceived brightness levels.  With these two effects in play, you can start to imagine why it becomes difficult to get a true “apples-to-apples” comparison on display brightness.

The “Luminance = Brightness” Myth

If you’ve been shopping for a TV or computer monitor lately, you’ve likely come across the word “nits” prominently listed in the display’s technical specifications for brightness. Some time ago display manufacturers began using nits, which is actually a measure of luminance, as a way to market the brightness levels of their products.

What’s the difference?  In color science, any term ending in “–ness” (e.g. colorfulness, lightness, brightness, etc.) refers to a subjective attribute of light – in other words, properties of light that cannot be directly measured and can only be described as sensations by the observer.

Bright“-ness” is defined as the subjective sensation in the consciousness of a human observer when light falls on the retina.  We use words like “dim” or “very bright” to describe this non-linear response to light.  [Note: Do not confuse this concept with the “brightness” setting on most displays, which controls the black level of the display].

Luminance, on the other hand, is the closest correlate to brightness that can be repeatably measured (in the form of nits, or candelas per square meter).  It represents the amount of light that is emitted or reflected from a surface.

One simple way to understand the difference between brightness and luminance is to consider their logarithmic relationship to one another – the sensitivity of the eye decreases rapidly as the luminance of the source increases (see the graphic to the right).

In other words, if the luminance of a source increases 10X, viewers do not perceive a 10X increase in brightness.  It is this characteristic that allows the human eye to operate over such an extremely wide range of light levels (14 stops for you photographers out there).
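To get a feel for how compressive this response is, here is a minimal Python sketch using the CIE 1976 lightness formula (L*), a standard correlate of perceived lightness relative to a reference white.  It is not an absolute brightness model, and the luminance values below are purely illustrative; the point is only to show the nonlinearity.

```python
# Minimal sketch: the CIE 1976 L* formula illustrates how compressive
# our visual response to luminance is. Values are illustrative only.

def cie_lightness(Y, Y_white=100.0):
    """CIE 1976 L* as a function of luminance Y relative to a reference white."""
    t = Y / Y_white
    if t > (6.0 / 29.0) ** 3:
        return 116.0 * t ** (1.0 / 3.0) - 16.0
    return 903.3 * t  # linear segment for very dark values

for Y in (1.0, 10.0, 100.0):  # a 100x range of luminance
    print(f"Y = {Y:6.1f}  ->  L* = {cie_lightness(Y):5.1f}")

# A 10x jump in luminance (10 -> 100) raises L* from ~37.8 to 100,
# roughly a 2.6x increase in perceived lightness, not a 10x increase.
```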

Many professionals in the display community continue to use “brightness” interchangeably with “luminance” simply because many people don’t know what “luminance” is.  And display manufacturers, in trying to make it easier for buyers to shop for displays that fit their brightness preferences, only cement the confusion.

Why does this even matter?  Well, it didn’t when all of our displays used the same backlight technology and provided more or less the same level of color performance (Rec. 709 or sRGB).  However, as wide color gamut (WCG) displays proliferate, the picture quickly becomes more complex.

Quantum dots, RG phosphors, OLED, lasers, RGB LEDs and other technologies are becoming more common, and with this diversification in backlight technology, the interchangeability between luminance and brightness starts to break down.  Fundamentally, this all comes down to the spectral profile of the RGB primaries used to make the full color display.

Nit-picking: Understanding Photopic Response Overlap and the H-K Effect

One of the key features that enables WCG displays, and quantum dot displays in particular, to produce a wider range of colors is the narrow emission spectra of their red, green, and blue (RGB) primaries.

Compared to the phosphor-based approach used in most standard LCDs, quantum dots produce extremely narrow spectral bands of light.  Due to the spectral purity of the emitted wavelengths (typically 20-30 nanometers in quantum dot displays), RGB primaries are perceived as having a much higher level of saturation.

These highly saturated RGB primaries can be plotted on a CIE gamut diagram as the vertices of a triangle, where every color within the triangle can be recreated by the display (see image to the left).

The horseshoe shape represents all of the colors our eyes can perceive, with the edges of the horseshoe representing monochromatic wavelengths of light.  Therefore, the narrower the spectral profile of emission, the more saturated the color of the display primary, and the closer to the edge of the horseshoe the RGB primaries lie.

This clearly results in larger and larger triangles that cover more and more of the perceptual color space.  Notice how even with the widest TV color gamut standard, Rec. 2020, the three primaries only cover 63% of all perceived colors.  In other words, there is plenty of room to improve our displays from a color performance perspective.
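As a rough illustration of how much larger those triangles get, here is a short Python sketch comparing the triangle areas spanned by the published Rec. 709 and Rec. 2020 primaries in CIE 1931 xy chromaticity space.  Keep in mind that xy space is not perceptually uniform, so raw area is only a crude proxy for gamut size.

```python
# Sketch: compare color gamut triangle areas in CIE 1931 xy chromaticity
# space using the published Rec. 709 and Rec. 2020 primaries. xy space is
# not perceptually uniform, so area is only a rough gauge of gamut size.

def triangle_area(primaries):
    """Shoelace formula for the area of the (R, G, B) chromaticity triangle."""
    (x1, y1), (x2, y2), (x3, y3) = primaries
    return 0.5 * abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))

rec709  = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]   # R, G, B
rec2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]   # R, G, B

a709, a2020 = triangle_area(rec709), triangle_area(rec2020)
print(f"Rec. 709 triangle area : {a709:.4f}")
print(f"Rec. 2020 triangle area: {a2020:.4f}")
print(f"Rec. 2020 spans ~{a2020 / a709:.1f}x the xy area of Rec. 709")
```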

Another way to evaluate the color performance of a display is to examine its spectral distribution.  In the diagrams below, the red lines represent the spectral distributions of two different types of display: WLED vs. quantum dot.  As drawn, the spectral profiles of the two technologies are quite different, with the quantum dot display showing three distinct RGB peaks (thanks to the narrow RGB primaries) versus the blue LED peak and broad yellow emission of the WLED.

[Figure: spectral power distributions of a WLED display (left) and a quantum dot display (right), overlaid with the photopic response curve.]

The WLED display on the left is reproducing 75% of the NTSC gamut.  The WCG display on the right is reproducing 100% of the NTSC gamut. Again, the wider gamut on the right is a product of the spectrally narrow RGB primaries.  The blue lines in the diagram represent something called the photopic response, which is the way our eyes perceive and naturally emphasize certain wavelengths across the visible spectrum.  So, display wavelengths closer to the peak of the photopic response curve (555 nm) contribute more to the calculation of the nit level than wavelengths at the tails.

Because of the broader spectrum of light from the white LEDs, the WLED spectrum overlaps the photopic response peak to a greater degree than that of the quantum dot display, resulting in a higher nit count.  Conversely, the more well-defined quantum dot primaries, with less photopic response overlap, result in a lower nit count.

This difference in nits due to spectral profile variance happens despite the fact that the power delivered by both displays is the same (i.e. the area under the curves is identical).  Said another way, a quantum dot display with the same power output as a 400 nit WLED display would measure only 336 nits, a 16% drop in luminance.
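To make the photopic weighting concrete, here is a simplified Python sketch.  The spectra are hypothetical Gaussian bands standing in for a WLED display (blue LED plus broad yellow phosphor) and a quantum dot display (three narrow RGB peaks), and the photopic response V(λ) is approximated by a Gaussian centered at 555 nm rather than the official CIE table.  Only the relative comparison matters, not the absolute numbers.

```python
import numpy as np

# Wavelength grid over the visible range, 1 nm spacing
wl = np.arange(380.0, 781.0, 1.0)

# Rough Gaussian stand-in for the CIE photopic response V(lambda), peaking at
# 555 nm. The real curve is tabulated; this approximation is illustrative only.
V = np.exp(-0.5 * ((wl - 555.0) / 42.0) ** 2)

def band(center_nm, fwhm_nm, power):
    """Gaussian emission band, normalized so its total power equals `power`."""
    sigma = fwhm_nm / 2.355  # convert FWHM to standard deviation
    shape = np.exp(-0.5 * ((wl - center_nm) / sigma) ** 2)
    return power * shape / shape.sum()

# Hypothetical quantum dot display: three narrow RGB primaries (~25 nm FWHM)
qd = band(630, 25, 1.0) + band(530, 25, 1.0) + band(450, 25, 1.0)

# Hypothetical WLED display: blue LED peak plus a broad yellow phosphor band
wled = band(450, 25, 1.0) + band(570, 120, 2.0)

def photopic_weighted(spd):
    """Relative luminance: the spectrum weighted by V(lambda) and summed."""
    return float((spd * V).sum())

# Same total radiant power (3.0 arbitrary units for each display),
# different photopic overlap, and therefore different "nit" levels.
print("WLED relative luminance:", round(photopic_weighted(wled), 3))
print("QD   relative luminance:", round(photopic_weighted(qd), 3))
print("QD / WLED              :", round(photopic_weighted(qd) / photopic_weighted(wled), 2))
```

With these assumed spectra, the narrow-primary display comes out noticeably lower in luminance at equal radiant power, the same qualitative tradeoff as the 400 vs. 336 nit example above.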

Of course, the degree of the nit penalty for the WCG display depends entirely on the spectral distributions being compared.  The example above clearly demonstrates the tradeoff between color gamut and luminance as well as the difficulty in using nits to evaluate standard gamut and WCG displays.  If the displays in question have different spectral distributions, then the comparison between the two is no longer “apples-to-apples”.

 

But wait, there’s more …

Even though the quantum dot spectrum above measures a 16% lower nit count relative to the WLED display, the perceived drop in brightness is smaller than even the logarithmic relationship between luminance and brightness would predict.

Put another way, the more saturated color reproduction results in the WCG display being perceived as brighter than its luminance would suggest.  Why is this the case?  Well, there is a psychovisual phenomenon known as the Helmholtz–Kohlrausch effect (or, simply, the “H-K effect”) whereby saturated colors appear brighter to our eyes than less saturated colors of equivalent luminance.  I’ll delve further into this interesting phenomenon in my next post.

Until then, the important takeaways from this post are:

  • Luminance (nits) is not the same as brightness, especially when comparing standard color gamut and WCG displays
  • WCG displays require narrow RGB primaries, resulting in reduced photopic response overlap (lower nit count)
  • The highly saturated colors produced by narrow RGB primaries appear brighter than their luminance alone suggests, due to the H-K effect

 

All of this makes finding an alternative specification for fair brightness comparisons one of the more important challenges to solve in the design and marketing of WCG display technology.  Without it, display engineers and consumers alike will get left in the dark about the full performance benefits of WCG displays.  What’s needed is a revised photopic response curve that takes into account the effects I’ve outlined in this post.  At QD Vision, we’re already developing this new metric and are looking forward to sharing it with the display industry in the near future.  I’ll be talking about that in a future post, so stay tuned.

 

By John Ho, Product Marketing Manager, QD Vision
