As a colorblind person who specializes in making maps for data visualization, I was inspired by Blake’s recent blog post on Introducing a Color Blind Filter to take a careful look at the limits of my own color perception with an eye toward how we can make smarter color choices for more accessible maps.
I am a deuteranope, “green-blind,” so my experience of color comes from two overlapping bell-shaped curves of sensitivity to light frequency (red and blue) rather than the normal three (which add green). Anecdotally, this means:
- It is difficult for me to distinguish the edges of a green object within a field of red, or vice versa.
- Dark blues, greens, and browns, like “British racing green,” are muddy and indistinguishable.
- Certain bright greens, notably in green traffic signals, appear very close to white.
- Some oranges, yellows, and greens are hard to tell apart.
- The edges of my color categories are shifted: there are objects that are “yellow” for me but “green” for others, or “purple” for me but “blue” for others, or “pink” for me but “purple” for others.
- It is difficult to tell the states of tricolor (red-yellow-green) LEDs apart, and even single-color red LEDs can occasionally appear to be pale green in darkness.
I don’t think the standard algorithm for simulating colorblindness does a very good job of reproducing these symptoms, especially not the collapse of bright red and green into indistinguishability in certain circumstances. Is it possible to do better?
Questioning the premise
The simulation algorithm that Blake used, which comes from an academic paper by Viénot, Brettel, and Mollon, tries to produce one single trichromat color to correspond to the dichromat perception of each physical combination of spectral emissions. I think this is folly, because both groups of people already have, for instance, a concept of “green” that is associated with pure RGB green, and it is not helpful to say, “oh, some of them are actually seeing yellow.” The important thing is not that the hue is “wrong” but that it is ambiguous. I think the right thing to do is to model the error in perception of each physical phenomenon, and to produce images that introduce as much error as possible while still looking imperceptibly different to a dichromat, so that a trichromat sees standard color plus noise, not unrelated colors.
Since there is no solid data available about the nature of this error, I’ve been experimenting on myself. I wrote a miniature web app that chooses a random (RGB) color and lets me shift its (CIE) hue further and further away from that color until it looks different. (By thinking of this as “minimal pairs,” I am looking at an optics problem through a linguistics lens, which is kind of weird.) The data from doing the test a few hundred times gives me a decent idea of how big my perceptual error in hue is across the range of CIE lightness/chroma/hue possibilities.
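For concreteness, here is a rough console stand-in for that test. The original was a small web app; this sketch assumes the python-colormath package, a terminal with 24-bit color support, and an arbitrary 2-degree hue step, so treat everything except the overall shape of the procedure as a placeholder.

```python
import random

from colormath.color_conversions import convert_color
from colormath.color_objects import LCHabColor, sRGBColor


def swatch(rgb):
    """Paint a small block of the given sRGBColor using 24-bit ANSI codes."""
    r, g, b = (int(round(255 * v)) for v in
               (rgb.clamped_rgb_r, rgb.clamped_rgb_g, rgb.clamped_rgb_b))
    return f"\x1b[48;2;{r};{g};{b}m      \x1b[0m"


def trial():
    """One minimal-pair trial: widen the hue gap until the colors look different."""
    base_rgb = sRGBColor(random.random(), random.random(), random.random())
    base = convert_color(base_rgb, LCHabColor)  # CIE lightness/chroma/hue
    for delta_deg in range(2, 181, 2):
        shifted = LCHabColor(base.lch_l, base.lch_c, (base.lch_h + delta_deg) % 360)
        print(swatch(base_rgb), swatch(convert_color(shifted, sRGBColor)))
        if input("Different? [y/N] ").strip().lower() == "y":
            return base.lch_l, base.lch_c, base.lch_h, delta_deg
    return base.lch_l, base.lch_c, base.lch_h, None  # never distinguished


if __name__ == "__main__":
    print(trial())  # (L, C, h, threshold in degrees) for a single trial
```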
What the perceptual data suggests
The clearest systematic effect is from variation in chroma (which is analogous to saturation). Not surprisingly, the lower the chroma, the greater the difference there has to be in hue for me to tell the colors apart. I have no idea if the nonlinearity is a deuteranope thing; for all I know, trichromats may be able to tell hues apart equally well at any chromaticity. I also don’t know whether the sag at the right is because very saturated colors are much easier to tell apart or just because the hues of extremely saturated colors can’t be shifted very far.
More interesting is the variation between hues. Normalizing to a chroma of 30 according to the curve above, I can tell purples apart when their hues are less than 0.3 radians apart, but colors near orange have to be more than 0.7 radians apart (40 degrees!) for me to tell them apart reliably.
Lightness doesn’t seem to matter much. I can apparently tell hues apart about equally well at any lightness, although there is a tiny improvement in discernment between the darkest and the lightest.
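For anyone who wants to repeat the analysis, here is a minimal sketch of how curves like these could be fit to the trial data. The file name, the column layout, the model form, and the starting parameters are all assumptions; only the general shape of the analysis (a least-squares fit of threshold against chroma, then normalization to chroma 30 and binning by hue) comes from the description above.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical trial log: one row per trial of (lightness, chroma, hue_rad, threshold_rad).
lightness, chroma, hue, threshold = np.loadtxt("hue_trials.csv", delimiter=",").T


def chroma_model(c, a, b):
    """Placeholder model form: thresholds grow as chroma falls."""
    return a + b / (c + 1.0)


(a_fit, b_fit), _ = curve_fit(chroma_model, chroma, threshold, p0=(0.3, 5.0))

# Normalize every threshold to what the fit predicts at chroma 30, then look
# at how the normalized threshold varies around the hue circle.
normalized = threshold * chroma_model(30.0, a_fit, b_fit) / chroma_model(chroma, a_fit, b_fit)
for lo in np.arange(-np.pi, np.pi, np.pi / 6):
    mask = (hue >= lo) & (hue < lo + np.pi / 6)
    if mask.any():
        print(f"hue {lo:+.2f} rad: mean threshold {normalized[mask].mean():.2f} rad")
```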
A deuteranope color wheel
What I take away from this is that if you want a set of colors that are readily distinguishable by deuteranopes and you need them to have constant chroma and lightness, there are only about 17 useful hues. Here is the set you get when you go from -π to π along the curve modeled above. The hues are heavy on greens, which, in spite of the “green-blind” description, are where the blue and red color receptors' ranges overlap the most, and purples, which are fabricated from separate red and blue stimulation even in people with normal color vision. The figure is kind of dark and washed out, because there just aren’t that many CIE lightnesses and chromas where all the hues are representable in RGB.
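One way to produce such a set, sketched below under assumptions: walk the CIE LCh hue circle from -π to π, advancing at each step by the locally measured just-noticeable hue difference. The threshold function here is a made-up stand-in for my fitted curve (the real curve is what yields roughly 17 hues), so the exact count it prints will differ.

```python
import math

from colormath.color_conversions import convert_color
from colormath.color_objects import LCHabColor, sRGBColor


def hue_threshold(h_rad):
    """Hypothetical just-noticeable hue difference (radians) at chroma 30."""
    # Placeholder shape only: roughly 0.7 rad near orange, smaller elsewhere.
    return 0.45 + 0.25 * math.cos(h_rad - math.radians(60))


def palette(lightness=55.0, chroma=30.0):
    """Walk the hue circle, stepping by the local discrimination threshold."""
    hues, h = [], -math.pi
    while h < math.pi:
        hues.append(h)
        h += hue_threshold(h)
    colors = []
    for h in hues:
        lch = LCHabColor(lightness, chroma, math.degrees(h) % 360)
        rgb = convert_color(lch, sRGBColor)
        colors.append((rgb.clamped_rgb_r, rgb.clamped_rgb_g, rgb.clamped_rgb_b))
    return colors


if __name__ == "__main__":
    print(len(palette()), "distinguishable hues with the placeholder curve")
```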
RGB hues are misleading in all kinds of ways, but they’re easier to tell apart than CIE hues because the lightness and chroma vary wildly from hue to hue, making yellow and orange dramatically different to look at even though they’re in the same hue bucket here.
Simulating errors in hue perception
But does it work as something that can be applied to arbitrary images? Not really. I wrote a program to download images and apply noise to their hues according to the formulas discovered above. But the ambiguity that it creates doesn’t turn out to be ambiguous in a particularly interesting or useful way. Instead it amounts to a minor desaturation of color, although that may be enough to explain the ambiguities I experience with very dark and very light colors.
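A sketch of that experiment, assuming scikit-image and Pillow and a stand-in for the fitted formulas: jitter each pixel’s CIE hue by noise whose size shrinks as chroma grows, then convert back to RGB.

```python
import numpy as np
from PIL import Image
from skimage.color import lab2rgb, rgb2lab


def add_hue_noise(path_in, path_out, seed=0):
    """Jitter every pixel's CIE hue by chroma-dependent noise (placeholder model)."""
    rng = np.random.default_rng(seed)
    rgb = np.asarray(Image.open(path_in).convert("RGB"), dtype=float) / 255.0
    lab = rgb2lab(rgb)
    a, b = lab[..., 1], lab[..., 2]
    chroma = np.hypot(a, b)
    hue = np.arctan2(b, a)
    # Placeholder for the fitted curves: hue perception is noisier at low chroma.
    sigma = 0.5 / (1.0 + chroma / 30.0)
    noisy_hue = hue + rng.normal(0.0, 1.0, hue.shape) * sigma
    lab[..., 1] = chroma * np.cos(noisy_hue)
    lab[..., 2] = chroma * np.sin(noisy_hue)
    out = np.clip(lab2rgb(lab), 0.0, 1.0)
    Image.fromarray((out * 255).astype(np.uint8)).save(path_out)


# add_hue_noise("original.jpg", "hue_noise.jpg")  # hypothetical file names
```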
Most significantly, I still don’t know what to do about the case of pure RGB red and green, which are vivid and distinct in large isolated shapes, but indistinguishable in close alternation. There is no hue transformation that will create an RGB color in between these two, because anything going around the circle between them is outside of the RGB triangle, and the confusion between the two is not a desaturation.
(Comparison images: original vs. with noise added to hue.)
Other avenues to investigate
There is another dimension associated with visual ambiguity, which is the size of the objects. I think what is ultimately going on in my eyes with pure RGB red and green is that both colors cause approximately equal stimulation of the red receptor, and are disambiguated only because green also triggers the blue receptor a little. But blue has a much lower resolution (only 7% of cones are blue, and they are concentrated away from the center of the visual field), so if red and green are right next to each other, it’s random which locations the blue component gets associated with, making the redness and greenness jump back and forth.
Before I can measure the significance of object size in myself, I need to figure out what kind of edge detection and dimensions even make sense to characterize it. Maybe the right thing to do is to model visible light in terms of the deuteranope response curves, as the standard algorithm does, but then blur the blue channel and map the result back into RGB space using the inverse of the same model rather than the trichromat model.
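A sketch of that idea, with loud caveats: the linear-RGB→LMS matrix below is one that circulates in daltonization code descended from the Viénot/Brettel/Mollon work, and the gamma approximation and blur radius are arbitrary. Only the structure reflects the idea above: blur the S channel in an LMS-like space, then invert the same matrix to get back to RGB.

```python
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

# One commonly quoted linear-RGB -> LMS matrix (an assumption here).
RGB_TO_LMS = np.array([[17.8824,    43.5161,   4.11935],
                       [ 3.45565,   27.1554,   3.86714],
                       [ 0.0299566,  0.184309, 1.46709]])
LMS_TO_RGB = np.linalg.inv(RGB_TO_LMS)


def blur_s_channel(path_in, path_out, sigma_px=4.0):
    """Blur only the S (short-wavelength) channel in LMS space, then invert."""
    srgb = np.asarray(Image.open(path_in).convert("RGB"), dtype=float) / 255.0
    linear = srgb ** 2.2                      # rough inverse gamma
    lms = linear @ RGB_TO_LMS.T
    lms[..., 2] = gaussian_filter(lms[..., 2], sigma=sigma_px)  # sparse blue cones
    out_linear = np.clip(lms @ LMS_TO_RGB.T, 0.0, 1.0)
    Image.fromarray((out_linear ** (1 / 2.2) * 255).astype(np.uint8)).save(path_out)


# blur_s_channel("original.jpg", "s_blurred.jpg")  # hypothetical file names
```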
In a completely different direction, the 1959 experiments by Edwin Land, the founder of Polaroid, suggest that people with normal color vision can see color in a dichromat way if those are the color stimuli that are presented to them. Maybe that, and not an RGB simulation, is really the way to get the dichromat experience across.