John Tsiombikas nuclear@mutantstargoat.com
26 October 2012
Ok, so it's a well-known fact among graphics practitioners that pretty much every game does rendering incorrectly. Since performance, and not correctness, is always the prime consideration in game graphics, we tend to turn a blind eye to such issues. However, with today's ultra-high-performance programmable shading processors, and hardware LUT support for gamma correction, the excuses for why we keep doing it the wrong way become progressively lamer. :)
The gist of the problem with traditional real-time rendering is that we're trying to do linear operations in non-linear color spaces.
Let's take lighting calculations as an example. When light hits a plane at a 60 degree angle of incidence from the plane's normal vector, Lambert's cosine law states that the intensity of the light diffusely reflected off the plane (radiant exitance) is exactly half the intensity of the incident light (irradiance) from that light source. However the monitor, responsible for taking all those pixel values and sending them rushing into our retinas, does not play along with our assumptions. That half-intensity grey light we expect from the surface becomes much darker, due to the non-linear response curve of the electron gun.
Simply put, when half the voltage of the full input range is applied to the electron gun, much fewer than half of the possible electrons hit the phosphor in the glass, making it emit lower than half-intensity light towards the user. That's not a defect of CRT monitors; all kinds of monitors, TV screens, projectors, and other display devices work the same way.
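To make the mismatch concrete, here's a minimal sketch of the cosine law in plain C (the function name and vector representation are mine, purely for illustration):

/* Diffuse reflection according to Lambert's cosine law: the reflected
 * intensity is the incident intensity scaled by the cosine of the angle
 * between the unit surface normal n and the unit light direction l,
 * i.e. their dot product, clamped to zero for back-facing light.
 */
float lambert_diffuse(const float n[3], const float l[3], float irradiance)
{
    float ndotl = n[0] * l[0] + n[1] * l[1] + n[2] * l[2];
    return irradiance * (ndotl > 0.0f ? ndotl : 0.0f);
}

For a light 60 degrees away from the normal, ndotl is cos(60°) = 0.5, so exactly half the incident intensity comes back; but if that 0.5 is written to the framebuffer as-is, the monitor displays it much darker than half brightness.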
So how do we correct that? We need to use the inverse of the monitor response curve to correct our output colors before they are fed to the monitor, so that the linear color space where we do our calculations does not get bent out of shape before it reaches our eyes. Since the monitor response curve is approximately a power function of the form Vout = Vin^γ, where usually γ ≈ 2.2, it mostly suffices to do the following calculation before we write the color value to the framebuffer: c' = c^(1/γ). Or in a pixel shader:
gl_FragColor.rgb = pow(color.rgb, vec3(1.0 / 2.2));
That's not entirely correct, because if we are doing any blending, it happens after the pixel shader writes the color value, which means it would operate after this gamma correction, in a non-linear color space. That would be fine if this shader were a final post-processing pass which writes the whole framebuffer without any blending operations, but there is a better and more efficient way. If we just tell OpenGL that we want to output a gamma-corrected framebuffer, or more precisely a framebuffer in the sRGB color space, it can do this calculation using hardware lookup tables, after any blending takes place, which is both efficient and correct. This functionality is exposed by the ARB_framebuffer_sRGB extension, and should be available on all modern graphics cards. To use it we need to request an sRGB-capable framebuffer during context creation (GLX_FRAMEBUFFER_SRGB_CAPABLE_ARB / WGL_FRAMEBUFFER_SRGB_CAPABLE_ARB), and enable it with glEnable(GL_FRAMEBUFFER_SRGB).
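With GLX, for example, the relevant bits of the setup might look roughly like this (a minimal sketch, assuming an already-open X display dpy and the glxext.h/glext.h headers; error checking and the rest of the context creation are omitted):

#include <GL/glx.h>
#include <GL/glxext.h>    /* for GLX_FRAMEBUFFER_SRGB_CAPABLE_ARB */

int attribs[] = {
    GLX_RENDER_TYPE, GLX_RGBA_BIT,
    GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
    GLX_DOUBLEBUFFER, True,
    GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8, GLX_BLUE_SIZE, 8,
    GLX_FRAMEBUFFER_SRGB_CAPABLE_ARB, True,    /* ask for an sRGB-capable framebuffer */
    None
};
int fbcount;
GLXFBConfig *fbconf = glXChooseFBConfig(dpy, DefaultScreen(dpy), attribs, &fbcount);
/* ... create the window and GL context from fbconf[0] as usual ... */

glEnable(GL_FRAMEBUFFER_SRGB);    /* enable sRGB conversion on framebuffer writes */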
Now if we do just that, we're probably going to see the following ghastly result:
The problem is that our textures are already gamma-corrected by a similar process, which makes them completely washed out when we apply gamma correction a second time at the end. The solution is to make the color values looked up from textures linear before using them, by raising them to the power of 2.2. This can be done either in the shader, simply with pow(texture2D(tex, tcoord).rgb, vec3(2.2)), or by using the GL_SRGB_EXT internal texture format instead of GL_RGB (EXT_texture_sRGB extension), to let OpenGL know that our textures aren't linear and need conversion on lookup.
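As a rough sketch, the second option boils down to something like this at texture creation time (assuming pixels points to the usual gamma-corrected 8-bit RGB image data, width and height are its dimensions, and EXT_texture_sRGB or OpenGL 2.1 is available):

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
/* the internal format marks the image data as sRGB-encoded, so texture
 * lookups return linearized color values */
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB_EXT, width, height, 0,
        GL_RGB, GL_UNSIGNED_BYTE, pixels);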
The result is correct rendering output, with all operations in a linear color space:
A final pitfall we may encounter is that if we use intermediate render targets during rendering, with 8 bits per color channel, we will observe noticeable banding in the darker areas. That is because our 8-bit/channel textures are raised to a power and the result is placed back into an 8-bit/channel render target, which wastes color resolution and loses detail that cannot be recovered later on, when we gamma-correct the values again. The bottom line is that we need higher precision intermediate render targets if we are going to work in a truly linear color space. The following screenshots show a dark area of the game when using a regular GL_RGBA intermediate render target (top), and when using a half-float GL_RGBA16F render target (bottom):
Color artifacts are clearly visible in the first image, around the dark unlit area.
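For completeness, a half-float intermediate render target can be set up along these lines (a sketch assuming OpenGL 3.0, or the framebuffer object and floating point texture extensions; width and height are the render target dimensions):

GLuint fbo, rtex;

glGenTextures(1, &rtex);
glBindTexture(GL_TEXTURE_2D, rtex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
/* 16 bits of floating point per channel: enough precision for linear
 * color values to survive until the final gamma correction without
 * visible banding */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
        GL_RGBA, GL_HALF_FLOAT, 0);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, rtex, 0);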
This was initially posted in my old wordpress blog. Visit the original version to see any comments.