The Problem
A camera captures an image. You display it on a monitor. The image looks darker than what the camera originally captured. Why?
Monitors have a nonlinear voltage-to-intensity response curve. When you supply a voltage of x, the displayed intensity is approximately x raised to the power of 2.5. This means that a mid-range voltage of 0.5 produces an intensity of 0.5^2.5 ≈ 0.177, which is far darker than the 50% brightness a linear response would give.
The Solution
To compensate for the monitor's power curve, gamma correction is applied to the signal before it reaches the monitor. The input signal is raised to the power of 1/2.5 (approximately 0.45), which pre-brightens it. When the monitor then applies its 2.5 power curve, the two effects cancel out and the final output matches the original linear intensity.
// Encode for display (gamma encode)
float encoded = pow(linearValue, 1.0 / 2.2);
// Decode back to linear (gamma decode; done implicitly by sRGB-aware APIs)
float linear = pow(encoded, 2.2);
Why 2.2?
The standard gamma value used in practice is 2.2 (not exactly 2.5), partly for historical reasons and partly because it works well with the nonlinear way human vision perceives brightness. The sRGB color space standardizes this: images stored in sRGB are encoded with a transfer curve close to a 1/2.2 power, and the display applies the inverse ~2.2 power to recover approximately linear light.
In Graphics Programming
This matters directly in shaders. Lighting calculations must happen in linear space, because light transport is additive and multiplicative only in linear values. If your textures are stored in sRGB (gamma-encoded) and you don't decode them before lighting, your shading math will be wrong.
The correct workflow:
- Decode sRGB textures to linear space at sample time
- Perform all lighting math in linear space
- Re-encode back to sRGB before writing to the framebuffer for display
OpenGL handles both conversions automatically if you create your textures with the GL_SRGB8_ALPHA8 internal format and enable sRGB encoding on the framebuffer via glEnable(GL_FRAMEBUFFER_SRGB).
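A sketch of that setup, assuming a GL 3.x+ context already exists; `width`, `height`, and `pixels` stand in for your image data, and error handling is omitted:

```c
/* Texture: declaring the data as sRGB makes sampling return linear values. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

/* Framebuffer: GL re-encodes linear shader output to sRGB on write. */
glEnable(GL_FRAMEBUFFER_SRGB);
```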