
Ever had a comment haunt your brain? This one by Palaes got to me, not because I hadn’t thought of trying to mix clamping and progressive renders, but because it was missing a critical bit of information:

What values do you put in the Clamp Direct and Clamp Indirect boxes?

I’ve seen decent explanations of what those values do on forums and posts, but never any good method to set them. Sorry, Andrew Price, but “start at 10 and work down” doesn’t qualify.

So, over this blog post and the next I’ll give you a recipe for setting the clamping values in Blender, without relying on guesswork. It’ll take a bit of theory to wring the most out of it, though, so this first post will explain Cycles’ rendering loop in some depth.

Know all about that already? Here’s the short version: clamping sets a cap on the per-sample average of each pixel’s RGB channels. You might want to stick around for the long version, though, as it’s a bit more… illuminating.
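If you’d rather see the short version as code, here’s a toy sketch of that cap. It does the averaging explicitly for clarity; as we’ll see later, Cycles folds the division into the clamp value instead.

```python
def clamp_sample(rgb, cap):
    """Scale an (r, g, b) sample down so the average of |r|, |g|, |b| fits under cap."""
    avg = (abs(rgb[0]) + abs(rgb[1]) + abs(rgb[2])) / 3.0
    if avg > cap:
        scale = cap / avg
        return tuple(c * scale for c in rgb)
    return rgb

print(clamp_sample((6.0, 3.0, 3.0), 2.0))    # average is 4, so it's halved: (3.0, 1.5, 1.5)
print(clamp_sample((0.5, 0.25, 0.25), 2.0))  # already under the cap, untouched
```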

Anyway, Cycles is a path tracer, which means it traces light rays backwards through a scene. Since the destination of every ray we see is the virtual camera, Cycles starts by firing a ray from the camera into the scene.

Pew! A light ray strikes the camera.

That was a bad idea, because a lot of different light paths could return down that ray.

All the light paths which could have contributed to that ray. Spoiler alert: there are lots, and the image is nowhere near exhaustive.

Did the photon originate from the overhead light, via a reflection off the glass? Did it originate from the headlamp and pass through the glass? Or did it bounce around a few times first? Heck, it may even have originated from the overhead light, bounced off the floor, passed through the glass, pinballed around a bit, then come down that ray! To compound the misery, in real life that pixel is probably the combination of millions of photons, some of which took the above paths, some of which took paths I haven’t thought of, but all of which contributed to the illumination of that pixel.

What to do? Ray tracers typically sweep all that complexity under the carpet: reflections and refractions come from exactly one angle, lights are point sources (or simple geometric areas for distributed tracing), and all those other possible light paths are typically ignored. Path tracers like Cycles embrace the complexity instead, chopping it down to size with a large dose of randomness and optimization, then averaging multiple light paths together per pixel. That’s why their images look noisier than ray tracers’, but more realistic.
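That “average many random paths” idea is easy to demonstrate. The sketch below is not Cycles’ sampler, just a made-up scene where most paths find a dim bounce and a few find a bright one; the pixel value converges on the true mean as samples pile up, and with few samples you get exactly the noise path tracers are known for.

```python
import random

def sample_path():
    # Hypothetical scene: 90% of paths find a dim bounce (0.2),
    # 10% find a bright specular path (10.0).
    return 10.0 if random.random() < 0.1 else 0.2

def render_pixel(samples):
    # A pixel is just the average of many random path samples.
    return sum(sample_path() for _ in range(samples)) / samples

random.seed(42)
print(render_pixel(16))      # noisy with few samples
print(render_pixel(100000))  # converges toward the true mean, 0.1*10 + 0.9*0.2 = 1.18
```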

Cycles in particular splits light paths into seven distinct types: background, scatter, diffuse, subsurface, glossy, emission, and transmission.

Cycles’ Seven Types of Light (as of 2.73a): background, scatter, diffuse, subsurface, glossy, emission, transmission.

Cycles also divides light into two categories. Direct light is any light that bounces one or zero times between a light emitter and the camera, while indirect light bounces more than once.

The direct and indirect contributions to a rendered scene.

Some types of light only fit in one category, and direct emission is treated specially by Cycles. All light falls into one category or the other, though, no exceptions.
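The direct/indirect split boils down to a bounce count, so it can be stated in two lines of Python. This is my paraphrase of the rule above, not anything from the Cycles source:

```python
def light_category(bounces):
    """Classify light by how many times it bounced between emitter and camera."""
    return "direct" if bounces <= 1 else "indirect"

print(light_category(0))  # emitter seen straight-on: direct
print(light_category(1))  # one bounce off a surface: direct
print(light_category(3))  # pinballed around a bit: indirect
```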

You can see all of the above for yourself, in fact; one of the great things about open source software is that you get to directly read the source, so there’s no need to guess or reverse-engineer Cycles. Most of you probably aren’t coders, though, so let me loosely translate the stock path tracer of version 2.73a into English.

For each segment of the light path, as it bounces around (478):

  • Look for a geometry intersection (480-501).
  • If volumetrics are enabled, handle emission (554-555), sample the diffuse lighting along the path (564) as well as indirect lighting (570-575), then potentially scatter more ray segments to handle light bounces (595-608).
  • If there was no intersection, sample the background and quit the ray (615-632).
  • Otherwise, gather the light emitted by the object at the end of the segment (677-680).
  • Terminate the ray if we must, or a roll of the dice says so; otherwise, attenuate all the future light we collect (687-698).
  • If ambient occlusion is enabled, gather that too (703-704).
  • Subsurface scattering? Add that on as well and exit the ray (711-713).
  • Finally, calculate all the direct lighting at the end of this segment, and move to the next segment if possible (718-722).

Finished all segments? Sum up all the light gathered along the path, clamp it, and write it out (725-733).
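The steps above can be paraphrased as a heavily simplified Python loop. This is just the control flow, not the Cycles kernel: `intersect` and `direct_light` are hypothetical stand-ins for the scene, returning either `(emission, attenuation)` for a hit or `None` for the background.

```python
import random

BACKGROUND = 0.05  # hypothetical background intensity
MAX_BOUNCES = 8

def trace_path(intersect, direct_light):
    radiance = 0.0
    throughput = 1.0  # attenuation accumulated along the path so far
    for bounce in range(MAX_BOUNCES):
        hit = intersect(bounce)
        if hit is None:
            # No intersection: sample the background and quit the ray.
            radiance += throughput * BACKGROUND
            break
        emission, attenuation = hit
        # Gather the light emitted by the object at the end of the segment.
        radiance += throughput * emission
        # Terminate the ray if a roll of the dice says so...
        if random.random() > 0.95:
            break
        # ...otherwise, attenuate all the future light we collect.
        throughput *= attenuation
        # Calculate the direct lighting at the end of this segment.
        radiance += throughput * direct_light(bounce)
    return radiance

# A camera ray that immediately escapes to the background:
print(trace_path(lambda bounce: None, lambda bounce: 0.0))  # 0.05
```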

Hmm, we could use a bit more detail on that clamping step. The source is easy enough to read, so I’ll let it speak for itself.

#ifdef __CLAMP_SAMPLE__
	else if(sum > clamp_direct || sum > clamp_indirect) {
		float scale;

		/* Direct */
		float sum_direct = fabsf(L_direct.x) + fabsf(L_direct.y) + fabsf(L_direct.z);
		if(sum_direct > clamp_direct) {
			scale = clamp_direct/sum_direct;
			L_direct *= scale;

			L->direct_diffuse *= scale;
			L->direct_glossy *= scale;
			L->direct_transmission *= scale;
			L->direct_subsurface *= scale;
			L->direct_scatter *= scale;
			L->emission *= scale;
			L->background *= scale;
		}

		/* Indirect */
		float sum_indirect = fabsf(L_indirect.x) + fabsf(L_indirect.y) + fabsf(L_indirect.z);
		if(sum_indirect > clamp_indirect) {
			scale = clamp_indirect/sum_indirect;
			L_indirect *= scale;

			// HJH: skipping the obvious...
		}

		/* Sum again, after clamping */
		L_sum = L_direct + L_indirect;

OK, maybe not: the above code sums up the values of the red, green, and blue channels for both direct and indirect lighting, then checks to see if each is greater than the Clamp Direct and Clamp Indirect values. If one or both of them are, Cycles scales them down so they juuuust slip under the line.
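Here’s that logic as a Python paraphrase of the kernel snippet, with hypothetical names; the colours are (r, g, b) tuples, and `clamp_direct`/`clamp_indirect` stand in for the (pre-multiplied) Clamp values the kernel receives.

```python
def clamp_contribution(rgb, cap):
    """Scale rgb down so the sum of its absolute channels slips under cap."""
    total = sum(abs(c) for c in rgb)
    if total > cap:
        scale = cap / total
        return tuple(c * scale for c in rgb)
    return rgb

def clamp_path(direct, indirect, clamp_direct, clamp_indirect):
    direct = clamp_contribution(direct, clamp_direct)
    indirect = clamp_contribution(indirect, clamp_indirect)
    # Sum again, after clamping.
    return tuple(d + i for d, i in zip(direct, indirect))

print(clamp_path((6.0, 2.0, 0.0), (1.0, 1.0, 1.0), 4.0, 10.0))
# direct sums to 8 > 4, so it's halved to (3, 1, 0);
# indirect sums to 3 < 10, so it passes untouched: (4.0, 2.0, 1.0)
```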

Astute code readers may have noticed Cycles doesn’t actually do an average; in reality, it handles that implicitly by pre-multiplying, saving a division op. We can confirm this with a simple test scene:

The setup of our test scene for clamping.

From the camera’s view, the center of the image is dominated by a block with an emissive texture; the bottom third is a glossy plane which reflects a quarter of all light, positioned so the camera (offscreen and to the left) sees a reflection of the cube; and the top third of the final render is occupied by two glossy planes that bounce light from that upper clone of the emission cube. With a little Python tomfoolery, we can sweep the clamping value down horizontally and observe when it starts biting into various pixels.
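The sweep itself has to run inside Blender, but the idea is easy to reproduce outside it. This stand-in (not the actual script) sweeps a clamp value down linearly across the columns of a band of constant intensity: the band holds its value until the clamp drops below it, and tracks the clamp from then on.

```python
def band_profile(intensity, width, clamp_max):
    """Intensity of one band across an image whose clamp falls linearly, left to right."""
    profile = []
    for x in range(width):
        clamp = clamp_max * (1 - x / (width - 1))
        # A one-channel band clamps to exactly min(intensity, clamp).
        profile.append(min(intensity, clamp))
    return profile

print(band_profile(2.0, 9, 8.0))
# [2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 1.0, 0.0] — flat until the clamp bites
```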

The results are exactly as the source code said: each band starts dropping in intensity once the average of each RGB channel exceeds the clamping value, whereafter they track the clamping value.

Clamp test with annotations. Long story short, light intensities start clamping exactly when the code says they should.

This might seem like a silly thing to check; how can the code do anything other than what it says? We’re not peeking at the full source, though; human colour perception is complex, and after the raw values are captured they have to run a gauntlet of processing before they become something that can tickle our eyeballs. Even an EXR image has room for interpretation; two image viewers in the same HDR image suite, for instance, gave me two different intensities for the brightest pixels in the above test pattern. We can’t ignore this mapping problem, but fortunately we don’t need to solve it either: there’s one program we can count on to accurately map pixel values from Blender renders back into ray intensities within Blender scenes.


Sampling an image in Blender (left-click in the image viewer). This also works for renders!

Left-clicking on an HDR or rendered image will give you the intensity of the pixel you clicked on, and as you can see, it maps directly: the upper part is the block proper, while the lower part is the glossy surface reflecting 25% of the light it receives. Both are brighter than what your monitor can display, but the HDR pixels don’t lie.

And their dogged devotion to the truth is the final tool we need to start clamping.