[If you’re impatient and want to skip directly to the technique, head for the practical side.]

I think I was half-asleep when I had my “holy shit” moment. I was excited to crack open Blender’s source code and start coding, but first I had to do my due diligence. Had anyone else stumbled on the same technique for removing noise from Cycles renders?

Answer: yes, in fact “bashi” did it two years ago and without writing a single line of code. Dang! I fired up Blender, and gave his nodegroup a quick whirl…

Bashi's denoising, compared to the original.

… and discovered it’s a little terrible. The front car’s hood looks great, but the floor is splotchy, the rear car is still noisy, and there’s something terribly wrong with the car seats. This technique should give better results, though; let’s open the hood and figure out what’s going wrong.

Looking behind the scenes, to see where Bashi's shader goes wrong.

The core problem seems to be the combination of the glossy direct and chroma channels, which injects noise that throws off the bilateral filter (more on that later). Maybe we can repair bashi’s basic idea?

Sources Of Noise

Unintended noise in a path tracer comes from three possible sources: there aren’t enough sample rays cast into the scene for the integrator to converge on a value, the paths explored miss important light sources, or there’s a bug in the machine. That last one never happens, so we’ll just focus on the first two.
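To put a rough number on the first failure: a path tracer estimates each pixel’s brightness with a Monte Carlo average over $N$ sampled paths, and the error of that average shrinks only as the square root of the sample count:

$$
\hat{L} = \frac{1}{N} \sum_{k=1}^{N} \frac{f(x_k)}{p(x_k)}, \qquad \text{noise} \propto \frac{1}{\sqrt{N}}
$$

Here $f(x_k)$ is the light carried along path $x_k$ and $p(x_k)$ is the probability of sampling that path. Halving the noise costs four times the samples, and the second failure amounts to $p$ being a poor match for where the light actually comes from.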

Two of the three sampling failures that lead to noise: too few ray samples to reach convergence, or rays sent out in the wrong directions to capture the scene. Not pictured: bugs.

Fireflies, for instance, appear when glossy rays from one pixel connect with a bright light source while rays from neighbouring pixels don’t. If the integrator had more time to sample, the other pixels would find that source too and the difference would disappear. I’ve covered fireflies in depth already, so I won’t rehash them.

The other type of noise we’d most like to wipe away comes from indirect diffuse lighting.

Geometry "close" to a light source needs few rays to converge and is well sampled; geometry "far" from light sources requires many more rays and ends up poorly sampled.

The more bounces it takes to connect with a light source, the less likely we’ll stumble onto it by accident; conversely, the fewer bounces it takes, the more samples we’ll get at that pixel and the more accurately we’ll calculate its illumination. Not all paths are equal, either: sometimes we’ll take a few extra bounces to reach the source, resulting in less light transferred along that path. The result is noise.

A combination of different exposure levels from the same image, demonstrating how noise increases as the path length increases.

Worse still, the current trend in 3D is to use indirect diffuse lighting, as it’s what we normally run across in real life. But another complication is more of an asset: this noise is most noticeable on flat planes our brains expect to be smooth, geometry better known as a “wall.”

"Class room", by Christophe Seux. Sample count deliberately reduced. Source: https://www.blender.org/download/demo-files/

Notice something interesting, however: flat things tend to have the same surface normal. Bendy things, where the noise is less noticeable, don’t.

The normal map that Cycles creates for "Class room", with a few tweaks so that you can see everything (Cycles returns negative colours, by default). Red is right, blue is up, and green is away.

An object’s illumination depends heavily on its surface normal, so nearby patches with the same normal should have similar illumination levels. This suggests a cheat: combine the illumination of nearby pixels that share the same normal, and the result should be similar to cranking up the sample count. Even if it isn’t, global illumination is notoriously fuzzy, so it should still be close enough to fool the eye.

The Bilateral Filter

Speaking of fuzz, you’ve heard of the Gaussian filter, right? It’s very handy for blurring things and has some interesting mathematical properties. Like nearly all blur filters, though, it suffers from a flaw: it blurs. As in, it blurs edges and flat areas equally well.

Demonstrating the difference between a normal Gaussian filter and the Bilateral filter.

There’s an easy fix: mix in a weight function that reduces the contributions of pixels with different colours. Since an edge is the boundary between two groups of similarly-coloured pixels whose common shades differ, such a weight function preserves edges. The result is known as the bilateral filter; it’s extremely powerful and flexible, and frequently used to remove noise from images.
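For the mathematically inclined, the filtered value at pixel $p$ is usually written as follows, where $S$ is a window around $p$ and each $G_\sigma$ is a Gaussian of width $\sigma$:

$$
BF[I]_p = \frac{1}{W_p} \sum_{q \in S} G_{\sigma_s}\big(\lVert p - q \rVert\big)\, G_{\sigma_r}\big(\lvert I_p - I_q \rvert\big)\, I_q
$$

$W_p$ is just the sum of all the weights, there to keep the overall brightness unchanged. That second Gaussian is the weight function in question; hand it a different “guide” image than the one being blurred, and you get what the literature calls a joint or cross bilateral filter. Hold that thought.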

This scene is incredibly noisy, though. The bilateral filter of my image editor isn’t cutting it; there just isn’t enough visual difference between the two sides of an edge to stand out from the noise.

But we do have access to the geometry of the scene, via the normals. They fulfill exactly the requirements of that weighting function: physical edges correspond to visual edges, and areas with the same surface normal have similar intensity. Many implementations of the bilateral filter have the weighting function locked to the image being blurred, unfortunately …

… but Cycles’ implementation isn’t one of those. Hot dog! Let’s feed the image and normal map into this Compositor node, and gaze on the results.
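If you’d rather script the hookup than click it together, a minimal sketch looks something like this. It assumes Blender 2.7x’s Python API and the default render layer name “RenderLayer”; the filter settings are placeholders to tune by eye.

```python
import bpy

scene = bpy.context.scene
rl = scene.render.layers["RenderLayer"]  # default name; yours may differ
rl.use_pass_normal = True                # record surface normals alongside the image

scene.use_nodes = True
nodes, links = scene.node_tree.nodes, scene.node_tree.links
nodes.clear()

render = nodes.new("CompositorNodeRLayers")
blur = nodes.new("CompositorNodeBilateralblur")
blur.iterations = 4      # placeholder values
blur.sigma_color = 0.15
blur.sigma_space = 5.0
comp = nodes.new("CompositorNodeComposite")

links.new(render.outputs["Image"], blur.inputs["Image"])
# The trick: the node takes its edge-detecting weights from the separate
# "Determinator" input, so hand it the normal pass instead of the noisy image.
links.new(render.outputs["Normal"], blur.inputs["Determinator"])
links.new(blur.outputs["Image"], comp.inputs["Image"])
```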

They’re terrible, for the most part. The walls do look nice, as do the desk surfaces, but the chalkboard’s contents are gone and the papers on the cork board blur into the background in a strange way. Parts of those papers had the same normal as the wall behind them, and as requested the bilateral filter blurred wall and paper together; the raised parts didn’t, hence the corners of those papers aren’t blurred at all. The normal map isn’t catching every edge; we need some sort of object-specific colour to mark object boundaries. While we’re at it, we should find a way to deal with decals and posters.

Cycles has a two-bird solution. Surface normals are just one of two dozen passes that Cycles can record for you over a scene. Four of them are devoted to colour alone, without any illumination data.

Most of the colour passes Cycles can return, plus their combination.

Colour is quite important, as we frequently use it to visually divide a scene into objects. Most of the time, an object has just one or two major colours. Not every object obeys this rule, wood being a classic exception, but such surfaces are visually noisy to begin with, so poor sampling is camouflaged there. That applies equally to 2D and 3D art.

Like the normal map, these colour channels behave exactly as our weighting function expects. But they aren’t a perfect match for what we want, either. Maybe we can get superior results by combining the normals and colour channels?
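For the script-inclined, extending the earlier sketch might look like this. Exactly how to combine the channels is a judgment call; a plain additive mix is the simplest thing that could work, and again this assumes the 2.7x API.

```python
# Continuing the earlier sketch (rl, nodes, links, render, blur still in scope).
rl.use_pass_diffuse_color = True
rl.use_pass_glossy_color = True
rl.use_pass_transmission_color = True

def add_rgb(a, b):
    """Sum two image sockets with a Mix node set to Add."""
    n = nodes.new("CompositorNodeMixRGB")
    n.blend_type = 'ADD'
    n.inputs["Fac"].default_value = 1.0
    links.new(a, n.inputs[1])
    links.new(b, n.inputs[2])
    return n.outputs["Image"]

# Flatten the colour passes into one image, fold the normals in on top, and
# use the whole thing as the edge detector (replacing the Normal-only link).
colour = add_rgb(render.outputs["Diffuse Color"], render.outputs["Glossy Color"])
colour = add_rgb(colour, render.outputs["Transmission Color"])
guide = add_rgb(colour, render.outputs["Normal"])
links.new(guide, blur.inputs["Determinator"])
```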

YES!!! That looks amazing! Well OK, some of the subtle lighting details have been blurred away, giving a sort of “floaty” look, and the light hints of dirt are gone, but otherwise that’s a HUUUGE improvem-

… Dang. Something is very wrong with that clock. And the cork board is a flat brown with a few white specks in it. Time to look under the hood again.

Debugging why the new nodegroup fails in the classroom scene.

That clock face is coming up as a solid colour, so the entire face, including those numbers, is being blurred away. Both the normal and colour channels only record from the first object they hit, and here that’s the transparent glass covering the clock face. Since it’s pretty invisible in this scene, I’ll cheat and simply delete it. Otherwise, we’d have to work something out with multiple passes.

The cork board is behaving as it should; it just lacks strong colour contrasts. Masking it off is a better approach, and Cycles offers an easy solution: increment the pass index of the cork board and use the Object Index channel as a mask. We’ll feed this into the bilateral filter too, as we don’t want masked areas to bleed outward. Let’s add the floor to the mask while we’re at it; its blurred version is quite terrible.
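In script form, the masking could look like the following; the object names are hypothetical stand-ins for whatever the scene actually calls its cork board and floor, and add_rgb and friends carry over from the earlier sketches.

```python
# Tag the troublesome objects; everything with pass_index 1 gets masked off.
for name in ("CorkBoard", "Floor"):      # hypothetical object names
    bpy.data.objects[name].pass_index = 1
rl.use_pass_object_index = True

mask = nodes.new("CompositorNodeIDMask")
mask.index = 1
links.new(render.outputs["IndexOB"], mask.inputs["ID value"])

# Fold the mask into the Determinator too, so the filter treats the mask's
# border as an edge and masked areas don't bleed into their neighbours.
links.new(add_rgb(guide, mask.outputs["Alpha"]), blur.inputs["Determinator"])

# Then lay the untouched render back over the masked areas:
# Fac = 0 keeps the blurred image, Fac = 1 restores the original.
restore = nodes.new("CompositorNodeMixRGB")
links.new(mask.outputs["Alpha"], restore.inputs["Fac"])
links.new(blur.outputs["Image"], restore.inputs[1])
links.new(render.outputs["Image"], restore.inputs[2])
links.new(restore.outputs["Image"], comp.inputs["Image"])
```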

The classroom scene, with the compositor nodes laid out.

Ahhh. There we go. Finally, we have something solid.

Other Sources of Noise

If you’ve worked with Christophe Seux’s demo scene before, you might have noticed that this version has no depth of field. The colour and normal channels are only tack-sharp when the scene is in focus; as it goes out of focus, they fill with noise too.

A montage demonstrating how depth of field introduces noise into otherwise solid pass data.

This is due to how path tracers and focal blur work. Points in the scene radiate light in a variety of directions, not just directly at the camera. This creates a problem if you want to form an image, as an aperture will let that point’s light spread all over the imaging device.

An imaging device without a lens: the light radiating from a single point is restricted by the aperture, but is still spread across a larger area of the imaging device. The result: fuzziness. Pinholes help, but run up against the diffraction limit.

Lenses come to our rescue here: by bending light, they can reverse that spreading and cause light emitted from a point to re-converge to a point. Unfortunately, that convergence only happens for points at specific distances and locations, typically in a plane projected in front of the camera. As a point gets further from that plane, its light is spread across a greater area of the imaging surface.
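The thin-lens equation pins down that “specific distances” condition: light leaving a point at distance $o$ in front of a lens of focal length $f$ reconverges at distance $i$ behind it, where

$$
\frac{1}{o} + \frac{1}{i} = \frac{1}{f}
$$

and if the imaging surface actually sits at distance $s$, similar triangles give the diameter of the blur disc cast by an aperture of diameter $A$:

$$
c = A \, \frac{\lvert i - s \rvert}{i}
$$

The further a point’s $i$ drifts from $s$, the wider the smear.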

A lens bends the light radiated from a point, and if that point is in the right place the light will reconverge on a point on the imaging surface. If not, it'll smear across an area. Diffraction plays a much smaller role here, unless you add an aperture into the mix.

We can flip that around. A specific point on the imaging surface gathers all the light emitted from a single in-focus point in the scene; light from points off that focal plane arrives at the same spot, but from a range of places. In other words, that pixel is built from a scattering of rays taken from a range of directions, a blur of rays that fills the volume of a cone projected into the scene.

A specific point on the imaging surface will map to a set of cones projected out into the scene; as a result, only one point in the scene will be tack-sharp at that pixel.

Seux’s demo scene fares well, thanks to the floor masking, but that won’t work in general. One possible solution is to fake depth of field, using any of a number of techniques. With a little work and foresight, this can look great.

The results of the noise reduction filter on the BMW scene. Note the overly sharp edges in some areas, and the bizarre blurring on the headlights.

But there are other sources of noise, too. Motion blur has the same problem as depth of field, but there are fewer techniques to fake it. Caustics break the rule that nearby areas with the same normal have roughly the same illumination, but fortunately they look fairly good when blurred. Reflections are another matter.

The various components of a caustics scene. Note that direct glossy light shouldn't be blurred, while the rest is in desperate need.

Like caustics, reflections break the rule. Unlike caustics, they’re usually sharp and intended to be so. Worse still, sometimes they aren’t: a surface with non-trivial roughness blurs a reflection much like focal blur does. Olivier Vandecasteele’s caustics test is a good example of this: the direct glossy pass should be left alone, the indirect diffuse needs denoising badly, and the other passes sit somewhere in-between. The only decent way to handle this is by combining multiple passes or masking off areas.
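As a sketch of that multipass idea (not necessarily Vandecasteele’s exact setup): with 2.7x-era Cycles passes, the final image can be rebuilt, to a good approximation, as

$$
\text{Combined} \approx (\text{DiffDir} + \text{DiffInd}) \cdot \text{DiffCol} + (\text{GlossDir} + \text{GlossInd}) \cdot \text{GlossCol} + (\text{TransDir} + \text{TransInd}) \cdot \text{TransCol} + \text{Emit} + \text{Env}
$$

plus the subsurface terms if the scene uses them. That lets you run the bilateral filter over just the indirect terms, leave the direct glossy pass untouched, and sum everything back together.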

Olivier Vandecasteele's caustics test, with a multipass denoising setup.

Enough theory, though. Let’s put this into practice.
