
Now that this little project is out of the way, I can finally get back to regular blogging.

Said project had a big problem with “fireflies,” or salt-and-pepper noise caused by difficult-to-integrate scenes. I’ll defer on fully unpacking the jargon (for the impatient, T.F.M. does a decent job), but the short version is that while Cycles and other path tracers try to be smart about where they cast their rays, some scenes just outsmart them.

Three close-ups of a scene, showing a back corner that's especially prone to "fireflies" even though it's nowhere near a specular source. The path tracer was used, with no tweaking other than sample count.

There are a number of ways to be rid of these pests; Andrew Price has an excellent article covering seven ways to do it within Blender, and if you kick in Blender’s compositor or pop open an image editor you also gain access to the median or despeckle image filters. But all those image techniques and five of Price’s seven suggestions are destructive, as in they throw out or alter image data. As ugly as those fireflies look, they’re still a valid light path through your scene. You can prove that just by cranking the sample count (one of Price’s seven) and watching them naturally fade away. So scrubbing them just feels… wrong.

[HJH 2015/02/23] A very clever alternative comes from Sebastian Koenig, and involves hacking your shaders so that diffuse gathering rays only see diffuse surfaces. That’s not always an option (imagine a cave scene partly lit by water caustics), and it’s still altering your scene, but this technique deserves to be more widely known in my opinion.

A third approach is to animate the random number seed, a technique that’s popped up in multiple places but seems to have originated with YouTube user “seltsamliebe”. Two renders of the exact same scene that use the same seed will have their fireflies in the exact same spot, as the light paths the integrator follows are determined by the same sequence of “random” numbers. Two neighboring animation frames will have a similar layout and thus a similar noise pattern, which both video compression and your own eyes will pick up on. Two renders with different seeds, however, will have their fireflies in completely different places, and two neighboring frames will have uncorrelated noise.
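The whole trick hinges on the seed fully determining the integrator's sample pattern. An ordinary pseudo-random generator makes the point (a toy stand-in, not Cycles' actual code; `render_noise` is a hypothetical name):

```python
import random

def render_noise(seed, pixels=8):
    # Toy stand-in for a render: the "random" light paths are fully
    # determined by the seed, so the noise pattern is reproducible.
    rng = random.Random(seed)
    return [rng.random() for _ in range(pixels)]

# Same seed, same scene: the fireflies land in exactly the same spots.
assert render_noise(7) == render_noise(7)

# Different seed: a completely different, uncorrelated noise pattern.
assert render_noise(7) != render_noise(8)
```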

The same scene rendered at 50 samples in the path tracer, but with three different random number seeds. Might have to get in close to spot the differences.

So by plugging the current frame (“#frame”) into the random number seed, you can render an “animation” of your still scene and merge all the individual frames into one. The randomization of the fireflies will mix them in with saner light paths, resulting in a smooth image. But while animations look nicer with uncorrelated noise, nice noise is still noise. And recent versions of Blender disable scripting by default, forcing you to manually enable that and potentially open yourself to malicious code.
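Why does merging frames tame the fireflies? Each one is a rare, huge sample, so averaging it with many ordinary samples from other seeds dilutes it. A quick simulation (made-up numbers, not real render data) illustrates:

```python
import random

def render_pixel(rng):
    # Toy model of a hard-to-integrate pixel: most light paths return a
    # dim value, but one in fifty is a blazing firefly.
    return 100.0 if rng.random() < 0.02 else 0.5

# One "frame" per seed, 500 pixels each; merging averages each pixel
# across all fifty frames.
frames = [[render_pixel(random.Random(seed * 10007 + px))
           for px in range(500)] for seed in range(50)]
merged = [sum(f[px] for f in frames) / len(frames) for px in range(500)]

# Individual frames still carry full-strength fireflies, but no merged
# pixel comes anywhere near that brightness.
assert max(max(f) for f in frames) == 100.0
assert max(merged) < 50.0
```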

[HJH 2015/02/25] I’ve since improved on the #frame trick, and fixed the issues with animations at the cost of playing havoc with Cycles’ live preview. Check out the details over here; I think it’s a much better solution. Meanwhile, Palaes gives a quick tutorial on how to animate the seed value in a comment over here. It has the same issues as #frame, except it’s much more secure.

Greg Zaal came up with a better variation: he wrote a plugin which randomly changed the random seed, independent of the frame, and automatically merged duplicate frames. Now the same approach can work for animations… if you don’t mind installing an 18-month-old plugin, one that forces you to kill Blender if you want to stop the render early. Still, it has the nifty feature of being progressive: the finished product is slowly built up in layers, so you can quickly get a feel for the final result.

I couldn’t find his plugin back when I needed it, though, so I wrote my own solution. No plugins necessary, no need to modify the scene file at all; you just need a Unix-like environment, ImageMagick, and Perl. The full source is at the end, but the results are… intriguing.

Despite the 500 and 50×10 renders using the same number of samples, the 50×10 render looks quite a bit smoother. In fact, by eyeballing different mergers, I can get something that looks about as noisy as the 500 sample image by layering together six 50 sample images, for a savings of 40%!

But intriguing isn’t the same as surprising. Zaal points out there’s a non-obvious form of clamping going on here: those white pixels aren’t actually white, they’re clipped down from “super-white” to keep the overall image within the dynamic range of the monitor. He speculated that any difference in performance would disappear if you shifted from a low dynamic range container like JPG or PNG to something like EXR, which wouldn’t clip anything. And he’s right, at least as far as my eyes can tell.
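The clamping effect can be checked with plain arithmetic: averaging values that were each clipped to the display range is not the same as clipping the average. (The numbers here are made up for illustration, not taken from the renders.)

```python
# Light contributions at one pixel across several merged renders; one of
# them is "super-white," i.e. far above the displayable 1.0.
samples = [0.2, 0.3, 6.0, 0.1]

# A high-dynamic-range format like EXR keeps everything...
true_average = sum(samples) / len(samples)                          # 1.65

# ...while PNG/JPG clip each render to 1.0 *before* the merge.
clipped_average = sum(min(s, 1.0) for s in samples) / len(samples)  # 0.4

# The clipped stack is darker: the firefly's excess energy was thrown away.
assert clipped_average < true_average
```

This is exactly why the stacked low-dynamic-range result comes out darker than the straight-through render.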

He points to further proof: notice how the stacked version is darker than the straight-through one? That’s because it’s losing energy via those clipped values. So alas, this image stacking script is also tossing out data.

But that doesn’t mean the script is useless, either, as it retains all the benefits of Zaal’s plugin and adds several more. Want to continually refine your result, in perpetuity, rather than artificially halting it after X rounds? You can do that. And don’t worry about interrupting it; my script randomizes the frames it renders to prevent uneven “baking.” It also runs from the command line, and as mentioned doesn’t modify the .blend file itself or require a plugin, which makes it handy as part of an ad-hoc distributed renderer. And it can either update finished frames on-the-fly or wait until the end before consolidating; the former gives you that nice progressive render and saves disk space, but the latter uses CPU time more efficiently.
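One detail of the script worth calling out before the source: ImageMagick’s `-average` weights every listed file equally, so to fold new renders into an existing average of COUNT renders, the script lists the accumulated image COUNT times alongside the new files. The bookkeeping reduces to this (toy pixel values, not real image data; `merge` is a hypothetical name):

```python
def merge(running_avg, count, new_renders):
    # Mimic the script's `convert ... -average` call: repeat the
    # accumulated image `count` times beside the new files, then take a
    # plain, unweighted average of the whole stack.
    stack = [running_avg] * count + new_renders
    return sum(stack) / len(stack), count + len(new_renders)

# Folding renders in one at a time...
avg, n = 4.0, 1
avg, n = merge(avg, n, [8.0])
avg, n = merge(avg, n, [3.0])

# ...matches the straight average of all three renders.
assert abs(avg - (4.0 + 8.0 + 3.0) / 3) < 1e-9
assert n == 3
```

This is why the script keeps a `.count` file per frame: without the repetition weighting, each new render would count as much as the entire accumulated stack.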


#!/bin/sh
# render_incremental: do an incremental render of a Blender still/animation
# Author: HJ Hornbeck
# Version: 2.0

##### user-tweakable parameters

DIVISOR=11              # default number of increments
START=1                 # default start frame
END=1                   # default end frame
INPUT=                  # default Blender file to render
OUTPUT=                 # default prefix for the target render
PARAMS=                 # default parameters to be passed to Blender
SCENE="Scene"           # default scene to render
MODE=progressive        # default image combine mode
INFINITE="no"           # should we loop infinitely?
BLENDER="blender"       # where's Blender? allows you to switch between different versions

printHelp () {

cat <<EOL;

render_incremental -i [INPUT] -o [OUTPUT_PREFIX] -s [START_FRAME] -e [END_FRAME] 
        -c [COUNT] -p [BLENDER_PARAMS] -n [SCENE] -m [MODE] (-u)

Default values are:

        Input file:     $INPUT
        Output prefix:  $OUTPUT
        Start:          $START
        End:            $END
        Count:          $DIVISOR
        Params:         $PARAMS
        Scene:          $SCENE
        Mode:           $MODE
        Unending?       $INFINITE

[MODE] is one of:

        (P)rogressive:  combine incremental renders on a per-frame basis
        (G)ather:       wait until all incremental renders are done before combining
        (I)gnore:       don't combine incremental renders, let them accumulate

EOL
}

# read in the user parameters
while getopts ":i:o:s:e:c:n:p:m:ub:" OPTNAME ; do
        case "$OPTNAME" in

                i)      INPUT="$OPTARG" ;;
                o)      OUTPUT="$OPTARG" ;;
                s)      START="$OPTARG" ;;
                e)      END="$OPTARG" ;;
                c)
                        DIVISOR="$OPTARG"       # undocumented: set this to zero and mode to "gather" to just consolidate
                        ;;
                n)      SCENE="$OPTARG" ;;
                p)
                        PARAMS="$PARAMS $OPTARG"
                        ;;
                m)
                        if [ "$OPTARG" = "p" -o "$OPTARG" = "P" ] ; then
                                MODE="progressive"
                        elif [ "$OPTARG" = "g" -o "$OPTARG" = "G" ] ; then
                                MODE="gather"
                        elif [ "$OPTARG" = "i" -o "$OPTARG" = "I" ] ; then
                                MODE="ignore"
                        else
                                printHelp
                                exit 1
                        fi
                        ;;
                u)      INFINITE="yes" ;;
                b)
                        BLENDER="$OPTARG"       # undocumented: allows you to switch your Blender version from the command line
                        ;;
                *)
                        printHelp
                        exit 1
                        ;;
        esac
done

# validate said parameters, starting with input file
if [ ! -s "$INPUT" ] ; then

        printHelp
        echo "Hey, '$INPUT' isn't a file! Please fix!"
        exit 1
fi

# ensure DIVISOR, START, and END are all valid numbers
if [ -z "`seq 1 $DIVISOR 2>/dev/null`" -a "$DIVISOR" != "0" ]; then

        echo "No increment? We'll go with 11 then."
        DIVISOR=11
fi

if [ -z "`seq $START $END 2>/dev/null`" ]; then

        echo "Whoa, we can't render frames $START to $END. Check your parameters."
        exit 1
fi

# KLUDGE: just trust $OUTPUT and $PARAMS are correct

##### internal variables

# create a place to stuff the Python file, and delete it on exit
TMPFILE=`mktemp`
INDEX=1                 # which increment we're on
trap "rm $TMPFILE \"$OUTPUT\".*temp.png 2>/dev/null " EXIT   # make sure temp files are gone, otherwise they'll muck up the combine phase

##### main routine

# loop until told to shut off
while [ $INDEX -le $DIVISOR -o "$INFINITE" = "yes" ] ; do 

        # going frame by frame, thanks to GPU flakiness; shuffle to avoid "hot spots"
        for FRAME in `seq $START $END | shuf` ; do

                # get a random number; use Perl to maximize seed range (Blender uses 31bit unsigned, no leading zeros)
                RAND=`dd if=/dev/urandom 2>/dev/null | perl -ne 'print unpack"%31L";exit'`

                # generate the Python script (cover both the path and branched path tracers)
                cat <<EOL > $TMPFILE
import bpy

bpy.data.scenes["$SCENE"].cycles.seed=$RAND
bpy.data.scenes["$SCENE"].cycles.samples=max( int(bpy.data.scenes["$SCENE"].cycles.samples/$DIVISOR), 1 )
bpy.data.scenes["$SCENE"].cycles.aa_samples=max( int(bpy.data.scenes["$SCENE"].cycles.aa_samples/$DIVISOR), 1 )
EOL

                # now, launch Blender!
                "$BLENDER" -b "$INPUT" --python $TMPFILE -o "$OUTPUT.####.$RAND.png" $PARAMS -f $FRAME

                # average together images on-the-fly, to save disk space (but only if asked!)
                if [ "$MODE" = "progressive" ] ; then

                        BASE="$OUTPUT."`printf '%04g' $FRAME`
                        if [ ! -s "$BASE.count" ] ; then
                                mv "$BASE.$RAND.png" "$BASE.png"
                                echo 1 > "$BASE.count"

                        # silently condense multiple renders into one; handy if you're consolidating from other machines!
                        elif [ `ls -1 "$BASE".*.png 2>/dev/null | wc -l` -gt 0 ] ; then
                                COUNT=`cat "$BASE.count"`
                                SEQ="`seq 1 $COUNT`"
                                convert `for ITER in $SEQ ; do echo "$BASE.png" ; done` "$BASE".*.png -average "$BASE.temp.png"
                                mv "$BASE.temp.png" "$BASE.png"
                                echo $((COUNT+`ls -1 "$BASE".*.png 2>/dev/null | wc -l`)) > "$BASE.count"
                                rm "$BASE".*.png
                        fi
                fi
        done

        INDEX=$((INDEX+1))              # increment this!
done

# gather mode: only consolidate frames when finished
if [ "$MODE" = "gather" ] ; then
        for FRAME in `seq $START $END` ; do
                BASE="$OUTPUT."`printf '%04g' $FRAME`
                if [ ! -s "$BASE.count" ] ; then                # ensure this file exists!
                        echo 0 > "$BASE.count"
                fi

                # condense multiple renders into one; handy if you're consolidating from other machines!
                if [ `ls -1 "$BASE".*.png 2>/dev/null | wc -l` -gt 0 ] ; then
                        COUNT=`cat "$BASE.count"`
                        SEQ="`seq 1 $COUNT`"
                        convert `for ITER in $SEQ ; do echo "$BASE.png" ; done` "$BASE".*.png -average "$BASE.temp.png"
                        mv "$BASE.temp.png" "$BASE.png"
                        echo $((COUNT+`ls -1 "$BASE".*.png 2>/dev/null | wc -l`)) > "$BASE.count"
                        rm "$BASE".*.png
                fi
        done
fi