Archive for the 'Graphics' Category

How to implement dodge and burn tools

Today we’re going to cover some photographic tools, namely dodge and burn. When I started looking into dodge and burn, I actually didn’t know they had a real world analog, but they do. So to begin with, I’m going to cover what the tools are supposed to be mimicking, then delve into how we’re going to achieve that.

Overview

Dodge and burn are photographic processes that lighten or darken a specific part of an image, respectively. Dodge works by placing an object, such as a card, between the light and the photo, so that the area to be lightened is shielded from the light. Burn works the same way, except that the card is placed so that it blocks light from every part of the photo except the one the developer wants to darken. If you’re interested, Wikipedia has more in-depth coverage of dodge and burn.

Our dodge and burn tools are simply going to lighten and darken the image. As in most bitmap editors, our dodge and burn tools will act like brushes.

For example, if we have a simple black and white linear gradient, the dodge tool affects it like so:

Dodge midtones, 100%

The burn tool has the opposite effect on the same image:

Burn midtones, 100%

In addition to being able to adjust the strength, often called exposure, of the dodge and burn tools, the user will also be able to determine the range of colors the tools will affect the most: highlights, midtones, or shadows. Although our dodge and burn tools do not exactly mimic the tools found in large commercial packages, I believe they get reasonably close.

Implementation Overview

Since dodge and burn are brushes, they inherit all the properties of our basic bitmap brush. The difference is, instead of using a solid color for the brush tip, they use the source image, after it has been altered by a Core Image filter.

Since the basic brush implementation has been covered before, we’re only going to cover the basic algorithms used in the Core Image filters for dodge and burn. Both the dodge and burn tools only affect a specific range of colors at a time: highlights, midtones, and shadows. Because of this, there will be six filter algorithms shown here (three for dodge and three for burn).

The filters affect each of the color components in the same way, i.e. the function applied to the red channel is the same as the one applied to the green and blue channels. The range parameter also operates per component: whether a value counts as a highlight, midtone, or shadow is judged for each channel independently. Highlights are the component values closest to 1.0, midtones are around 0.5, and shadows are close to 0.0. Finally, we’re not going to cover how to implement the exposure parameter here in the overview; instead we’ll show the general filter functions.

Dodge Highlights

The dodge highlights filter will lighten all components, but will affect the components near 1.0 the most. The function for this filter is:


new component = component + (e ^ component - 1.0)

where component is one of the pixel component values (red, green, or blue) and e is the math constant.

This yields the graph:

Dodge highlights graph

As you can see, for component values closer to 0 (i.e. shadow values), the function is close to the identity function (new component = component, aka y = x). As the component values approach 1, the function diverges from the identity in that the new component value reaches 1 much more quickly. The result is that components closer to 1 are pushed towards 1 faster than they otherwise would be, i.e. highlights are lightened.

Dodge Midtones

Dodging midtones will lighten all components, but it will affect the components near 0.5 the most. The function for this filter is:


new component = component + 0.25 * sin(component * PI)

where component is one of the pixel component values (red, green, or blue) and PI is the math constant. The constant of 0.25 is just to limit the amplitude.

This yields the graph:

Dodge midtones graph

As the graph demonstrates, the function intersects at (0, 0), and (1, 1), just like the identity function, but bows upward in the middle, near 0.5. The result is that components near 0.5 are pushed closer to 1, or in other words, lightened.

Dodge Shadows

Dodge shadows, like all the other dodge filters, will lighten all colors, but will affect the components closest to 0 the most. The function for this filter is linear:


new component = 0.5 * component + 0.5

where component is one of the pixel component values (red, green, or blue).

This yields the graph:

Dodge shadows graph

The function intersects (1, 1), like the identity function, but quickly diverges from it as the component approaches 0. As the component approaches 0, the new value is lightened relative to the identity.

Burn Highlights

Burning the highlights will darken all colors, but will affect the components closest to 1 the most. The function for this filter is:


new component = 0.25 * component

where component is one of the pixel component values (red, green, or blue).

This yields the graph:

Burn highlights graph

Note that the function intersects (0, 0) like the identity function, but then pulls values near 1 down towards 0, i.e. it darkens the highlights.

Burn Midtones

Burning midtones will darken all components, but affects the components closest to 0.5 the most. The function for this filter is almost identical to that for dodging midtones:


new component = component - 0.25 * sin(component * PI)

where component is one of the pixel component values (red, green, or blue) and PI is the math constant. The constant of 0.25 is just to limit the amplitude.

This yields the graph:

Burn midtones graph

As the graph demonstrates, the function intersects at (0, 0), and (1, 1), just like the identity function, but bows downward in the middle, near 0.5. The result is that components near 0.5 are pushed closer to 0, or in other words, darkened.

Burn Shadows

The burn shadows filter will darken all components, but will affect the components near 0 the most. The function for this filter is similar to that of dodge highlights:


new component = component + (1.0 - e ^ (1.0 - component))

where component is one of the pixel component values (red, green, or blue) and e is the math constant.

This yields the graph:

Burn shadows graph.png

As you can see, for component values closer to 1 (i.e. highlight values), the function is close to the identity function (new component = component, aka y = x). As the component values approach 0, the function diverges from the identity in that the new component value reaches 0 much more quickly. The result is that components closer to 0 are pushed towards 0 faster than they otherwise would be, i.e. shadows are darkened.
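To tie the six curves together, here is a plain C sketch of the overview functions (my own illustration, not code from the sample project). The exposure parameter is omitted and the results are clamped to [0, 1], the way they would be when rendered into an 8-bit context; the real implementations are the Core Image kernels shown later in the article.

#include <math.h>

// Clamp a component to the legal [0, 1] range
static float clamp01(float x)
{
	return fminf(fmaxf(x, 0.0f), 1.0f);
}

// One function per range, applied to a single component value
static float dodgeHighlights(float c)	{ return clamp01(c + (expf(c) - 1.0f)); }
static float dodgeMidtones(float c)		{ return clamp01(c + 0.25f * sinf(c * (float)M_PI)); }
static float dodgeShadows(float c)		{ return clamp01(0.5f * c + 0.5f); }
static float burnHighlights(float c)	{ return clamp01(0.25f * c); }
static float burnMidtones(float c)		{ return clamp01(c - 0.25f * sinf(c * (float)M_PI)); }
static float burnShadows(float c)		{ return clamp01(c + (1.0f - expf(1.0f - c))); }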

Code Architecture

As always, I’ve provided sample code for this article. Unlike previous sample code, the code for dodge and burn is somewhat forward-looking, in that it provides a framework for future filter brushes. In the future, we can simply discuss the subclasses of FilterBrush and the corresponding filter instead of re-covering all the other brush-related code that has only changed a little.

Although all the brushing code is heavily borrowed from the smudge tool, it has been refactored so that it can support generic filter brushes. We’ll go over the changes required to support that, but in general, we’ll ignore the code we’ve already covered in previous articles.

Because the code attempts to set up somewhat of a framework, there are nine classes that make up the sample code: MyDocument, CanvasView, Canvas, GenericBrush, FilterBrush, Dodge, Burn, BurnBrushFilter, and DodgeBrushFilter. Fortunately, most of these are carryovers which we’ll ignore since we’ve already dealt with them.

MyDocument, CanvasView and Canvas are the same as they ever were, although Canvas has two new methods which I’ll present later. GenericBrush contains all the generic brushing code that we’ve carried over ever since the first brush article. It adds a couple of new hooks that subclasses can override.

FilterBrush is where the new code begins. It is an abstract subclass of GenericBrush and deals with rendering a filtered section of the canvas as a brush stamp. Concrete subclasses only need to override a method that returns a configured CIFilter.

Dodge and Burn are subclasses of FilterBrush. They load their respective filters, DodgeBrushFilter and BurnBrushFilter, and override a method to return instances of them to their FilterBrush base class.

DodgeBrushFilter and BurnBrushFilter are CIFilter subclasses, and are nearly identical except for which kernels they load up. They are responsible for actually applying the filter.

We’ll first cover the new methods in Canvas and GenericBrush, then move on to FilterBrush, Dodge, and DodgeBrushFilter. Since Burn and BurnBrushFilter are so similar to their dodge counterparts, we’ll ignore them. However, all the Core Image kernel code will be covered since it is different for each.

Canvas

There are only two new methods on the Canvas class and they’re just used for supporting applying filters to the canvas.

The first one returns the CGContextRef for the canvas:

- (CGContextRef) context
{
	// Just grab the context off the layer
	return CGLayerGetContext(mLayer);
}

Nothing to really discuss here: we just ask our backing CGLayerRef for the CGContextRef.

The next method gets a CIImage that can be used as the input of a CIFilter. Because of a bug, it’s a bit more complicated than it should be:

- (CIImage *) image
{
	// We should theoretically be able to just return [CIImage imageWithCGLayer:],
	//	but we're not able to and get the proper results. Core Image does something
	//	a little indeterminate with the pixels: the effect is applied, but the component
	//	values are shifted.
	// This seems to be caused by the fact we're using a CGLayerRef for both the
	//	source and destination of the effect. I asked on the quartz-dev list
	//	if this was supported or not, and received no answer.
	// The workaround is to create a deep copy of the layer in a bitmap context.
	//	It is heavy, but it works. Simply creating a duplicate CGLayerRef here
	//	doesn't seem to work.
#if 1
	CGSize size = CGLayerGetSize(mLayer);
	size_t width = size.width;
	size_t height = size.height;
	size_t bitsPerComponent = 8;
	size_t bytesPerRow = (width * 4 + 0x0000000F) & ~0x0000000F; // 16 byte aligned is good
	size_t dataSize = bytesPerRow * height;
	void* data = calloc(1, dataSize);
	CGColorSpaceRef colorspace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);

	CGContextRef bitmapContext = CGBitmapContextCreate(data, width, height, bitsPerComponent, bytesPerRow, colorspace,  kCGImageAlphaPremultipliedFirst);

	CGColorSpaceRelease(colorspace);

	CGContextDrawLayerAtPoint(bitmapContext, CGPointMake(0, 0), mLayer);

	CGImageRef imageRef = CGBitmapContextCreateImage(bitmapContext);
	CGContextRelease(bitmapContext);
	free(data);

	CIImage* image = [CIImage imageWithCGImage:imageRef];
	CGImageRelease(imageRef);

	return image;
#else
	return [CIImage imageWithCGLayer:mLayer];
#endif
}

Technically speaking, we should be able to just call the CIImage class method imageWithCGLayer: to get our CIImage. Unfortunately, there appears to be a bug in Core Image when you use the same CGLayerRef as both the source and destination of a CIFilter: in that case it tends to lighten the effect a lot more than it should. I am not sure why.

The workaround is to create a bitmap context, copy our layer into it, create a CGImageRef from that, then produce a CIImage from the CGImageRef. It’s a lot more involved, not to mention slower.

GenericBrush

The GenericBrush is the same brushing code that it has always been. In the sample code however, we’ve pulled it out into its own class and added a few methods that can be overridden by subclasses. By default, these methods do nothing:

// Subclass overrides so that they can know when we are about to start and stop brushing
- (void) startBrush:(Canvas *)canvas;
- (void) stopBrush;

	// Override point for subclasses to implement brush rendering
- (void) renderInCanvas:(Canvas *)canvas bounds:(CGRect)bounds at:(NSPoint)point;

startBrush and stopBrush are optional overrides, used in case the brush needs to do special set up or take down before or after a brush stroke. startBrush is invoked inside the mouseDown handler before the first stamp is rendered, and stopBrush is invoked in the mouseUp handler after the last stamp is rendered.

renderInCanvas is a required override for subclasses. It renders a single brush stamp. Before invoking renderInCanvas, the GenericBrush class will have already set up the canvas with the proper clipping and whatnot. The subclass only needs to render a stamp at the specified point.
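To make the call order concrete, here is a purely illustrative sketch of how GenericBrush drives these hooks during a stroke. The method names and signatures below are my own assumptions, not the ones in the sample code; only the ordering of the hook calls matters.

- (void) mouseDown:(NSEvent *)event inCanvas:(Canvas *)canvas
{
	// Let the subclass do any per-stroke setup before the first stamp
	[self startBrush:canvas];

	// ... render the first stamp via renderInCanvas:bounds:at: ...
}

- (void) mouseUp:(NSEvent *)event inCanvas:(Canvas *)canvas
{
	// ... render the final stamp ...

	// Let the subclass tear down anything it cached for the stroke
	[self stopBrush];
}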

FilterBrush

The FilterBrush class derives from GenericBrush and provides a base class for all future filter-based brushes, which, in this case, will be Dodge and Burn. It does all the set up for Core Image and the actual rendering of the single brush stamp.

Initialization

Although FilterBrush doesn’t add any new properties to the brush, it does need to cache some data, as seen in the init method:

- (id) init
{
	self = [super init];
	if ( self != nil ) {
		// Initialize to nil
		mFilter = nil;
		mCIContext = nil;
	}
	return self;
}

In order to get decent performance out of brushing with a filter, we have to cache both the CIFilter we’re applying and the CIContext we’re rendering into. If we don’t, performance is bad enough that the brush is unusable at any size.

Setting up the Brush Stroke

In order to implement the caching, we use the startBrush and stopBrush methods we just covered in GenericBrush. This means we cache the filter data for one brush stroke. With some more complicated logic, we could potentially be more aggressive with the caching and get even better performance, by caching for more than one stroke.

First, let’s cover the startBrush method:

- (void) startBrush:(Canvas *)canvas
{
	// This gets called right at the mouse down, before we start stamping. We
	//	should do any set up here.

	// Ask the subclass for our filter. Cache it since it is expensive to keep
	//	creating for each stamp we render.
	mFilter = [[self filter] retain];

The first step is to get and cache our filter. Being an abstract class, we don’t know what the filter is, but our subclass does, so we ask it. We, as the base class, expect the filter to already be configured and ready to go, with the exception of the input image:

	// Ask the canvas for a representation of itself as an image. Pass that off
	//	to our filter to be applied to.
	CIImage* inputImage = [canvas image];
	[mFilter setValue:inputImage forKey:@"inputImage"];

We ask the canvas for a CIImage representation of itself, and then give that to the filter as input.

Finally, we need to cache the CIContext we’ll be rendering into:

	// Since we're a filter brush, we're rendering right back onto the canvas,
	//	so ask the canvas for the context to render into. Convert the normal
	//	context into a Core Image context, and cache it off (it's expensive to
	//	destroy).
	CGContextRef context = [canvas context];
	mCIContext = [[CIContext contextWithCGContext:context options:nil] retain];
}

Not much here. We ask the canvas for its context, then build a CIContext from it.

Rendering a Stamp

Now that we have our filter and context set up, we’re going to start getting render requests from the base class. We handle that in renderInCanvas:

- (void) renderInCanvas:(Canvas *)canvas bounds:(CGRect)bounds at:(NSPoint)point
{
	// Render a single stamp. In our case, that simply means asking the Core Image
	//	context to draw the output of our filter, at the specified stamp location.
	//	Our base class that called us has already set up the mask clipping so
	//	our stamp will be properly shaped.
	[mCIContext drawImage:[mFilter valueForKey:@"outputImage"] atPoint:bounds.origin fromRect:bounds];
}

Since we already have the filter and context, and the base class has already set up the context and helpfully passed in the stamp bounds, all we do is ask the CIContext to render the resulting image. Easy as pie.

Tearing down the Brush Stroke

When we are done with a single brush stroke, we need to clean up our cache. We do that in stopBrush:

- (void) stopBrush
{
	// This gets called after the mouse up, and the last stamp is rendered. We
	//	should do any clean up here.

	// We're done with the filter and context, so free them up.
	[mFilter release];
	[mCIContext release];
	mFilter = nil;
	mCIContext = nil;
}

This is pretty self-explanatory: we release the cached filter and context. Note that releasing the CIContext is a fairly expensive operation.

Spacing

As we saw with the smudge tool, stamp spacing changes based on the kind of brush. The filter tools, dodge and burn, are no different, so we override the spacing method:

- (float) spacing
{
	// Filter brushes typically want closer spacing so the effect is smoother
	return 1.0;
}

Through trial and error I discovered that decreasing the spacing to one pixel improved the quality of the rendering, so we do that here.

Loading an Image Unit Plugin

Although it isn’t used in the FilterBrush class itself, FilterBrush does provide a helper method to load up image unit plug-ins. This is useful for brush subclasses that have custom filters stored in the application bundle:

- (void) loadFilter:(NSString *)filterName fromPlugin:(NSString *)pluginName
{
	// Helper function to ensure the given filter is loaded, so we can use it.

	// Ask the system to load up all the plugins it knows about
	[CIPlugIn loadAllPlugIns];
	NSArray *filterList = [CIFilter filterNamesInCategories:nil];

	// Check to see if our filter is loaded (it should be if it was installed in
	//	a system path). If it is in a plugin inside the application bundle, it
	//	won't be found by default.
	if( ![filterList containsObject:filterName]) {
		// It wasn't loaded by default, so manually load it

		// Construct the path to the plugin bundle. We assume it's in the application
		//	bundle, in the plugins folder.
		NSString *path = [[[NSBundle mainBundle] builtInPlugInsPath] stringByAppendingPathComponent:pluginName];

		// Explicitly load the plugin, given the path
		[CIPlugIn loadPlugIn:[NSURL fileURLWithPath:path] allowNonExecutable:NO];
	}
}

This is fairly standard Core Image code. We ask the system to load up all the known Core Image plugins, then look for our specific filter. If it’s there, then we’re done. If not, then we look in our Plug-Ins folder inside our application bundle for the plug-in, and ask CIPlugIn to manually load that plug-in.

Dodge

The Dodge class implements the dodge tool by deriving from FilterBrush and overriding the filter method. It has a couple of parameters, which are the same as the Burn tool’s.

Parameters

The two parameters are initialized in the init method (in addition to the parameters in the GenericBrush init):

- (id) init
{
	self = [super init];
	if ( self != nil ) {
		// First, make sure our filter is loaded. We don't have to do it now
		//	but it's convenient here.
		[self loadFilter:@"DodgeBrushFilter" fromPlugin:@"Filters.plugin"];

		// Set the default values for our parameters
		mExposure = 1.0;
		mRange = kDodgeRange_Midtones;
	}
	return self;
}

There are two new parameters for the Dodge and Burn tools: mExposure and mRange.

  • mExposure Exposure determines how strongly the dodge or burn is applied. It ranges from 0.0 to 1.0, where 0.0 means the effect isn’t applied at all, and 1.0 means the effect is at its strongest.

    Dodge examples:

    • mExposure = 0.25, Dodge exposure 0.25
    • mExposure = 0.5, Dodge exposure 0.50
    • mExposure = 1.0, Dodge exposure 1.0

  • mRange Range determines which component values the dodge tool affects the most. It is an enumeration of highlights, midtones, and shadows. Highlights are the values closest to 1, midtones those around 0.5, and shadows those closest to 0.

    Dodge examples:

    • mRange = kDodgeRange_Highlights, Dodge range highlights
    • mRange = kDodgeRange_Midtones, Dodge range midtones
    • mRange = kDodgeRange_Shadows, Dodge range shadows

We use a black-to-white linear gradient in the examples because it covers the entire component range, from shadows to highlights.
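As a quick illustration, changing the defaults set in init is enough to reconfigure the tool; the values below are hypothetical, not what the sample code ships with.

		// A gentler dodge that concentrates on the dark areas of the image
		mExposure = 0.25;
		mRange = kDodgeRange_Shadows;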

The only other thing to note about the init method is that we manually load up our filter, by invoking the parent class helper method.

Creating a filter

The only real task of the Dodge tool is to create the appropriate filter based on its parameters. This is accomplished in one method, filter:

- (CIFilter *) filter
{
	// We need to create and configure our filter here.

	// Pull out our special filter, and set the exposure to be exactly
	//	what was given to us.
	CIFilter * filter = [CIFilter filterWithName:@"DodgeBrushFilter"];
	[filter setDefaults];
	[filter setValue:[NSNumber numberWithFloat:mExposure] forKey:@"inputExposure"];

	// Configuring the range is slightly more complicated because we have to
	//	convert an enumeration to a straight number.
	switch ( mRange ) {
		case kDodgeRange_Highlights:
			[filter setValue:[NSNumber numberWithInt:kDodgeFilter_Highlights] forKey:@"inputRange"];
			break;
		case kDodgeRange_Midtones:
			[filter setValue:[NSNumber numberWithInt:kDodgeFilter_Midtones] forKey:@"inputRange"];
			break;
		case kDodgeRange_Shadows:
			[filter setValue:[NSNumber numberWithInt:kDodgeFilter_Shadows] forKey:@"inputRange"];
			break;
	}

	return filter;
}

Here we simply ask CIFilter for an instance of our dodge filter (not the system one), which we loaded up in our init method. Note that our dodge tool parameters have a one-to-one mapping with the dodge filter parameters, so we just pass them through unchanged. After we’re done configuring our filter, we return it.

It should be noted that the Burn class is identical to the Dodge class, except that it uses the burn filter instead of the dodge filter.

DodgeBrushFilter

DodgeBrushFilter is a CIFilter derived class that is housed in an image unit plugin that our main application loads. It is a fairly standard filter, whose main task is to select the correct kernel to apply to the image passed in.

Like most CIFilter classes, we load the kernels in the init method:

static NSArray	*sKernels = nil;

- (id) init
{
	// If we haven't loaded up our array of kernels, do so now
	if ( sKernels == nil ) {
		// Look for our kernel code file inside of our bundle
		NSBundle *bundle = [NSBundle bundleForClass:[self class]];
		NSString *code = [NSString stringWithContentsOfFile:[bundle pathForResource:@"DodgeBrushFilter" ofType:@"cikernel"]];

		// We have three kernels in the file: highlights, midtones, and shadows.
		//	The range parameter selects which one we'll use. Cache them.
		sKernels = [[CIKernel kernelsWithString:code] retain];
	}

	return [super init];
}

We keep our kernels in a static variable so we don’t waste time loading them each time. The kernels are all stored in one file, DodgeBrushFilter.cikernel, which is kept inside the Image Unit Plugin bundle. We load the cikernel file, convert it to an array of CIKernels, and cache those off. There are three kernels loaded: one for each range.

Our dodge filter has two custom parameters, which we declare in the customAttributes method:

- (NSDictionary *) customAttributes
{
	// Return the custom attributes, which, in our case, is just the
	//	exposure and range parameters.
	return [NSDictionary dictionaryWithObjectsAndKeys:

		[NSDictionary dictionaryWithObjectsAndKeys:
			[NSNumber numberWithFloat: 0.0], kCIAttributeMin,
			[NSNumber numberWithFloat: 1.0], kCIAttributeMax,
			[NSNumber numberWithFloat: 0.0], kCIAttributeSliderMin,
			[NSNumber numberWithFloat: 1.0], kCIAttributeSliderMax,
			[NSNumber numberWithFloat: 0.5], kCIAttributeDefault,
			[NSNumber numberWithFloat: 0.0], kCIAttributeIdentity,
			kCIAttributeTypeScalar, kCIAttributeType,
			nil], @"inputExposure",

		[NSDictionary dictionaryWithObjectsAndKeys:
			[NSNumber numberWithInt: kDodgeFilter_Highlights], kCIAttributeMin,
			[NSNumber numberWithInt: kDodgeFilter_Shadows], kCIAttributeMax,
			[NSNumber numberWithInt: kDodgeFilter_Highlights], kCIAttributeSliderMin,
			[NSNumber numberWithInt: kDodgeFilter_Shadows], kCIAttributeSliderMax,
			[NSNumber numberWithInt: kDodgeFilter_Midtones], kCIAttributeDefault,
			kCIAttributeTypeScalar, kCIAttributeType,
			nil], @"inputRange",

		nil];
}

As with the dodge brush, the parameters are exposure and range. They mean exactly the same thing they did in the Dodge class.

The last part of our filter is the outputImage method, which actually constructs the output CIImage:

- (CIImage *) outputImage
{
	// The idea is to apply the kernel selected by the range parameter, to
	//	the image passed in. We don't really do any fancy preprocessing.
	CISampler *sampler = [CISampler samplerWithImage:inputImage];
	CIKernel *kernel = [sKernels objectAtIndex: [inputRange intValue]];

	// Just apply the chosen kernel
	return [self apply:kernel, sampler, inputExposure, kCIApplyOptionDefinition, [sampler definition], nil];
}

This is also a reasonably simple method. We create a sampler from our input image, and pull a kernel out of our cached array using the range parameter as an index. We then apply the chosen kernel to our image, and we’re done.

Like with the brushes, the burn filter is identical to the dodge filter, except that it loads up the burn kernels and applies those instead.

Kernels

The real meat of all this code is the kernels. They are straightforward implementations of the functions presented in the overview section, so not a lot of explanation will be given. The only difference is that the kernels also implement the exposure parameter, which the functions presented in the overview ignored.

Dodge Highlights

kernel vec4 dodgeHighlights(sampler image, float exposure)
{
	vec4 source = unpremultiply(sample(image, samplerCoord(image)));

	float factor = exposure;

	source.rgb = source.rgb + factor * (exp(source.rgb) - 1.0);

	return premultiply(source);
}

Dodge highlights is almost identical to the function presented in the overview. Note that we apply the function to all three components (RGB) at once, and that exp() is e raised to the value passed in. In this case, the exposure directly scales the steepness of the function.

Dodge Midtones

kernel vec4 dodgeMidtones(sampler image, float exposure)
{
	vec4 source = unpremultiply(sample(image, samplerCoord(image)));

	float pi = radians(180.0);
	float factor = exposure * 0.25;

	source.rgb = source.rgb + factor * sin(source.rgb * pi);

	return premultiply(source);
}

Dodge midtones is fairly straightforward. We scale the exposure by a quarter just so we don’t push values to 1 too quickly.

Dodge Shadows

kernel vec4 dodgeShadows(sampler image, float exposure)
{
	vec4 source = unpremultiply(sample(image, samplerCoord(image)));
	float factor = (1.0 - exposure * 0.5);

	source.rgb = factor * source.rgb + (1.0 - factor);

	return premultiply(source);
}

For dodging shadows, we scale the exposure by half so we don’t push everything to 1 at full exposure. Also, since we’re using a linear function, we have to make the y-intercept dependent on the slope, i.e. both the slope and the y-intercept are based on the exposure.

Burn Highlights

kernel vec4 burnHighlights(sampler image, float exposure)
{
	vec4 source = unpremultiply(sample(image, samplerCoord(image)));

	float factor = (1.0 - exposure * 0.75);

	source.rgb = factor * source.rgb;

	return premultiply(source);
}

Note that burning highlights is also a linear function. However, there is no y-intercept term because we want the line to pass through (0, 0). Also note that we scale the exposure by 3/4.

Burn Midtones

kernel vec4 burnMidtones(sampler image, float exposure)
{
	vec4 source = unpremultiply(sample(image, samplerCoord(image)));

	float pi = radians(180.0);
	float factor = exposure * 0.25;

	source.rgb = source.rgb - factor * sin(source.rgb * pi);

	return premultiply(source);
}

Burning midtones is identical to dodging midtones, except that we subtract off the sine wave. Like before, we scale the exposure by 1/4 before using it.

Burn Shadows

kernel vec4 burnShadows(sampler image, float exposure)
{
	vec4 source = unpremultiply(sample(image, samplerCoord(image)));

	float factor = exposure;

	source.rgb = source.rgb + factor * (1.0 - exp(1.0 - source.rgb));

	return premultiply(source);
}

Burning shadows is the same idea as dodging highlights, except that we’ve flipped the function upside down.

Method to the Madness

It took me a long time to figure out the current kernels for each of the ranges. On the surface, dodge and burn sound easy: you just lighten or darken the image. But when I started playing with existing dodge and burn tools, I found that it wasn’t quite that simple.

I started out with simple linear functions for the highlights and shadows. With some tweaks this worked respectably well for dodging shadows and burning highlights, but was way off for dodging highlights and burning shadows. In other implementations those tools pushed components to the extreme (either 0 or 1, depending on the tool) very quickly. It looked exponential to me, which eventually led to the current implementation. I started with base 2 instead of e, though, on the presumption that 2 was the more likely base because it would execute faster.

Dealing with midtones took me a while. I knew I needed a curve that would be highest in the middle, but I wasn’t sure which function would yield the best results. I actually started out with a Gaussian function, but concluded that was way too complex for a simple filter. I then played around with building a curve out of smoothstep, but I wasn’t happy with the results. I’m not sure why sine wasn’t the first function to pop into my head when I knew I needed a curve, but it turned out to be the one that yielded the best results.

I’m not sure if this is how the big graphics apps implement dodge and burn, but the results seem reasonably close to me. Unfortunately, no one seems to publish how they implemented dodge and burn, so this took me a long time to figure out. I started writing this article three times before, but stopped each time because I wasn’t happy with the results of the tools. I also spent a lot of time in Grapher, Apple’s graphing calculator program, staring at the graphs of various functions.

Conclusion

In addition to showcasing some relatively complex brushes, this article introduces a framework that we can build off of to implement other filter brushes. For example: sharpen, blur, and sponge tools.
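As a sketch of how that might look (this is not part of the sample code), a hypothetical sharpen brush only needs to subclass FilterBrush and return a configured filter. Here it borrows Core Image’s built-in CISharpenLuminance filter, so no plug-in loading is required:

@interface Sharpen : FilterBrush
@end

@implementation Sharpen

- (CIFilter *) filter
{
	// Use a built-in Core Image filter, so there's no custom plugin to load
	CIFilter *filter = [CIFilter filterWithName:@"CISharpenLuminance"];
	[filter setDefaults];
	[filter setValue:[NSNumber numberWithFloat:0.5] forKey:@"inputSharpness"];
	return filter;
}

@end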

This is one of the more satisfying articles to write, just because of what it took to implement the dodge and burn tools. Enjoy, and be sure to download the sample code.

How to implement smudge and stamp tools

I like the smudge tool because, like the brush, it has a real world analog, which means it’s a bit easier for new users to figure out how it works. Just about everyone has played with finger paints before, and knows what happens when you drag your finger through paint.

I originally thought the smudge tool would be rather complex. I was pleasantly surprised that it was quite simple to implement. So, as a bonus, I went ahead and threw in a stamp tool implementation too.

Overview

Your basic smudge tool smears the paint as if you dragged a clean brush across it. For example, if you start with the simple image below:

Image that is blue on one side, white on the other

then drag the smudge tool from left to right (i.e. from the blue to the white), you get:

Image that is blue on one side, white on the other, with a smudge

How much of a smudge you get is determined by how much pressure is applied.

The idea behind a smudge tool is that as you drag a brush through paint, both the canvas and brush swap paint. The brush picks up paint from the canvas then deposits it elsewhere on the canvas. If you’re familiar with bitmap editors, that almost sounds like the stamp brush tool, which it is, so as a bonus we’ll cover how to implement a stamp brush tool.

Algorithm

When researching how to implement a smudge tool, I found that just about everyone has a different implementation. They range from the very simple to the very complex, which try to accurately simulate the physical characteristics of a brush dragged across paint. The algorithm I’ll present here definitely falls on the simple side, but it is reasonably close to what popular graphics editors implement.

The first thing to remember is that the smudge tool is just a special kind of brush. As a result, it is implemented in a very similar fashion to the bitmap brush. In fact, if you haven’t already read the article “How to implement a basic bitmap brush”, you should do that now. The main difference between the simple bitmap brush and the smudge tool is how it renders a single stamp.

Instead of using a single color in the shape of the brush tip, the smudge tool will use a piece of the canvas, in the shape of the brush tip, as the stamp image. It locates the piece of canvas to be used as a brush image by looking at the last location it rendered a stamp. It grabs the canvas at that point and uses it as the brush tip.

The only other deviation from the usual stamping algorithm is the stamp spacing. For most brushes, the stamps are spaced 1/4 of the brush tip width apart. However, since the smudge tool takes its brush tip from the previous render location, it requires a spacing of one pixel. If we were to use the standard spacing, we’d end up with a choppy smudge, like so:

Image that is blue on one side, white on the other, with a choppy smudge

That’s it for our simple smudge algorithm. If we wanted something more complex, we could create a separate drawing context for the brush tip image, and have it actually accumulate paint from the canvas. We could also have the brush tip textured, so it picked up paint more in the raised areas of the brush.

Code architecture

As always, I have provided sample code to demonstrate the smudge tool. It is heavily based on the sample code from the brushes article, so we will only cover the differences between the smudge tool and a normal brush. There are four classes in the sample code: MyDocument, CanvasView, Canvas, and Smudge.

MyDocument is derived from NSDocument, and exists only to load up a user-selected NSImage and hand it off to CanvasView. It is almost identical to the MyDocument used in the magic wand article, and it doesn’t do anything terribly exciting, so we’ll ignore it from here on out.

CanvasView is an NSView-derived class that renders the canvas to the screen and catches mouse events to pass off to the Smudge class to handle. It also acts as a mediator between the Smudge and Canvas classes. It is nearly identical to the CanvasView class used in the brushes article, so we’ll ignore it for the rest of the article.

The Canvas class represents the canvas the user draws onto. It can render itself into an NSGraphicsContext, and it provides drawing primitives to render a simple brush stamp and a line of stamps. Although the stamping algorithm hasn’t changed from the brushes article, the Canvas class does use a different backing implementation and parameters, which warrants a quick revisit in this article.

The Smudge class represents a brush that can be used to smear paint across the canvas. Since it is a type of brush, it is heavily based on the Brush class used in the brushes article, and we will only cover the differences. In this code, the smudge tool not only processes the user’s mouse events, it is also responsible for rendering a single stamp onto the canvas.

We’ll tackle the changes in the Canvas class first, then dig into the Smudge class.

Canvas

Unlike its previous incarnation, the Canvas class here is backed by a CGLayerRef instead of a CGBitmapContext. This makes stamping from the canvas onto itself both easier and faster. There is no init method on this class because initialization requires an NSImage, which is supplied later via setImage. There is a dealloc method, which simply releases the CGLayerRef.
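The dealloc method isn’t shown in the article, but from that description it presumably amounts to something like this (a sketch, not copied verbatim from the sample code):

- (void) dealloc
{
	// Release the CGLayerRef that backs the canvas
	CGLayerRelease(mLayer);
	[super dealloc];
}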

Initialization

The initialization happens when the setImage method is called by the CanvasView class. It is responsible for creating a CGLayerRef of the proper size and rendering the NSImage parameter into the layer:

- (void) setImage:(NSImage *)image view:(NSView *)view
{
	// First free up the previous layer, in case the new one is a new size
	CGLayerRelease(mLayer);
	mLayer = nil;

	// Next, create a new layer the size of the image. For performance reasons,
	//	we want to base our layer off of the window context.
	NSGraphicsContext* windowContext = [NSGraphicsContext graphicsContextWithWindow:[view window]];
	CGContextRef context = [windowContext graphicsPort];

	NSSize size = [image size];
	mLayer = CGLayerCreateWithContext(context, CGSizeMake(size.width, size.height), nil);

Although in the sample code we never call setImage more than once, we cover that possibility here by freeing up any previously existing layer. Note that we pass in the NSView we’ll eventually render into as a parameter; this is for the purpose of creating the CGLayerRef. To get the best performance, the layer needs to be based on a CGContextRef similar to the one it will eventually be rendered into. To that end, we grab the NSGraphicsContext for the window the view is inside. We then create the layer based on that and the size of the image.

Next, we need to render the NSImage we got as a parameter into our layer:

	// Pull out the NSGraphicsContext from the layer and focus it so we can
	//	use Cocoa class to draw into it.
	CGContextRef cgLayerContext = CGLayerGetContext(mLayer);
	NSGraphicsContext* layerContext = [NSGraphicsContext graphicsContextWithGraphicsPort:cgLayerContext flipped:YES];

	[NSGraphicsContext saveGraphicsState];
	[NSGraphicsContext setCurrentContext:layerContext];

	// Some images might have transparency, so fill the background with an opaque
	//	white. Real apps would probably do a checkerboard pattern.
	[[NSColor whiteColor] set];
	[NSBezierPath fillRect:NSMakeRect(0, 0, size.width, size.height)];

	// Draw the image, with no scaling
	[image drawAtPoint:NSMakePoint(0, 0) fromRect:NSMakeRect(0, 0, size.width, size.height) operation:NSCompositeSourceOver fraction:1.0];

	[NSGraphicsContext restoreGraphicsState];
}

We’re going to use Cocoa drawing classes to render the image, so we need to go from our CGLayerRef to a focused NSGraphicsContext. Fortunately, CGLayerRef has a function that returns a CGContextRef, and NSGraphicsContext has a constructor that takes a CGContextRef. The first bit of drawing we do is to render a white background, to handle the case of a transparent image. Finally, we render the image into the current NSGraphicsContext, which happens to be our layer. Our Canvas object is now initialized and ready to be used.

Drawing the canvas onto a view

After initialization, the first thing we’ll be asked to do in the Canvas class is to render. This is covered in drawRect:

- (void)drawRect:(NSRect)rect inContext:(NSGraphicsContext*)context
{
	if ( mLayer == nil )
		return;

	// Very straightforward: just draw our layer in the given context at 0, 0
	CGContextRef cgContext = [context graphicsPort];
	CGContextDrawLayerAtPoint(cgContext, CGPointMake(0, 0), mLayer);
}

This is almost too trivial to cover. If we have created a layer, we render it into the NSGraphicsContext passed in. The End.

Rendering a line of stamps

As we covered earlier, the stamping algorithm hasn’t changed from the brushes article, although one of the parameters has changed, along with a constant derived from that parameter. Since the line stamping code is rather long, we’ll only review the part that has changed, which, fortunately for us, is only the function signature and the first line of code:

- (float)stamp:(Smudge *)brush from:(NSPoint)startPoint to:(NSPoint)endPoint leftOverDistance:(float)leftOverDistance
{
	// Set the spacing between the stamps.
	float spacing = [brush spacing]; // Ask the brush

Instead of passing in an image to be used for stamping, we now pass in the smudge tool. This is because the Smudge class is now responsible for rendering a single stamp. Also, we want the spacing for the stamping to be configurable based on the brush. So instead of computing the stamp spacing here in the Canvas class, we ask the brush for it.

The rest of the method is the same as before.

Rendering a single stamp

The last part of the Canvas class to cover is the rendering of a single stamp, which as noted above, is no longer handled in the Canvas class:

- (void)stamp:(Smudge *)brush at:(NSPoint)point
{
	// Just pass through to the brush to ask it to render. Give it the layer
	//	that backs the canvas, and the point to render at.
	[brush render:mLayer at:point];
}

We just hand the rendering of a stamp off to our Smudge class. We provide it with the canvas layer, so that it can pull pixels from it and render into it.

As you can see, the main ideas of the Canvas class didn’t change, but a bit of the implementation did, in order to make it more flexible.

Smudge

The Smudge class contains most of the interesting code. Like its predecessor, Brush, it tells the Canvas class where to render lines and single points. However, it is also responsible for rendering a single stamp. Since Smudge and its predecessor have so much in common, we’ll only discuss where they differ.

Parameters

The smudge tool has some new parameters that weren’t in the basic brush class. They are initialized in init:

- (id) init
{
	self = [super init];

	if ( self ) {
		// Set the size of the brush. A radius of 10 means a 20 pixel wide brush
		mRadius = 10.0;

		// Create the shape of the tip of the brush. Code currently assumes the bounding
		//	box of the shape is square (height == width)
		mShape = CGPathCreateMutable();
		CGPathAddEllipseInRect(mShape, nil, CGRectMake(0, 0, 2 * mRadius, 2 * mRadius));
		//CGPathAddRect(mShape, nil, CGRectMake(0, 0, 2 * mRadius, 2 * mRadius));

		// Set the initial smudge color, that starts out on the brush. May be nil,
		//	if you don't want a smudge color.
#if 1
		mColor = nil;
#else
		CGColorSpaceRef colorspace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
		float components[] = { 0.0, 0.0, 1.0, 1.0 }; // I like blue
		mColor = CGColorCreate(colorspace, components);
		CGColorSpaceRelease(colorspace);
#endif

		// The "softness" of the brush edges
		mSoftness = 0.5;

		// The pressure at which to smudge. The more pressure, the more of a smudge
		//	will result.
		mPressure = 1.0;

		// Initialize variables that will be used during tracking
		mMask = nil;
		mLastPoint = NSZeroPoint;
		mLeftOverDistance = 0.0;
		mLastRenderPoint = NSZeroPoint;
	}

	return self;
}

There are two new parameters: mPressure and mColor. Although mColor was also in the brush class, its function is different for the smudge tool.

  • mPressure Pressure mimics how hard the user is pressing down on the canvas, and as a result, how much the paint is smeared. It has a range of 0.0 to 1.0, where 0.0 means the paint doesn’t smudge at all, and 1.0 means it smudges a lot.

    Examples:

    • mPressure = 0.1, Blue stripe, smudged, 1% pressure
    • mPressure = 0.5, Blue stripe, smudged, 50% pressure
    • mPressure = 1.0, Blue stripe, smudged, 100% pressure

  • mColor Color is used to determine if the brush used for smudging is dirty. i.e. Does the brush start out with some paint already on it? If so, it leaves an initial smudge of the specified color. If mColor is nil, then the smudge tool simulates a clean brush.

    Examples:

    • mColor = [1.0, 0.0, 0.0], Blue stripe, smudged, red color
    • mColor = [0.0, 1.0, 0.0], Blue stripe, smudged, green color
    • mColor = [0.0, 0.0, 1.0], Blue stripe, smudged, blue color

The rest of the parameters are identical to those in the regular brush class.

Creating the brush tip

The job of creating the brush tip is a bit different in the smudge tool than the original brush tool. In the original brush we created a full ARGB image in the brush color with the correct transparency. Since our brush image is going to be pixels from the canvas, we just want the brush tip, mMask, to reflect the shape, or transparency, of the brush. So instead of a full color image, we create a grayscale image with no alpha that will act as a mask for our actual brush image. This allows us to impose our brush shape onto any random image.

The new brush tip creation method, createShapeImage, looks like this:

- (CGImageRef) createShapeImage
{
	// Create a bitmap context to hold our brush image
	CGContextRef bitmapContext = [self createBitmapContext];

	CGContextSetGrayFillColor(bitmapContext, 1.0, 1.0);

	// The way we achieve "softness" on the edges of the brush is to draw
	//	the shape full size with some transparency, then keep drawing the shape
	//	at smaller sizes with the same transparency level. Thus, the center
	//	builds up and is darker, while edges remain partially transparent.

	// First, based on the softness setting, determine the radius of the fully
	//	opaque pixels.
	int innerRadius = (int)ceil(mSoftness * (0.5 - mRadius) + mRadius);
	int outerRadius = (int)ceil(mRadius);
	int i = 0;

	// The alpha level is always proportional to the difference between the inner, opaque
	//	radius and the outer, transparent radius.
	float alphaStep = 1.0 / (outerRadius - innerRadius + 1);

	// Since we're drawing shape on top of shape, we only need to set the alpha once
	CGContextSetAlpha(bitmapContext, alphaStep);

	for (i = outerRadius; i >= innerRadius; --i) {
		CGContextSaveGState(bitmapContext);

		// First, center the shape onto the context.
		CGContextTranslateCTM(bitmapContext, outerRadius - i, outerRadius - i);

		// Second, scale the brush shape, such that each successive iteration
		//	is two pixels smaller in width and height than the previous iteration.
		float scale = (2.0 * (float)i) / (2.0 * (float)outerRadius);
		CGContextScaleCTM(bitmapContext, scale, scale);

		// Finally, actually add the path and fill it
		CGContextAddPath(bitmapContext, mShape);
		CGContextEOFillPath(bitmapContext);

		CGContextRestoreGState(bitmapContext);
	}

	// Create the brush tip image from our bitmap context
	CGImageRef image = CGBitmapContextCreateImage(bitmapContext);

	// Free up the offscreen bitmap
	[self disposeBitmapContext:bitmapContext];

	return image;
}

Note that createBitmapContext actually creates a grayscale bitmap context with no alpha, filled completely black. The only real difference from the original brush tip code is that we set the fill color to white instead of a user-specified color. In an image used as a mask, white means a pixel is fully opaque, while black means the pixel will be fully transparent.
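For reference, here is a sketch of what createBitmapContext presumably does, given that description; the exact version in the sample code may differ in details such as row alignment.

- (CGContextRef) createBitmapContext
{
	// A grayscale context with no alpha, one 8-bit component per pixel.
	//	calloc zero-fills the buffer, so the mask starts out completely black.
	//	disposeBitmapContext is assumed to free the buffer and release the context.
	size_t width = (size_t)ceilf(2.0f * mRadius);
	size_t height = (size_t)ceilf(2.0f * mRadius);
	size_t bytesPerRow = width;
	void *data = calloc(height, bytesPerRow);

	CGColorSpaceRef gray = CGColorSpaceCreateWithName(kCGColorSpaceGenericGray);
	CGContextRef context = CGBitmapContextCreate(data, width, height, 8, bytesPerRow, gray, kCGImageAlphaNone);
	CGColorSpaceRelease(gray);

	return context;
}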

Rendering a single stamp

The real meat of Smudge is where we render a single stamp of the brush into the canvas’s CGLayerRef. The render function starts out like the old single stamp method on Canvas:

- (void) render:(CGLayerRef)canvas at:(NSPoint)point
{
	// Grab the context for the canvas. No matter what, we're going to determine
	//	where the current brush stamp should go, then translate the context
	//	to that position.
	CGContextRef canvasContext = CGLayerGetContext(canvas);
	CGContextSaveGState(canvasContext);

	// So we can position the image correctly, compute where the bottom left
	//	of the image should go, and modify the CTM so that 0, 0 is there.
	CGPoint bottomLeft = CGPointMake( point.x - CGImageGetWidth(mMask) * 0.5,
									  point.y - CGImageGetHeight(mMask) * 0.5 );
	CGContextTranslateCTM(canvasContext, bottomLeft.x, bottomLeft.y);

We grab the context off the layer, which is really our canvas, and apply an affine transform to it such that the origin is where we want to draw the bottom left of our brush image.

Next, we want to force the shape of our brush onto our brush image:

	// Our brush has a shape and soft edges. These are replicated by using our
	//	brush tip as a mask to clip to. No matter what we render after this,
	//	it will be in the proper shape of our brush.
	CGContextClipToMask(canvasContext, CGRectMake(0, 0, CGImageGetWidth(mMask), CGImageGetHeight(mMask)), mMask);

We simply apply the image we created up in createBrushTip to the context as a clipping mask.

The last thing we need to do before actually rendering the brush image, is to deal with the mPressure parameter:

	// The pressure of the smudge has a one-to-one correspondence with the
	//	transparency of the brush stamp.
	CGContextSetAlpha(canvasContext, mPressure);

As you can see, the pressure directly determines how transparently we render the brush image. If the user is applying a lot of pressure, then we render the brush image fully opaque. If the user applies less, then the brush image is rendered more transparently.

Now we’re ready to render the brush image:

	// If this is our first stamp, then we normally don't want to lay down any
	//	ink. That's because if we're smudging with a clean brush, we don't have
	//	any ink on the brush to lay down. Only after the initial stamp will we
	//	have the ink from the canvas to lay down.
	if ( !NSEqualPoints(mLastRenderPoint, NSZeroPoint) ) {
		// Based on the last render point that we keep track of, determine the
		//	source bounds.
		CGPoint sourceBottomLeft = CGPointMake( mLastRenderPoint.x - CGImageGetWidth(mMask) * 0.5,
												mLastRenderPoint.y - CGImageGetHeight(mMask) * 0.5 );

		// We pull straight from the canvas, and render directly onto the canvas. CGLayerRefs
		//	make this easy.
		CGContextDrawLayerAtPoint(canvasContext, CGPointMake(-sourceBottomLeft.x, -sourceBottomLeft.y), canvas);
	}

Notice that we check to see if this is our first time rendering the brush image in the tracking loop. This if statement works because both the mouse down and mouse up handlers reset the mLastRenderPoint member to NSZeroPoint.

If this is our first time, then there is no previous render point to use as our source image, so we don’t do anything. This matches the real world analog: until we move the brush, no smudging occurs.

If this is not our first time trying to stamp the brush image, then we actually stamp the image. We determine the bottom left coordinate of the previously rendered stamp, then use that as our source in the CGLayerRef. Since we have already applied our mask, the pixels from the layer are shaped just like our brush. As you may note, CGLayerRef makes it nice and easy to draw back into itself.

However, if this is our first time trying to render, that doesn’t necessarily mean we have nothing to do:

 	 else if ( mColor ) {
		// If this is our first stamp, and we have an initial color specified (i.e. the brush
		//	was dirty), we have to render the brush with that color only on the first
		//	stamp. The initial color might be carried to other pixels depending on
		//	how strong the pressure is.
		CGContextSetFillColorWithColor(canvasContext, mColor);
		CGContextFillRect(canvasContext, CGRectMake(0, 0, CGImageGetWidth(mMask), CGImageGetHeight(mMask)));
	}

This is the else clause for the if shown above. If this is our first time rendering and we have a color specified, then we need to render it. This works the same way a normal brush would: we set the fill color to the initial color and fill the entire rect. Since we have a mask applied, only the pixels in the brush shape are drawn.

Note that we only render the initial color on the first stamp. It will get smudged across the canvas on subsequent stamps, since the stamps are only one pixel apart.

The final part of rendering a single stamp is cleaning up, and remembering where we just stamped:

	CGContextRestoreGState(canvasContext);

	// Remember our last render point, so we know where to pull from
	mLastRenderPoint = point;
}

We update mLastRenderPoint so we can use it the next time we stamp.

The last little bit of the Smudge class that we need to examine is the spacing method:

- (float) spacing
{
	// Smudge has to be spaced by 1 pixel or we get jaggies
	return 1.0;
}

If you recall, the Canvas class calls back into the smudge tool to determine how far apart to space the stamps. In order to avoid a really choppy smudge, we return a spacing of 1 pixel.

Stamp tool

We haven’t really spoken about a stamp tool, but if you know what one is, you probably realized that it sounds a bit like the smudge tool in terms of implementation. Briefly, a stamp tool takes a location on the canvas and copies it to another point on the canvas. It is used to copy parts of an image exactly. The source pixels are determined by the user specifying an offset from the cursor for the stamp tool to pull from.

It will probably make more sense when we look at an example implementation. It is exactly the same as the Smudge class except that the spacing and render functions change.

First, the render function for a stamp tool:

- (void) render:(CGLayerRef)canvas at:(NSPoint)point
{
	// Grab the context for the canvas. No matter what, we're going to determine
	//	where the current brush stamp should go, then translate the context
	//	to that position.
	CGContextRef canvasContext = CGLayerGetContext(canvas);
	CGContextSaveGState(canvasContext);

	// So we can position the image correctly, compute where the bottom left
	//	of the image should go, and modify the CTM so that 0, 0 is there.
	CGPoint bottomLeft = CGPointMake( point.x - CGImageGetWidth(mMask) * 0.5,
									  point.y - CGImageGetHeight(mMask) * 0.5 );
	CGContextTranslateCTM(canvasContext, bottomLeft.x, bottomLeft.y);

	// Our brush has a shape and soft edges. These are replicated by using our
	//	brush tip as a mask to clip to. No matter what we render after this,
	//	it will be in the proper shape of our brush.
	CGContextClipToMask(canvasContext, CGRectMake(0, 0, CGImageGetWidth(mMask), CGImageGetHeight(mMask)), mMask);

	// Based on the user specified offset, determine the source bounds.
	CGPoint sourcePoint = CGPointMake(point.x + mOffset.x, point.y + mOffset.y);
	CGPoint sourceBottomLeft = CGPointMake( sourcePoint.x - CGImageGetWidth(mMask) * 0.5,
											sourcePoint.y - CGImageGetHeight(mMask) * 0.5 );

	// We pull straight from the canvas, and render directly onto the canvas. CGLayerRefs
	//	make this easy.
	CGContextDrawLayerAtPoint(canvasContext, CGPointMake(-sourceBottomLeft.x, -sourceBottomLeft.y), canvas);

	CGContextRestoreGState(canvasContext);
}

This should look really familiar. We set up the layer’s context exactly the same as the smudge tool, up to and including setting the brush tip as a mask. A stamp tool doesn’t have a pressure parameter, so we eliminate the line that monkeys with the alpha. Since we’re doing a straight replicate with the stamp tool, we always draw from the canvas layer as our source. The only difference is we don’t pull from our last rendered point. Instead we take our current point and apply the user specified offset to determine what we should use as our brush image.

The only other modification we have to make is to the spacing function:

- (float) spacing
{
	// Standard brush spacing
	return CGImageGetWidth(mMask) * 0.25;
}

All we’re doing here is returning the stamp spacing to the normal spacing. We don’t need the one pixel spacing of the smudge tool to avoid a choppy render, and a wider spacing gives us better performance.

The code for the stamp tool is not included in the downloadable sample code, but you should be able to copy and paste the methods provided here into the Smudge class and have them work.

Conclusion

The smudge tool was surprisingly simple to implement. I had done a stamp tool previously, but not presented it, because I felt it was too simple to stand on its own. However, given that its implementation is similar to the smudge tool, I jumped at the chance to present it here.

There’s still plenty that could be improved on here. The smudge tool is begging for pressure sensitivity from a tablet. It could also be made more realistic by accumulating paint and using a textured brush.

Happy smudging!

Download the sample code

An alternate way to implement marching ants

After my previous article on how to implement a magic wand tool, I got an email from Will Thimbleby suggesting an alternate way of implementing marching ants. You might know Will from the most excellent vector graphics program, LineForm. So if you like the new approach, go buy a few dozen copies of LineForm.

Overview

As I mentioned in the previous article, converting an image mask to a vector path is not exactly a cheap operation. Will suggested a much simpler way that uses nothing but Core Image.

The basic idea is to use the CIStripesGenerator Core Image filter to generate some black and white vertical lines. We then run an edge detection filter, CIEdges, on our selection mask to calculate a new mask representing where the marching ants should show up. Finally, we do a multiply composite, using CIMultiplyCompositing, to merge the striped lines with our marching ants mask. The result is that the stripes only show up at the edges of the selection mask. Ta-da, marching ants.

OK, it’s a little more complicated than that, but the previous paragraph should give you a pretty good idea of what’s going on.

Code

Like before, I have sample code to go along with what I’m going to show. However, unlike before, this sample code is heavily based on the previous article’s sample code. In fact, I really only modified one class from the Magic Wand sample code. So instead of going through all that code again, I’m going to assume you know how it works, and only highlight the new stuff.

SelectionBuilder

OK, I have to admit right off the top I lied about only having to modify one class. I had to modify SelectionBuilder slightly in order to get the generated mask to work with Core Image.

Instead of generating a true image mask via CGImageMaskCreate, I had to create a CGImageRef with a grayscale colorspace and no alpha. This meant that I had to:

  1. Flip the colors. In SelectionBuilder, black now means not in the selection, while white means in the selection. In the init method, the mask data is calloc’d and left zeroed out. When we mark a point in the selection, we set it to white.
  2. Use CGImageCreate instead of CGImageMaskCreate.

Fortunately, outside of mask creation, Core Graphics doesn’t care whether it’s really an image or an image mask. So no other classes or code had to be modified for this particular change.
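As a rough sketch of that second change, the grayscale, no-alpha image can be built with CGImageCreate like so. The maskData, width, and height names below stand in for the calloc’d buffer and its dimensions; the sample code’s actual bookkeeping may differ:

// One byte per pixel, gray with no alpha: bits per component and bits per
//	pixel are both 8, and bytes per row is simply the width.
CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, maskData, width * height, NULL);

CGImageRef selectionImage = CGImageCreate(width, height, 8, 8, width,
										  graySpace, kCGImageAlphaNone,
										  provider, NULL, false,
										  kCGRenderingIntentDefault);

CGDataProviderRelease(provider);
CGColorSpaceRelease(graySpace);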

CanvasView

CanvasView is really the class that had to change, and the change is mainly in the drawRect method. Other than that, it was simply a matter of stripping out the mCachedPath member since it isn’t needed anymore. In fact, I’m only going to cover the drawRect method; if you would like to see how the rest of the code changed, download the sample code.

The new drawRect method starts out normal enough:

- (void)drawRect:(NSRect)rect {
	// Simply ask the canvas to draw into the current context, given the
	//	rectangle specified. A more sophisticated view might draw a border
	//	around the canvas, or a pasteboard in the case that the view was
	//	bigger than the canvas.
	NSGraphicsContext* context = [NSGraphicsContext currentContext];

	[mCanvas drawRect:rect inContext:context];

	// If we don't have a selection, bail now
	if ( mSelection == nil )
		return;

We just draw the contents of the canvas. If we don’t have a selection, we can stop right here (but that wouldn’t be very interesting, now would it?).

The first thing we need to do is convert our selection mask into something Core Image can use:

	// Create a CIImage from our selection image. It's important that mSelection
	//	be an actual image, not an image mask created by CGImageMaskCreate.
	//	CIImage will not create the proper image from a CGImageRef that was
	//	created with CGImageMaskCreate.
	CIImage *selectionImage = [CIImage imageWithCGImage:mSelection];

	// The first thing we want to do is edge detection. We make the assumption
	//	that our mask has only two colors: black and white. If we were to do
	//	some antialiasing in it, we might have to do some posterization to
	//	reduce the number of colors before running the edges filter.
	CIFilter* edgesFilter = [CIFilter filterWithName:@"CIEdges"];
	[edgesFilter setDefaults];
	[edgesFilter setValue:selectionImage forKey:@"inputImage"];

	// In order to use our mask, convert it into an alpha channel
	CIFilter* maskToAlphaFilter = [CIFilter filterWithName:@"CIMaskToAlpha"];
	[maskToAlphaFilter setDefaults];
	[maskToAlphaFilter setValue:[edgesFilter valueForKey:@"outputImage"] forKey:@"inputImage"];

We also go ahead and do an edge detection on our mask. Since we know that our mask only ever has two colors, we don’t need to do any posterization on it beforehand. In a real system, we might have antialiasing, and might need to reduce the number of colors with posterization. We then convert our new mask into an alpha channel so we can use it in a compositing filter later.

To illustrate this, suppose our selection is this:

Bitmap graphic with selection

our image mask would then be:

Bitmap graphic selection mask

After we apply the edges filter to our mask, it would be:

Bitmap graphic selection mask edges

As you can see, the mask is now white where we want our marching ants to appear. Applying the mask-to-alpha filter then gives it an alpha of 1.0 (opaque) where it is white, and an alpha of 0.0 (transparent) where it is black.

Now that we have our mask, we need to generate our stripes that we’re going to use for the ants:

	// Generate vertical black and white stripes that are 4 pixels wide.
	//	We animate the marching ants here by shifting the stripes' center point
	//	to the right (via mPhase) each time through.
	CIFilter* stripesFilter = [CIFilter filterWithName:@"CIStripesGenerator"];
	[stripesFilter setDefaults];
	[stripesFilter setValue: [CIColor colorWithRed:0.0 green:0.0 blue:0.0 alpha:1.0] forKey:@"inputColor0"];
	[stripesFilter setValue: [CIColor colorWithRed:1.0 green:1.0 blue:1.0 alpha:1.0] forKey:@"inputColor1"];
	[stripesFilter setValue: [NSNumber numberWithFloat:4.0] forKey:@"inputWidth"];
	[stripesFilter setValue: [CIVector vectorWithX:mPhase Y:150.0] forKey:@"inputCenter"];

We use the CIStripesGenerator to create some vertical black and white alternating lines. We set the width to four simply because that was the line dash width we used in the original marching ants algorithm. However, because of the next step, the line segments won’t exactly be four pixels wide everywhere.
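To put a number on that: after the stripes are rotated 45 degrees in the next step, a horizontal or vertical selection edge crosses each 4 pixel stripe over a run of roughly 4 × √2 ≈ 5.7 pixels, so the dashes along those edges read a little longer than the width we asked for.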

We also implement the animation of the marching ants here. One of the parameters of the stripes filter is where the center of the generated lines are. By incrementing the x value of the center point, we shift the vertical lines to the right each time through the animation, which makes the ants “march.”

Our initial stripes filter image would look like this:

Generated stripes graphic

In order to get our stripes to show up on all edges of a selection correctly, we need to tilt the stripes in one direction:

	// We have vertical stripes, which will look good on the top and bottom edges of
	//	the selection, but will appear as a solid colored line on the left and right.
	//	So that most border shapes will appear dashed, rotate the vertical lines
	//	by 45 degrees.
	CIFilter *affineTransform = [CIFilter filterWithName:@"CIAffineTransform"];
	NSAffineTransform *rotateTransform = [NSAffineTransform transform];
	[rotateTransform rotateByDegrees:-45];
	[affineTransform setDefaults];
	[affineTransform setValue:[stripesFilter valueForKey:@"outputImage"] forKey:@"inputImage"];
	[affineTransform setValue:rotateTransform forKey:@"inputTransform"];

The problem with leaving the stripes vertical is that they wouldn’t look right on the vertical edges of the selection. The top and bottom edges of the selection would nicely alternate between black and white, but the left and right edges would be one solid color.

To fix this we apply an affine transform to rotate the lines 45 degrees. Our stripes now look like:

Generated stripes graphic, rotated 45 degrees

We now have our two parts: the stripes that will be our ants, and the mask that marks where they should go. We only need to combine them:

	// The last filter we apply combines our newly created stripes with our mask.
	CIFilter *multiplyFilter = [CIFilter filterWithName:@"CIMultiplyCompositing"];
	[multiplyFilter setDefaults];
	[multiplyFilter setValue:[maskToAlphaFilter valueForKey:@"outputImage"] forKey:@"inputImage"];
	[multiplyFilter setValue:[affineTransform valueForKey:@"outputImage"] forKey:@"inputBackgroundImage"];

	// Finally, render our creation to the view.
	CIContext *ciContext = [context CIContext];
	CGRect imageRect = CGRectMake(0, 0, CGImageGetWidth(mSelection), CGImageGetHeight(mSelection));
	[ciContext drawImage:[multiplyFilter valueForKey:@"outputImage"] inRect:imageRect fromRect:imageRect];

We use the multiply compositing filter to combine the images. This works because our edge mask’s alpha is 1.0 at the edges and 0.0 everywhere else. When the two images are multiplied together, everything but the edges is filtered out, thus giving us ants around the selection.
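In other words, for each pixel the composite works out to roughly:

result = stripes × edge mask

so wherever the mask is transparent (0) the stripes vanish, and wherever it is opaque white (1) they show through unchanged.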

Since we now have our fully formed ants, we render them to the screen using a CIContext created from our NSGraphicsContext.

Just for completeness, here’s the last bit of the drawRect function:

	// The marching ants need to animate, so fire off a timer half a second later.
	//	It will update the mPhase member and then invalidate the view so
	//	it will redraw.
	[NSTimer scheduledTimerWithTimeInterval:0.5 target:self selector:@selector(onSelectionTimer:) userInfo:nil repeats:NO];
}

Nothing new here: we just fire off a timer that will increment the phase member and invalidate the view so the ants march.
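The timer callback itself isn’t shown above, but a minimal sketch of it might look like the following. The one pixel per tick increment and the wrap-around value are assumptions, not taken from the sample code:

- (void) onSelectionTimer:(NSTimer *)timer
{
	// Shift the stripes one pixel to the right each tick. Wrapping at 8.0
	//	(one full black/white period for 4 pixel wide stripes) keeps mPhase
	//	from growing without bound.
	mPhase = fmodf(mPhase + 1.0, 8.0);

	// Redraw so drawRect re-runs the filter chain with the new phase.
	[self setNeedsDisplay:YES];
}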

That’s it, we’re done. Much less involved for us than the previous algorithm.

Conclusion

Once again, I’d like to thank Will Thimbleby for suggesting this approach. I have to admit: I think using an NSBezierPath to render the ants looks better. However, for sufficiently complex or large image masks, it may be prohibitively expensive to use.

Download the sample code