Image Filter Theatre: HSB Mixer
For my next trick, Rocky, I’m gonna pull a rabbit outta my hat. Or maybe just an HSB mixer.
I’m going to build on what I learned from my previous Core Image example, this time adding user parameters, demonstrating how to build functions in the Core Image Kernel Language, and performing other exciting magic tricks.
This filter will simply allow the user to make adjustments to the image in the HSB color space, so I should probably start with a few definitions of what the HSB color space is.
HSB Color Space
HSB, also known as HSV, is a color space that defines a color by three components: hue, saturation, and brightness (sometimes called value). Hue describes the color type (e.g. red, green, blue) and is measured in degrees, from 0 to 360. Saturation is the vibrancy of the color, typically measured from 0 to 100; the lower the saturation, the more gray the color appears. The value, or brightness, of a color is essentially what it sounds like, and also typically ranges from 0 to 100. For example, a pure, vivid red has a hue of 0 degrees, full saturation, and full brightness.
The HSB color space is a cone, with the hue running the circumference of the base, saturation running from the center to the edge of the cone, and brightness running the height of the cone. It made a lot more sense to me when I looked at a few models, which can be found at Wikipedia’s entry on the HSV color space.
Project and Resources
Like last time, I want to create a project from the “Image Unit Plug-in for Objective-C” template. I’ve named mine “HSB Mixer.”
Next, I need to modify the Description.plist file to tell Core Image about the parameters my filter has, and what category my filter belongs to, which affects where it shows up in the UI.
Under the CIAttributeFilterCategories key in the plist, I find the different default categories HSB Mixer belongs to. Although the HSB Mixer will work with both video and still images, it’s not really a “stylize” filter, but more of a color adjustment. To that end, I change CICategoryStylize to CICategoryColorAdjustment:
<key>CIAttributeFilterCategories</key>
<array>
    <string>CICategoryColorAdjustment</string>
    <string>CICategoryVideo</string>
    <string>CICategoryStillImage</string>
</array>
Now I need to add my parameters. By default, the image to modify is the first parameter, followed by two number parameters. Since this isn’t a generation filter, I need the image parameter, but instead of the two default number parameters, I need three number parameters for hue, saturation, and brightness. So I delete the last two parameters (the last two dict elements) under the CIInputs key.
The first parameter that I add is hue:
<dict>
    <key>CIAttributeClass</key>
    <string>NSNumber</string>
    <key>CIAttributeDefault</key>
    <real>0</real>
    <key>CIAttributeDisplayName</key>
    <string>inputHue</string>
    <key>CIAttributeIdentity</key>
    <real>0</real>
    <key>CIAttributeName</key>
    <string>inputHue</string>
    <key>CIAttributeSliderMax</key>
    <real>6</real>
    <key>CIAttributeSliderMin</key>
    <real>-6</real>
</dict>
The first key is CIAttributeClass, and it simply states the corresponding Cocoa class to use for the given parameter. All the possible values can be found in Table 4-2 of Apple’s “Modify the Description Property List” documentation.
The CIAttributeDefault key describes the default value of the parameter in the UI. I choose 0 as the default value, because it is also the identity value (see below).
CIAttributeDisplayName is a key into the Description.strings file, and determines the localized name that is shown to the user.
The CIAttributeIdentity key specifies a value for the parameter that produces the identity of the input image; that is, a value such that the filter doesn’t change the source image in any way. In this case, a value of zero means the filter doesn’t change the hue of the image.
CIAttributeName is an internal name for the parameter. Core Image Filters are “Live Effects” in Fireworks parlance, meaning they store their parameters so they can be modified later, and they don’t modify the source image directly. Instead, they generate a new image based on the source image plus the filter parameters. In other words, they’re editable. To that end, each of the parameters I define needs to be stored in an NSDictionary, and CIAttributeName is simply the key for each parameter in that dictionary.
CIAttributeSliderMin and CIAttributeSliderMax define the range of the slider control used to represent the parameter. Although hue ranges from 0 to 360 degrees, I’m going to scale it down to 0 to 6, using real numbers. Since I want to be able to both add to and subtract from the hue, so that all values are achievable, I let the range be -6 to 6.
The parameters for saturation and value are the same as hue, except they range from -1 to 1, and obviously have different names:
<dict>
    <key>CIAttributeClass</key>
    <string>NSNumber</string>
    <key>CIAttributeDefault</key>
    <real>0</real>
    <key>CIAttributeDisplayName</key>
    <string>inputSaturation</string>
    <key>CIAttributeIdentity</key>
    <real>0</real>
    <key>CIAttributeName</key>
    <string>inputSaturation</string>
    <key>CIAttributeSliderMax</key>
    <real>1</real>
    <key>CIAttributeSliderMin</key>
    <real>-1</real>
</dict>
<dict>
    <key>CIAttributeClass</key>
    <string>NSNumber</string>
    <key>CIAttributeDefault</key>
    <real>0</real>
    <key>CIAttributeDisplayName</key>
    <string>inputValue</string>
    <key>CIAttributeIdentity</key>
    <real>0</real>
    <key>CIAttributeName</key>
    <string>inputValue</string>
    <key>CIAttributeSliderMax</key>
    <real>1</real>
    <key>CIAttributeSliderMin</key>
    <real>-1</real>
</dict>
Finally, we need to modify Description.strings to have localized versions of our parameter display names:
"MyKernelFilter" = "HSB Mixer";
"inputHue" = "Hue";
"inputSaturation" = "Saturation";
"inputValue" = "Brightness";
Kernel Code
The algorithm for the kernel is simple:
- Sample the current pixel
- Convert the current pixel from RGB to HSV
- Add the hue, saturation, and value input parameters to the corresponding component in the current HSV pixel
- Convert the adjusted HSV pixel back to RGB
- Return the new RGB pixel
The kernel function is defined as:
kernel vec4 hueSaturationKernel(sampler image, float inputHue, float inputSaturation, float inputValue)
{
    // Get source pixel
    vec4 p = sample(image, samplerCoord(image));
    // Convert to HSV color space
    vec4 hsvPixel = rgbToHsv(p);
    // Add on our values (but be sure to clip)
    hsvPixel.r = clamp(hsvPixel.r + inputHue, 0.0, 6.0);
    hsvPixel.g = clamp(hsvPixel.g + inputSaturation, 0.0, 1.0);
    hsvPixel.b = clamp(hsvPixel.b + inputValue, 0.0, 1.0);
    // Convert back to RGB color space
    return hsvToRgb(hsvPixel);
}
Note that my user parameters are simply passed in, in the order I declared them in Description.plist. The kernel function itself is pretty straightforward. rgbToHsv() and hsvToRgb() are the only user-defined functions, and they convert to the HSV color space and back. Notice that the vec4 data type always assumes rgba, so when you see hsvPixel.r here, it actually corresponds to the hue. Similarly, hsvPixel.g corresponds to saturation and hsvPixel.b corresponds to value, while the alpha channel remains untouched.
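As a quick illustration (this snippet isn’t part of the filter), the rgba names on a vec4 are just labels for its four components, so nothing stops me from storing hue, saturation, and value in the first three slots and carrying the alpha along for the ride:
// Hypothetical example: pack HSV plus alpha into a vec4 and read it back out
vec4 hsvPixel = vec4(2.0, 0.5, 1.0, 1.0);  // hue, saturation, value, alpha
float hue        = hsvPixel.r;             // .r holds the hue (2.0)
float saturation = hsvPixel.g;             // .g holds the saturation (0.5)
float value      = hsvPixel.b;             // .b holds the value (1.0)
float alpha      = hsvPixel.a;             // .a is the untouched alpha (1.0)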
The tricky part is dealing with HSV to RGB and RGB to HSV conversions. There is code all over the place to deal with this, but I used the C code from an ACM paper as a template. The algorithm there scales the hue down to be from 0 to 6, which is why my input parameters do as well. I won’t go into the math behind the conversions here, because it is covered in any graphics textbook, as well as several places on the web, including Wikipedia.
The easier of the conversions, code wise, is RGB to HSV:
vec4 rgbToHsv(vec4 rgb)
{
    // x is the smallest RGB component, v (the value) is the largest
    float x = min(rgb.r, min(rgb.g, rgb.b));
    float v = max(rgb.r, max(rgb.g, rgb.b));
    // The smallest component determines the hue sector (i) and the offset within it (f)
    float f = (rgb.r == x) ? rgb.g - rgb.b : ((rgb.g == x) ? rgb.b - rgb.r : rgb.r - rgb.g);
    float i = (rgb.r == x) ? 3.0 : ((rgb.g == x) ? 5.0 : 1.0);
    float h = i - (f / (v - x));
    float s = (v - x) / v;
    // Gray pixels (v == x) have no hue, so flag them with -1
    return (v == x) ? vec4(-1, 0, v, rgb.a) : vec4(h, s, v, rgb.a);
}
Just about everything in this function is straightforward. min() and max() are built-in functions that will work on any numerical type.
You might notice that there is potentially wasted computation here in the special case of v == x. In that case, all I really need to return is v, meaning I didn’t need to compute f, i, h, or s. The reason for this is that the Core Image Kernel Language doesn’t support if statements that are data-dependent. That is, the expression inside an if statement can only contain a constant.
Correct:
if ( true ) v = x;
if ( false ) x = v;
Incorrect:
if ( v == x ) v = x; // Error!
bool val = true;
if ( val ) x = v; // Error!
The kernel language does, however, allow use of the ternary operator, so that’s what I’m forced to use here. The side effect is that I cannot return early, even when all the relevant calculations are complete. I’m guessing the reason for this restriction is related to the capabilities of most GPUs.
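For example, the early return I would naturally write at the end of rgbToHsv() above has to be folded into a single expression instead:
// Not allowed: the branch condition depends on the pixel data
// if (v == x)
//     return vec4(-1, 0, v, rgb.a);
// return vec4(h, s, v, rgb.a);

// Allowed: the same decision expressed with one ternary
return (v == x) ? vec4(-1, 0, v, rgb.a) : vec4(h, s, v, rgb.a);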
The HSV to RGB conversion is slightly more complicated:
vec4 hsvToRgb(vec4 hsv)
{
    float h = hsv.r;
    float s = hsv.g;
    float v = hsv.b;
    // Which of the six hue sectors the color falls in
    int i = int(floor(h));
    float f = isOdd(i) ? h - float(i) : 1.0 - (h - float(i));
    float m = v * (1.0 - s);
    float n = v * (1.0 - s * f);
    // Arrange the components according to the hue sector
    vec4 result = (i == 0) ? vec4(v, n, m, hsv.a) :
                 ((i == 1) ? vec4(n, v, m, hsv.a) :
                 ((i == 2) ? vec4(m, v, n, hsv.a) :
                 ((i == 3) ? vec4(m, n, v, hsv.a) :
                 ((i == 4) ? vec4(n, m, v, hsv.a) :
                 ((i == 5) ? vec4(v, m, n, hsv.a) : vec4(v, n, m, hsv.a))))));
    // A hue of -1 marks a gray pixel, so every component is just the value
    return (h == -1.0) ? vec4(v, v, v, hsv.a) : result;
}
The first thing to note is that all type conversions must be explicit. To get i, I call floor(), a built-in function, on h, then explicitly convert the result to an int. I’m not sure of the design decisions made here, but I would hazard a guess that float-to-int conversions, and vice versa, are expensive.
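Spelled out with a couple of throwaway variables (not from the filter itself), both directions look like this:
float h = 3.7;
int i = int(floor(h));      // float to int must be written explicitly; i is 3
float g = float(i) / 2.0;   // int to float must be explicit as well; g is 1.5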
The only other thing to note here is the last user-defined function, isOdd():
bool isOdd(int v)
{
    // An odd number divided by two has a fractional part
    float dividend = float(v) / 2.0;
    return dividend != floor(dividend);
}
I call attention to this function because of its technique. Usually, to determine if a number is odd, I bitwise AND it against a bitmask of 0x0001, or take the modulus by 2 and see if there is a remainder. However, the kernel language does not support any bitwise operators or the modulus operator. Here, I divide by two and see if the result has a fractional part by comparing it against its own floor. This is error prone because of rounding errors. Another way would be to take the difference between the dividend and its floor, and see if it is greater than a specified small number.
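That alternative might look something like the sketch below; isOddTolerant() and the particular epsilon are my own inventions, not code from the filter:
bool isOddTolerant(int v)
{
    // Hypothetical variant: compare the fractional part against a small
    // threshold rather than testing for exact equality with floor()
    float dividend = float(v) / 2.0;
    float epsilon = 0.25;
    return (dividend - floor(dividend)) > epsilon;
}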
Trying it out
As before, I like using Core Image Fun House for experimenting with my filters. To do that, I just need to copy my built filter to ~/Library/Graphics/Image Units/.
Here’s Flowers.jpg with the HSB Mixer applied to it. The parameters are Hue: 2.0, Saturation: -0.48, Brightness: 0.0.
Conclusion
Hopefully you’ve learned more about Core Image Filters from this post. We’ve learned that by simply declaring input parameters in a plist file, Core Image clients can automatically construct a UI for your filter. We’ve also learned that although the kernel language is powerful, it has some serious limitations that we have to work around.