If you haven’t found this series incredibly exciting then shame on you, but you might find this a little more intriguing! Up until now we’ve dealt with some pretty simple utilities to draw circles, lines and rectangles. These were great for showing the core concepts of fields, but aren’t particularly useful when actually making cool stuff. So, by the time we reach post number 6, it’s finally time to take ‘cool stuff step 1’, and generate fields from textures you can make in any standard art package. Excited now? I thought so. Code here if you need it!
Basic texture sweeping
With everything we’ve built so far, creating a field from a texture is incredibly simple. We start with a monochrome image – each pixel is either black or white. White represents a ‘solid’ bit, black an ‘empty’ bit:
I’ve added a default constructor, so we can create an empty SignedDistanceFieldGenerator, then set it up with an extra function call. Our new function will be called LoadFromTexture:
public void LoadFromTexture(Texture2D texture)
{
    Color[] texpixels = texture.GetPixels();
    m_x_dims = texture.width;
    m_y_dims = texture.height;
    m_pixels = new Pixel[m_x_dims * m_y_dims];
    for (int i = 0; i < m_pixels.Length; i++)
    {
        if (texpixels[i].r > 0.5f)
            m_pixels[i].distance = -99999f;
        else
            m_pixels[i].distance = 99999f;
    }
}
Here we read a Unity texture that is assumed to have been loaded elsewhere. Note that for this to work, the texture must be marked as ‘read/write’ when importing, and for best results, left uncompressed. The pixels are read, and the dimensions and pixel buffer are set up to match the texture:
    Color[] texpixels = texture.GetPixels();
    m_x_dims = texture.width;
    m_y_dims = texture.height;
    m_pixels = new Pixel[m_x_dims * m_y_dims];
Next, we iterate over every pixel. If the red channel of the input colour is greater than 0.5 (i.e. close to white), we treat the pixel as solid; if it is less than 0.5, we treat it as empty. A solid pixel is interpreted as ‘internal’ geometry, and so is given a very large negative number. An empty pixel is interpreted as ‘external’ geometry, and so is given a very large positive number:
    for (int i = 0; i < m_pixels.Length; i++)
    {
        if (texpixels[i].r > 0.5f)
            m_pixels[i].distance = -99999f;
        else
            m_pixels[i].distance = 99999f;
    }
A simple extra button in SignedDistanceField.cs to load the ‘rectangles’ texture in the sample project completes this first step:
if (GUILayout.Button("Load texture"))
{
    SignedDistanceFieldGenerator generator = new SignedDistanceFieldGenerator();
    generator.LoadFromTexture(Resources.Load<Texture2D>("rectangles"));
    field.m_texture = generator.End();
}
Visualising the distances for the field, we now see bright red (positive) external pixels and bright green (negative) internal pixels:
Crazily, that’s the hard part! By writing one of these two ‘extreme’ values into our field, we’ve generated an extremely imprecise field. However, the ‘0 boundary’ that denotes the edge of a solid bit is still correct. When rendering, the shader blends between pixels to calculate a distance at any given point, so when it blends between a solid pixel (-99999) and an empty pixel (+99999), there is a tiny point right on the boundary where a distance of 0 is read. By doing some tinkering with the numbers in our signed distance shader (technically rendering in ‘border’ mode with a border size of 99998!), we can actually visualise this:
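As a quick sanity check (this snippet is illustrative only, not part of the project code), the one-dimensional version of that blend is just a lerp between the two extremes, and it crosses zero exactly half way between a solid and an empty pixel:

```csharp
// Illustrative sketch: linear interpolation between a solid sample (-99999)
// and an empty neighbour (+99999), as the shader's texture filtering
// effectively does along one axis.
float solid = -99999f;
float empty = 99999f;

// d(t) = solid + (empty - solid) * t
// At t = 0.5 (exactly half way between the two samples), d(t) = 0, so the
// '0 boundary' falls precisely on the midpoint between the two pixels.
float d = solid + (empty - solid) * 0.5f;
```

In other words, the enormous magnitudes don’t matter – only the sign change does, and the sign change sits exactly where the edge should be.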
The great thing is, half of the previous post was devoted to exactly this task – taking a signed distance field in which only the edge points are valid, and sweeping it to produce a completely valid one. All we have to do is run the sweep, with no changes whatsoever, and the image is converted to a field:
Once it’s a field, any standard signed distance effect, such as our ‘solid-with-borders’ shader, can be used:
And there you have it. The truth is, we’d already done most of the work for textures, so this first step was really just loading them up.
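For clarity, the whole load-and-sweep step can be sketched as below. Note this is an assumed usage sketch: ‘Sweep’ is a stand-in name for the sweeping function from the previous post, whatever it is actually called in your version of the code.

```csharp
// Hypothetical end-to-end usage - 'Sweep' is a stand-in name for the
// sweeping function built in the previous post.
SignedDistanceFieldGenerator generator = new SignedDistanceFieldGenerator();
generator.LoadFromTexture(Resources.Load<Texture2D>("rectangles"));
generator.Sweep();                 // fill in valid distances everywhere
field.m_texture = generator.End(); // bake the result to a texture as before
```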
Aliasing
Unfortunately, whenever images are involved, the issue of aliasing eventually pops up. I chose rectangles quite deliberately for the demo above – they’re all nice vertical or horizontal lines that fit perfectly into a grid of pixels. If, however, we take the following image of a line:
You can see how MS Paint has generated zig-zag shapes along the edges. The effect on our signed distance field is not pleasant:
Note: if you spotted it, apologies for the horizontally inverted images – a shader bug whilst taking screenshots!
One way of solving this would be to simply use giant textures (let’s say 4k by 4k), do the whole sweeping process, then scale down (aka downsample) the result. SDFs actually scale down very well, so this isn’t a crazy idea. However, high-res input data isn’t always available, and even when it is, burning CPU time processing it may not be desirable.
Many more advanced art packages hide this problem very effectively using anti-aliasing. Pixels that are only partially covered by the line are only partially coloured in, making edges look much smoother. This very similar image from Paint.Net looks much nicer:
Sadly though, it seems to make no difference to our distance field:
This is because our current image loading algorithm completely ignores the ‘solidity’ of each pixel. As far as it’s concerned, a pixel is either solid or empty. As a result, the anti-aliasing performed by more advanced drawing packages to produce nicer images hasn’t helped.
Using a low resolution version of this cat (thanks to Tricia Moore), we can see just how much we lose by ignoring this anti-aliasing:
To show off the problem, the original 512×512 cat was shrunk down to 128×128 pixels in Paint.Net, which cleverly used anti-aliasing to produce a slightly blurry but otherwise nice image. On the left, some detail has been lost but a soft edge maintains the illusion of curved edges. Once again though, ignoring this anti-aliasing has resulted in a pretty much useless SDF on the right.
To solve the problem, let’s first look at an image pixel by pixel and think about what the field values really should be:

In this diagram we imagine a user has opened up a package such as Paint.Net and managed to draw a rectangle exactly 3 pixels wide and 2 pixels high into a 5×5 texture. However, whilst they aligned it perfectly horizontally, the rectangle overlaps pixel borders vertically. The result, shown on the right, is Paint.Net’s best guess at representing the rectangle in pixel format. The pixels that were fully covered are fully filled, but the pixels that were half covered have been blended in with the background.
Now we ask the question, given the texture that Paint.Net generated, what should the corresponding signed distance field texture look like?

Here you can see the box (left) and the texture (right) with the desired signed distance values written in. The first clear thing that stands out is that a ‘solidity’ of 0.5 in the input image suggests a signed distance value of 0. As expected, we can also see the more ‘solid’ pixels are assigned a negative distance (aka inside the shape), and the less solid pixels are assigned a positive distance (aka outside the shape).
Unfortunately, in addition to this handy info, we can also see that there is no ‘clear’ answer as to which solidity corresponds to which distance. If we were to examine the vertical edges on the left/right sides, we’d assume that a solidity of 0 meant a distance of 0.5. However, if we were to examine the horizontal edges at the top/bottom, we’d assume it meant a distance of 1.
Techniques for addressing this have certainly been developed (check out this for example), though they are not entirely trivial and are beyond the scope of this post. For now we’ll take the relatively good results that can be obtained simply by compromising and assuming edge distances ranging from -0.75 to 0.75. This leads us to:
public void LoadFromTextureAntiAliased(Texture2D texture)
{
    Color[] texpixels = texture.GetPixels();
    m_x_dims = texture.width;
    m_y_dims = texture.height;
    m_pixels = new Pixel[m_x_dims * m_y_dims];
    for (int i = 0; i < m_pixels.Length; i++)
    {
        //r==1 means a solid pixel, r==0 an empty pixel, and r==0.5 half way between the two
        //interpolate between 'a bit outside' and 'a bit inside' to get an approximate distance
        float d = texpixels[i].r;
        m_pixels[i].distance = Mathf.Lerp(0.75f, -0.75f, d);
    }
}
This extremely simple version of the texture loader reads a pixel just as in our previous example, then uses it to lerp between distance values of 0.75 (fully outside) and -0.75 (fully inside). Testing it out:
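For reference, that lerp boils down to the linear mapping distance = 0.75 − 1.5 × solidity, so the three interesting solidity values land exactly where we wanted:

```csharp
// Mathf.Lerp(0.75f, -0.75f, d) == 0.75f - 1.5f * d, so:
//   solidity 0.0 (empty) -> distance  0.75 (just outside)
//   solidity 0.5 (edge)  -> distance  0.0  (on the boundary)
//   solidity 1.0 (solid) -> distance -0.75 (just inside)
float DistanceFromSolidity(float solidity)
{
    return Mathf.Lerp(0.75f, -0.75f, solidity);
}
```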

Not only do we now have a softer border, it is also slightly thicker thanks to a more accurate approximation of the edge pixels. Similarly, looking at the earlier aliased lines:
Yummy! Whilst the result is still clearly not perfect, it is substantially better – especially given the source texture is only 128×128 pixels.
For reference, here are 3 more cats (though I’m actually more of a dog person myself), showing the field generated from different source data with the border width adjusted for comparison.

Downsampling
This is turning into a long post, but I want to cover some fun effects soon, and they won’t look cool unless we nail the quality of our fields first! Interpreting anti-aliased textures from packages such as Paint.Net or Photoshop has improved the conversion from image to field, but we still get some artefacts along the edges that it’d be nice to clean up:
These artefacts come from the fact that our simple approach to extracting a field from an image still isn’t perfect. Even a cleverer approach would struggle to attain really high quality, as data was simply lost when the paint package converted some nice clean geometry into a blurry low-res image.
To fix these artefacts, we’ll utilise down-sampling, in which a higher resolution image is loaded/swept, then scaled down to the desired field size.
This function builds a new field by scaling down the existing one by 50%:
public void Downsample()
{
    //to keep life simple, only downsample images that can be halved in size!
    if ((m_x_dims % 2) != 0 || (m_y_dims % 2) != 0)
        throw new Exception("Dumb downsample only divides by 2 right now!");

    //calculate new field size, and allocate new buffer
    int new_x_dims = m_x_dims / 2;
    int new_y_dims = m_y_dims / 2;
    Pixel[] new_pixels = new Pixel[new_x_dims * new_y_dims];

    //iterate over all NEW pixels
    for (int y = 0; y < new_y_dims; y++)
    {
        int srcy = y * 2;
        for (int x = 0; x < new_x_dims; x++)
        {
            int srcx = x * 2;

            //combine the 4 pixels in the existing field that this one corresponds to
            float new_dist = 0;
            new_dist += GetPixel(srcx, srcy).distance * 0.25f;
            new_dist += GetPixel(srcx + 1, srcy).distance * 0.25f;
            new_dist += GetPixel(srcx, srcy + 1).distance * 0.25f;
            new_dist += GetPixel(srcx + 1, srcy + 1).distance * 0.25f;

            //also divide distance by 2, as we're shrinking the image by 2, and distances
            //are measured in pixels!
            new_dist /= 2;

            //store new pixel
            new_pixels[y * new_x_dims + x].distance = new_dist;
        }
    }

    //once done, overwrite the existing pixel buffer with the new one and store the new dimensions
    m_pixels = new_pixels;
    m_x_dims = new_x_dims;
    m_y_dims = new_y_dims;
}
Here we calculate the dimensions for a field exactly half the size of the existing one, and allocate a new buffer to match:
    //calculate new field size, and allocate new buffer
    int new_x_dims = m_x_dims / 2;
    int new_y_dims = m_y_dims / 2;
    Pixel[] new_pixels = new Pixel[new_x_dims * new_y_dims];
Next, we loop over all the new pixels, and for each one calculate the location of the corresponding 2×2 block of pixels in the existing field. With this we end up with:
- x,y: the coordinate of the new pixel in the new field
- srcx,srcy: the coordinate of the top-left pixel of the corresponding 2×2 block in the existing field
Now the key code in the loop:
    //combine the 4 pixels in the existing field that this one corresponds to
    float new_dist = 0;
    new_dist += GetPixel(srcx, srcy).distance * 0.25f;
    new_dist += GetPixel(srcx + 1, srcy).distance * 0.25f;
    new_dist += GetPixel(srcx, srcy + 1).distance * 0.25f;
    new_dist += GetPixel(srcx + 1, srcy + 1).distance * 0.25f;
    //also divide distance by 2, as we're shrinking the image by 2, and distances
    //are measured in pixels!
    new_dist /= 2;
    //store new pixel
    new_pixels[y * new_x_dims + x].distance = new_dist;
This reads 4 distances from the existing field in a 2×2 square and averages them to create 1 new distance to be stored in the new field. The final step divides the new distance by 2: distances are measured in pixels, and since we are halving the size of the field, every distance halves too.
By loading a higher resolution image than necessary, sweeping it as normal and then down-sampling it to the desired field resolution we get a much nicer result:
This field was still built from a relatively low-res 256×256 image. However, after down-sampling to a 128×128 field the result is much more pleasing.
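The full workflow for that result can be sketched roughly as below. The asset name is made up for illustration, and ‘Sweep’ again stands in for whatever the previous post’s sweep function is called.

```csharp
// Hypothetical workflow: load at 2x the target resolution, sweep,
// then halve the dimensions once with Downsample.
SignedDistanceFieldGenerator generator = new SignedDistanceFieldGenerator();
generator.LoadFromTextureAntiAliased(Resources.Load<Texture2D>("cat256")); // made-up asset name
generator.Sweep();       // stand-in name for the previous post's sweep
generator.Downsample();  // 256x256 field -> 128x128 field
field.m_texture = generator.End();
```

To go further than half resolution, Downsample could simply be called repeatedly, halving the field each time.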
Summary
This post focused on building high quality fields from images, as I want to get onto some fun effects soon but fun effects need good fields! The images in this blog are typically around 512×512 pixels, so here’s our cat image loaded and swept at 1024×1024, then downsampled to 512×512:
Pretty tasty! The one final step we could go into is the use of eikonal equations to normalize the field, but I’ll leave that for another post.
There’s lots more boring stuff to learn about compression, normalizing, more sweeping, CSG operations etc, but now that the foundation exists, it’s time for some cool s**t. Hence, next post, we’ll look at some funky 2D effects!
And for the 6th time, the code for this blog can be found on GitHub here!