# Raymarching Distance Fields: Concepts and Implementation in Unity

Raymarching is a relatively recent technique for rendering scenes in real time. It is particularly interesting because it is computed entirely in a screen-space shader. In other words, no mesh data is provided to the renderer, and the scene is drawn on a single quad that covers the camera’s field of view. Objects in the scene are defined by an analytic equation that describes the shortest distance between a point and the surface of any object in the scene (hence the full name, Raymarching Distance Fields). It turns out that with only this information you can compose some strikingly complicated and beautiful scenes. Furthermore, because you aren’t using polygonal meshes (but rather mathematical equations), you can define perfectly smooth surfaces, unlike in a traditional renderer. Snail by Inigo Quilez was created entirely using raymarching. You can find more examples of raymarched scenes on Shadertoy.

This article will first discuss the fundamental concepts and theory of raymarching. Then it will show how to implement a basic raymarcher in the Unity game engine. Finally it will show how to integrate raymarching practically in a real Unity game by allowing raymarched objects to be occluded by normal Unity GameObjects.

You can find a complete reference implementation at this Github Repository.

## Introduction to Raymarching

Raymarching is similar to traditional raytracing in that a ray is cast into the scene for each pixel. In a raytracer, you are given a set of equations that determine the intersection of a ray and the objects you are rendering. This makes it possible to find which objects each ray intersects (that is, the objects that the camera sees). It also makes it possible to render nonpolygonal objects such as spheres, because you only need to know the sphere/ray intersection formula (for example). However, raytracing is very expensive, especially when you have many objects and complex lighting, and a basic raytracer cannot handle volumetric materials such as clouds and water. Therefore raytracing is largely inadequate for realtime applications.

*Figure 1: Simplified representation of a raytracer. The thick black line is an example ray cast to render a pixel from the camera.*

Raymarching takes an alternative approach to the ray / object intersection problem. Raymarching does not try to directly calculate this intersection analytically. Instead, in raymarching we “march” a point along the ray until we find that the point intersects an object. It turns out that sampling this point along the ray is a relatively simple and inexpensive operation, and much more practical in realtime. As you can see in figure 2, this method is less accurate than raytracing (if you look closely, the intersection point is slightly off). For games, however, it is more than adequate, and is a great compromise between the efficiency of polygonal rendering and the accuracy of traditional raytracing.

*Figure 2: Basic implementation of a raymarcher with a fixed marching interval. The red dots represent each sample point.*

### Enter distance fields

A fixed interval raymarcher such as the one shown in Figure 2 is sufficient for many applications such as volumetric or transparent surfaces. However, for opaque objects we can introduce another optimization. This optimization calls for the use of signed distance fields. A distance field is a function that takes in a point as input and returns the shortest distance from that point to the surface of any object in the scene. A signed distance field additionally returns a negative number if the input point is inside of an object. Distance fields are great because they allow us to limit how often we need to sample when marching along the ray. See the example below:

*Figure 3: Visualization of a raymarcher using signed distance fields. The red dots represent each sample point. Each blue circle represents the area that is guaranteed to not contain any objects (because they are within the results of the distance field). The dashed green lines represent the true shortest vector between each sample point and the scene.*

As you can see above, the distance field allows us to advance the ray each step by the largest distance that is guaranteed not to skip past a surface.

## Implementing a Basic Raymarcher

Because the raymarching algorithm is run on every pixel, a raymarcher in Unity is essentially a post processing shader. Because of this, much of the C# code that we will write is similar to what you would use for a full-screen image effect.

### Setting up the Image Effect Script

Let’s implement a basic image effect loading script. A quick note: I am using the SceneViewFilter script to automatically apply image filters to the scene view. This allows you to debug your shader more easily. To use it, simply extend `SceneViewFilter` instead of `MonoBehaviour` in your image effect script.

Just to get the boilerplate code out of the way, a basic image effect script is shown below:

To use this script, attach it to a camera and drag an image effect shader onto the “Effect Shader” field. As a test, you can try the default image effect shader (Assets > Create > Shader > Image Effects Shader), which simply inverts the screen. With that out of the way, we can begin to get into the more technical aspects of the implementation.

### Passing Rays to the Fragment Shader

The first step in actually implementing a raymarcher is to calculate the ray that we will be using for each pixel. We also want these rays to match up with the Unity render settings (such as the camera’s position, rotation, FOV, etc.).

*Figure 4: A visualization of the rays sent out from the camera*

There are many ways of doing this, but I have chosen to use the following procedure every frame:

1. Compute an array of four vectors that make up the camera’s view frustum. These four vectors can be thought of as the “corners” of the view frustum:

*The four view frustum corners that are later passed to the shader*

2. When rendering our raymarcher as an image effect shader, use our own custom replacement for `Graphics.Blit()`. `Graphics.Blit` essentially renders a quad over the entire screen, and this quad is rendered with the image effect shader. We will extend this by passing, for each vertex, the corresponding index into the array we created in step 1. Now the vertex shader knows which ray to cast at each corner of the screen!
3. In the shader, pass the ray directions from step 2 into the fragment shader. Cg will automatically interpolate the ray directions for each pixel, giving the true ray direction.

Okay, now let’s implement the above process.

#### Step 1: Computing the View Frustum corners

To calculate the camera frustum corner rays, you have to take into account the field of view of the camera as well as the camera’s aspect ratio. I have done this in the function `GetFrustumCorners` below:
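`GetFrustumCorners` itself is C# code; to make the math concrete, here is a Python sketch of the same computation. The corner ordering, the use of -z as the forward axis, and the function name are assumptions of this sketch (the C# version packs the four rays into a `Matrix4x4`, as discussed below):

```python
import math

def get_frustum_corners(fov_deg, aspect):
    """Return the four eye-space corner rays of the camera's view frustum,
    ordered bottom-left, bottom-right, top-right, top-left.

    fov_deg is the vertical field of view; aspect is width / height.
    In this sketch the camera looks down -z in eye space."""
    half_fov = math.tan(math.radians(fov_deg) * 0.5)
    up = half_fov              # vertical half-extent at unit depth
    right = half_fov * aspect  # horizontal half-extent at unit depth
    return [
        (-right, -up, -1.0),  # bottom-left
        ( right, -up, -1.0),  # bottom-right
        ( right,  up, -1.0),  # top-right
        (-right,  up, -1.0),  # top-left
    ]
```

For a 90-degree vertical FOV, for example, the corners sit one unit up/down per unit of depth, scaled horizontally by the aspect ratio.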

It’s worth noting a couple of things about this function. First, it returns a `Matrix4x4` instead of an array of `Vector3`s, so we can pass the vectors to our shader with a single variable (without having to use arrays). Second, it returns the frustum corner rays in eye space. This means that (0,0,0) is assumed to be the camera’s position, and the rays themselves are given from the camera’s point of view (rather than, for example, in worldspace).

#### Step 2: Passing the Rays to the GPU

To pass this matrix to the shader, we need to make a slight modification to our Image Effect Script:

Later, when we work on the image effect shader itself, we can access this matrix using the uniform `_FrustumCornersES`. I also threw in some camera-related information that we will need later (`_CameraInvViewMatrix` will be used to convert the rays from eye space to world space, and `_CameraWS` is the camera’s position).

Next up, we need to give the vertex shader the tools to interpret this matrix correctly. Remember: an image effect is simply a quad drawn over the entire screen, so we need to somehow pass the corresponding index of `_FrustumCornersES` to each vertex in the vertex shader. To do this, we need to use our own custom replacement for `Graphics.Blit`, as mentioned in step 2 above. In this custom version, we will use a sneaky trick: because the quad in `Graphics.Blit` is drawn using Orthographic Projection, the `z` position of each vertex doesn’t affect the final image. So, we can simply pass the corresponding indices of `_FrustumCornersES` through the `z` coordinate of each vertex! This sounds complicated, but is quite simple in practice:

In a normal `Graphics.Blit` implementation, the four calls to `GL.Vertex3` would all have z coordinates of 0. However, with this modification, we assign the corresponding indices in `_FrustumCornersES` as the z coordinate.

Finally, we are now ready to start writing the raymarching shader. As a base, I will start with the default image effects shader (Assets > Create > Shader > Image Effects Shader). First, we need to edit the vertex shader to properly interpret `_FrustumCornersES`:

Much of the vertex shader so far should be familiar to Unity graphics programmers: as in most image effect shaders we pass the vertex positions and UV data to the fragment shader. We also need to flip the UVs in the Y axis in some cases to prevent our output appearing upside-down. Of course, we also extract the corresponding ray from `_FrustumCornersES` that we are interested in, using the Z coordinate of the input vertex (these Z values were injected above in Step 2). After the vertex shader finishes, the rays are interpolated by the GPU for each pixel. We can now use these interpolated rays in the fragment shader!

As a test, try simply returning the ray direction in the fragment shader, like so:

You should see the following visualization back in Unity:

Visualizing the world-space ray direction of each pixel. Notice that, for example, when you look up the result is green. This corresponds to the actual ray direction (0, 1, 0).

### Building the Distance Field

The next step is to construct the distance field that we are going to use. As a reminder, the distance field defines what you are going to render (as opposed to 3D models/meshes in a traditional renderer). Your distance field function takes in a point as input, and returns the distance from that point to the surface of the closest object in the scene. If the point is inside an object, the distance field is negative.

Constructing a distance field is an involved and complex topic that is largely beyond the scope of this article. Luckily, there are some excellent resources about distance fields online, such as Inigo Quilez’s reference listing a number of common distance field primitives. For the purposes of this article, I will borrow from Inigo and draw a simple torus at the origin of the scene:
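The `map` function itself lives in the Cg shader; as a language-neutral sketch, here is the same torus distance function written in Python (following Quilez’s `sdTorus`, with the torus lying in the xz-plane — the function names are illustrative):

```python
import math

def sd_torus(p, t):
    """Signed distance from point p = (x, y, z) to a torus centered at the
    origin, lying in the xz-plane. t = (ring_radius, tube_radius)."""
    qx = math.hypot(p[0], p[2]) - t[0]  # distance to the ring in the xz-plane
    return math.hypot(qx, p[1]) - t[1]  # distance to the tube surface

def map_scene(p):
    # The article's torus: t = (1.0, 0.2), located at the origin.
    return sd_torus(p, (1.0, 0.2))
```

Negative values mean the sample point is inside the tube; zero means it lies exactly on the surface.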

In this case, `map` defines the distance field describing a torus with ring radius 1.0 and tube thickness 0.2, located at the origin of the scene. This `map` function is perhaps the most creative and fun aspect of raymarching, so I recommend you have fun with it! Try new primitives out, combinations of primitives, or even your own weird custom shapes! Once again, you should check out this resource for more distance field equations.

### Writing the Raymarch Function

Now that we have built a distance field to sample, we can write the core raymarch loop. This loop will be called from the fragment shader, and as explained at the top of this post, is responsible for “marching” a sample point along the current pixel’s ray. The raymarch function returns a color: the color of whatever object the ray hits (or a completely transparent color if no object is found). The raymarch function essentially boils down to a simple `for` loop, as shown below:
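The actual `raymarch` function is written in Cg inside the fragment shader; the core loop translates to a Python sketch roughly as follows (returning a hit flag and travel distance instead of a color; `map_scene`, the constants, and the parameter names are illustrative):

```python
def raymarch(ro, rd, map_scene, max_steps=64, eps=0.001):
    """March a point from ray origin ro along unit direction rd.
    Returns (hit, t), where t is the distance traveled along the ray."""
    t = 0.0
    for _ in range(max_steps):
        p = (ro[0] + rd[0] * t, ro[1] + rd[1] * t, ro[2] + rd[2] * t)
        d = map_scene(p)      # shortest distance from p to the scene
        if d < eps:           # close enough to the surface: call it a hit
            return True, t
        t += d                # safe to advance by the full distance
    return False, t           # ran out of steps: treat as a miss
```

For example, marching toward a unit sphere three units away converges on a hit at distance 2.0 in just a couple of iterations, because the first step covers almost the entire gap.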

In each iteration of the raymarch loop, we sample a point along the ray. If we hit something, we bail out of the loop and return a color (in other words, the color of the object). If we don’t hit anything (the result of `map` is greater than zero), we move forward by the distance given to us by the distance field. If you’re confused, revisit the theory discussed at the beginning of the article.

If you find yourself building extremely complex scenes with lots of small details, you may need to increase the `maxstep` constant in the raymarch loop (at an increased performance cost). You also might want to carefully tweak `maxstep` anyway to see how many samples you can get away with (64 samples in this case is overkill for a simple torus, but for the sake of example I’ll keep it).

Now all that’s left is to call `raymarch()` from the fragment shader. This is simply done like so:

All we are doing here is receiving our ray data from the vertex shader and passing it along to `raymarch()`. We finally blend the result with `_MainTex` (the rendered scene before applying this shader) using standard alpha blending. Recall that `_CameraWS` represents the world-space position of the camera and was passed to the shader as a uniform earlier in our C# script.

Open up Unity again, and behold! A torus!

Look mom, no polygons!

We have made some great progress thus far: by now we can render arbitrary shapes with infinite resolution using raymarching. However, of course, it would be hard to actually use raymarched objects in a game without being able to light them (except, I guess, in some sort of abstract game).

To perform any sort of lighting calculation on an object, you must first calculate the normals of that object. This is because light reflects off of objects as a function of their normals. More concretely, any BRDF requires the normal of the surface as input. For a normal polygonal 3D mesh, it is easy to find the normals, because computing the normal of a triangle is an easily solved problem. However, in our case, finding the normals of an object inside of a distance field isn’t so obvious.

It turns out that, at any point on a surface defined in the distance field, the gradient of the distance field equals the normal of the object at that point. The gradient of a scalar field (such as a signed distance field) is essentially the derivative of the field in the x, y, and z directions. In other words, for each dimension d we fix the other two dimensions and approximate the derivative of the field along d. Intuitively, the distance field value grows fastest when moving directly away from an object (that is, along its normal). So, by calculating the gradient at some point we have also calculated the surface normal at that point.

Here is how we approximate this gradient in code:
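The shader approximates the gradient with central differences along each axis; a Python sketch of the same idea (the function name `calc_normal` mirrors the shader’s `calcNormal`, and `eps` is an illustrative choice):

```python
import math

def calc_normal(p, map_scene, eps=0.001):
    """Approximate the surface normal at p as the normalized gradient of the
    distance field, using central differences (6 extra map evaluations)."""
    x, y, z = p
    nx = map_scene((x + eps, y, z)) - map_scene((x - eps, y, z))
    ny = map_scene((x, y + eps, z)) - map_scene((x, y - eps, z))
    nz = map_scene((x, y, z + eps)) - map_scene((x, y, z - eps))
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)
```

On a unit sphere, for example, this recovers the outward radial direction at any surface point.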

Be careful, however, because this technique is quite expensive! You have to evaluate your distance field a total of 6 extra times for each pixel in order to find the gradient.

Now that we have the ability to find the normals of objects, we can begin to light things! Of course, we need a light source first. In order to pass a light source to our shader, we need to modify our scripts a bit:

These additions simply pass along a vector to our shader that describes the direction of the sun. You can pass along more information (such as light intensity, color, etc.) if you would like, but we’ll keep it simple for now and assume that it is a simple white directional light with intensity 1.0. This vector is passed to our shader through the uniform `_LightDir`. We can now use `_LightDir` along with `calcNormal()` to light our objects:
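Once the normal and light direction are in hand, the Lambert term is just a clamped dot product. A Python sketch (assuming, as the article does, a white directional light of intensity 1.0, and assuming `_LightDir` holds the light’s forward vector, which is why it is negated):

```python
def lambert(normal, light_dir, albedo=(1.0, 1.0, 1.0)):
    """Lambertian shading. light_dir is the direction the light travels
    (a directional light's forward vector), so we negate the dot product."""
    ndotl = -(normal[0] * light_dir[0]
              + normal[1] * light_dir[1]
              + normal[2] * light_dir[2])
    ndotl = max(ndotl, 0.0)  # surfaces facing away receive no light
    return tuple(c * ndotl for c in albedo)
```

A surface facing straight into the light is fully lit; one facing away goes to black.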

We use the Lambertian reflectance model above, but you could use any BRDF that you want (just like with normal 3D models!). Back in the Unity editor, assign the script’s “Sun Light” attribute to a directional light in the scene, and you will find a very nicely lit torus indeed:

*Our torus with Lambertian lighting*

## Interacting With Mesh-Based Objects

So now you have constructed a bunch of objects using distance fields and you are ready to integrate them into your Unity project. However, you run into a major problem very quickly: Mesh-based objects and raymarched objects can’t interact with or touch each other! In fact, the raymarched objects always float on top of everything else, because our raymarcher doesn’t take depth into account. The video below illustrates this:

My brain hurts...

To fix this problem, we need to find the distance along each ray at which the closest mesh-based object lies. If our raymarch loop marches past this point, we bail out and render that object instead (because it is in front of any potential raymarched objects).

To find this distance, we need to take advantage of the depth buffer. The depth buffer is accessible to all image effects shaders and stores the eyespace depth of the closest object in the scene for each pixel. Refer to figure 5 below for more context.

*Figure 5: Diagram of the measurements we are interested in when calculating depth. The red line is the ray for some arbitrary pixel.*

In Figure 5, the magnitude of r is the measurement that we are trying to find (depth beyond which we should bail out in the raymarch loop). The magnitude of d is the eyespace depth that is given to us for that pixel by the depth buffer (note that d is shorter than r, because d does not account for perspective).

In order to find the magnitude of r we can simply leverage the rules of similar triangles. Consider rn, the vector with the same direction as r but with length 1.0 in the z direction. We can write rn as:

rn = rd ÷ (rd).z

In the above equation, rd is the vector with the same direction as r but with an arbitrary length (in other words, the ray vector that our shader is given). Clearly, from Figure 5 r and rn create two similar triangles. By multiplying rn by d (which we know from the depth buffer) we can derive r and its magnitude as follows:

| r | ÷ d = | rn | ÷ 1.0
| r | = | rn | × d
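A quick numeric check of this relationship (Python, with a made-up eye-space ray): dividing rd by its z component gives rn, and scaling the normalized ray out to |rn| × d lands exactly at eyespace depth d:

```python
import math

# An arbitrary (unnormalized) ray direction in eye space.
rd = (0.3, -0.2, 1.5)

# rn: same direction, rescaled so its z component is 1.
rn = tuple(c / rd[2] for c in rd)

d = 4.0                                          # depth-buffer eyespace depth
r_len = math.sqrt(sum(c * c for c in rn)) * d    # |r| = |rn| * d

# Walk |r| units along the normalized ray and read off its z coordinate;
# it should equal d, confirming the similar-triangles argument.
rd_len = math.sqrt(sum(c * c for c in rd))
unit = tuple(c / rd_len for c in rd)
z_at_r = unit[2] * r_len
```

This holds for any ray with positive z in this convention; the sign subtlety for eyespace (where forward is -z) is handled in the shader below.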

### Using the depth buffer in our shader

Now we need to make some modifications to our code to align with the above theory. First, we need to make some changes to our vertex shader so that it returns rn instead of rd:

Note that we are dividing by `abs(o.ray.z)` instead of simply `o.ray.z`. This is because in eyespace coordinates, `z < 0` corresponds to the forward direction. If we were to divide by a negative number, the ray direction would flip (and therefore the entire raymarched scene would appear flipped).

The final step is to integrate depth into our fragment shader and raymarch loop:

First, we access Unity’s depth texture using the standard Unity shader uniform `_CameraDepthTexture`, and convert it to eyespace depth using `LinearEyeDepth()`. For more information about depth textures and Unity, see this page from the Unity Manual. Next, we multiply the depth by the length of rn, which was passed to us by the vertex shader, satisfying the equations discussed above.

We then pass the depth as a new parameter to `raymarch()`. In the raymarch loop, we bail out and return a completely transparent color if we march past the value given by the depth buffer. Now, when we check back in Unity, our raymarched objects coexist with normal mesh-based objects as expected:

## Fun with Distance Fields

Now that we have our raymarcher up and running, we can start to build scenes! As I said earlier, this is a very deep rabbit hole and it is beyond the scope of this article to explore distance field construction entirely. However, below are some simple techniques that I have tried out. I recommend you check out some examples on Shadertoy to spark your imagination. In any case, below is a small sampler of some of the things that you can do:

### Basic Transformations

Just like with mesh-based 3D models, you can perform transformations on an object using a model matrix. In our case however, we need to compute the inverse of the model matrix since we aren’t actually transforming the model itself. Rather, we are transforming the point that is used to sample our distance field.

To implement these transformations, we first build the model matrix in the image effect script and pass the inverse to the shader:

Note how you can use `Time.time` to animate objects. You can also use any variables from your script (including, conceivably, Unity’s animation system) to drive these transformations. Next, we receive the model matrix in our shader and apply it to the torus:
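The key idea — transform the sample point by the inverse model matrix before evaluating the distance field — can be sketched in Python with a simple translation (for a translation, the inverse transform is just a subtraction; a full implementation would pass the inverse of an arbitrary TRS matrix as a uniform; the names here are illustrative):

```python
import math

def sd_torus(p, t):
    """Signed distance to a torus in the xz-plane, t = (ring, tube) radii."""
    qx = math.hypot(p[0], p[2]) - t[0]
    return math.hypot(qx, p[1]) - t[1]

def map_scene(p, torus_offset):
    """Render a torus translated by torus_offset, by sampling the untranslated
    field at the inverse-transformed point q."""
    q = (p[0] - torus_offset[0],
         p[1] - torus_offset[1],
         p[2] - torus_offset[2])
    return sd_torus(q, (1.0, 0.2))
```

Animating `torus_offset` over time (as the script does with `Time.time`) moves the torus without ever touching a mesh.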

You’ll notice that the torus now moves nicely back and forth in Unity (enter play mode to see the animation):

### Combining Objects

You can combine objects as well to create more complex forms. To do this, you simply need to take advantage of some simple distance field combine operations: `opU()` (Union), `opI()` (Intersection), and `opS()` (Subtraction). Below is an example distance field function that demonstrates the outcomes of these operations:
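These combine operators reduce to `min` and `max` over the two distances. A Python sketch (the subtraction convention varies between references — here `op_s(d1, d2)` carves object 2 out of object 1):

```python
def op_u(d1, d2):
    """Union: a point's distance to the combined shape is the closer of the two."""
    return min(d1, d2)

def op_i(d1, d2):
    """Intersection: a point is inside only if it is inside both objects."""
    return max(d1, d2)

def op_s(d1, d2):
    """Subtraction: inside object 1 but outside object 2."""
    return max(d1, -d2)
```

Note that intersection and subtraction generally only yield a lower bound on the true distance, which is still safe for the raymarch loop (the steps just get more conservative).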

The result of this in Unity is shown below:

*From left to right: the Union, Subtraction, and Intersection operators*

### Multiple Materials

You can extend your distance field function to return material data as well. Simply have your `map()` function return the relevant material information for each object - in the example below, we pull from a color ramp texture to pick which color each object is. We also need to modify the `opU()` function introduced above to support multiple materials.

*The Color Ramp I am using.*

As always, we need to pass the color ramp to our shader through the image effect script:

Next we can use the new `_ColorRamp` uniform in the shader. As mentioned, we need to modify `map()` as well as the lighting calculation in `raymarch()` to leverage these different material properties.

Now, we have three objects with different colors:

*Raymarching with multiple materials*

### Performance Testing

It is often necessary to test the performance of your raymarch shader. The best way to do this is to see how often `map()` is called each frame. We can create a nice visualization of this by modifying `raymarch()` to output the number of samples taken for each pixel, and then mapping that sample count to a color ramp, as in the previous section.

This is what the visualization looks like in Unity:

*A performance visualization, with blue = low step count and red = high step count.*

The above visualization highlights a major problem in our algorithm. The pixels that do not show any raymarched objects (most pixels fall under this category) take the maximum number of steps! This makes sense: the rays cast from these pixels never hit anything, so they march onward forever. To remedy this performance issue, we can add a maximum draw distance like so:
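The fix is a small addition to the loop: bail out once the distance traveled exceeds a maximum draw distance. Reusing the earlier Python sketch of the loop (the `draw_dist` value and names are illustrative; here the step count is returned so the savings are visible):

```python
def raymarch(ro, rd, map_scene, max_steps=64, draw_dist=40.0, eps=0.001):
    """As before, but give up once the ray has traveled draw_dist units, so
    empty pixels terminate early instead of burning all max_steps samples."""
    t = 0.0
    for step in range(max_steps):
        if t >= draw_dist:
            return False, step            # past the draw distance: miss
        p = (ro[0] + rd[0] * t, ro[1] + rd[1] * t, ro[2] + rd[2] * t)
        d = map_scene(p)
        if d < eps:
            return True, step             # hit
        t += d
    return False, max_steps               # exhausted the step budget
```

In an empty scene the field reports large distances everywhere, so the ray blows past `draw_dist` within a handful of steps rather than looping 64 times.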

Here’s our heatmap after the above optimization:

*Another performance visualization after the above optimization, with blue = low step count and red = high step count.*

Much better! We can add this optimization to a normal raymarch loop by adding the draw distance check to the depth buffer culling check:

## Closing Remarks

I hope that this article has given you a fairly robust introduction to distance field raymarching. Once again, you can find a complete reference implementation at this Github Repository. If you are interested in learning more, I would suggest looking at examples on Shadertoy and at the resources referenced below. Many of the techniques used in distance field raymarching are not formally documented, so it is up to you to seek them out. From a theoretical perspective, I haven’t touched on a number of interesting topics relating to raymarching, including shadows, ambient occlusion, complex domain operations, and procedural texturing techniques. I suggest you begin to do your own research on these tricks!

• Inigo Quilez’s blog is in my opinion the seminal resource on Raymarching Distance fields. His articles discuss advanced raymarching techniques.
• This Article by 9bit Science is a great writeup on the theory behind raymarching.
• Shadertoy is a web-based shader viewing site and hosts many striking examples of distance field raymarching (as well as other applications of raymarching such as volumetric lighting). Every shader has full source code access, so it’s a great way to learn about different techniques.
• This Gamedev Stackexchange discussion gives some interesting background into how raymarching shaders work fundamentally, and offers some alternative usecases of raymarching such as volumetric lighting.