Hey everyone! This is my first of several posts. I've been working on a Unity game called Collidalot for almost 3 years now, and I'd like to share some pieces of its development that I think are interesting and potentially helpful to other developers working on their own projects. I apologize in advance for how long this article is; I just wanted to make sure to be thorough.

Collidalot is out today for the Nintendo Switch, and this article is the first of several focused on explaining how rails in Collidalot work. This entry's topic is arguably the most unique and difficult thing I have had the pleasure of working on: rendering dynamic, data-driven energy rail systems. I know that sounds like nonsense, so let me elaborate by briefly explaining what you do in the game. In Collidalot, you play as jet-powered hover cars that can grind on the energy rails in the map, kind of like a skateboard, to outmaneuver your opponents and slam them off of the map or into dangerous obstacles. The rails react as you grind over them, filling with energy and getting painted in the color of your ship. This post specifically addresses techniques used to write position-based gameplay data into render textures (or heatmaps) so that they can be sampled by shaders to produce dynamic visual effects.

Part 1: Origins

In my team's original pass on Collidalot, the rails were essentially lifeless and didn't react to players at all, serving only as a simple visual indication of where your ship could grind. They looked like this:

Long story short, when the team decided we needed to start from scratch on the visuals of the game, I made it my mission to make the rails reactive, vibrant, and visually pleasing. I had some background in OpenGL rendering, but it had been a while and I had only lightly messed around with shaders in Unity, so I knew it was not going to be easy. My implementation started on Shadertoy with an amazing shader called the "smoke ring".

I stumbled upon this while looking for interesting visual effects to base my new rail implementation on. I was instantly impressed by what the developer created and wanted to know if it was possible to apply the effect to any arbitrary line. I worked with my instructor at the game design school I was attending at the time, and she was able to come up with an energy rails shader. This shader demonstrated to me that rails could have exciting visuals that could also be dynamically changed via game data.

Part 2: High-Level Implementation of Data-Driven Rail Rendering

This section gives a high-level explanation of how rails get rendered in Collidalot. It would be a monumental task to go over it all in detail in one post (believe me, I tried), so this post will focus on Phase 1 and Phase 4, which are related to saving data into heatmaps so that shaders, like my rails, can access the data and update their visuals. I will go into more detail about Phase 2 and Phase 3 in future posts, but I still recommend reading through the entire process to understand the fundamentals of what is going on.

Phase 1: Writing Gameplay Data into Heatmap Textures

Gameplay data is stored in RenderTextures, called heatmaps.

Each heatmap stores a different type of data used to change the visuals of the rails, e.g. rail color. Collidalot uses heatmaps to store the following data at specific locations:

- The player who has most recently painted the rails, which determines the base rail color
- The current blend color, which is temporary and fades to the base rail color
- The amplitude/intensity of the waves that make up the rail, which stabilize over time

Certain gameplay events, like rail grinding, position 2D sprites where events occurred, and then use custom shaders to output data that reflects the visuals of the event.

A separate camera is used to render these 2D sprites containing gameplay data into the heatmaps before anything else is rendered that frame.

Multi-target rendering is used to render the data into every heatmap at once.

Phase 2: Rendering the Rails into a Render Texture

Each rail is a line segment, arc, or bezier curve positioned in 3D space.

A rail's geometry is generated by assigning positions and a width to a Unity Line Renderer and positioning it on the map.

Each rail uses a shader similar to the aforementioned energy rail shader modified to work in 3D space and has references to the data in all of the relevant heatmaps.

A separate camera is set to only render the "rails layer" and renders after the heatmap camera.

The camera is positioned to have the map fit perfectly in its view and renders all of the rails into a render texture using max blending so that they do not have artifacts where they overlap.

When a rail renders, its output visuals change based on the data in all of the heatmaps at its location.

Phase 3: Placing the Rails into the World using a Quad Mesh

A quad mesh is positioned in 3D space to fit perfectly inside the map.

The quad mesh uses a custom shader that has the rails render texture created in Phase 2 passed to it.

The quad mesh outputs the render texture using Alpha blending to make the rails render with a little transparency.

The quad mesh shader casts shadows without having the rails be lit by light sources, as shown in this unlit shadow cutout shader example.

The main game camera renders the quad mesh consisting of all of the rails, as well as most of the other objects in the game, creating a visible energy rail track.

Phase 4: Updating the Rail Heatmaps Before the Next Frame

Heatmaps are updated by scripts attached to their respective camera to change the values therein with the passage of time. For example, this allows the heatmap that stores blend color data to fade over time.

Graphics.Blit is used to have materials with specific shaders process the current heatmaps and output them to other RenderTextures that store the updated heatmaps. It is possible to update multiple heatmaps at once using Graphics.Blit by setting multiple render targets.

Each heatmap needs two RenderTextures: one holding the current data and one holding the updated data. These RenderTextures are swapped each frame.

Part 3: Techniques

Rendering into a Heatmap

A heatmap is just a texture that stores data, typically four floats in RGBA format, in each texel of the texture. A common use of a heatmap is to map said texels to locations in the world, and to have each texel represent some sort of data about that location. The texel data could represent color, which is the most intuitive, but it could also represent anything else as long as you define the data and interpret it properly when you use it. For example, I use a heatmap to store the amplitude and intensity (brightness) of my energy rails at locations on the map, where the R channel stores amplitude data and the G channel stores intensity data.

Heatmaps are especially easy and useful if you have a small, fixed-position world that is rectangular, particularly if that world is fundamentally 2D in nature. It would be much more difficult to use heatmaps to represent data across giant, irregularly shaped maps, where the heatmap textures would have to be too large to store all of the data, but there are techniques that could make it possible.

This example will focus on how to set up heatmap rendering for 2D, fixed-position maps, as these are what are found in Collidalot.

Typically when Unity renders an object, the main camera renders the object according to its material and shader and then outputs the result into the default RenderTexture that is displayed on-screen. Although a little counterintuitive, we can use cameras to "render" arbitrary data into a texture that we stretch across the map without ever showing that data on the screen. The data is instead used when certain objects actually render to the screen to change how they look. This is one way heatmaps can be used, and is the method used in Collidalot to dynamically render rails.

Let's get into how to actually set this process up. First off, we need to set up an alternate rendering stage that renders to heatmaps instead of to the screen. This will require a separate camera from the main camera. This heatmap camera has the following settings/conditions:

Clear flags are set to "Don't Clear" to prevent clearing existing heatmap data.

Culling Mask is set to a unique layer or layers so that it only renders objects that represent heatmap data. Mine is set to "Rail Heatmap Data" and only my heatmap renderer objects use this layer.

The camera is the same aspect ratio as the map and is positioned so that it fits the entire map perfectly in view.

The camera is in orthographic projection mode.

The camera's depth is set lower than the depth of the main camera and any other camera that will use heatmap data so that the heatmap data is rendered first.

The camera's target texture is set to your heatmap RenderTexture. In the next section, Multi-target Heatmap Rendering, I explain how to set this up in code and how to have the heatmap camera write to multiple heatmaps at once.
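Putting the settings above together, a setup script might look something like the following sketch. This is not Collidalot's actual code; the field names, map dimensions, and layer name are assumptions for illustration.

```csharp
using UnityEngine;

public class HeatmapCameraSetup : MonoBehaviour
{
    public Camera heatmapCamera;
    public RenderTexture heatmapTexture;   // same aspect ratio as the map
    public float mapWidth = 64f;           // world-space size of the map (hypothetical)
    public float mapHeight = 36f;

    void Awake()
    {
        // "Don't Clear" so existing heatmap data persists between frames
        heatmapCamera.clearFlags = CameraClearFlags.Nothing;
        // only render objects that represent heatmap data
        heatmapCamera.cullingMask = LayerMask.GetMask("Rail Heatmap Data");
        // orthographic projection fitting the whole map in view
        heatmapCamera.orthographic = true;
        heatmapCamera.orthographicSize = mapHeight * 0.5f;  // half the vertical extent
        heatmapCamera.aspect = mapWidth / mapHeight;
        // lower depth than the main camera so heatmap data renders first
        heatmapCamera.depth = -10f;
        heatmapCamera.targetTexture = heatmapTexture;
    }
}
```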

After the camera is set up, you need an object for it to render. I use a standard 2D sprite renderer object in Unity and set it to the layer the heatmap camera renders. One option is to have this object use a sprite with the default material and directly render that sprite into the heatmap, though I prefer to set the sprite renderer to use a material with a custom shader so that I have greater control over how I write data into the heatmaps. The process of rendering the data works as follows:

1. Activate the heatmap renderer sprite object so that the heatmap camera will know to render it.
2. Position, rotate, and scale the sprite so that it covers the area you want to write heatmap data to.
3. Set any properties on the sprite renderer material in order to have it output the correct data in the shader.
4. Deactivate the sprite after the camera has rendered so that it does not continue to write into the heatmap next frame.
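These steps could be driven from a small script along the following lines. This is a hedged sketch, not Collidalot's source; the `_Paint_Color` property and the method names are hypothetical.

```csharp
using System.Collections;
using UnityEngine;

public class HeatmapEventWriter : MonoBehaviour
{
    // a sprite renderer on the heatmap camera's layer,
    // using a material that writes data into the heatmap
    public SpriteRenderer heatmapSprite;

    // Call when a gameplay event (e.g. a grind) occurs at worldPos
    public void WriteEvent(Vector3 worldPos, float radius, Color paintColor)
    {
        heatmapSprite.gameObject.SetActive(true);                    // step 1
        heatmapSprite.transform.position = worldPos;                 // step 2
        heatmapSprite.transform.localScale = Vector3.one * radius;
        heatmapSprite.material.SetColor("_Paint_Color", paintColor); // step 3 (hypothetical property)
        StartCoroutine(DeactivateAfterRender());
    }

    private IEnumerator DeactivateAfterRender()
    {
        // by the end of the frame, the heatmap camera has rendered the sprite
        yield return new WaitForEndOfFrame();
        heatmapSprite.gameObject.SetActive(false);                   // step 4
    }
}
```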

Using this sprite object and the heatmap camera you can write data into the heatmap at any location on the map. You can also create multiple sprite objects to write in multiple locations on the same frame. In the next section, I will go over an example shader for the heatmap sprite that allows you to render into multiple heatmaps at once. The following are a couple heatmaps created in Collidalot from gameplay and the visual result they produce on the rails:

Multi-target Heatmap Rendering

Multi-target rendering is a useful technique that is possible with Unity's rendering pipeline and more people should be aware of it. Typically when a camera renders, it renders into one render texture, like the main render texture displayed in-game that shows up on your screen. In some situations, you want different types of data to end up in different textures from the same source object that was rendered, or you may not be able to fit all of the data you need into a single render texture. Without multi-target rendering, this would take multiple cameras each rendering to a different target and potentially more GameObjects and Layers being used to help organize the divided rendering (this is what my first attempt looked like). Multi-target rendering simplifies some of these scenarios and can even act as a significant optimization in many cases. I use multi-target rendering to render different types of data into multiple heatmaps at the same location with the simplicity of only needing one GameObject, one camera, one layer (that the camera uses to cull objects), and one shader.

To set up a camera to render into multiple render textures at once, you need a render buffer array, an initialized render texture for each target, and to properly set the camera to target those buffers. The following code shows the significant steps:

// Create an array of render buffers, 1 for each target heatmap render texture
// I have 4 types of data (player painted color, blend color, positive rail attributes,
// negative rail attributes)
private RenderBuffer[] heatmapRenderBuffers = new RenderBuffer[4];

// How to initialize a render texture for the buffer, do this for each render texture
// These should typically be the same aspect ratio as the camera
RenderTexture currColorTex = new RenderTexture(textureWidth, textureHeight, 0, RenderTextureFormat.ARGBFloat);

// Set the camera to render to the correct targets
// In my case the depth buffer I used did not matter but needed to be specified
heatmapRenderBuffers[0] = currColorTex.colorBuffer;
heatmapRenderBuffers[1] = currBlendColorTex.colorBuffer;
heatmapRenderBuffers[2] = currAttribTex.colorBuffer;
heatmapRenderBuffers[3] = currNegAttribTex.colorBuffer;
heatmapCamera.SetTargetBuffers(heatmapRenderBuffers, currColorTex.depthBuffer);

Now for how to handle the multiple render targets in the shader. Any object your camera renders should use a shader that supports multi-target rendering. It is important to note that the indices of the render textures in the render buffer array matter. The shaders need to output data whose count and formats match the targets assigned to the camera. To do this, create a struct in the shader that fulfills this requirement.

// This is a struct in the shader my sprite objects utilize to render into the heatmaps
// The render textures I used were formatted to ARGBFloat which is equivalent to float4
// The variables are ordered to coincide with the order I assigned the heatmapRenderBuffers array
struct FragmentOutput
{
    float4 paintColor : SV_Target0;
    float4 blendColor : SV_Target1;
    float4 posAttribs : SV_Target2;
    float4 negAttribs : SV_Target3;
};

The final step is to have the fragment shader output this struct type. The most basic form would look something like this:

FragmentOutput frag(vertOutput input)
{
    FragmentOutput output;
    // The rendered sprite will output white into all 4 heatmaps
    // What each channel means in each texture is up to the implementation
    output.paintColor = float4(1.0, 1.0, 1.0, 1.0);
    output.blendColor = float4(1.0, 1.0, 1.0, 1.0);
    output.posAttribs = float4(1.0, 1.0, 1.0, 1.0);
    output.negAttribs = float4(1.0, 1.0, 1.0, 1.0);
    return output;
}

The goal is to customize the output data in this shader to be whatever you want to be stored in the heatmaps, then sample that data in other shaders when rendering the game so that their visuals are affected by the data in the heatmaps. Multi-target rendering makes it easier to set up more data at once and separates it by type.

The following is a generic shader you can apply to your heatmap sprites to render different data into 4 heatmaps at the same time using one heatmap camera:

Shader "Collidalot/shad_rail_heatmap_write"
{
    Properties
    {
        _MainTex("Texture", 2D) = "black" {}
        _Heatmap_Data_1("Heatmap Data 1", Vector) = (1, 1, 1, 1)
        _Heatmap_Data_2("Heatmap Data 2", Vector) = (1, 1, 1, 1)
        _Heatmap_Data_3("Heatmap Data 3", Vector) = (1, 1, 1, 1)
        _Heatmap_Data_4("Heatmap Data 4", Vector) = (1, 1, 1, 1)
        _Write_Mask("Write Mask", Vector) = (1, 1, 1, 1)
    }
    SubShader
    {
        Tags{ "RenderType" = "Opaque" }
        LOD 100

        Pass
        {
            Cull Off
            ZWrite Off
            ZTest Always
            Blend SrcAlpha OneMinusSrcAlpha

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            // values needed to write new data
            sampler2D _MainTex;
            float4 _Heatmap_Data_1;
            float4 _Heatmap_Data_2;
            float4 _Heatmap_Data_3;
            float4 _Heatmap_Data_4;
            // values needed to keep old data
            float4 _Write_Mask;

            struct vertInput
            {
                float4 pos : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct vertOutput
            {
                float4 pos : SV_POSITION;
                float2 uv : TEXCOORD0;
                float3 worldPos : TEXCOORD1;
            };

            struct FragmentOutput
            {
                float4 heatmap1 : SV_Target0;
                float4 heatmap2 : SV_Target1;
                float4 heatmap3 : SV_Target2;
                float4 heatmap4 : SV_Target3;
            };

            vertOutput vert(vertInput input)
            {
                vertOutput o;
                o.pos = UnityObjectToClipPos(input.pos);
                o.uv = input.uv;
                o.worldPos = mul(unity_ObjectToWorld, input.pos);
                return o;
            }

            FragmentOutput frag(vertOutput input)
            {
                FragmentOutput output;
                output.heatmap1 = float4(_Heatmap_Data_1.xyz, _Write_Mask.x);
                output.heatmap2 = float4(_Heatmap_Data_2.xyz, _Write_Mask.y);
                output.heatmap3 = float4(_Heatmap_Data_3.xyz, _Write_Mask.z);
                output.heatmap4 = float4(_Heatmap_Data_4.xyz, _Write_Mask.w);
                return output;
            }
            ENDCG
        }
    }
}

To use this shader, just apply it to your heatmap sprite object. When you activate the sprite to render heatmap data, set values for the _Heatmap_Data_1, _Heatmap_Data_2, _Heatmap_Data_3, _Heatmap_Data_4, and _Write_Mask properties. This would look as follows:

// put whatever data you want in the xyz channels of the heatmap data vectors,
// the last channel w is reserved
heatmapSpriteMaterial.SetVector("_Heatmap_Data_1", new Vector4(1.0f, 1.0f, 1.0f, 0.0f));
heatmapSpriteMaterial.SetVector("_Heatmap_Data_2", new Vector4(1.0f, 1.0f, 1.0f, 0.0f));
heatmapSpriteMaterial.SetVector("_Heatmap_Data_3", new Vector4(1.0f, 1.0f, 1.0f, 0.0f));
heatmapSpriteMaterial.SetVector("_Heatmap_Data_4", new Vector4(1.0f, 1.0f, 1.0f, 0.0f));
// set the value of an index to 1.0 if you want that heatmap to be written to, 0.0 otherwise
heatmapSpriteMaterial.SetVector("_Write_Mask", new Vector4(1.0f, 1.0f, 1.0f, 1.0f));

You can specify up to 3 floats to write to each heatmap in the XYZ channels of the Vectors. The last channel is reserved for the _Write_Mask, where X is associated with _Heatmap_Data_1, Y with _Heatmap_Data_2, Z with _Heatmap_Data_3, and W with _Heatmap_Data_4. If you set a channel to 0.0 in _Write_Mask, the shader will not write to the corresponding heatmap (technically it still writes, but the output is fully transparent and gets blended away). If you set a channel to 1.0, the shader will write the value you passed in the property to that heatmap. For example, a _Write_Mask value of Vector4(0.0, 1.0, 1.0, 0.0) would write data into the heatmaps for _Heatmap_Data_2 and _Heatmap_Data_3 only.

Updating Heatmap Data

After you write data into a heatmap, sometimes you want to update the data in the heatmap over time without writing additional data into it. In Collidalot, I use my blend color heatmap to place temporary colors in the heatmap that fade over time, allowing me to temporarily color or flash the rails different colors at specific locations. This can be seen when you grind on rails and your trail color fades from one color to your main player color.

The most important function I use to update heatmap textures is Graphics.Blit. This function allows you to take a source render texture (my existing heatmap data) and output it to another render texture (my updated heatmap data) after it is processed by a material with your update shader. Because you cannot write to the same render texture you are reading from, you will need a second render texture with the same format to output to. In a script attached to your heatmap camera, you can update a heatmap in the following way (this is my blend color example):

public void OnPostRender()
{
    // fade blend color heatmap
    // set any properties you need for the update shader
    blendColorDecayMaterial.SetFloat("_Decay_Rate", decayRate);
    // run the update shader on your current heatmap data
    Graphics.Blit(currBlendColorTex, nextBlendColorTex, blendColorDecayMaterial);
    // swap the render textures so that the output is now your current render texture
    SwapRenderTexture(ref currBlendColorTex, ref nextBlendColorTex);
}

You will want to call the update on a script attached to your heatmap camera in the OnPreRender or OnPostRender built-in Unity functions. In my case, I update the heatmap after the camera has finished rendering the frame. Each time you update your heatmap, you will want to swap your two heatmap render textures to ensure the most recent data is referenced by the variable tracking your current heatmap.

// function used to swap render textures for the next frame
public void SwapRenderTexture(ref RenderTexture currTex, ref RenderTexture nextTex)
{
    RenderTexture temp = currTex;
    currTex = nextTex;
    nextTex = temp;
}

Here is a simplified example of the shader that I use to perform this heatmap update:

Shader "Collidalot/shad_rail_color_blend_mask_decay"
{
    Properties
    {
        _MainTex("Texture", 2D) = "white" {}
        _Decay_Rate("Decay Rate", Float) = 0.0
    }
    SubShader
    {
        // No culling or depth
        Cull Off
        ZWrite Off
        ZTest Always

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            float _Decay_Rate;

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                float4 vertex : SV_POSITION;
            };

            v2f vert(appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = v.uv;
                return o;
            }

            fixed4 frag(v2f i) : SV_Target
            {
                // get current blend color
                fixed4 col = tex2D(_MainTex, i.uv);
                // reduce alpha of blend color based on decay rate and dt
                col.a -= _Decay_Rate * unity_DeltaTime.x;
                col.a = max(0.0, col.a);
                // if alpha is 0, clear blend color
                if (col.a == 0.0)
                    col = float4(0.0, 0.0, 0.0, 0.0);
                return col;
            }
            ENDCG
        }
    }
}

Multi-target Heatmap Updating

What if you want to update more than one of your heatmaps at once?! Unity is pretty awesome and allows you to do that as well. In Collidalot, several of my heatmaps store related data that depends on other heatmaps to update properly. Graphics.Blit supports writing to multiple render textures, allowing you to process and update related information all at once. Here is an example:

private RenderBuffer[] blitRenderBuffers = new RenderBuffer[4];

// on a script attached to your heatmap camera
public void OnPostRender()
{
    // set the render texture color buffers (heatmaps) you want the blit to write to
    blitRenderBuffers[0] = nextHeatmapTex1.colorBuffer;
    blitRenderBuffers[1] = nextHeatmapTex2.colorBuffer;
    blitRenderBuffers[2] = nextHeatmapTex3.colorBuffer;
    blitRenderBuffers[3] = nextHeatmapTex4.colorBuffer;
    // pass in your current heatmaps
    heatmapUpdateMaterial.SetTexture("_Heatmap_1", currHeatmapTex1);
    heatmapUpdateMaterial.SetTexture("_Heatmap_2", currHeatmapTex2);
    heatmapUpdateMaterial.SetTexture("_Heatmap_3", currHeatmapTex3);
    heatmapUpdateMaterial.SetTexture("_Heatmap_4", currHeatmapTex4);
    // tell the render engine to use your specified targets
    // the depth buffer specified did not matter in my case, so I just picked one
    Graphics.SetRenderTarget(blitRenderBuffers, nextHeatmapTex1.depthBuffer);
    // call Blit using a material with the shader you are using to update
    Graphics.Blit(null, heatmapUpdateMaterial);
    // swap the references of your heatmaps
    SwapRenderTexture(ref currHeatmapTex1, ref nextHeatmapTex1);
    SwapRenderTexture(ref currHeatmapTex2, ref nextHeatmapTex2);
    SwapRenderTexture(ref currHeatmapTex3, ref nextHeatmapTex3);
    SwapRenderTexture(ref currHeatmapTex4, ref nextHeatmapTex4);
}

This passes multiple heatmaps as properties to the shader and outputs the updated heatmaps to another set of render textures. Here is a basic example of a shader that removes the red value from multiple heatmaps:

Shader "Collidalot/shad_multi_target_blit_example"
{
    Properties
    {
        // my heatmaps passed in as textures
        _Heatmap_1("Heatmap 1", 2D) = "black" {}
        _Heatmap_2("Heatmap 2", 2D) = "black" {}
        _Heatmap_3("Heatmap 3", 2D) = "black" {}
        _Heatmap_4("Heatmap 4", 2D) = "black" {}
    }
    SubShader
    {
        // No culling or depth
        Cull Off
        ZWrite Off
        ZTest Always

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                float4 vertex : SV_POSITION;
            };

            struct FragmentOutput
            {
                float4 heatmap1 : SV_Target0;
                float4 heatmap2 : SV_Target1;
                float4 heatmap3 : SV_Target2;
                float4 heatmap4 : SV_Target3;
            };

            v2f vert(appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = v.uv;
                return o;
            }

            sampler2D _Heatmap_1;
            sampler2D _Heatmap_2;
            sampler2D _Heatmap_3;
            sampler2D _Heatmap_4;

            FragmentOutput frag(v2f i)
            {
                FragmentOutput output;
                // sample values from the different textures
                float4 heatmapInput1 = tex2D(_Heatmap_1, i.uv);
                float4 heatmapInput2 = tex2D(_Heatmap_2, i.uv);
                float4 heatmapInput3 = tex2D(_Heatmap_3, i.uv);
                float4 heatmapInput4 = tex2D(_Heatmap_4, i.uv);
                // use previous heatmap values, but clear the red channel
                output.heatmap1 = heatmapInput1;
                output.heatmap1.r = 0.0;
                output.heatmap2 = heatmapInput2;
                output.heatmap2.r = 0.0;
                output.heatmap3 = heatmapInput3;
                output.heatmap3.r = 0.0;
                output.heatmap4 = heatmapInput4;
                output.heatmap4.r = 0.0;
                return output;
            }
            ENDCG
        }
    }
}

To apply this heatmap update, you would have a material use this shader and then use the material in the Graphics.Blit call. This isn't a shader I use in Collidalot, but it is easier to understand and is a good example of how to update multiple heatmaps at once using Graphics.Blit.

How to Use Heatmap Data in Shaders

I am finally done explaining how to create and update heatmaps. Now what? Pass them to shaders, sample them, and do whatever you want with the data. To pass them to a shader, declare them as properties like so:

Properties
{
    // my heatmaps passed in as textures
    _Heatmap_1("Heatmap 1", 2D) = "black" {}
    _Heatmap_2("Heatmap 2", 2D) = "black" {}
    _Heatmap_3("Heatmap 3", 2D) = "black" {}
    _Heatmap_4("Heatmap 4", 2D) = "black" {}
}

And give the shader the heatmap like this:

// pass in your current heatmaps
heatmapUpdateMaterial.SetTexture("_Heatmap_1", currHeatmapTex1);
heatmapUpdateMaterial.SetTexture("_Heatmap_2", currHeatmapTex2);
heatmapUpdateMaterial.SetTexture("_Heatmap_3", currHeatmapTex3);
heatmapUpdateMaterial.SetTexture("_Heatmap_4", currHeatmapTex4);

Then define samplers for them in the shader's CGPROGRAM:

sampler2D _Heatmap_1;
sampler2D _Heatmap_2;
sampler2D _Heatmap_3;
sampler2D _Heatmap_4;

Then prepare coordinates to sample the textures in the vertex shader like this:

struct vertInput
{
    float4 pos : POSITION;
};

struct vertOutput
{
    float4 pos : SV_POSITION;
    float4 wPos : TEXCOORD1;
};

vertOutput vert(vertInput input)
{
    vertOutput o;
    // save clip position
    o.pos = UnityObjectToClipPos(input.pos);
    // save world position
    o.wPos = mul(unity_ObjectToWorld, input.pos);
    return o;
}

And sample the heatmaps in the fragment shader:

half4 frag(vertOutput output) : COLOR
{
    // get the world position of the fragment
    float2 uv = output.wPos.xy;
    // Convert world position to heatmap sample position
    // I pass in the size of my map to determine this
    float2 samplePos = float2(uv.x / _Map_Size.x, uv.y / _Map_Size.y) + float2(0.5, 0.5);
    // sample from heatmap
    float4 heatMapVal = tex2D(_Heatmap_1, samplePos);
    // do stuff with heatmap data
    return heatMapVal;
}

After you sample the data, do whatever you want with it.

And with that, we're done! Hopefully, everything I have talked about in this post can help anyone else trying to implement heatmap data rendering. Heatmaps are an amazing tool for saving location-based information and can be used to benefit a wide variety of applications. In a future post, I will go into how I use heatmap data to change the visuals of the energy rails with shaders in Collidalot. This is my first post, so let me know if I could do anything better next time. If you have any questions about the content of the post, let me know =)