This is the fourth tutorial in a series about creating the appearance of flowing materials. In it, we will make a water surface transparent, adding underwater fog and refraction.

This tutorial is made with Unity 2017.4.4f1.

Transparency

The water effects that we have created thus far are fully opaque. This works for water or other liquids that are very murky, or are covered with a layer of debris, foam, plants, or something else that blocks light. But clear water is transparent, which requires a transparent shader. So we're going to adjust our surface shaders to work with transparency. We're only going to concern ourselves with looking into the water; an underwater camera requires a different approach.

First, create some underwater scenery so that there is something interesting below the water surface. I have created a deep pit with some objects that suggest plant growth, both deep below and at the surface. I also added two spheres that float on the water. To brighten the bottom part of the pit, I added an intense spotlight that shines from above the water. Both this light and the main directional light have shadows enabled.

We're going to work with the Distortion Flow effect, so add a quad with that material to the scene, representing the water surface. It is still fully opaque, so it will hide everything that is underwater.

To make the Distortion Flow shader support transparency, change its RenderType tag to Transparent and give it a Queue tag set to Transparent as well. That makes it work with any replacement shaders that you might have, and moves the shader to the transparent rendering queue, so it is drawn after all opaque geometry has been rendered.

We also have to instruct Unity to generate transparent shaders from our surface shader code, which is done by adding the alpha keyword to the surface pragma directive.

Because we're using the standard physically-based lighting function, our shader will use Unity's transparent rendering mode by default, which keeps highlights and reflections on top of its otherwise transparent surface. The alternative would be the fade mode, which fades out everything equally, which is not realistic.

The water no longer receives shadows, even when its alpha is set back to 1. That's because it is now put in the transparent rendering queue. Because of the way that these objects are rendered, they cannot receive shadows. While you could somewhat work around this limitation, that's not possible with a simple surface shader.

However, our water does still cast shadows, which removes all the direct lighting underwater. We don't want this, because it makes the underwater scenery too dark. First, we can remove the fullforwardshadows keyword, because we no longer need to support any shadow type.

That does not yet remove the shadows of the main directional light. Those are still added by the default diffuse shadow caster pass, which we've inherited from the diffuse fallback shader. To eliminate the shadows, remove the fallback.

Underwater Fog

Water isn't perfectly transparent. It absorbs part of the light that travels through it, and scatters some of it as well. This happens in any medium, but it is more noticeable in water than in air. Clear water absorbs a little bit of light, but different frequencies are absorbed at different rates. Blue light is absorbed the least, which is why things turn blue the deeper you go. This isn't the same as a partially-transparent water surface, because that doesn't change the underwater color based on depth.

Half-transparent water, no depth-based color change.
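As an aside, the depth-dependent color shift just described follows the Beer-Lambert law: intensity decays exponentially with the distance light travels through the medium, at a different rate per wavelength. Here is a minimal Python sketch of that idea; the per-channel absorption coefficients are invented for illustration, not measured values for water.

```python
import math

# Illustrative (not measured) absorption coefficients per meter of
# travel for red, green, and blue light. Red is absorbed fastest and
# blue slowest, which is why scenes turn blue the deeper you look.
ABSORPTION = {"r": 0.30, "g": 0.07, "b": 0.02}

def transmitted(color, depth):
    """Beer-Lambert attenuation: each channel's intensity decays
    exponentially with the distance traveled through the medium."""
    return {
        channel: intensity * math.exp(-ABSORPTION[channel] * depth)
        for channel, intensity in color.items()
    }

# White light after traveling ten meters through the medium: the blue
# channel dominates, so everything takes on a blue tint.
deep = transmitted({"r": 1.0, "g": 1.0, "b": 1.0}, 10.0)
```

Note that a fixed surface alpha cannot reproduce this, because the attenuation depends on depth, not on a constant blend factor.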

The underwater light absorption and scattering behave somewhat like fog. Although a fog effect is a poor approximation of what really goes on, it is a cheap and easy-to-control way of having underwater depth affect the color of what we see. So we'll use the same approach as described in Rendering 18, Fog, except only underwater.

There are two ways that we could add underwater fog to our scene. The first is to use a global fog and apply it to everything that gets rendered before the water surface. This can work fine when you have a single uniform water level. The other approach is to apply the fog while rendering a water surface. That makes the fog specific to each surface, which allows water at different levels—and even at different orientations—without affecting anything that's not underwater. We'll use the second approach.

Finding the Depth

Because we're going to change the color of whatever is below the water surface, we can no longer rely on the default transparent blending of the standard shader. When rendering a fragment of a water surface, we have to somehow determine what the final color behind the water surface should be. Let's create a ColorBelowWater function for that, and put it in a separate LookingThroughWater.cginc include file. Initially, it just returns black.

	#if !defined(LOOKING_THROUGH_WATER_INCLUDED)
	#define LOOKING_THROUGH_WATER_INCLUDED

	float3 ColorBelowWater () {
		return 0;
	}

	#endif

To test this color, we'll directly use it for our water's albedo, temporarily overriding its true surface albedo. Also set alpha to 1, so we're not distracted by regular transparency.

	#include "Flow.cginc"
	#include "LookingThroughWater.cginc"

	…

	void surf (Input IN, inout SurfaceOutputStandard o) {
		…
		fixed4 c = (texA + texB) * _Color;
		o.Albedo = c.rgb;
		o.Metallic = _Metallic;
		o.Smoothness = _Glossiness;
		o.Alpha = c.a;

		o.Albedo = ColorBelowWater();
		o.Alpha = 1;
	}

Black underwater color.

To figure out how far light has traveled underwater, we have to know how far away whatever lies below the water is. Because the water is transparent, it doesn't write to the depth buffer. All opaque objects have already been rendered, so the depth buffer contains the information that we need. Unity makes the depth buffer globally available via the _CameraDepthTexture variable, so add it to our LookingThroughWater include file.

	sampler2D _CameraDepthTexture;

Is _CameraDepthTexture always available? It only contains depth information if Unity decides to render a depth pass. This is always the case when deferred rendering is used. A depth pass is also used in forward rendering when the main directional light is rendered with screen-space shadow cascades, which is usually the case. Otherwise, you'll have to set the depth texture mode of the camera via a script.
To sample the depth texture, we need the screen-space coordinates of the current fragment. We can retrieve those by adding a float4 screenPos field to our surface shader's input structure, then passing it to ColorBelowWater.

	struct Input {
		float2 uv_MainTex;
		float4 screenPos;
	};

	…

	void surf (Input IN, inout SurfaceOutputStandard o) {
		…
		o.Albedo = ColorBelowWater(IN.screenPos);
		o.Alpha = 1;
	}

The screen position is simply the clip-space position, with the range of its XY components changed from −1–1 to 0–1. Besides that, the orientation of the Y component might be flipped, depending on the target platform. It is a four-component vector because we're dealing with homogeneous coordinates. As explained in Rendering 7, Shadows, we have to divide XY by W to get the final depth texture coordinates. Do this in ColorBelowWater.

	float3 ColorBelowWater (float4 screenPos) {
		float2 uv = screenPos.xy / screenPos.w;
		return 0;
	}

Now we can sample the background depth via the SAMPLE_DEPTH_TEXTURE macro, then convert the raw value to linear depth via the LinearEyeDepth function.

		float2 uv = screenPos.xy / screenPos.w;
		float backgroundDepth =
			LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv));

This is the depth relative to the screen, not the water surface, so we also need to know the distance between the water and the screen. We find it by taking the Z component of screenPos, which is the interpolated clip-space depth, and converting it to linear depth via the UNITY_Z_0_FAR_FROM_CLIPSPACE macro.

		float backgroundDepth =
			LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, uv));
		float surfaceDepth = UNITY_Z_0_FAR_FROM_CLIPSPACE(screenPos.z);

The underwater depth is found by subtracting the surface depth from the background depth. Let's use that as our final color to see whether it is correct, scaled down so at least part of the gradient is visible.
		float surfaceDepth = UNITY_Z_0_FAR_FROM_CLIPSPACE(screenPos.z);
		float depthDifference = backgroundDepth - surfaceDepth;
		return depthDifference / 20;

Depth difference, spotlight off.

You could get an upside-down result at this point. To guard against that, check whether the texel size of the camera depth texture is negative in the V dimension; if so, invert the V coordinate. We only have to check this on platforms that work with top-to-bottom coordinates. In those cases, UNITY_UV_STARTS_AT_TOP is defined as 1.

	sampler2D _CameraDepthTexture;
	float4 _CameraDepthTexture_TexelSize;

	float3 ColorBelowWater (float4 screenPos) {
		float2 uv = screenPos.xy / screenPos.w;
		#if UNITY_UV_STARTS_AT_TOP
			if (_CameraDepthTexture_TexelSize.y < 0) {
				uv.y = 1 - uv.y;
			}
		#endif
		…
	}
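For reference, the depth math can be sketched outside the shader. The Python below assumes a conventional (non-reversed) depth buffer and mirrors the _ZBufferParams layout documented for UnityCG.cginc (x = 1 − far/near, y = far/near, z = x/far, w = y/far); the function names are mine, not Unity's.

```python
def zbuffer_params(near, far):
    # Unity's _ZBufferParams for a conventional (non-reversed) depth
    # buffer: x = 1 - far/near, y = far/near, z = x/far, w = y/far.
    x = 1.0 - far / near
    y = far / near
    return x, y, x / far, y / far

def linear_eye_depth(raw_depth, near, far):
    # Mirrors LinearEyeDepth: converts a raw [0, 1] depth-buffer value
    # into view-space distance from the camera.
    _, _, z, w = zbuffer_params(near, far)
    return 1.0 / (z * raw_depth + w)

def underwater_depth(background_raw_depth, surface_eye_depth, near, far):
    # Distance the light traveled through the water: eye-space depth of
    # the opaque background minus eye-space depth of the water surface.
    background = linear_eye_depth(background_raw_depth, near, far)
    return background - surface_eye_depth
```

A raw depth of 0 maps back to the near plane and 1 to the far plane, so subtracting the water surface's own eye depth leaves exactly the underwater distance, which the shader then divides by 20 to make the gradient visible.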

Grabbing the Background

To adjust the color of the background, we have to retrieve it somehow. The only way that's possible with a surface shader is by adding a grab pass. This is done by adding GrabPass {} before the CGPROGRAM block in our shader.

	SubShader {
		Tags { "RenderType"="Transparent" "Queue"="Transparent" }
		LOD 200

		GrabPass {}

		CGPROGRAM
		…
		ENDCG
	}

Unity will now add an extra step in the rendering pipeline. Just before the water gets drawn, what's rendered up to this point gets copied to a grab-pass texture. This happens each time something that uses our water shader gets rendered. We can reduce this to a single extra draw by giving the grabbed texture an explicit name, which is done by putting a string with the texture's name inside the otherwise empty block of the grab pass. Then all water surfaces will use the same texture, which gets grabbed right before the first water gets drawn. Let's name the texture _WaterBackground.

		GrabPass { "_WaterBackground" }

Add a variable for this texture, then sample it using the same UV coordinates that we used to sample the depth texture. Using that as the result of ColorBelowWater should produce the same image as the fully-transparent water earlier.

	sampler2D _CameraDepthTexture, _WaterBackground;

	float3 ColorBelowWater (float4 screenPos) {
		…
		float3 backgroundColor = tex2D(_WaterBackground, uv).rgb;
		return backgroundColor;
	}

Grabbed background.

Shouldn't we use ComputeGrabScreenPos? The rules for V coordinate orientation should be the same for both the depth texture and the grabbed texture. ComputeGrabScreenPos flips it based on UNITY_UV_STARTS_AT_TOP, which we also check. If this doesn't work, let me know.

Applying Fog

Besides the depth and the original color, we also need settings to control the fog. We'll use simple exponential fog, so we need to add a color and a density property to our shader.

	Properties {
		…
		_WaterFogColor ("Water Fog Color", Color) = (0, 0, 0, 0)
		_WaterFogDensity ("Water Fog Density", Range(0, 2)) = 0.1
		_Glossiness ("Smoothness", Range(0,1)) = 0.5
		_Metallic ("Metallic", Range(0,1)) = 0.0
	}

I made the fog color the same as the water's albedo, which has hex code 4E83A9FF. I set the density to 0.15.

Fog settings.

Add the corresponding variables to the include file, then use them to compute the fog factor and interpolate the color.

	float3 _WaterFogColor;
	float _WaterFogDensity;

	float3 ColorBelowWater (float4 screenPos) {
		…
		float3 backgroundColor = tex2D(_WaterBackground, uv).rgb;
		float fogFactor = exp2(-_WaterFogDensity * depthDifference);
		return lerp(_WaterFogColor, backgroundColor, fogFactor);
	}

Underwater fog.
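The exponential blend at the heart of ColorBelowWater can be sketched as a minimal Python illustration, with lerp and exp2 written out to mirror their Cg counterparts; the helper names are mine, not part of the shader.

```python
def lerp(a, b, t):
    # Cg-style linear interpolation: t = 0 yields a, t = 1 yields b.
    return a + (b - a) * t

def underwater_color(background, fog_color, density, depth_difference):
    # Exponential fog: the surviving fraction of background light halves
    # for every 1/density units of water it travels through (exp2).
    fog_factor = 2.0 ** (-density * depth_difference)
    return tuple(
        lerp(f, b, fog_factor) for f, b in zip(fog_color, background)
    )
```

At zero depth difference the fog factor is 1 and the background shows through unchanged; as the depth difference grows, the factor approaches 0 and the result converges on the fog color.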