A tutorial on how to re-create the Apple Fifth Avenue Cube animation using WebGL.

In September 2019 Apple reopened the doors of its historic store on Fifth Avenue, and to celebrate the special event it created a landing page with a really neat animation of a cube made of glass. You can see the original animation in this video.

What caught my attention is the way they played with the famous glass cube to make the announcement.

As a Creative Technologist I constantly experiment and study the potential of web technologies, and I thought it might be interesting to try to replicate this using WebGL.

In this tutorial I’m going to explain step-by-step the techniques I used to recreate the animation.

You will need an intermediate level of knowledge of WebGL. I will omit some parts of the code for brevity and assume you already know how to set up a WebGL application. The techniques I’m going to show are translatable to any WebGL library / framework.

Since WebGL APIs are very verbose, I decided to go with Regl for my experiment:

Regl is a new functional abstraction for WebGL. Using Regl is easier than writing raw WebGL code because you don’t need to manage state or binding; it’s also lighter and faster and has less overhead than many existing 3d frameworks.

Drawing the cube

The first step is to create the program to draw the cube.

Since the shape we’re going to create is a prism made of glass, we must guarantee the following characteristics:

It must be transparent

The internal faces of the cube must reflect the internal content

The cube edges must distort the internal content

Front and back faces

In order to get what we want, at render time we’ll draw the shape in two passes:

In the first pass we’ll draw only the back faces with the internal reflection. In the second pass we’ll draw the front faces with the content after being masked and distorted at the edges.

Drawing the shape in two passes simply means calling the WebGL program twice, but with a different configuration each time. WebGL has the concept of front-facing and back-facing triangles, and this gives us the ability to decide what to draw by turning on the face culling feature.

With that feature turned on, WebGL defaults to “culling” back facing triangles. “Culling” in this case is a fancy word for “not drawing”. – WebGL Fundamentals

```javascript
// draw front faces
gl.enable(gl.CULL_FACE);
gl.cullFace(gl.BACK);

// draw back faces
gl.enable(gl.CULL_FACE);
gl.cullFace(gl.FRONT);
```

Now that we have gone through the setup of the program, let's start rendering the cube.

Coloured borders

What we want to obtain is a transparent shape with coloured borders. Starting from a flat white cube, we'll first add the rainbow color and then mask it with the borders:

First of all, let's create the GLSL function that returns the rainbow:

```glsl
const float PI2 = 6.28318530718;

vec4 radialRainbow(vec2 st, float tick) {
  vec2 toCenter = vec2(0.5) - st;
  float angle = mod((atan(toCenter.y, toCenter.x) / PI2) + 0.5 + sin(tick), 1.0);

  // colors
  vec4 a = vec4(0.15, 0.58, 0.96, 1.0);
  vec4 b = vec4(0.29, 1.00, 0.55, 1.0);
  vec4 c = vec4(1.00, 0.0, 0.85, 1.0);
  vec4 d = vec4(0.92, 0.20, 0.14, 1.0);
  vec4 e = vec4(1.00, 0.96, 0.32, 1.0);

  float step = 1.0 / 10.0;

  vec4 color = a;

  color = mix(color, b, smoothstep(step * 1.0, step * 2.0, angle));
  color = mix(color, a, smoothstep(step * 2.0, step * 3.0, angle));
  color = mix(color, b, smoothstep(step * 3.0, step * 4.0, angle));
  color = mix(color, c, smoothstep(step * 4.0, step * 5.0, angle));
  color = mix(color, d, smoothstep(step * 5.0, step * 6.0, angle));
  color = mix(color, c, smoothstep(step * 6.0, step * 7.0, angle));
  color = mix(color, d, smoothstep(step * 7.0, step * 8.0, angle));
  color = mix(color, e, smoothstep(step * 8.0, step * 9.0, angle));
  color = mix(color, a, smoothstep(step * 9.0, step * 10.0, angle));

  return color;
}

#pragma glslify: export(radialRainbow);
```

Glslify is a node.js-style module system that lets us split GLSL code into modules. https://github.com/glslify/glslify
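To build an intuition for the first line of `radialRainbow`, here is the angle computation mirrored in plain JavaScript – purely illustrative, this is not part of the WebGL program:

```javascript
const PI2 = Math.PI * 2;

// Mirrors the GLSL: mod((atan(toCenter.y, toCenter.x) / PI2) + 0.5 + sin(tick), 1.0)
// Maps every screen position to a value in [0, 1) that rotates over time.
function radialAngle(stX, stY, tick) {
  const toCenterX = 0.5 - stX;
  const toCenterY = 0.5 - stY;
  const raw = Math.atan2(toCenterY, toCenterX) / PI2 + 0.5 + Math.sin(tick);
  // GLSL mod() always returns a non-negative result for a positive modulus
  return ((raw % 1) + 1) % 1;
}
```

Every fragment therefore gets a value in [0, 1) based on its angle around the center of the canvas, which is what the chain of `smoothstep`/`mix` calls turns into colour bands.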

Before going ahead, let's talk a bit about gl_FragCoord.

Available only in the fragment language, gl_FragCoord is an input variable that contains the window canvas relative coordinate (x, y, z, 1/w) values for the fragment. – khronos.org

As you may have noticed, the function radialRainbow takes a parameter called st, whose values must be the pixel coordinates relative to the canvas and which, like UVs, range from 0 to 1. The variable st is the result of dividing gl_FragCoord by the resolution:

```glsl
/**
 * gl_FragCoord: pixel coordinates
 * u_resolution: the resolution of our canvas
 */
vec2 st = gl_FragCoord.xy / u_resolution;
```

The following image explains the difference between using UVs and st.

Once we’re able to render the radial gradient, let’s create the function to get the borders:

```glsl
float borders(vec2 uv, float strokeWidth) {
  vec2 borderBottomLeft = smoothstep(vec2(0.0), vec2(strokeWidth), uv);
  vec2 borderTopRight = smoothstep(vec2(0.0), vec2(strokeWidth), 1.0 - uv);

  return 1.0 - borderBottomLeft.x * borderBottomLeft.y * borderTopRight.x * borderTopRight.y;
}

#pragma glslify: export(borders);
```
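If it helps to reason about what this function returns, here is a line-for-line JavaScript mirror of the GLSL above (illustrative only; smoothstep is re-implemented by hand):

```javascript
// GLSL-style smoothstep for a single component
function smoothstep(edge0, edge1, x) {
  const t = Math.min(Math.max((x - edge0) / (edge1 - edge0), 0), 1);
  return t * t * (3 - 2 * t);
}

// Mirrors borders(): ~1 near any edge of the face, 0 in the interior
function borders(u, v, strokeWidth) {
  const blX = smoothstep(0, strokeWidth, u);
  const blY = smoothstep(0, strokeWidth, v);
  const trX = smoothstep(0, strokeWidth, 1 - u);
  const trY = smoothstep(0, strokeWidth, 1 - v);
  return 1 - blX * blY * trX * trY;
}
```

A UV in the middle of the face gives 0 (fully inside), a UV sitting exactly on an edge gives 1, and values in between fade smoothly over the stroke width.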

And then our final fragment shader:

```glsl
precision mediump float;

uniform vec2 u_resolution;
uniform float u_tick;

varying vec2 v_uv;
varying float v_depth;

#pragma glslify: borders = require(borders.glsl);
#pragma glslify: radialRainbow = require(radial-rainbow.glsl);

void main() {
  // screen coordinates
  vec2 st = gl_FragCoord.xy / u_resolution;

  vec4 bordersColor = radialRainbow(st, u_tick);

  // opacity factor based on the z value
  float depth = clamp(smoothstep(-1.0, 1.0, v_depth), 0.6, 0.9);

  bordersColor *= vec4(borders(v_uv, 0.011)) * depth;

  gl_FragColor = bordersColor;
}
```

Drawing the content

Please note that the Apple logo is a trademark of Apple Inc., registered in the U.S. and other countries. We are only using it here for demonstration purposes.

Now that we have the cube, it’s time to add the Apple logo and all texts.

If you notice, the content is not only rendered inside the cube, but also on the three back faces as a reflection – which means rendering it four times. To keep performance high, we'll draw it only once off-screen at render time and then reuse it in the various fragment shaders.

In WebGL we can do this thanks to FBOs:

The frame buffer object architecture (FBO) is an extension to OpenGL for doing flexible off-screen rendering, including rendering to a texture. By capturing images that would normally be drawn to the screen, it can be used to implement a large variety of image filters, and post-processing effects. – Wikipedia

In Regl it’s pretty simple to play with FBOs:

```javascript
...

// here we'll put the logo and the texts
const textures = [ ... ]

// we create the FBO
const contentFbo = regl.framebuffer()

// animate is executed at render time
const animate = ({ viewportWidth, viewportHeight }) => {
  contentFbo.resize(viewportWidth, viewportHeight)

  // we tell WebGL to render off-screen, inside the FBO
  contentFbo.use(() => {
    /**
     * – Content program
     * It'll run as many times as the textures number
     */
    content({
      textures
    })
  })

  /**
   * – Cube program
   * It'll run twice, once for the back faces and once for the front faces
   * Together with the front faces we'll render the content as well
   */
  cube([
    {
      pass: 1,
      cullFace: 'FRONT',
    },
    {
      pass: 2,
      cullFace: 'BACK',
      texture: contentFbo, // we pass the FBO as a normal texture
    },
  ])
}

regl.frame(animate)
```

And then update the cube fragment shader to render the content:

```glsl
precision mediump float;

uniform vec2 u_resolution;
uniform float u_tick;
uniform int u_pass;
uniform sampler2D u_texture;

varying vec2 v_uv;
varying float v_depth;

#pragma glslify: borders = require(borders.glsl);
#pragma glslify: radialRainbow = require(radial-rainbow.glsl);

void main() {
  // screen coordinates
  vec2 st = gl_FragCoord.xy / u_resolution;

  vec4 texture;
  vec4 bordersColor = radialRainbow(st, u_tick);

  // opacity factor based on the z value
  float depth = clamp(smoothstep(-1.0, 1.0, v_depth), 0.6, 0.9);

  bordersColor *= vec4(borders(v_uv, 0.011)) * depth;

  if (u_pass == 2) {
    texture = texture2D(u_texture, st);
  }

  gl_FragColor = texture + bordersColor;
}
```

Masking

In the Apple animation every cube face shows a different texture, which means we have to create a special mask that follows the cube rotation.

We'll render the information needed to mask the textures inside an FBO that we'll pass to the content program.

To each texture let’s associate a different maskId – every ID corresponds to a color that we’ll use as test-data:

```javascript
const textures = [
  {
    texture: logoTexture,
    maskId: 1,
  },
  {
    texture: logoTexture,
    maskId: 2,
  },
  {
    texture: logoTexture,
    maskId: 3,
  },
  {
    texture: text1Texture,
    maskId: 4,
  },
  {
    texture: text2Texture,
    maskId: 5,
  },
]
```

To make each maskId correspond to a colour, we just have to convert it to binary and then read it as RGB:

MaskID 1 => [0, 0, 1] => Blue

MaskID 2 => [0, 1, 0] => Lime

MaskID 3 => [0, 1, 1] => Cyan

MaskID 4 => [1, 0, 0] => Red

MaskID 5 => [1, 0, 1] => Magenta
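This conversion is easy to sanity-check in plain JavaScript – the decode function mirrors the test we'll perform later in the content fragment shader (both helpers are illustrative, not part of the tutorial's source):

```javascript
// maskId (1–5) -> [r, g, b], each channel holding one binary digit
function maskIdToColor(maskId) {
  return [(maskId >> 2) & 1, (maskId >> 1) & 1, maskId & 1];
}

// Inverse, mirroring the GLSL: int(mask.r * 4.0 + mask.g * 2.0 + mask.b * 1.0)
function colorToMaskId([r, g, b]) {
  return r * 4 + g * 2 + b;
}
```

Because each channel is either 0 or 1, the round trip is exact and five IDs are more than enough for the three bits available.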

The mask will be nothing but our cube with the faces filled with one of the colours shown above – obviously in this case we just need to draw the front faces:

```javascript
...

maskFbo.use(() => {
  cubeMask([
    {
      cullFace: 'BACK',
      colorFaces: [
        [0, 1, 1], // front face => mask 3
        [0, 0, 1], // right face => mask 1
        [0, 1, 0], // back face => mask 2
        [0, 1, 1], // left face => mask 3
        [1, 0, 0], // top face => mask 4
        [1, 0, 1], // bottom face => mask 5
      ]
    },
  ])
});

contentFbo.use(() => {
  content({
    textures,
    mask: maskFbo
  })
})

...
```

Our mask will look like this:

Now that we have the mask available inside the fragment of the content program, let’s write down the test:

```glsl
precision mediump float;

uniform vec2 u_resolution;
uniform sampler2D u_texture;
uniform int u_maskId;
uniform sampler2D u_mask;

varying vec2 v_uv;

void main() {
  vec2 st = gl_FragCoord.xy / u_resolution;

  vec4 texture = texture2D(u_texture, v_uv);
  vec4 mask = texture2D(u_mask, st);

  // convert the mask color from binary (rgb) to decimal
  int maskId = int(mask.r * 4.0 + mask.g * 2.0 + mask.b * 1.0);

  // if the test passes then draw the texture
  if (maskId == u_maskId) {
    gl_FragColor = texture;
  } else {
    discard;
  }
}
```

Distortion

The distortion at the edges is the characteristic that gives the feeling of a glass material.

The effect is achieved by simply shifting the pixels near the edges towards the center of each face – the following video shows how it works:

For each pixel to move we need two pieces of information:

How much to move the pixel

The direction in which we want to move the pixel

These two pieces of information are contained inside the Displacement Map which, as before for the mask, we’ll store in an FBO that we’ll pass to the content program:

```javascript
...

displacementFbo.use(() => {
  cubeDisplacement([
    {
      cullFace: 'BACK'
    },
  ])
});

contentFbo.use(() => {
  content({
    textures,
    mask: maskFbo,
    displacement: displacementFbo
  })
})

...
```

The displacement map we’re going to draw will look like this:

Let’s see in detail how it’s made.

The green channel is the length, that is, how much to move the pixel – the greener the pixel, the greater the displacement. Since the distortion must be present only at the edges, we just have to draw a green frame on each face.

To get the green frame we just have to reuse the borders function and put the result on the gl_FragColor green channel:

```glsl
precision mediump float;

varying vec2 v_uv;

#pragma glslify: borders = require(borders.glsl);

void main() {
  // Green channel – how much to move the pixel
  float length = borders(v_uv, 0.028) + borders(v_uv, 0.06) * 0.3;

  gl_FragColor = vec4(0.0, length, 0.0, 1.0);
}
```

The red channel is the direction, whose value is the angle encoded in the 0–1 range. Finding this value is trickier because we need the position of each point relative to the world – since our cube rotates, the UVs follow it and therefore we lose any fixed reference. In order to compute the position of every pixel in relation to the center, we need two varying variables from the vertex shader:

v_point: the world position of the current pixel.

v_center: the world position of the center of the face.

The vertex shader:

```glsl
precision mediump float;

attribute vec3 a_position;
attribute vec3 a_center;
attribute vec2 a_uv;

uniform mat4 u_projection;
uniform mat4 u_view;
uniform mat4 u_world;

varying vec3 v_center;
varying vec3 v_point;
varying vec2 v_uv;

void main() {
  vec4 position = u_projection * u_view * u_world * vec4(a_position, 1.0);
  vec4 center = u_projection * u_view * u_world * vec4(a_center, 1.0);

  v_point = position.xyz;
  v_center = center.xyz;
  v_uv = a_uv;

  gl_Position = position;
}
```
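How a_center gets filled depends on your geometry setup; one possible approach (faceCenters is a hypothetical helper, not taken from the tutorial's source) is to compute each face's centroid and repeat it for every vertex of that face, so it can be uploaded as a per-vertex attribute:

```javascript
// Given one face as an array of [x, y, z] vertices, repeat its centroid
// once per vertex so it can be uploaded as the a_center attribute.
function faceCenters(faceVertices) {
  const center = [0, 0, 0];
  for (const [x, y, z] of faceVertices) {
    center[0] += x / faceVertices.length;
    center[1] += y / faceVertices.length;
    center[2] += z / faceVertices.length;
  }
  return faceVertices.map(() => [...center]);
}
```

Every vertex of a face then carries the same center, so after the matrix multiplications in the vertex shader, each fragment can compare its own interpolated position against its face's center.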

At this point, in the fragment shader, we just have to find the distance from the center, calculate the relative angle and put the result on the gl_FragColor red channel – here is the updated shader:

```glsl
precision mediump float;

varying vec3 v_center;
varying vec3 v_point;
varying vec2 v_uv;

const float PI2 = 6.283185307179586;

#pragma glslify: borders = require(borders.glsl);

void main() {
  // Red channel – which direction to move the pixel
  vec2 toCenter = v_center.xy - v_point.xy;
  float direction = (atan(toCenter.y, toCenter.x) / PI2) + 0.5;

  // Green channel – how much to move the pixel
  float length = borders(v_uv, 0.028) + borders(v_uv, 0.06) * 0.3;

  gl_FragColor = vec4(direction, length, 0.0, 1.0);
}
```

Now that we have our displacement map, let’s update the content fragment shader:

```glsl
precision mediump float;

uniform vec2 u_resolution;
uniform sampler2D u_texture;
uniform int u_maskId;
uniform sampler2D u_mask;
uniform sampler2D u_displacement;

varying vec2 v_uv;

const float PI2 = 6.283185307179586;

void main() {
  vec2 st = gl_FragCoord.xy / u_resolution;

  vec4 displacement = texture2D(u_displacement, st);

  // get the direction by taking the displacement red channel and converting it into a vector2
  vec2 direction = vec2(cos(displacement.r * PI2), sin(displacement.r * PI2));

  // get the length by taking the displacement green channel
  float length = displacement.g;

  vec2 newUv = v_uv;

  // calculate the new uvs
  newUv.x += (length * 0.07) * direction.x;
  newUv.y += (length * 0.07) * direction.y;

  vec4 texture = texture2D(u_texture, newUv);
  vec4 mask = texture2D(u_mask, st);

  // convert the mask color from binary (rgb) to decimal
  int maskId = int(mask.r * 4.0 + mask.g * 2.0 + mask.b * 1.0);

  // if the test passes then draw the texture
  if (maskId == u_maskId) {
    gl_FragColor = texture;
  } else {
    discard;
  }
}
```
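In plain JavaScript, the decode step of the shader above amounts to this (an illustrative mirror; displaceUv is a hypothetical helper, not part of the WebGL program):

```javascript
const PI2 = Math.PI * 2;

// displacement.r encodes the angle (0..1 -> 0..2π), displacement.g the strength.
// Returns the new UV after shifting by at most `amplitude` in that direction.
function displaceUv(u, v, dispR, dispG, amplitude = 0.07) {
  const dirX = Math.cos(dispR * PI2);
  const dirY = Math.sin(dispR * PI2);
  return [u + dispG * amplitude * dirX, v + dispG * amplitude * dirY];
}
```

Where the displacement map is black (length 0) the UVs are untouched, so the distortion only appears inside the green frames drawn at the edges.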

Reflection

Since reflection is quite a complex topic, I’ll just give you a quick introduction on how it works so that you can more easily understand the source I shared.

Before continuing, it's necessary to understand the concept of a camera in WebGL. The camera is nothing but the combination of two matrices: the view matrix and the projection matrix.

The projection matrix is used to convert world space coordinates into clip space coordinates. A commonly used projection matrix, the perspective matrix, is used to mimic the effects of a typical camera serving as the stand-in for the viewer in the 3D virtual world. The view matrix is responsible for moving the objects in the scene to simulate the position of the camera being changed. – developer.mozilla.org
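As a concrete, simplified illustration, this is roughly how a perspective projection matrix is built by hand – in a real project a library such as gl-matrix would provide it:

```javascript
// Column-major 4x4 perspective matrix, the layout WebGL expects.
// fovY: vertical field of view in radians; aspect: width / height.
function perspective(fovY, aspect, near, far) {
  const f = 1 / Math.tan(fovY / 2);
  const nf = 1 / (near - far);
  return [
    f / aspect, 0, 0, 0,
    0, f, 0, 0,
    0, 0, (far + near) * nf, -1,
    0, 0, 2 * far * near * nf, 0,
  ];
}
```

Multiplying a view-space position by this matrix produces the clip-space coordinates mentioned in the quote; the -1 in the third column is what makes the GPU divide by depth, creating the perspective effect.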

I suggest that you also get familiar with these concepts before we dig deeper.

In a 3D environment, reflections are obtained by creating a camera for each reflective surface and placing it accordingly based on the position of the viewer – that is the eye of the main camera.

In our case, every face of the cube is a reflective surface, which means we need 6 different cameras whose positions depend on the viewer and the cube rotation.

WebGL Cubemaps

Each camera generates a texture for one of the inner faces of the cube. Instead of creating a separate framebuffer for each face, we can use the cube mapping technique.

Another kind of texture is a cubemap. It consists of 6 textures representing the 6 faces of a cube. Instead of the traditional texture coordinates that have 2 dimensions, a cubemap uses a normal, in other words a 3D direction. Depending on the direction the normal points one of the 6 faces of the cube is selected and then within that face the pixels are sampled to produce a color. – WebGL Fundamentals

So we just have to store what the six cameras "see" in the right cell – this is how our cubemap will look:
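To get an intuition for how textureCube later selects a face from a direction vector, here is the selection logic sketched in JavaScript – a simplification of what the GPU actually does:

```javascript
// Pick the cubemap face a direction vector points at:
// the face of the largest absolute component, with its sign.
function cubemapFace(x, y, z) {
  const ax = Math.abs(x), ay = Math.abs(y), az = Math.abs(z);
  if (ax >= ay && ax >= az) return x >= 0 ? '+X' : '-X';
  if (ay >= az) return y >= 0 ? '+Y' : '-Y';
  return z >= 0 ? '+Z' : '-Z';
}
```

Because the cube's face normals point straight along one axis, each interpolated normal lands cleanly on the cubemap cell that the matching camera rendered into.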

Let’s update our animate function by adding the reflection:

```javascript
...

// this is a normal FBO
const contentFbo = regl.framebuffer()

// this is a cube FBO, which means it's composed of 6 textures
const reflectionFbo = regl.framebufferCube(1024)

// animate is executed at render time
const animate = ({ viewportWidth, viewportHeight }) => {
  contentFbo.resize(viewportWidth, viewportHeight)

  contentFbo.use(() => {
    ...
  })

  /**
   * – Reflection program
   * we'll iterate 6 times over the reflectionFbo and draw inside it
   * the result of each camera
   */
  reflection({
    reflectionFbo,
    cameraConfig,
    texture: contentFbo
  })

  /**
   * – Cube program
   * with the back faces we'll render the reflection as well
   */
  cube([
    {
      pass: 1,
      cullFace: 'FRONT',
      reflection: reflectionFbo,
    },
    {
      pass: 2,
      cullFace: 'BACK',
      texture: contentFbo,
    },
  ])
}

regl.frame(animate)
```

And then update the cube fragment shader.

In the fragment shader we need to use a samplerCube instead of a sampler2D and use textureCube instead of texture2D. textureCube takes a vec3 direction so we pass the normalized normal. Since the normal is a varying and will be interpolated we need to normalize it. – WebGL Fundamentals

```glsl
precision mediump float;

uniform vec2 u_resolution;
uniform float u_tick;
uniform int u_pass;
uniform sampler2D u_texture;
uniform samplerCube u_reflection;

varying vec2 v_uv;
varying float v_depth;
varying vec3 v_normal;

#pragma glslify: borders = require(borders.glsl);
#pragma glslify: radialRainbow = require(radial-rainbow.glsl);

void main() {
  // screen coordinates
  vec2 st = gl_FragCoord.xy / u_resolution;

  vec4 texture;
  vec4 bordersColor = radialRainbow(st, u_tick);

  // opacity factor based on the z value
  float depth = clamp(smoothstep(-1.0, 1.0, v_depth), 0.6, 0.9);

  bordersColor *= vec4(borders(v_uv, 0.011)) * depth;

  // if u_pass is 1, we're drawing the back faces
  if (u_pass == 1) {
    vec3 normal = normalize(v_normal);
    texture = textureCube(u_reflection, normal);
  }

  // if u_pass is 2, we're drawing the front faces
  if (u_pass == 2) {
    texture = texture2D(u_texture, st);
  }

  gl_FragColor = texture + bordersColor;
}
```

Conclusion

This article should give you a general idea of the techniques I used to replicate the Apple animation. If you want to learn more, I suggest you download the source and have a look at how it works. If you have any questions, feel free to ask me on Twitter (@lorenzocadamuro); hope you have enjoyed it!