Welcome back to this third and final installment in our WebGL Essentials mini-series. In this lesson, we'll take a look at lighting and adding 2D objects to your scene. There's a lot of new information here, so let's dive straight in!

Light

Lighting can be the most technical and difficult aspect of a 3D application to understand. A firm grasp of lighting is absolutely essential.

How Does Light Work?

Before we get into the different kinds of light and code techniques, it's important to know how light works in the real world. Every light source (e.g., a light bulb, the sun, etc.) generates particles called photons. These photons bounce around objects until they eventually enter our eyes, which convert them into a visual "picture". This is how we see. Light is also additive, meaning that an object with more color is brighter than an object with no color (black). Black is the complete absence of color, whereas white contains all colors. This is an important distinction when working with very bright or "over-saturating" lights.

Brightness is just one property, and it can vary continuously. Reflection, for example, comes in many degrees: an object like a mirror can be completely reflective, while another can have a matte surface. Transparency determines how objects bend light and cause refraction; one object can be completely transparent while another is opaque (or anywhere in between).

The list continues, but I think you can already see that light is not simple.

If you wanted even a small scene to simulate real light, it would run at something like 4 frames an hour, and that's on a high-powered computer. To get around this problem, programmers use tricks and techniques to simulate semi-realistic lighting at a reasonable frame rate. You have to come up with some form of compromise between realism and speed. Let's take a look at a few of these techniques.

Before I start elaborating on different techniques, I would like to give you a small disclaimer. There is a lot of controversy on the exact names for the different lighting techniques, and different people will give you different explanations on what "Ray Casting" or "Light Mapping" is. So before I start getting the hate mail, I would like to say that I am going to use the names that I learned; some people might not agree on my exact titles. In any case, the important thing to know is what the different techniques are. So without further ado, let's get started.


Ray Tracing

Ray tracing is one of the more realistic lighting techniques, but it is also one of the more costly. Ray tracing emulates real light; it emits "photons" or "rays" from the light source and bounces them around. In most ray tracing implementations, the rays come from the "camera" and bounce onto the scene in the opposite direction. This technique is usually used in films or scenes that can be rendered ahead of time. This is not to say that you can't use ray tracing in a real-time application, but doing so forces you to tone down other things in the scene. For example, you might have to reduce the amount of "bounces" the rays should perform, or you can make sure there are no objects that have reflective or refractive surfaces. Ray tracing can also be a viable option if your application has very few lights and objects.

If you have a real-time application, you may be able to precompile parts of your scene.

If the lights in your application don't move around or only move around in a small area at a time, you can precompile the lighting with a very advanced ray tracing algorithm and recalculate a small area around the moving light source. For example, if you are making a game where the lights don't move around, you can precompile the world with all the desired lights and effects. Then, you can just add a shadow around your character when he moves. This produces a very high quality look with a minimal amount of processing.

Ray Casting

Ray casting is very similar to ray tracing, but the "photons" don't bounce off objects or interact with different materials. In a typical application, you would basically start off with a dark scene, and then you would draw lines from the light source. Anything the light hits is lit; everything else stays dark. This technique is significantly faster than ray tracing while still giving you a realistic shadow effect. But the problem with ray casting is its restrictiveness; you don't have a lot of room to work with when trying to add effects like reflections. Usually, you have to come up with some kind of compromise between ray casting and ray tracing, balancing between speed and visual effects.

The major problem with both of these techniques is that WebGL does not give you access to any vertices except the currently active one.

This means you either have to perform everything on the CPU (as opposed to the graphics card), or you have to make a second shader that calculates all the lighting and stores the information in a fake texture. You would then need to decompress the texture data back into lighting information and map it to the vertices. So, basically, the current version of WebGL is not very well suited for this. I'm not saying it can't be done; I'm just saying WebGL won't help you.

Shadow Mapping


A much better alternative to ray casting in WebGL is called shadow mapping. It gives you the same effect as ray casting, but it uses a different approach. Shadow mapping will not solve all your problems, but WebGL is semi-optimized for it. You can think of it as kind of a hack, but shadow mapping is used in real PC and console applications.

So what is it, you ask?

To answer this question, you need to understand how WebGL renders its scenes. WebGL pushes all the vertices into the vertex shader, which calculates the final coordinates for each vertex after the transformations are applied. Then, to save time, WebGL discards the vertices that are hidden behind other objects and only draws the essential objects. If you remember how ray casting works, it just casts light rays onto the visible objects. So we set the "camera" of our scene to the light source's coordinates and point it in the direction we want the light to face. Then, WebGL automatically removes all the vertices that are not in view of the light. We can then save this data and use it when we render the scene to know which of the vertices are lit.

This technique sounds good on paper but it has a few downsides:

WebGL doesn't allow you to access the depth buffer; you need to be creative in the fragment shader when trying to save this data.

Even if you save all the data, you still have to map it to the vertices before they go into the vertex array when you render your scene. This requires extra CPU time.

All these techniques require a fair amount of tinkering with WebGL. But I will show you a very basic technique for producing a diffuse light to give a little personality to your objects. I wouldn't call it realistic light, but it does give your objects definition. This technique uses the object's normals matrix to calculate the angle of the light compared to the object's surface. It is quick, efficient, and doesn't require any hacking with WebGL. Let's get started.

Adding Light

Let's start by updating the shaders to incorporate lighting. We need to add a boolean that determines whether or not the object should be lit. Then, we need to take in each vertex's normal and transform it so that it stays aligned with the model. Finally, we need a variable to pass the final result to the fragment shader. This is the new vertex shader:
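As a sketch (the attribute, uniform, and varying names used here, like VertexNormal, UseLights, and NormalTransformation, are assumptions and may differ from the series' exact code), the new vertex shader could look something like this:

```glsl
attribute highp vec3 VertexPosition;
attribute highp vec2 TextureCoord;
attribute highp vec3 VertexNormal;   // new: the per-vertex normal

uniform highp mat4 TransformationMatrix;
uniform highp mat4 PerspectiveMatrix;
uniform highp mat4 NormalTransformation; // transpose of the inverse model matrix
uniform bool UseLights;                  // new: whether to light this object

varying highp vec2 vTextureCoord;
varying highp vec3 vLightLevel;          // new: the result for the fragment shader

void main(void) {
    gl_Position = PerspectiveMatrix * TransformationMatrix * vec4(VertexPosition, 1.0);
    vTextureCoord = TextureCoord;

    if (UseLights) {
        // hard-coded light color and direction; tweak these for other effects
        highp vec3 LightColor = vec3(0.9, 0.9, 0.9);
        highp vec3 LightDirection = vec3(0.5, 0.5, 4.0);
        // transform the normal so it stays aligned with the model
        highp vec4 Normal = NormalTransformation * vec4(VertexNormal, 1.0);
        // angle between light and surface, clamped at zero
        highp float FinalDirection =
            max(dot(normalize(Normal.xyz), normalize(LightDirection)), 0.0);
        vLightLevel = FinalDirection * LightColor;
    } else {
        // no lighting: a full-strength level leaves the color unchanged
        vLightLevel = vec3(1.0, 1.0, 1.0);
    }
}
```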

If we do not use lights, then we just pass a full-strength light level to the fragment shader and the object's color stays the same. When lights are turned on, we calculate the angle between the light's direction and the object's surface using the dot function on the normal, and we multiply the result by the light's color as a sort of mask to overlay onto the object.

Picture of surface normals by Oleg Alexandrov.

This works because the normals are already perpendicular to the object's surface, and the dot function gives us a number based on the angle between the light and the normal. If the normal and the light are almost parallel, the dot function returns a positive number, meaning the light is facing the surface. When the normal and the light are perpendicular, the surface is parallel to the light, and the function returns zero. Anything greater than 90 degrees between the light and the normal produces a negative number, but we clamp this to zero with the max function.
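The same clamped dot product is easy to check in plain JavaScript. This is just an illustration (lightLevel is a hypothetical helper, not part of the framework), and both vectors are assumed to already be normalized:

```javascript
// Returns the diffuse light level for a surface normal and a light
// direction: the dot product, clamped so angles past 90 degrees go dark.
function lightLevel(normal, light) {
  var dot = normal[0] * light[0] + normal[1] * light[1] + normal[2] * light[2];
  return Math.max(dot, 0.0); // negative values (light behind the surface) become 0
}
```

A normal pointing straight at the light gives 1 (full brightness), a perpendicular normal gives 0, and anything facing away is clamped to 0.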

Now let me show you the fragment shader:
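Here is a sketch of what it might look like, with the same caveat that the variable names are assumptions:

```glsl
varying highp vec2 vTextureCoord;
varying highp vec3 vLightLevel; // the light level computed in the vertex shader

uniform sampler2D uSampler;

void main(void) {
    highp vec4 texelColor = texture2D(uSampler, vTextureCoord);
    // multiply the texture's color by the light level to brighten or darken it
    gl_FragColor = vec4(texelColor.rgb * vLightLevel, texelColor.a);
}
```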

This shader is pretty much the same as in the earlier parts of the series. The only difference is that we multiply the texture's color by the light level. This brightens or darkens different parts of the object, giving it some depth.

That's all for the shaders; now let's go to the WebGL.js file and modify our two classes.

Updating our Framework

Let's start with the GLObject class. We need to add a variable for the normals array. Here is what the top portion of your GLObject should now look like:
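A sketch of the updated constructor follows; the extra NormalsArr parameter is the new part. The property names follow the series' style, but the exact original code may differ, and texture loading is omitted here:

```javascript
// GLObject holds one model's data; NormalsArr is the new normals array.
function GLObject(VertexArr, TriangleArr, TextureArr, NormalsArr) {
  this.Pos = { X: 0, Y: 0, Z: 0 };
  this.Scale = { X: 1.0, Y: 1.0, Z: 1.0 };
  this.Rotation = { X: 0, Y: 0, Z: 0 };
  this.Vertices = VertexArr;
  this.Triangles = TriangleArr;
  this.TriangleCount = TriangleArr.length;
  this.TextureMap = TextureArr;
  this.Normals = NormalsArr; // new: per-vertex normals (may be empty)
  // ... texture loading and the rest of the class are unchanged ...
}
```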

This code is pretty straightforward. Now let's go back to the HTML file and add the normals array to our object.

In the Ready() function where we load our 3D model, we have to add the parameter for the normals array. An empty array means the model did not contain any normals data, and we will have to draw the object without light. If the normals array contains data, we just pass it on to the GLObject object.

We also need to update the WebGL class. We need to link the new variables to the shaders right after we load the shaders. Let's add the normals attribute; your code should now look like this:

Next, let's update the PrepareModel() function and add some code to buffer the normals data when it is available. Add the new code right before the Model.Ready statement at the bottom:
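The buffering step can be sketched as a hypothetical helper. The identifiers (Model.Normals, Model.NormalBuffer, BufferNormals) follow the series' naming style but are assumptions, and in the real framework this code lives inline in PrepareModel() rather than in its own function:

```javascript
// Buffers the normals data onto the graphics card when the model has any,
// then marks the model as ready to draw.
function BufferNormals(GL, Model) {
  if (Model.Normals.length > 0) {
    var buffer = GL.createBuffer();
    GL.bindBuffer(GL.ARRAY_BUFFER, buffer);
    GL.bufferData(GL.ARRAY_BUFFER, new Float32Array(Model.Normals), GL.STATIC_DRAW);
    Model.NormalBuffer = buffer; // keep the handle for the Draw() function
  }
  Model.Ready = true;
}
```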

Last but not least, we update the actual Draw() function to incorporate all these changes. There are a couple of changes here, so bear with me. I'm going to go piece by piece through the entire function:

Up to here is the same as before. Now comes the normals part:

We check to see if the model has normals data. If so, we connect the buffer and set the boolean. If not, the shader still needs some kind of data or it will give you an error. So instead, I passed the vertices buffer and set the UseLight boolean to false. You could get around this by using multiple shaders, but I thought this would be simpler for what we are trying to do.
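The branch just described can be sketched as a hypothetical helper; the attribute and uniform names ("VertexNormal", "UseLights") and the buffer property names are assumptions:

```javascript
// Binds either the normals buffer (lights on) or the vertex buffer as a
// stand-in (lights off), so the attribute always has data.
function BindNormals(GL, ShaderProgram, Model) {
  var normalAttrib = GL.getAttribLocation(ShaderProgram, "VertexNormal");
  var useLights = GL.getUniformLocation(ShaderProgram, "UseLights");
  if (Model.NormalBuffer) {
    // the model has normals: bind them and switch the lights on
    GL.bindBuffer(GL.ARRAY_BUFFER, Model.NormalBuffer);
    GL.uniform1i(useLights, 1);
  } else {
    // no normals: reuse the vertex buffer and switch the lights off
    GL.bindBuffer(GL.ARRAY_BUFFER, Model.VertexBuffer);
    GL.uniform1i(useLights, 0);
  }
  GL.vertexAttribPointer(normalAttrib, 3, GL.FLOAT, false, 0, 0);
}
```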

Again, this part of the function is still the same.

Here we calculate the normals transformation matrix. I will discuss the MatrixTranspose() and InverseMatrix() functions in a minute. To calculate the transformation matrix for the normals array, you have to transpose the inverse matrix of the object's regular transformation matrix. More on this later.


This is the rest of the Draw() function. It's almost the same as before, but there is the added code that connects the normals matrix to the shaders. Now, let's go back to those two functions I used to get the normals transformation matrix.

The InverseMatrix() function accepts a matrix and returns its inverse. An inverse matrix is a matrix that, when multiplied by the original matrix, produces an identity matrix. Let's look at a basic algebra example to clarify this. The inverse of the number 4 is 1/4, because 1/4 x 4 = 1. The "one" equivalent in matrices is an identity matrix. Therefore, multiplying the matrix that InverseMatrix() returns by the original gives you the identity matrix. Here is this function:

This function is pretty complicated, and to tell you the truth, I don't fully understand why the math works. But I have already explained the gist of it above. I did not come up with this function; it was written in ActionScript by Robin Hilliard.
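If you'd like a self-contained stand-in for what InverseMatrix() needs to do, here is a hypothetical version (my own sketch, not Robin Hilliard's original) that uses Gauss-Jordan elimination on a flat, row-major 16-element array:

```javascript
// Inverts a 4x4 matrix stored as a flat, row-major array of 16 numbers.
// The same row operations that reduce M to the identity turn I into M's inverse.
function InverseMatrix(A) {
  var M = A.slice(); // working copy of the input
  var I = [1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1]; // starts as identity, ends as the inverse
  for (var col = 0; col < 4; col++) {
    // pick the largest pivot in this column to avoid dividing by zero
    var pivot = col;
    for (var r = col + 1; r < 4; r++) {
      if (Math.abs(M[r * 4 + col]) > Math.abs(M[pivot * 4 + col])) pivot = r;
    }
    // swap the pivot row into place (in both matrices)
    for (var s = 0; s < 4; s++) {
      var tmp = M[col * 4 + s]; M[col * 4 + s] = M[pivot * 4 + s]; M[pivot * 4 + s] = tmp;
      tmp = I[col * 4 + s]; I[col * 4 + s] = I[pivot * 4 + s]; I[pivot * 4 + s] = tmp;
    }
    // scale the pivot row so the pivot becomes 1
    var p = M[col * 4 + col];
    for (var c = 0; c < 4; c++) { M[col * 4 + c] /= p; I[col * 4 + c] /= p; }
    // eliminate this column from every other row
    for (var row = 0; row < 4; row++) {
      if (row === col) continue;
      var f = M[row * 4 + col];
      for (var k = 0; k < 4; k++) {
        M[row * 4 + k] -= f * M[col * 4 + k];
        I[row * 4 + k] -= f * I[col * 4 + k];
      }
    }
  }
  return I;
}
```

Multiplying the returned matrix by the original should give you back the identity matrix, which is a handy sanity check.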

The next function, MatrixTranspose(), is a lot simpler to understand. It returns the "transposed" version of its input matrix. In short, it flips the matrix across its diagonal. Here's the code:

Instead of going across in horizontal rows (i.e. A[0], A[1], A[2] ...), this function goes down vertically (A[0], A[4], A[8] ...).
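In case the listing doesn't render for you, a minimal version of the same idea looks like this (again a sketch, assuming a flat, row-major 16-element array):

```javascript
// Returns the transpose of a 4x4 matrix stored as a flat array:
// row and column indices are simply swapped.
function MatrixTranspose(A) {
  var T = new Array(16);
  for (var row = 0; row < 4; row++) {
    for (var col = 0; col < 4; col++) {
      T[col * 4 + row] = A[row * 4 + col];
    }
  }
  return T;
}
```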

You're good to go after adding these two functions to your WebGL.js file, and any model that contains the normals data should be shaded. You can play around with the light's direction and color in the vertex shader to get different effects.

There is one last topic that I wish to cover: adding 2D content to our scene. Adding 2D components to a 3D scene can have many benefits. For example, it can be used to display coordinate information, a mini-map, instructions for your app, and the list goes on. This process is not as straightforward as you might think, so let's check it out.

2D vs. 2.5D


You might be thinking, "Why not just use the canvas's built-in HTML5 2D API?" Well, the problem is that HTML will not let you use the WebGL API and the 2D API on the same canvas. Once you assign the canvas's context to WebGL, you cannot use it with the 2D API; HTML5 simply returns null when you try to get the 2D context. So how do you get around this? Well, I'll give you two options.

2.5D

2.5D, for those who are unaware, is when you put 2D objects (objects with no depth) in a 3D scene. Adding text to a scene is an example of 2.5D. You can take the text from a picture and apply it as a texture to a 3D plane, or you can get a 3D model for the text and render it on your screen.

The benefit of this approach is that you don't need two canvases, and drawing would be faster if you only used simple shapes in your application.

But in order to do things like text, you either need to have pictures of everything you want to write, or a 3D model for each letter (a little over the top, in my opinion).

2D

The alternative is to create a second canvas and overlay it on top of the 3D canvas. I prefer this approach because it seems better equipped for drawing 2D content. I am not going to start making a new 2D framework, but let's just create a simple example where we display the coordinates of the model along with its current rotation. Let's add a second canvas to the HTML file right after the WebGL canvas. Here is the new canvas along with the current one:
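A sketch of the markup follows; the ids, dimensions, and inline styles here are assumptions rather than the article's exact values:

```html
<!-- the original WebGL canvas -->
<canvas id="GLCanvas" width="600" height="400">
    Your browser doesn't support HTML5's canvas.
</canvas>
<!-- the new 2D overlay, absolutely positioned on top of the first canvas -->
<canvas id="2DCanvas" width="600" height="400"
        style="position: absolute; top: 0; left: 0;">
    Your browser doesn't support HTML5's canvas.
</canvas>
```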

I also added some inline CSS to overlay the second canvas on top of the first. The next step is to create a variable for the 2D canvas and get its context. I am going to do this in the Ready() function. Your updated code should look something like this:

At the top, you can see that I added a global variable for the 2D canvas. Then, I added two lines to the bottom of the Ready() function. The first new line gets the 2D context, and the second new line sets the color to black.

The last step is to draw the text inside the Update() function:

We start by rotating the model on its Y axis, and then we clear the 2D canvas of any previous content. Next, we set the font size and draw some text for each axis. The fillText() method accepts three parameters: the text to draw, the x coordinate, and the y coordinate.

The simplicity speaks for itself. This may have been a bit of overkill just to draw some simple text; you could have easily written the text in a positioned <div/> or <p/> element. But if you are doing anything like drawing shapes, sprites, a health bar, etc., then this is probably your best option.

Final Thoughts

In the scope of the last three tutorials, we created a pretty nice, albeit basic, 3D engine. Despite its primitive nature, it does give you a solid base to work off of. Moving forward, I suggest looking at other frameworks like three.js or glge to get an idea of what is possible. Additionally, WebGL runs in the browser, and you can easily view the source of any WebGL application to learn more.

I hope you've enjoyed this tutorial series, and like always, leave your comments and questions in the comment section below.