In this tutorial you’ll learn how to render an F16 fighter jet 3D .obj model in your browser using small open source modules.

Setting up

Before getting started let’s download everything that we need for the tutorial upfront in case you want to work through it offline later.

```sh
# Run this in your command line to create
# a new directory for this tutorial
mkdir webgl-wavefront-obj-tutorial
cd webgl-wavefront-obj-tutorial
npm init -f
```

```sh
# Next we download our 3d model and texture
curl -L http://chinedufn.com/assets/f16/f16-model.obj \
  -o f16-model.obj
curl -L http://chinedufn.com/assets/f16/f16-texture.bmp \
  -o f16-texture.bmp
```

```sh
# Next we install our code dependencies.
# In order to prevent this tutorial from rotting
# as years go by we're grabbing versions that
# we know for sure will work
npm install load-wavefront-obj@0.6.2 \
  raf-loop@1.1.3 wavefront-obj-parser@0.3.0 \
  browserify@13.1.1 http-server@0.9.0
```

Alright, we’ve downloaded everything that we need to learn how to render an F16 plane model. You will not need an internet connection from here on.

Parsing our model

Right now we have an f16-model.obj file. This file is a text file that contains data about our model’s vertices. In order to make use of this data, we parse it into a more readily accessible structure and store it as JSON. This is the job of the wavefront-obj-parser dependency that we downloaded earlier.
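To get a feel for what the parser is doing, here’s a toy sketch that handles just the two most common .obj line types, `v` (a vertex position) and `f` (a triangle face). This is only an illustration of the file format — the real wavefront-obj-parser handles much more (normals, UVs, quads) and its JSON output is organized differently:

```javascript
// A .obj file is plain text: "v x y z" lines give vertex
// positions and "f a b c" lines give triangle faces.
// Toy parser for illustration only -- not the real
// wavefront-obj-parser output format.
function toyParseObj (objText) {
  var vertices = []
  var faces = []
  objText.split('\n').forEach(function (line) {
    var parts = line.trim().split(/\s+/)
    if (parts[0] === 'v') {
      // Three floats: x, y, z
      vertices.push(parts.slice(1, 4).map(Number))
    } else if (parts[0] === 'f') {
      // .obj indices are 1-based, so convert to 0-based
      faces.push(parts.slice(1, 4).map(function (index) {
        return parseInt(index, 10) - 1
      }))
    }
  })
  return {vertices: vertices, faces: faces}
}

var parsed = toyParseObj('v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3')
console.log(parsed.vertices.length) // 3
console.log(parsed.faces[0]) // the triangle [0, 1, 2]
```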

```sh
# Convert our .obj file into a JSON file
node_modules/wavefront-obj-parser/bin/obj2json.js \
  f16-model.obj > f16-model.json
```

Creating our HTML file

```sh
# Create a new html file
touch index.html
```

And edit your index.html to contain the following:

```html
<!doctype html>
<html>
  <body>
    F16 Plane!
    <!-- Load our tutorial application -->
    <script src='bundle.js'></script>
  </body>
</html>
```

Then start a server to serve our files:

```sh
node_modules/http-server/bin/http-server \
  -p 4040
```

Now if you visit http://localhost:4040 in your browser you should see the text “F16 Plane!”

Drawing a blank canvas

Let’s create a new file for our demo application.

```sh
# Create our app's js file
touch f16-demo.js
```

Now let’s edit this f16-demo.js file to show an empty canvas.

```js
// f16-demo.js

// Create our canvas and WebGL context
var canvas = document.createElement('canvas')
canvas.width = 500
canvas.height = 500
var gl = canvas.getContext('webgl')

// Make our canvas background black
gl.clearColor(0.0, 0.0, 0.0, 1.0)
gl.enable(gl.DEPTH_TEST)
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT)

// Add our canvas to the DOM (browser)
document.body.appendChild(canvas)
```

The above code will add an all black canvas to our page. We can verify this by running:

```sh
# Run this and then refresh your browser
node_modules/browserify/bin/cmd.js \
  f16-demo.js > bundle.js
```

Now if you refresh your browser you should see an all black canvas.

Drawing our model

Alright we have a blank canvas, let’s draw our model onto it.

Add this new code to the bottom of the f16-demo.js file that you created above.

```js
// f16-demo.js
// ...
// ...

// This helps us use our model's JSON and texture
// data to create the command that draws our model
var loadWFObj = require('load-wavefront-obj')
var loaded3dModel

// Here we import our model's JSON
// that we generated earlier.
// You could also use an xhr request
var modelJSON = require('./f16-model.json')

// Download our model's texture image.
// You could also pass in a Uint8Array
// of image data
var image = new window.Image()
image.crossOrigin = 'anonymous'
// Once our image downloads we buffer our
// 3d model data for the GPU
image.onload = loadModel
image.src = 'f16-texture.bmp'

// This prepares our data for the GPU
// so that we can later draw it
function loadModel () {
  loaded3dModel = loadWFObj(gl, modelJSON, {textureImage: image})
}

// Our model's x-axis rotation in radians
var xRotation = 0

// We render our model every request animation frame
var loop = require('raf-loop')

// dt is the number of milliseconds since
// we last rendered our model
loop(function (dt) {
  gl.viewport(0, 0, canvas.width, canvas.height)
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT)

  // Once we've loaded our model we draw it every frame
  if (loaded3dModel) {
    // Pass in options whenever you want to draw
    // your model. There are other options that
    // we'll link to below!
    loaded3dModel.draw({
      position: [0, 0, -3.1],
      rotateX: xRotation
    })
  }

  // Rotate our model by a little each frame
  xRotation += dt / 3000
}).start()
```

The above code first loads our model’s JSON file and our texture. It then passes them into load-wavefront-obj, a module that gives us a draw command that we can use to render our 3D model onto our canvas.

We use raf-loop to redraw our model every time the browser repaints.
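The core idea behind a loop like this is simple: each frame it measures `dt`, the milliseconds since the last frame, and hands it to your render function. Here’s a minimal sketch where we drive the ticks by hand so the idea is clear — raf-loop itself schedules ticks with `window.requestAnimationFrame`, and `createLoop` here is a hypothetical stand-in, not its actual API:

```javascript
// Minimal sketch of a dt-based render loop.
// `createLoop` is a stand-in, not raf-loop's API.
function createLoop (render) {
  var lastTime = null
  return {
    tick: function (now) {
      // dt is milliseconds since the previous tick
      var dt = lastTime === null ? 0 : now - lastTime
      lastTime = now
      render(dt)
    }
  }
}

var rotation = 0
var loop = createLoop(function (dt) {
  // Same update rule as the demo: a little rotation per ms
  rotation += dt / 3000
})

loop.tick(0)
loop.tick(16) // one ~60fps frame later
console.log(rotation) // ≈ 0.00533 radians
```

Because the update is scaled by `dt`, the model rotates at the same speed regardless of the frame rate.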

Lastly, we call loaded3dModel.draw with our position and rotation. We’re only manipulating the rotation in this tutorial, but load-wavefront-obj documents the other options that you can pass in.
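If you’re curious what a `rotateX` option corresponds to mathematically, it’s the standard rotation of points about the x axis. This sketch is just the textbook formula, not load-wavefront-obj’s internals (which apply the equivalent matrix on the GPU):

```javascript
// Rotate a [x, y, z] point about the x axis by `radians`.
// The x coordinate is unchanged; y and z rotate in their plane.
function rotateAboutX (point, radians) {
  var y = point[1]
  var z = point[2]
  return [
    point[0],
    y * Math.cos(radians) - z * Math.sin(radians),
    y * Math.sin(radians) + z * Math.cos(radians)
  ]
}

// A quarter turn about x sends the "up" vector to the z axis
var rotated = rotateAboutX([0, 1, 0], Math.PI / 2)
console.log(rotated) // ≈ [0, 0, 1]
```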

Now if we view our demo app we should see our F16 plane rotating in our browser.

```sh
# Run this and then refresh your browser
node_modules/browserify/bin/cmd.js \
  f16-demo.js > bundle.js
```

Where to go from here

Rapid prototyping

Needing to run our browserify command every time we make a change can become a bit frustrating, so a live reload server like budo can be very handy. Give `budo --open --live f16-demo.js` a try.

Camera

In a real application you’ll usually want to render your models in a scene relative to a camera that you have control over.

You can accomplish this by passing your camera’s viewMatrix into your model’s draw command.
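To make the idea concrete, a view matrix is the inverse of the camera’s own transform: if the camera sits at `(0, 0, 5)`, the view matrix translates the whole scene by `(0, 0, -5)`. Here’s a minimal translation-only sketch in the column-major layout WebGL expects — a real camera would also include rotation, typically via a lookAt helper from a library like gl-mat4:

```javascript
// Build a view matrix for a camera at (x, y, z) that isn't
// rotated: just translate the scene by the camera's inverse
// position. Column-major 4x4, as WebGL expects.
function viewMatrixForCameraPosition (x, y, z) {
  return [
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    -x, -y, -z, 1
  ]
}

var viewMatrix = viewMatrixForCameraPosition(0, 0, 5)
// The translation lives in elements 12-14 of a
// column-major matrix
console.log(viewMatrix[14]) // -5
```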

What would you like to learn next? Let me know on Twitter!

Til’ next time,

- CFN

Update: Hey HN. Thanks for the feedback! Check out the discussion on Hacker News