I’ve been working on a browser-based word game, naturally written in JavaScript, and have been encountering some interesting technical challenges along the way. I’ve written up my thought process here for others to learn from (note that most of this happened over the course of a month or so). I’ve often found that while a final solution to a problem may be rather elegant and “make perfect sense” when looking at it, it’s only through much trial and error that the solution was arrived at.

To start, in my game the user is frequently re-arranging letters – causing the game to look up words in a dictionary to see whether they are valid.

I’ve taken multiple passes at implementing a solution to this problem, ranging from “I don’t care about performance, I just want it to work” all the way up to “thousands of people could be playing simultaneously, how do I scale?”

For my seed dictionary I used a variation of the Scrabble dictionary, which can be found via some creative Googling. The full dictionary ends up being around 916KB (with words separated by an endline).

Server-Side Solution

The first pass was stupid simple. It worked, but just barely. I took the dictionary and split it up into 26 files – one for each letter of the alphabet – and put all the words that started with the corresponding letter in the text file.

I then made a little PHP script to handle user requests – reading in the portion of the dictionary file and returning “pass” or “fail” if the word was found.

<?php
# Get the word to be checked from the user
$word = $_GET['word'];

# Get the first letter of that word
$first = substr( $word, 0, 1 );

# Open the corresponding file
$handle = fopen( "words/" . $first . ".txt", "r" );

if ( $handle ) {
    # Keep going until the end of the file
    while ( !feof( $handle ) ) {
        # Get the word in the dictionary
        # (removing the endline)
        $line = trim( fgets( $handle ) );

        # And see if the word matches
        if ( $line == $word ) {
            # If so return "pass" to the client
            echo "pass";
            exit();
        }
    }

    fclose( $handle );
}

# If we made it to the end, then we failed
echo "fail";
?>

(Please don’t use the above code, it’s terrible.)

Thus you would call the PHP script like so:

/words.php?word=test

And it would return either “pass” or “fail” (meaning that it would only work on the same domain).

So while the above worked, it wasn’t nearly fast enough, and it consumed a ton of memory on the server – each request read in large portions of potentially 100KB+ files. Time for a better solution.

A Better Server-Side Solution

My next attempt was to write a simple little web server (in Perl, this time) that pre-loaded the entire dictionary into memory, and handled all the requests via JSONP.

#!/usr/bin/perl

use CGI;

# Read the dictionary into the word hash
my %words = map { $_, 1 } split( /\n/, `cat dict/dict.txt` );

# Instantiate and start the web server
use base qw(Net::Server::HTTP);
__PACKAGE__->run( port => 8338 );

sub process_http_request {
    # Make a CGI object for the request
    my $cgi = CGI->new;

    # Get the word from the user
    my $word = $cgi->param( "word" );
    $word =~ s/\W//g;

    # And the callback name
    my $callback = $cgi->param( "callback" );
    $callback =~ s/\W//g;

    print "Content-type: text/javascript\n\n";

    if ( $word && $callback ) {
        # Dump back the results in a JSONP format
        print $callback . '({"word":"' . $word . '","pass":' .
            (defined $words{ $word } ? 'true' : 'false') . '})';
    }
}

You would call the above using something like this:

http://example.com:8338/?word=test&callback=wordFound

There are a bunch of things that I don’t like about the above code (like using a hash to look up the words when more memory-efficient solutions exist, and instantiating a CGI object on every request – things like that). Also, if I were to do this today I’d probably write it in Node.js, which I’m becoming more familiar with.

Compared with the previous solution, though, there are a large number of advantages. Since the dictionary is loaded into memory only once, and into a hash, lookups are very, very fast. I was timing entire HTTP requests at only a handful of milliseconds, which is great. Additionally, since this solution utilized JSONP it was possible to set up a server (or multiple servers) dedicated to looking up words and have the clients connect to them cross-domain.

A Client-Side Solution

It was at this point that I realized a couple of problems with the previous server-side solutions. If my game was going to work offline (or as a distributable mobile application) then a constant connection to a dictionary server wasn’t going to be possible. (And if slow mobile connections were taken into account, the responsiveness of the server just wasn’t going to be sufficient.)

The dictionary was going to have to live on the client.

Thus I wrote a simple Ajax request to load the text dictionary and make a lookup object for later use.

// The dictionary lookup object
var dict = {};

// Do a jQuery Ajax request for the text dictionary
$.get( "dict/dict.txt", function( txt ) {
    // Get an array of all the words
    var words = txt.split( "\n" );

    // And add them as properties to the dictionary lookup
    // This will allow for fast lookups later
    for ( var i = 0; i < words.length; i++ ) {
        dict[ words[i] ] = true;
    }

    // The game would start after the dictionary was loaded
    // startGame();
});

// Takes in an array of letters and finds the longest
// possible word at the front of the letters
function findWord( letters ) {
    // Clone the array for manipulation
    var curLetters = letters.slice( 0 ),
        word = "";

    // Make sure the word is at least 3 letters long
    while ( curLetters.length > 2 ) {
        // Get a word out of the existing letters
        word = curLetters.join("");

        // And see if it's in the dictionary
        if ( dict[ word ] ) {
            // If it is, return that word
            return word;
        }

        // Otherwise remove another letter from the end
        curLetters.pop();
    }
}

Note that having the dictionary on the client actually allowed for an interesting new form of game play. Previously the player had to select a specific word and send it to the server – and only then would the server say if that word was valid or not. Since the dictionary now lives on the client (making lookups instantaneous, in comparison) we can change the logic a bit: The game now looks for the longest word at the start of the user’s letters. For example, if the user had the letters “rategk” then the function would work like so:

findWord( [ "r", "a", "t", "e", "g", "k" ] ) // => returns "rate" findWord( [ "k", "t", "a", "g", "e", "k" ] ) // => returns undefined

Naturally, something similar could’ve been done on the server-side – but that wasn’t readily apparent when working on the server. This is a case where a performance optimization actually created a new, emergent, form of gameplay.

But here’s the rub: We’re now sending a massive dictionary down to a client. It’s 916KB – and that’s absolutely massive.

Optimizing the Client-Side Solution

Compression and Caching

Turning on Gzip compression on the server reduces the dictionary file size from 916KB to a much-more-sane 276KB. Additionally configuring the cache-control settings of your server will ensure that the dictionary won’t be requested again for a very long time (assuming that the cache in the browser isn’t cleared).

There are some excellent articles already written on these techniques.

Content Delivery Networks

Of course, all of this is a bit of a given when you use a Content Delivery Network – which I most certainly do. Right now I’m using Amazon CloudFront, due to its relatively simple API, but I’m open to other solutions. This means that the dictionary file will be positioned on a large number of servers around the globe and served to the user in the fastest manner possible (using both gzip and proper cache headers).

Cross-Domain Requests

There’s a problem, though: We can’t load our dictionary from a CDN! Since the CDN is located on another server (or on another sub-domain, as is the case here) we’re at the mercy of the browser’s cross-origin policy prohibiting those types of requests. All is not lost though – with a simple tweak to the dictionary file we can load it across domains.

First, we replace all endlines in the dictionary file with a space. Second, we wrap the entire line with a JSONP statement. Thus the final result looks something like this:

dictLoaded('aah aahed aahing aahs aal... zyzzyvas zzz');
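The conversion itself is a one-time build step. A minimal sketch of it in Node.js might look like the following – the function name and file paths are my own assumptions, not part of the original setup:

```javascript
// Turn the newline-separated dictionary into a
// JSONP-wrapped, space-separated file
function buildJsonpDict( dictText ) {
    // Replace all endlines with a single space...
    var oneLine = dictText.trim().split( /\r?\n/ ).join( " " );

    // ...and wrap the entire line in a JSONP call
    return "dictLoaded('" + oneLine + "');";
}

// e.g. fs.writeFileSync( "dict/dict.js",
//     buildJsonpDict( fs.readFileSync( "dict/dict.txt", "utf8" ) ) );
```

Note that this only works because dictionary words never contain apostrophes; arbitrary strings would need escaping before being wrapped in single quotes.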

This allows us to do an Ajax request for the file and have it work as we would expect it to – while still benefiting from all the caching and compression provided by the browser.

I alluded to another problem already: If the browser doesn’t cache the file for some reason, or if the cache runs out of space and the file is expunged, it’ll be downloaded again. I want to try to reduce the number of times the 276KB file will be downloaded – if not for the users then for reducing my bandwidth bill.

Local Storage

This is where we can use a great feature of HTML 5: Local Storage. Mark Pilgrim has a great tutorial written up on the subject that I recommend to all. In a crude nutshell: You now have an object that you can stuff strings into and they’ll be persisted by the browser. Most browsers give you around 5MB to play with – which is more than enough for our dictionary file. It also has great cross-browser support – with the simple API working across all modern browsers.
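The only wrinkle is feature detection: in some environments merely touching window.localStorage can throw (certain private-browsing modes, for instance). A defensive check – a sketch of my own, not code from the game – looks like this:

```javascript
// Returns true if Local Storage is actually usable here.
// The try/catch guards against environments where even
// accessing window.localStorage throws an exception.
function supportsLocalStorage() {
    try {
        return typeof window.localStorage !== "undefined" &&
            window.localStorage !== null;
    } catch ( e ) {
        return false;
    }
}

// Storing and reading back a string is then just:
//   window.localStorage.gameDict = txt;
//   var cached = window.localStorage.gameDict;
```

Everything stored is a string, which suits us perfectly – the dictionary is one big string already.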

With a little tweak to our Ajax logic (taking into account the JSONP request, the CDN, and Local Storage) we now end up with a revised solution:

// See if the property that we want is pre-cached in the localStorage
if ( window.localStorage != null && window.localStorage.gameDict ) {
    dictReady( window.localStorage.gameDict );

// Load in the dictionary from the server
} else {
    jQuery.ajax({
        url: cdnHREF + "dict/dict.js",
        dataType: "jsonp",
        jsonp: false,
        jsonpCallback: "dictLoaded",
        success: function( txt ) {
            // Cache the dictionary, if possible
            if ( window.localStorage != null ) {
                window.localStorage.gameDict = txt;
            }

            // Let the rest of the game know
            // that the dictionary is ready
            dictReady( txt );
        }
        // TODO: Add error/timeout handling
    });
}

This gives us an incredibly efficient solution, allowing us to load the dictionary from a CDN (gzipped and with proper cache headers) – and avoiding subsequent requests to get the file if we already have it cached.

Improving Memory Usage

There’s one final tweak that we can make. If you remember the previous dictionary lookup, we loaded the entire dictionary into an object and then checked to see if a specific property existed. While this works, and is fast, it also ends up consuming a lot of memory (more so than the existing 916KB, at least).

To avoid this we can be a little bit tricky. Instead of putting the words into an object we can just leave the entire dictionary string intact and then do searches using a JavaScript String’s indexOf method.

Thus inside the dictReady callback we’ll have something like the following:

function dictReady( txt ) {
    dict = " " + txt + " ";
}

and our revised findWord function will look something like this:

// Finding and extracting words
function findWord( letters ) {
    // Copy all the tiles
    var curRack = letters.slice( 0 ),
        word = "";

    // We're going to keep going through the available letters
    // looking for a long-enough word
    while ( curRack.length >= 3 ) {
        // Find a "word" from the available tiles
        word = curRack.join("");

        // ... and see if it's in the dictionary
        if ( dict.indexOf( " " + word + " " ) >= 0 ) {
            // We've found the word and we can stop
            return word;
        }

        // ... otherwise we need to remove the last letter
        curRack.pop();
    }
}
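The space padding is what keeps indexOf honest: without it, a search would match substrings of longer words. A quick sketch with a tiny stand-in dictionary shows the failure mode:

```javascript
// Stand-in dictionary containing "rated" but not "rate"
var raw = "aah rated zzz";

// Unpadded search: "rate" matches inside "rated" -- a false positive
console.log( raw.indexOf( "rate" ) >= 0 );      // true (wrong!)

// Padded search: every word is bounded by spaces,
// so only whole words can match
var dict = " " + raw + " ";
console.log( dict.indexOf( " rate " ) >= 0 );   // false (correct)
console.log( dict.indexOf( " rated " ) >= 0 );  // true
```

This is also why dictReady wraps the incoming text in leading and trailing spaces – the first and last words need their boundaries too.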

All together, as of this moment, this is the best solution to this particular problem that I can think of. It’s likely that I’ll find some additional tweaks (or ways of improving memory usage) in the future – but at the very least this solution keeps HTTP requests to a minimum, bandwidth usage to a minimum, memory usage to a minimum, and lookups fast.