Why is finding the highest-scoring Boggle board so difficult? It’s because there are so many boards to consider: about 2^72 distinct boards for the 4×4 case and 2^40 for the 3×3 case. At 10,000 boards/second, the former corresponds to roughly 15 billion years of compute time, and the latter about two years. Even at a billion boards per second, just enumerating all 2^72 boards would take over 100,000 years.

So we have to come up with a technique that doesn’t involve looking at every single board. And I’ve come up with just such a method! This is the “exciting news” I alluded to in the last post.

Here’s the general technique:

1. Find a very high-scoring board (maybe this way).
2. Consider a large class of boards.
3. Come up with an upper bound on the highest score achieved by any board in the class.
4. If it’s lower than the score in step #1, we can eliminate all the boards in the class.
5. If it’s not, subdivide the class and repeat step #2 with each subclass.
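Sketched in Python, the loop above is a short recursion. This is an illustrative sketch, not the actual breaker: break_class, the upper_bound callback, and the “split the square with the most choices” heuristic are all assumptions made for the example.

```python
def break_class(cells, best_known, upper_bound):
    """Count the subclasses eliminated while 'breaking' a class of boards.

    cells is a list of strings, one per square, giving that square's
    possible letters; upper_bound maps such a class to a number that no
    board in the class can out-score."""
    if upper_bound(cells) < best_known:
        return 1                          # step 4: whole class eliminated
    # step 5: subdivide on the square with the most remaining choices
    i = max(range(len(cells)), key=lambda j: len(cells[j]))
    if len(cells[i]) == 1:
        return 0                          # a single concrete board survives
    eliminated = 0
    for letter in cells[i]:
        eliminated += break_class(cells[:i] + [letter] + cells[i + 1:],
                                  best_known, upper_bound)
    return eliminated
```

For a toy scoring rule — a board’s score is its number of a’s — the number of squares that *could* hold an ‘a’ is a valid upper bound, and the recursion prunes exactly as described above.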

Classes of Boards

By “class of boards”, I mean something like this:

{a,e,i,o,u} {a,e,i,o,u} r
{b,c,d,f,g,h} a t
d e {r,s,t,v}

The squares that contain a set of letters can take on any of those letters. So this board is part of that class:

a i r
d a t
d e s
189 points

and so is this:

o u r
f a t
d e t
114 points

All told, there are 5 * 5 * 6 * 4 = 600 boards that are part of this class, each with its own score. Other fun classes of boards include “boards with only vowels” (1,953,125 members) and “boards with only consonants” (794,280,046,581 members).
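Counting the boards in a class is just the product of the per-square choice counts. A quick sketch (class_size is an illustrative name):

```python
from math import prod

def class_size(cells):
    """Number of concrete boards in a class: the product of the number
    of letter choices on each square."""
    return prod(len(c) for c in cells)

# The 3x3 class above, one string of letter choices per square:
cells = ["aeiou", "aeiou", "r", "bcdfgh", "a", "t", "d", "e", "rstv"]
print(class_size(cells))              # 5 * 5 * 6 * 4 = 600
print(class_size(["aeiou"] * 9))      # all-vowel boards: 5**9 = 1953125
```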

Follow me past the fold for more…



Upper Bounds

Now on to step #3 of the general technique: calculating an upper bound. This is going to be easier if we introduce some mathematical notation:

b = a Boggle board
Score(b) = the sum of the scores of all the words contained on b
B = a class of boards, i.e. B = {b_1, b_2, …}
Score(B) = max_{b ∈ B} Score(b)

An upper bound is a function UpperBound(B) such that UpperBound(B) ≥ Score(B), i.e. a function that never underestimates the best score of any board in the class.

There’s one really easy upper bound: UpperBound(B) = Score(B) itself! This just enumerates all the boards in the class B, scores each one and takes the maximum. It’s very expensive to compute for a large class of boards and hence not very practical. You and I both know that no board containing only consonants has any points on it. We shouldn’t need to enumerate all 794 billion such boards to determine this.
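The brute-force bound is a one-liner, which makes its cost obvious: it visits every board in the class. A sketch, using a toy scorer (counting vowels) that stands in for a real Boggle scorer:

```python
from itertools import product

def exact_bound(cells, score):
    """The trivially tight upper bound Score(B): enumerate every board in
    the class and take the maximum score. Exact, but exponentially slow."""
    return max(score(board) for board in product(*cells))

# Toy scorer (an assumption for illustration): one point per vowel.
toy_score = lambda board: sum(ch in "aeiou" for ch in board)
print(exact_bound(["ab", "ae", "x"], toy_score))  # prints 2
```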

With upper bounds, there’s a trade-off between how hard they are to compute and how “tight” they are, i.e. how closely they approximate Score(B). Score(B) itself is perfectly tight but hard to compute. At the other end of the spectrum, we know that all the words on a board come from the dictionary. So we could just sum up the scores of all the words in the dictionary and get a number, say 1,000,000. Then UpperBound(B) = 1,000,000 is an upper bound. It is very easy to compute, but is not very tight.

The trick is to hit some sort of sweet spot that strikes a good balance between “tightness” and ease of computation. Over the rest of this blog post, I’ll present two upper bounds that do this. Upper bounds have the nice property that if UpperBound1 and UpperBound2 are both upper bounds, then min(UpperBound1, UpperBound2) is also an upper bound. So by finding two bounds, we’ll get a third that’s at least as tight as either one alone.
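In code, combining bounds is one line (combined_bound is an illustrative name, not from the post’s program); with the two bounds developed below plugged in, this is the bound used for breaking:

```python
def combined_bound(cells, bounds):
    """The pointwise minimum of valid upper bounds is itself a valid
    upper bound, and is at least as tight as any single one of them."""
    return min(ub(cells) for ub in bounds)
```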

sum/union

The idea of this bound is to find all the words that can possibly occur on any board in the class. Since each word can only be scored once per board, adding up the scores of all these words gives an upper bound.

To get the list of words, we use the same depth-first search strategy as we did to find words on a single board. The wrinkle is that, when we encounter a cell with multiple possible letters, we have to do a separate depth-first search for each.
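Here’s a minimal sketch of the sum/union bound, under some stated assumptions: a tiny made-up dictionary stands in for the real word list, and a set of prefixes stands in for the Trie.

```python
# Toy dictionary; the real code walks a trie over the full word list,
# but a set of words plus a set of their prefixes prunes the same way.
WORDS = {"fir", "far", "rat", "tar", "art"}
PREFIXES = {w[:i] for w in WORDS for i in range(1, len(w) + 1)}

def neighbors3x3(i):
    """Indices of the squares adjacent to square i on a 3x3 grid."""
    r, c = divmod(i, 3)
    return [rr * 3 + cc
            for rr in range(max(0, r - 1), min(3, r + 2))
            for cc in range(max(0, c - 1), min(3, c + 2))
            if (rr, cc) != (r, c)]

def word_score(w):
    # Standard Boggle scoring by word length.
    return {3: 1, 4: 1, 5: 2, 6: 3, 7: 5}.get(len(w), 11)

def sum_union_bound(cells):
    """cells[i] is the string of possible letters on square i."""
    found = set()
    def dfs(i, prefix, used):
        for letter in cells[i]:      # the wrinkle: one branch per choice
            word = prefix + letter
            if word not in PREFIXES:
                continue             # dictionary pruning
            if word in WORDS:
                found.add(word)      # union: each word counted once
            for j in neighbors3x3(i):
                if j not in used:
                    dfs(j, word, used | {j})
    for i in range(9):
        dfs(i, "", {i})
    return sum(word_score(w) for w in found)
```

For example, on a class whose top row is {a,i} r z, with f at the middle-left and z everywhere else, this finds both “fir” and “far” and returns a bound of 2 — even though no single board contains both words.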

At first glance, it doesn’t seem like this would be tractable for a board class like this one (alternating vowels and consonants):

{a,e,i,o,u} {b-d,f-h,j-n,p-t,v-z} {a,e,i,o,u}
{b-d,f-h,j-n,p-t,v-z} {a,e,i,o,u} {b-d,f-h,j-n,p-t,v-z}
{a,e,i,o,u} {b-d,f-h,j-n,p-t,v-z} {a,e,i,o,u}

In addition to the branching from going in different directions from each square, there’s also a huge amount of branching from trying each letter on each square. But we’re saved by the same lesson we learned in boggle post #3: the dictionary is exceptionally effective at pruning thorny search trees. If we prune branches like ‘bqu’ that don’t begin any word, there doesn’t wind up being that much work to do.

We can find all possible words on the above board class in just under one second. This is about 10,000 times slower than scoring a conventional board, but it’s certainly tractable. The resulting bound is 195,944. Given that no board scores higher than 545 points, this is a wild overestimate. But at least it’s a better bound than a million!

This technique does especially well on boards like this one, which contains all consonants:

{b-d,f-h,j-n,p-t,v-z} {b-d,f-h,j-n,p-t,v-z} {b-d,f-h,j-n,p-t,v-z}
{b-d,f-h,j-n,p-t,v-z} {b-d,f-h,j-n,p-t,v-z} {b-d,f-h,j-n,p-t,v-z}
{b-d,f-h,j-n,p-t,v-z} {b-d,f-h,j-n,p-t,v-z} {b-d,f-h,j-n,p-t,v-z}

This takes 0.23 seconds to score and results in a bound of 208 points (it contains words like ‘crypt’ and ‘gypsy’). We’ve already found a single board that has 545 points on it. So we can eliminate this entire class of 794 billion boards. That’s a speed of over 3 trillion boards/second! Of course, this board class is not typical.

It’s also worth pointing out why this upper bound isn’t tight. Consider this class of boards:

{a,i} r z
f z z
z z z

You can find both “fir” and “far” on boards in this class, but no single board contains both. So while “fir” and “far” each contribute a point to the upper bound, they should really contribute just one point between them. The sum/union bound doesn’t take the relationships between letter choices into account. It’s the best trade-off between computability and “tightness” we’ve seen so far, but it’s not good enough to make the problem tractable.

max/no mark

In the sum/union upper bound, we dealt with multiple possible letters on the same square by trying each and adding the resulting scores (taking care not to count any word twice). But why take the sum of all choices when we know that any given board can only take on one of the possibilities? It would result in a much better bound if we took the max of the scores resulting from each possible choice, rather than the sum. This is the idea behind the “max/no mark” bound.

This is a huge win over sum/union, especially when there are many squares containing many possible letters. It does have one major drawback, though. The sum/union bound took advantage of the fact that each word could only be found once. With the max/no mark bound, the bookkeeping for this becomes completely intractable. The words we find by making a choice on one square may affect the results of a choice somewhere else. We can’t make the choices independently. The optimal set of choices becomes an optimization problem in its own right.

Rather than deal with this, max/no mark just throws up its hands. This is what the “no mark” refers to. In the past, we’ve recorded the words we find by marking the Trie. By not marking the Trie with found words, we accept that we’ll double-count words sometimes. But it still winds up being an excellent upper bound.
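Here’s a minimal sketch of max/no mark in the same style, with the same assumptions: a tiny made-up dictionary and a prefix set in place of the real word list and Trie.

```python
# Toy dictionary; the real code walks a trie over the full word list.
WORDS = {"fir", "far", "rat", "tar", "art"}
PREFIXES = {w[:i] for w in WORDS for i in range(1, len(w) + 1)}

def neighbors3x3(i):
    """Indices of the squares adjacent to square i on a 3x3 grid."""
    r, c = divmod(i, 3)
    return [rr * 3 + cc
            for rr in range(max(0, r - 1), min(3, r + 2))
            for cc in range(max(0, c - 1), min(3, c + 2))
            if (rr, cc) != (r, c)]

def word_score(w):
    # Standard Boggle scoring by word length.
    return {3: 1, 4: 1, 5: 2, 6: 3, 7: 5}.get(len(w), 11)

def max_no_mark_bound(cells):
    """cells[i] is the string of possible letters on square i."""
    def dfs(i, prefix, used):
        best = 0
        for letter in cells[i]:   # max over this square's choices, not sum
            word = prefix + letter
            if word not in PREFIXES:
                continue          # dictionary pruning
            score = word_score(word) if word in WORDS else 0
            for j in neighbors3x3(i):
                if j not in used:
                    score += dfs(j, word, used | {j})
            best = max(best, score)
        return best
    # "No mark": a word reachable along several paths is counted each time.
    return sum(dfs(i, "", {i}) for i in range(9))
```

On the {a,i} class from the previous section, this returns 1 rather than 2: the choice between ‘a’ and ‘i’ is maxed instead of summed, so “fir” and “far” no longer both count.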

Let’s try some of our previous examples:

{a,e,i,o,u} {a,e,i,o,u} r
{b,c,d,f,g,h} a t
d e {r,s,t,v}
sum/union: 2880
max/no mark: 1307

Alternating vowels and consonants:

{a,e,i,o,u} {b-d,f-h,j-n,p-t,v-z} {a,e,i,o,u}
{b-d,f-h,j-n,p-t,v-z} {a,e,i,o,u} {b-d,f-h,j-n,p-t,v-z}
{a,e,i,o,u} {b-d,f-h,j-n,p-t,v-z} {a,e,i,o,u}
sum/union: 195,944
max/no mark: 15,692

A class that can be entirely eliminated:

{b,d,f,g,j,k,m,p,v,w,x,z} a {s,y}
{i,o,u} y a
{s,y} {c,h,l,n,r,t} {c,h,l,n,r,t}
sum/union: 2497
max/no mark: 447

max/no mark isn’t always better than sum/union:

{b,d} a {b,d}
a {b,d} a
{b,d} a {b,d}
sum/union: 9
max/no mark: 132

This is something of a worst-case because, while there are relatively few distinct words, there are many different ways to find them.

Putting it all together

Our two bounds do well in different situations. max/no mark works best when there are lots of choices to be made on particular cells and there are relatively few ways to make any particular word. sum/union works best when there are lots of possibilities but relatively few distinct words. Putting them together results in a bound that’s good enough to find the best 3×3 boggle board using the technique described at the beginning of this post.

Given an initial class of boards, we wind up with what I call a “breaking tree”. If the initial class has an upper bound less than 545 points, then we’re done. Otherwise, we pick a cell to split and try each possibility.

Here’s a relatively small breaking tree that results from running this program:

$ ./3x3/ibucket_breaker --best_score 520 --break_class "bdfgjkmpvwxz a sy iou xyz aeiou sy chlnrt chlnrt"
(     0%) (0;1/1) bdfgjkmpvwxz a sy iou xyz aeiou sy chlnrt chlnrt (820, 77760 reps)
  split cell 4 (xyz)
Will evaluate 3 more boards...
(     0%) (1;1/3) bdfgjkmpvwxz a sy iou x aeiou sy chlnrt chlnrt (475, 25920 reps)
(33.333%) (1;2/3) bdfgjkmpvwxz a sy iou y aeiou sy chlnrt chlnrt (703, 25920 reps)
  split cell 5 (aeiou)
Will evaluate 5 more boards...
(33.333%) (2;1/5) bdfgjkmpvwxz a sy iou y a sy chlnrt chlnrt (447, 5184 reps)
(    40%) (2;2/5) bdfgjkmpvwxz a sy iou y e sy chlnrt chlnrt (524, 5184 reps)
  split cell 3 (iou)
Will evaluate 3 more boards...
(    40%) (3;1/3) bdfgjkmpvwxz a sy i y e sy chlnrt chlnrt (346, 1728 reps)
(42.222%) (3;2/3) bdfgjkmpvwxz a sy o y e sy chlnrt chlnrt (431, 1728 reps)
(44.444%) (3;3/3) bdfgjkmpvwxz a sy u y e sy chlnrt chlnrt (339, 1728 reps)
(46.667%) (2;3/5) bdfgjkmpvwxz a sy iou y i sy chlnrt chlnrt (378, 5184 reps)
(53.333%) (2;4/5) bdfgjkmpvwxz a sy iou y o sy chlnrt chlnrt (423, 5184 reps)
(    60%) (2;5/5) bdfgjkmpvwxz a sy iou y u sy chlnrt chlnrt (318, 5184 reps)
(66.667%) (1;3/3) bdfgjkmpvwxz a sy iou z aeiou sy chlnrt chlnrt (509, 25920 reps)

The first number in each trailing pair of parentheses is the upper bound; the second is the number of concrete boards in the class. When the bound gets below 520 (the --best_score parameter I set on the command line), a sub-class is fully broken.

Using this technique and the following partition of the letters:

bdfgjvwxz

aeiou

lnrsy

chkmpt

I was able to go through all 262,144 (=4^9) board classes in about six hours on a single machine. This resulted in the boards I listed in the last post. Six hours is a big improvement over two years!

If that same factor (two years to six hours) held for the 4×4 case, then we’d be down to 380 years of compute time to find the best 4×4 boggle board. Or, equivalently, 138 days on 1000 machines. That’s still a lot. We’re not quite there yet, but we’re getting closer!

Code for the program that went through all possible board classes can be found here. While many people have found high-scoring boards, I haven’t found any previous work on this upper bounding approach. So if you have any ideas/suggestions on how to improve the bound, they’re probably novel and useful!
