Knowing AI programming techniques is not sufficient if you want to either do AI research or write AI applications. Even graduate students in AI neglect the fundamentals -- data structure and algorithm design and complexity analysis -- much more than they should. Further, it is not enough to know that reference books covering these topics exist; you need to know the techniques themselves in order to know when and where to look in those references.

One place where lack of fundamentals shows up is in performance. AI techniques are generally difficult to implement and often slow. You shouldn't give up performance simply because you didn't know that available algorithmic techniques could help. In this article, I look at a technique called "dynamic programming" and show an application of it.

Divide and Conquer

One of the first problem-solving techniques that a LISP programmer learns is divide and conquer -- attacking a problem by breaking it up into independent subproblems, solving them, and combining the results into the ultimate solution. Simple recursive programs use divide and conquer. Listing One shows a slightly unusual definition of factorial that uses a more aggressive approach to divide and conquer.

(defun fact (n)
  (labels ((prod (start end)
             (cond ((= start end) start)
                   (t (let ((h (floor (+ start end) 2)))
                        (* (prod start h)
                           (prod (1+ h) end)))))))
    (prod 1 n)))

Instead of dividing the problem of n! into n and (n-1)!, this solution divides it into:

1 · 2 · ... · n/2

and

(n/2 + 1) · ... · n

The results of these two subproblems are then multiplied together. The inner function prod computes the product:

prod(start, end) = start · (start + 1) · ... · end

It is interesting to compare the running time of this program with the usual recursive definition of factorial:

(defun f (n)
  (if (< n 2)
      1
      (* n (f (1- n)))))

When computing 5000!, fact is 2.67 times faster than f. Notice that each performs the same number of multiplications as the other, but the order in which they are performed is different. The multiplications to compute 8! are the following for f: 2*1, 3*2, 4*6, 5*24, 6*120, 7*720, and 8*5040. For fact they are 1*2, 3*4, 2*12, 5*6, 7*8, 30*56, and 24*1680.
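One way to see the difference in multiplication order is to instrument both definitions so that each multiplication records its operands. The following sketch uses illustrative names (mul, *mults*, f-traced, fact-traced are not from the article):

(defvar *mults* '())

;; Record each pair of operands before multiplying.
(defun mul (x y)
  (push (list x y) *mults*)
  (* x y))

(defun f-traced (n)
  (if (< n 2) 1 (mul n (f-traced (1- n)))))

(defun fact-traced (n)
  (labels ((prod (start end)
             (cond ((= start end) start)
                   (t (let ((h (floor (+ start end) 2)))
                        (mul (prod start h) (prod (1+ h) end)))))))
    (prod 1 n)))

;; (progn (setf *mults* '()) (f-traced 8) (reverse *mults*))
;; => ((2 1) (3 2) (4 6) (5 24) (6 120) (7 720) (8 5040))
;; Running fact-traced the same way yields the second sequence given above.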

This ordering makes a difference in the amount of storage used during the computation. The factorial of any number larger than 10 is a bignum, which is the name of the Common LISP type that holds integers larger than can be stored as machine integers. Bignums are allocated in much the same way that cons cells are allocated. Therefore, during a long calculation like factorial, intermediate bignums are consed and discarded. If s denotes the number of bits required to represent m!, then the number of bits allocated during the computation of m! using f is on the order of sm. The number of bits allocated using fact is on the order of s log m.

Function f keeps an accumulated value that is repeatedly multiplied by a fixnum, a fixed-size quantity generally represented in 32 bits. Its m multiplications are mostly of a bignum by a fixnum, and the final result has size s. If we take the nth intermediate result to have size sn/m, the total storage required is on the order of ms.

Function fact creates a final bignum of the same size but produces smaller intermediate results. Consider the first time through the recursive clause in prod: the midpoint between 1 and m is computed, two subproducts (p and q) are computed, and they are multiplied together. The resulting bignum has size s, which equals the sum of the sizes of p and q.

Suppose, for ease of argument, that m is a power of 2, say m = 2^k. We can look at the computation tree, which is a perfectly balanced binary tree, and see that the size of the bignum at the root is s, as is the sum of the sizes of the two bignums at the level just below the root. Each level contains the subproducts that must be computed to produce the products on the next higher level. So, the third level contains four subproducts -- two that are multiplied to get p and two that are multiplied to get q. Therefore, every level contains numbers whose sizes total s, and there are k+1 levels. The total amount of storage consumed is ks, or s log2 m. This argument relies on the fact that the implementation of the bignum algorithms does not create additional garbage.
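The two storage estimates can be checked empirically by totaling the bit sizes (via integer-length) of every intermediate product each strategy creates. This is a rough sketch with illustrative names, not code from the article; it counts bits of intermediate results rather than actual allocation:

;; Bits consumed by the linear recursion in f: the accumulator
;; grows toward size s across m steps, totaling on the order of sm.
(defun linear-intermediate-bits (m)
  (loop with acc = 1
        with total = 0
        for n from 2 to m
        do (setf acc (* acc n)
                 total (+ total (integer-length acc)))
        finally (return total)))

;; Bits consumed by the splitting strategy in fact: each level of the
;; computation tree totals about s bits, for on the order of s log m.
;; Returns the product and the bit total as two values.
(defun split-intermediate-bits (start end)
  (if (= start end)
      (values start (integer-length start))
      (let ((h (floor (+ start end) 2)))
        (multiple-value-bind (p pb) (split-intermediate-bits start h)
          (multiple-value-bind (q qb) (split-intermediate-bits (1+ h) end)
            (let ((r (* p q)))
              (values r (+ pb qb (integer-length r)))))))))

Comparing (linear-intermediate-bits 1000) with the second value of (split-intermediate-bits 1 1000) shows the sm-versus-s log m gap directly.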

Another way to appreciate this argument is to count the number of fixnums that are produced by the multiplication expressions in each function. While computing 1000!, f produces 11 fixnums and fact produces 1,033 fixnums. fact produces more small intermediate results.

Finally, when computing 3000!, f conses 5.5 megabytes, while fact conses 58 kilobytes; when computing 4000!, f conses 10 megabytes, while fact conses 78 kilobytes.

However, many optimization problems cannot be solved by using divide and conquer, which treats each subproblem independently from the others. In a typical optimization problem, the subproblems are not independent but repeated or shared, and divide and conquer will solve each shared subproblem several times, which might cause significant slowdowns.
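The classic illustration of a shared subproblem -- not one from this article -- is the Fibonacci recurrence, where plain divide and conquer recomputes the same values in both branches. Caching each result the first time it is computed, which is the heart of dynamic programming, solves each subproblem exactly once. A minimal sketch:

;; *fib-cache* and fib are illustrative names. Naive recursion here
;; takes exponential time because fib(n-1) and fib(n-2) share almost
;; all of their subproblems; the hash table makes each fib(k) be
;; computed only once.
(defvar *fib-cache* (make-hash-table))

(defun fib (n)
  (if (< n 2)
      n
      (or (gethash n *fib-cache*)
          (setf (gethash n *fib-cache*)
                (+ (fib (- n 1)) (fib (- n 2)))))))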