Of the implementations of Fibonacci

Fibonacci’s sequence

My first post generated more interest than expected, but even more rewarding, some people pointed out some of my mistakes, corrected me, and actually taught me a couple of things. I’ll try to keep up the rhythm and publish at least once a week.

I’ve been asked once in an interview about efficient ways to compute Fibonacci. Below is what I tried to explain during the interview, but it probably sounded like some kind of incoherent babbling about matrices to my interviewer.

I was talking about a little trick my computer science teacher taught my class, and which does not seem to be widely known.

If you feel like you already know everything about the Fibonacci sequence’s implementations, jump to the last implementation. If you don’t feel comfortable with Fibonacci, tag along as we make our way through different implementations.

What’s Fibonacci anyway?

So the Fibonacci sequence F_n is defined as follows:

F_0 = 0
F_1 = 1
F_n = F_(n-1) + F_(n-2) for n ≥ 2

If you’re not familiar with such notation, it basically says that the sequence starts 0, 1 and that each next value is defined as the sum of the two preceding numbers.

The first numbers are therefore:

0, 1, 1, 2, 3, 5, 8, 13, ...

Recursive Implementation

The simplest way to implement the Fibonacci sequence is to just transcribe its definition.

def fibo(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fibo(n - 1) + fibo(n - 2)

Plain and simple, right? Yes, but as is often the case with recursive algorithms, the complexity of this algorithm is huge. Fun fact: the number of calls C_n required to compute fibo(n) actually behaves like the Fibonacci sequence itself. It is very easy to show that

C_0 = C_1 = 1
C_n = C_(n-1) + C_(n-2) + 1 for n ≥ 2

so in particular C_n ≥ F_n.
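To make the claim concrete, here is a small instrumented variant (purely illustrative, not part of the algorithm) that returns the value together with the number of calls the naive recursion makes:

```python
def fibo_calls(n):
    # Returns (F_n, number of calls made by the naive recursion).
    if n == 0:
        return 0, 1
    if n == 1:
        return 1, 1
    a, ca = fibo_calls(n - 1)
    b, cb = fibo_calls(n - 2)
    # One call for this frame, plus everything below it.
    return a + b, ca + cb + 1
```

For instance, fibo_calls(10) reports 177 calls just to get F_10 = 55, and the call count grows at the same exponential rate as the sequence itself.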

Bad news: F_n is exponential. We will see an exact analytical formula for the n-th term of the Fibonacci sequence in “Analytical Formula” below. For the moment, we can convince ourselves that Fibonacci grows exponentially by noticing that

F_n = F_(n-1) + F_(n-2) ≥ 2 F_(n-2)

Hence,

F_n ≥ 2^(n/2 - 1) for n ≥ 2

Let’s see how to work out a better algorithm.

Memoization

Most of the time, the no-brainer in optimization is to cache results, and Fibonacci is an obvious candidate for memoization. Mentally visualize the recursive calls of Fibonacci, and you’ll notice that the function is called many times with the same arguments. If you find recursion confusing, just notice that computing fibo(n) requires an exponential number of calls with arguments that take values in [0, n]. There must be some calls with the same arguments, right?

Memoization is a technique which consists in keeping the results of your function calls in a cache. Subsequent calls with the same arguments return the cached result. In Python, as long as the arguments are good candidates to be used as the keys of a hash table, we can do that transparently with a decorator.

def memoize(f):
    cache = {}
    def aux(*args, **kargs):
        # Dictionaries are not hashable:
        # we transform kargs into a tuple
        # of (k, v) items.
        k = (args, tuple(sorted(kargs.items())))
        if k not in cache:
            cache[k] = f(*args, **kargs)
        return cache[k]
    return aux

@memoize
def fibo(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fibo(n - 1) + fibo(n - 2)

Computing fibo(n) now triggers only a linear number of calls to fibo, since each value in [0, n] is computed once and then served from the cache. We achieved linear complexity here.
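As an aside, modern Python ships this exact pattern in the standard library as functools.lru_cache, so the hand-rolled decorator is not strictly needed:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache, same role as our memoize
def fibo(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fibo(n - 1) + fibo(n - 2)
```

With maxsize=None the cache never evicts anything, which is what we want here: every intermediate F_k is reused.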

Dynamic Programming

What is nice with memoization is that results are cached on the fly, without requiring any change to the function itself. What is not so nice, however, is the memory required: in order to compute fibo(n), we have been storing (n+1) (key, value) pairs.

What we do naturally when computing the first numbers of the sequence by hand is actually much more efficient. We iteratively look at the two preceding numbers, sum them up, and write down the new number. To compute the next number, we only ever need the last two.

def fibo_dynamic(n):
    cur = 0   # F_i
    next = 1  # F_(i+1)
    for i in range(n):
        (cur, next) = (next, cur + next)
    return cur

We now have a solution which is linear in computational time and constant in memory.
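The same two-variable idea also gives a generator that streams the sequence in constant memory, which is handy when you want many consecutive terms (a small sketch, not in the original post):

```python
from itertools import islice

def fibo_gen():
    # Yields F_0, F_1, F_2, ... keeping only two numbers in memory.
    cur, nxt = 0, 1
    while True:
        yield cur
        cur, nxt = nxt, cur + nxt

# list(islice(fibo_gen(), 8)) gives [0, 1, 1, 2, 3, 5, 8, 13],
# the first terms listed earlier.
```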

Analytical Formula and the golden ratio

Let’s now try to find a closed-form expression for Fibonacci. To do so, we’ll recycle an idea from fibo_dynamic. Let’s introduce X_n, a sequence of vectors that contains two consecutive Fibonacci numbers F_n and F_(n+1):

X_n = [ F_n, F_(n+1) ]^t

Now, we can re-express the Python line

(cur, next) = (next, cur+next)

as a simple matrix multiplication:

X_(n+1) = | 0  1 | X_n
          | 1  1 |

Multiplying by the matrix

M = | 0  1 |
    | 1  1 |

therefore makes us advance one step ahead. That’s one iteration of our loop in fibo_dynamic. Going n steps ahead from the initial vector X_0 = [ 0, 1 ]^t is therefore equivalent to multiplying X_0 by M n times:

X_n = M^n X_0

That’s matrix exponentiation.

If you are allergic to such computations, skip to the next section.

One way to obtain the analytical formula of the Fibonacci sequence is to diagonalize this matrix.

Wolfram Alpha gives us an invertible matrix S and a diagonal matrix J such that M = S J S^(-1), with

J = | (1+√5)/2     0      |
    |     0    (1−√5)/2   |

We can then use this diagonalized form to compute M^n:

M^n = (S J S^(-1))^n = S J^n S^(-1)

Now, the exponentiation of the diagonal matrix J is simple, as it consists only in exponentiating its diagonal terms (called eigenvalues).

Eventually, you’ll come up with the following formula, where φ = (1+√5)/2 is also called the golden ratio:

F_n = (φ^n − ψ^n) / √5, with ψ = (1−√5)/2 = 1 − φ
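Numerically, since |ψ| < 1 the ψ^n term quickly becomes negligible, so rounding φ^n / √5 to the nearest integer recovers F_n — at least until floating-point precision runs out (a sketch of this well-known formula, not the implementation we’ll end up with):

```python
import math

SQRT5 = math.sqrt(5)
PHI = (1 + SQRT5) / 2  # the golden ratio

def fibo_binet(n):
    # |psi**n / sqrt(5)| < 1/2 for all n >= 0, so rounding is enough,
    # as long as the float computation stays precise enough.
    return round(PHI ** n / SQRT5)
```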

Now, what is the complexity of computing this formula? Computing the power is typically done using so-called fast exponentiation.

The idea is to use the fact that x^(2p) = (x^p)^2. In Python, the algorithm goes as follows:

def power(x, n):
    if n == 0:
        return 1
    if n % 2 == 1:
        return x * power(x, n - 1)
    else:
        root = power(x, n // 2)
        return root * root

and requires on the order of log_2(n) floating-point multiplications.
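To see the logarithmic bound concretely, here is an instrumented variant (illustrative only) that also returns the number of multiplications performed:

```python
def power_counting(x, n):
    # Returns (x**n, number of multiplications performed).
    if n == 0:
        return 1, 0
    if n % 2 == 1:
        r, c = power_counting(x, n - 1)
        return x * r, c + 1
    root, c = power_counting(x, n // 2)
    return root * root, c + 1

# For n = 10 (binary 1010) this performs 5 multiplications:
# one per halving step plus one per set bit.
```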

Smarter, better, stronger

We found a solution with logarithmic complexity, but it is based on floating-point multiplication. Our algorithm gives back floating-point results that we have to round or truncate. Floating-point numbers have limitations, both in precision and in range.

Also, I didn’t enjoy going through the diagonalization of the matrix at all. If I were to write an algorithm to compute the n-th term of such a sequence, I wouldn’t want to go through the trouble of writing an algorithm to diagonalize the matrix.

Finally, and this is a bit crazy: why do I have to use floats, when everything in my problem is screaming integer?

Let’s get back to the matrix exponentiation:

X_n = M^n X_0

The trick here is to apply the fast exponentiation algorithm directly on the matrix, without diagonalizing it. To do so, we need to define matrices and matrix multiplication, but this is way less cumbersome than what we did before, isn’t it?

Here is now the Fibonacci implementation based on integer matrix fast exponentiation.

def matmult(A, B):
    # Multiplies 2x2 matrices.
    def val(i, j):
        return A[i][0] * B[0][j] + A[i][1] * B[1][j]
    return (
        (val(0, 0), val(0, 1)),
        (val(1, 0), val(1, 1)),
    )

def matrix_power(A, n):
    if n == 0:
        # The identity matrix.
        return ((1, 0),
                (0, 1))
    if n % 2 == 1:
        return matmult(A, matrix_power(A, n - 1))
    else:
        root = matrix_power(A, n // 2)
        return matmult(root, root)

def fibo_matrix(n):
    M = ((0, 1),
         (1, 1))
    return matrix_power(M, n)[0][1]

We are doing 8 times more multiplications in this version (each 2×2 matrix product costs 8 scalar multiplications), but these are integer multiplications.
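As an aside not covered above, expanding the entries of M^(2k) = (M^k)² by hand yields the well-known “fast doubling” identities, which bring the cost down to three big-integer multiplications per halving step (a sketch, same O(log n) idea):

```python
def fibo_fast_doubling(n):
    # Returns (F_n, F_(n+1)), using the identities
    #   F_(2k)   = F_k * (2*F_(k+1) - F_k)
    #   F_(2k+1) = F_k**2 + F_(k+1)**2
    # obtained by squaring the matrix M.
    if n == 0:
        return 0, 1
    a, b = fibo_fast_doubling(n // 2)  # a = F_k, b = F_(k+1)
    c = a * (2 * b - a)  # F_(2k)
    d = a * a + b * b    # F_(2k+1)
    if n % 2 == 0:
        return c, d
    return d, c + d
```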

Moreover, Python handles big integers like a champ. You will now be able to compute fibonacci(100000) efficiently.

Where I lied

Actually, everything here is a lie. The truth is that big-integer multiplication shouldn’t be considered constant-time, so none of the complexities described here translate directly into actual computational time.