Jacobi Method (via Wikipedia):

An algorithm for determining the solutions of a diagonally dominant system of linear equations. Each diagonal element is solved for, and an approximate value is plugged in. The process is then iterated until it converges. This algorithm is a stripped-down version of the Jacobi transformation method of matrix diagonalization.

In other words, Jacobi’s method is an iterative method for solving systems of linear equations, very similar to the Gauss–Seidel method. Beginning with the standard Ax = b, where A is a known matrix and b is a known vector, we can use Jacobi’s method to approximate x. The one caveat is that the matrix A must be diagonally dominant to guarantee that the method converges, although it occasionally converges even when this condition is not met.

Using the Jacobi Method

For Jacobi’s method, A is decomposed into a diagonal matrix D and a remainder matrix R, so that A = D + R:

Where, given A:

[1, 1, 1]

[1, 1, 1]

[1, 1, 1]

Diagonal Matrix (D):

[1, 0, 0]

[0, 1, 0]

[0, 0, 1]

Remainder Matrix (R):

[0, 1, 1]

[1, 0, 1]

[1, 1, 0]
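The split above can be sketched in a few lines of NumPy, using the illustrative all-ones matrix:

```python
import numpy as np

# Sketch: split the all-ones matrix from above into its
# diagonal part D and remainder part R, so that A = D + R.
A = np.ones((3, 3))
D = np.diagflat(np.diag(A))  # keep only the diagonal entries
R = A - D                    # zero the diagonal, keep the rest
print(D)  # identity matrix
print(R)  # ones everywhere except the diagonal
```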

Then we iterate with the following update (recomputing the x vector) until it converges (within some margin of error):

x^(k+1) = D^(-1) (b - R x^(k))

or, written out elementwise:

x_i^(k+1) = (b_i - sum over j != i of a_ij * x_j^(k)) / a_ii

That is all there is to this method! Simply repeat this update tens to hundreds of times and x converges to the solution.

Using python this method is relatively easy to program:

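The embedded Gist does not survive here, so below is a minimal sketch of the program. The matrix A and vector b are not stated in the text; the values used here were inferred from the printed iterates below (the first few Jacobi updates starting from x = [1, 1, 1] reproduce them exactly), so treat them as a reconstruction.

```python
import numpy as np

# A and b are inferred from the printed iterates below; the original
# Gist is not available, so this is a reconstruction of the program.
A = np.array([[4., -2., 1.],
              [1., -3., 2.],
              [-1., 2., 6.]])
b = np.array([1., 2., 3.])

n = 25               # number of iterations
x = np.ones(len(b))  # initial guess
print('init', list(x))

D = np.diag(A)           # diagonal entries of A
R = A - np.diagflat(D)   # remainder matrix (A with its diagonal zeroed)

for i in range(n):
    x = (b - R @ x) / D  # Jacobi update: x_new = D^-1 (b - R x)
    print('%03d' % i, x)

print('Sol', x)
print('Act', np.linalg.solve(A, b))  # direct solve, for comparison
```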

In the Python program above, ‘n’ is the number of iterations, ‘b’ is the right-hand-side vector in Ax = b, ‘A’ is the matrix, and ‘x’ is the vector we are solving for (starting from an initial guess).

Output (Each iteration printed):

init [1.0, 1.0, 1.0]

000 [ 0.5 0.33333333 0.33333333]

001 [ 0.33333333 -0.27777778 0.47222222]

002 [-0.00694444 -0.24074074 0.64814815]

003 [-0.03240741 -0.23688272 0.57908951]

004 [-0.01321373 -0.29140947 0.57355967]

005 [-0.03909465 -0.28869813 0.5949342 ]

006 [-0.04308262 -0.28307542 0.58971694]

007 [-0.03896694 -0.28788291 0.58717804]

008 [-0.04073597 -0.28820362 0.58946648]

009 [-0.04146843 -0.28726767 0.58927855]

010 [-0.04095347 -0.28763711 0.58884448]

011 [-0.04102968 -0.28775483 0.58905346]

012 [-0.04114078 -0.28764092 0.58908 ]

013 [-0.04109046 -0.28766026 0.58902351]

014 [-0.04108601 -0.28768115 0.58903834]

015 [-0.04110016 -0.28766977 0.58904605]

016 [-0.0410964 -0.28766935 0.5890399 ]

017 [-0.04109465 -0.2876722 0.58904039]

018 [-0.0410962 -0.28767129 0.58904162]

019 [-0.04109605 -0.28767098 0.58904107]

020 [-0.04109576 -0.28767131 0.58904099]

021 [-0.0410959 -0.28767126 0.58904114]

022 [-0.04109592 -0.2876712 0.5890411 ]

023 [-0.04109588 -0.28767124 0.58904108]

024 [-0.04109589 -0.28767124 0.5890411 ]

Sol [-0.04109589 -0.28767124 0.5890411 ]

Act [-0.04109589 -0.28767123 0.5890411 ]

As you can see, after ~25 iterations the iterative solution had stabilized and agrees with the actual solution to about seven decimal places (though not exactly). If you are interested in seeing the full code, feel free to fork/download it from my GitHub.
