When we have a triangular system, we can solve it easily in \(O(n^2)\). What do we do if our system is not triangular? Well, obviously, call sv. By going the easy route, we'd leave a lot of potential on the table. Here's why: remember the Gaussian elimination from the start of the article? We first do some procedure on our system to make it triangular, and then another to find a solution. What if we can keep the result of that first procedure, and reuse it many times later? That's exactly what LU factorization is for.

LU factorization is, practically, the triangularization of a general matrix into a pair of triangular matrices that can later be used in its place for solving linear systems and other useful things. It is an algorithmic description of Gaussian elimination.

Look at this example (\(A=LU\)):

\begin{equation} \begin{bmatrix} 3 & 5\\ 6 & 7\\ \end{bmatrix}= \begin{bmatrix} 1 & 0\\ 2 & 1\\ \end{bmatrix} \begin{bmatrix} 3 & 5\\ 0 & -3\\ \end{bmatrix} \end{equation}

A general matrix has been described as a product of two triangular matrices: L, the lower triangle, and U, the upper triangle.
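If you'd like to check the arithmetic, here is a quick sketch in plain Python (just the bare math, not Neanderthal) that multiplies L and U back together:

```python
# The L and U factors from the 2x2 example above.
L = [[1.0, 0.0],
     [2.0, 1.0]]
U = [[3.0, 5.0],
     [0.0, -3.0]]

# Plain matrix multiplication: A[i][j] = sum_k L[i][k] * U[k][j].
A = [[sum(L[i][k] * U[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]

print(A)  # [[3.0, 5.0], [6.0, 7.0]]
```

The product is exactly the original matrix, as promised.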

How do we now proceed to use them? Our simple triangular system has only one triangle, what do we do with two?

The original system is \(Ax=b\). We proceed as follows: we decide that \(y=Ux\) and then \(Ax=LUx=Ly=b\). First we solve the system \(Ly=b\), then \(Ux=y\), and we have our solution. Once we have triangularized the original general system, we can solve it by solving two triangular systems!
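The two triangular solves are simple enough to sketch in a few lines of plain Python (a toy illustration of the idea, not what Neanderthal does internally; LAPACK handles this with far more care). The right-hand side here is made up just for the illustration:

```python
def forward_sub(L, b):
    # Solve L y = b for lower-triangular L, working top row down.
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][j] * y[j] for j in range(i))) / L[i][i]
    return y

def back_sub(U, y):
    # Solve U x = y for upper-triangular U, working bottom row up.
    n = len(y)
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

# A x = b, with A = L U from the 2x2 example in the article,
# and b chosen arbitrarily for this demonstration.
L = [[1.0, 0.0], [2.0, 1.0]]
U = [[3.0, 5.0], [0.0, -3.0]]
b = [8.0, 13.0]

y = forward_sub(L, b)  # first solve L y = b
x = back_sub(U, y)     # then solve U x = y

print(x)  # [1.0, 1.0]
```

Each substitution visits roughly half of an \(n \times n\) matrix, so the pair of solves costs \(O(n^2)\), while the factorization itself was the \(O(n^3)\) part.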

Lesson number 3? Reuse the results as much as you can, since they are so expensive to compute.

LU factorization seems quite straightforward once it's been laid out, but I've seen so many people trying to solve this problem as described in math textbooks: they see the formula \(Ax=b\), (correctly) conclude that \(x=A^{-1}b\), and (correctly but unwisely) start the quest of computing the inverse matrix. The inverse matrix is something that is really hard to compute, not only because it requires lots of FLOPs, but even more because there is a high probability that the result will be rather imprecise.

Computing \(A^{-1}\) is seldom necessary! The right approach is to solve the related system of linear equations.

If \(A\) is non-singular (its determinant is not 0), the pivoted LU factorization exists, and, with the usual requirement that L has a unit diagonal, it is unique.

On with our simple example, but in Clojure. We do an LU factorization by calling the function trf!, or its pure cousin trf. TRF stands for TRiangular Factorization.

(let [a (dge 3 3 [1 0 1 1 -1 1 3 1 4])
      lu (trf a)]
  [(:lu lu)
   (view-tr (:lu lu) {:uplo :lower :diag :unit})
   (view-tr (:lu lu) {:uplo :upper})])

'(#RealGEMatrix(double mxn:3x3 layout:column offset:0)
  ▥       ↓       ↓       ↓       ┓
  →       1.00    1.00    3.00
  →       0.00   -1.00    1.00
  →       1.00   -0.00    1.00
  ┗                               ┛
#RealUploMatrix(double type:tr mxn:3x3 layout:column offset:0)
  ▥       ↓       ↓       ↓       ┓
  →     ·1·       *       *
  →       0.00  ·1·       *
  →       1.00   -0.00  ·1·
  ┗                               ┛
#RealUploMatrix(double type:tr mxn:3x3 layout:column offset:0)
  ▥       ↓       ↓       ↓       ┓
  →       1.00    1.00    3.00
  →       *      -1.00    1.00
  →       *       *       1.00
  ┗                               ┛
)

We called trf, and got a record that contains (among other things that we'll look at later) a general matrix accessible through the :lu key. The :lu general matrix contains both L and U. The reason for that is that they fit together perfectly and can be tightly packed in memory and used efficiently in further computations. Since we are never particularly interested in L and U themselves, but in the results that we can get by supplying them to the other procedures, separating them at this point would be both inefficient and unnecessary.

Maybe Lesson number 4 at this point? What looks more elegant at first glance (returning neatly separated L and U so we can look at them and worship them in all their glory) is sometimes exactly the naive and wrong choice.

But anyway, if we really need to (or just like to because of their artistic appeal) see L and U, Neanderthal offers an easy way to do this: just take a view of the :upper or :lower triangle of that general matrix by calling the view-tr function. Just for your information, you can also take different view-XX views of most matrix structures in Neanderthal.

I know (by learning it from a numerical math textbook) that L is a lower triangular unit matrix, which has 1s on the diagonal, and U is an upper triangular matrix. If L weren't unit-diagonal, L and U couldn't have been so neatly packed into one general matrix. A nice and useful coincidence :)
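To make the packing concrete, here is a plain-Python sketch (a hypothetical helper, not a Neanderthal API) that splits a combined LU matrix, like the 3x3 one printed above, back into L and U:

```python
# The packed LU storage from the 3x3 example: strictly-lower entries
# belong to L (whose unit diagonal is implied, so it needn't be stored),
# while the diagonal and everything above it belong to U.
LU = [[1.0, 1.0, 3.0],
      [0.0, -1.0, 1.0],
      [1.0, 0.0, 1.0]]
n = len(LU)

# Unpack L: take entries below the diagonal, put 1s on the diagonal.
L = [[LU[i][j] if j < i else (1.0 if i == j else 0.0) for j in range(n)]
     for i in range(n)]

# Unpack U: take the diagonal and everything above it.
U = [[LU[i][j] if j >= i else 0.0 for j in range(n)] for i in range(n)]

# Multiplying them back recovers the original matrix A.
A = [[sum(L[i][k] * U[k][j] for k in range(n)) for j in range(n)]
     for i in range(n)]
print(A)  # [[1.0, 1.0, 3.0], [0.0, -1.0, 1.0], [1.0, 1.0, 4.0]]
```

This is exactly why the unit diagonal matters: both triangles fit into one \(n \times n\) block with nothing lost.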

The key difference from sv, again, is that we can reuse the LU not only to solve multiple systems, but also to compute the condition number (con), or even the notorious inverse matrix (tri; again, we rarely need the inverse).

Let's do that:

(let [a (dge 3 3 [1 0 1 1 -1 1 3 1 4])
      b (dge 3 5 [11 -2 9 6 3 4 7 8 5 -3 -1 2 0 9 -1])
      lu (trf a)]
  [(con lu (nrm1 a))
   (det lu)
   (trs! lu b)
   (tri! lu)])

'(0.017857142857142856
  1.0
#RealGEMatrix(double mxn:3x5 layout:column offset:0)
  ▥       ↓       ↓       ↓       ↓       ↓       ┓
  →      17.00   17.00   23.00  -24.00   13.00
  →      -0.00   -5.00  -10.00    6.00  -10.00
  →      -2.00   -2.00   -2.00    5.00   -1.00
  ┗                                               ┛
#RealGEMatrix(double mxn:3x3 layout:column offset:0)
  ▥       ↓       ↓       ↓       ┓
  →       5.00    1.00   -4.00
  →      -1.00   -1.00    1.00
  →      -1.00    0.00    1.00
  ┗                               ┛
)

That LU was reused to compute the reciprocal condition number, the determinant, the solutions of five systems of linear equations, and, finally, the inverse matrix. I find that quite neat.
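As a final illustration of how much the factorization gives us for free: the determinant of \(A\) is just the product of U's diagonal, since L's unit diagonal contributes 1 (each row swap performed by pivoting additionally flips the sign). In plain Python, with the 2x2 example from earlier, which was factored without any swaps:

```python
# A = [[3, 5], [6, 7]] was factored as L U, with U = [[3, 5], [0, -3]].
# det(A) = det(L) * det(U) = 1 * (3 * -3) = -9, which matches the
# direct formula 3*7 - 5*6 = -9.
U = [[3.0, 5.0],
     [0.0, -3.0]]
det = U[0][0] * U[1][1]
print(det)  # -9.0
```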