The previous shortcut is cute, but nothing to write home about. Fortunately, there is a much better optimization available for a special subset of symmetric matrices - those that are positive definite.

A matrix \(A\in{R^{n\times{n}}}\) is positive definite if \(x^TAx>0\) for all nonzero \(x\in{R^n}\), positive semidefinite if \(x^TAx\geq{0}\) for all \(x\in{R^n}\), and indefinite if there are \(x_1,x_2\in{R^n}\) such that \((x_1^TAx_1)(x_2^TAx_2)<0\). Huh? So, is my system positive definite? How would I know that?
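For a bit of intuition, here is a tiny worked check (my own toy example, not one from Neanderthal's docs). The symmetric matrix with 2 on the diagonal and 1 off it is positive definite, because the quadratic form can be rewritten as a sum of squares:

\[
x^T\begin{bmatrix}2 & 1\\ 1 & 2\end{bmatrix}x
= 2x_1^2 + 2x_1x_2 + 2x_2^2
= x_1^2 + x_2^2 + (x_1+x_2)^2 > 0
\quad\text{for all nonzero } x.
\]

Of course, such hand manipulation only works for tiny matrices, which is exactly why the automatic detection discussed below matters.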

Before I show you that, let me tell you why symmetric positive definite matrices are handy: for them, there is a special factorization available - the Cholesky factorization - which preserves symmetry and definiteness. Now, it turns out that discovering whether a matrix is positive definite is not easy. That's why I will not try to explain how to do it here, nor would it help you in practical work. The important (and fortunate) thing is that you don't have to care; Neanderthal will determine that automatically, and return the Cholesky factorization if possible. If not, it will return the \(LDL^T\) (or \(UDU^T\)) factorization instead!

Let's see how to do this in Clojure:

(let [a (dsy 3 [1
                1 2
                1 2 3]
             {:layout :row :uplo :lower})
      fact (trf a)
      b (dge 3 2 [-1    4
                   0.5  0
                   2   -1]
             {:layout :row})]
  [a fact (trs fact b)])

'(#RealUploMatrix(double type:sy mxn:3x3 layout:row offset:0)
   ▤       ↓       ↓       ↓       ┓
   →       1.00    *       *
   →       1.00    2.00    *
   →       1.00    2.00    3.00
   ┗                               ┛
  #uncomplicate.neanderthal.internal.common.PivotlessLUFactorization(
   :lu #RealUploMatrix(double type:sy mxn:3x3 layout:row offset:0)
   ▤       ↓       ↓       ↓       ┓
   →       1.00    *       *
   →       1.00    1.00    *
   →       1.00    1.00    1.00
   ┗                               ┛
   :master true
   :fresh #atom(true 0x8cf263e))
  #RealGEMatrix(double mxn:3x2 layout:row offset:0)
   ▤       ↓       ↓       ┓
   →      -2.50    8.00
   →       0.00   -3.00
   →       1.50   -1.00
   ┗                       ┛
 )

Notice how the code is virtually the same as in the previous example. The only thing that differs is the data. In this example, Neanderthal could do the cheaper Cholesky factorization instead of the more expensive LU with symmetric pivoting. It then adapted the solver to use the available factorization when solving the linear equation, and all of this happened automatically.

In fact, the Cholesky factorization is a variant of LU, just as \(LDL^T\) is. The difference is that the L and U of Cholesky are \(G\) and \(G^T\). Notice: L is \(G\), U is the transpose of \(G\), and there is no need for a D in the middle, so \(A=GG^T\). Also, no pivoting is necessary, which makes Cholesky quite nice: compact and efficient.
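You can check this against the example above by hand. The lower triangle of the :lu field of the factorization is exactly \(G\), and multiplying it by its transpose recovers the original symmetric matrix:

\[
\begin{bmatrix}1 & 0 & 0\\ 1 & 1 & 0\\ 1 & 1 & 1\end{bmatrix}
\begin{bmatrix}1 & 1 & 1\\ 0 & 1 & 1\\ 0 & 0 & 1\end{bmatrix}
=
\begin{bmatrix}1 & 1 & 1\\ 1 & 2 & 2\\ 1 & 2 & 3\end{bmatrix}
= GG^T = A.
\]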

There are more subtle controls related to this in Neanderthal; look at the treasure trove of API documentation and tests. Writing these blog posts takes time and energy, and now I don't feel like taking too much time delving more into details related to this. :)

Of course, sv is also available, as well as destructive variants of the functions I've shown, and with them, too, you can rely on Neanderthal to select the appropriate algorithm automatically. I didn't show them here because they are used in virtually the same way as in the examples from the previous articles. Let it be a nice easy exercise to try them on your own.