Let \(A\) be an \(n \times n\) matrix. If there are a scalar \(\lambda\) and a non-zero vector \(\mathbf{x}\) such that \(A\mathbf{x} = \lambda\mathbf{x}\), we call that scalar an eigenvalue, and that vector an eigenvector, of \(A\).

There can be more than one eigenvalue for a given matrix, and there is an infinite number of eigenvectors corresponding to each eigenvalue. In the simplest case, all eigenvectors that correspond to one eigenvalue lie on the same line, but have different magnitudes.
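That scaling behaviour follows directly from linearity: if \(\mathbf{x}\) is an eigenvector, so is any non-zero multiple of it:

\[
A(c\mathbf{x}) = c(A\mathbf{x}) = c(\lambda\mathbf{x}) = \lambda(c\mathbf{x}), \quad c \neq 0.
\]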

Seems simple, and it is, but so what? It looks like a trivial thing; how come these eigenvectors and eigenvalues are so ubiquitous in linear algebra? It turns out that some useful special matrices can be computed, and some useful theorems can be built, upon this simple definition. IANM (I Am Not a Mathematician), so I'll let you use your math textbook to discover that further.

Let's do some Clojure: given a matrix, how do I find eigenvalues and eigenvectors? The function is called ev! and it can be found in the uncomplicate.neanderthal.linalg namespace:

(require '[uncomplicate.neanderthal
           [core :refer [col entry nrm2 mv scal axpy copy mm dia]]
           [native :refer [dge]]
           [linalg :refer [ev! tri! trf]]])

I'm following example 1 from page 210; there is a matrix a, and we are looking for 2 eigenvalues with corresponding eigenvectors. Calling def inside a let block is not a coding style to be proud of, but here I do it because I need an easy way to produce printable outputs in org-mode and org-babel, which are used to generate this nice text from live code.

(let [a (dge 2 2 [-4 3 -6 5]) ;; note: column-oriented!
      eigenvectors (dge 2 2)
      eigenvalues (ev! a nil eigenvectors)]
  (def lambda1 (entry eigenvalues 0 0))
  (def x1 (col eigenvectors 0))
  (def lambda2 (entry eigenvalues 1 0))
  (def x2 (col eigenvectors 1)))

The first eigenvalue is \(\lambda_1 = -1\) (the order is not important):

lambda1

-1.0
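We can double-check this by hand. The matrix above (remember, the data is column-oriented) is \(A = \begin{bmatrix}-4 & -6\\ 3 & 5\end{bmatrix}\), and the eigenvalues are the roots of the characteristic polynomial \(\det(A - \lambda I) = 0\):

\[
(-4-\lambda)(5-\lambda) - (-6)(3) = \lambda^2 - \lambda - 2 = (\lambda+1)(\lambda-2) = 0,
\]

which gives \(\lambda_1 = -1\) and \(\lambda_2 = 2\), matching what ev! returned.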

An infinite number of vectors correspond to this λ value, but they are all linearly dependent: find one base vector, and you can construct any other by scaling it. That base vector spans a subspace, called the eigenspace. The base corresponding to \(\lambda = -1\) is \(r(-0.89, 0.45)\):

x1

#RealBlockVector[double, n:2, offset: 0, stride:1] [ -0.89 0.45 ]
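By hand, this base comes from solving \((A - \lambda I)\mathbf{x} = \mathbf{0}\) with \(\lambda = -1\):

\[
(A + I)\mathbf{x} = \begin{bmatrix}-3 & -6\\ 3 & 6\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix} = \mathbf{0} \implies x_1 = -2x_2,
\]

so the eigenspace is spanned by \((-2, 1)\); dividing by its length \(\sqrt{5}\) gives \((-0.89, 0.45)\).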

Mathematicians warn us to always be skeptical. Let's check that \(\lambda_1\) and \(\mathbf{x_1}\) really are an eigenvalue and an eigenvector:

(let [a (dge 2 2 [-4 3 -6 5])]
  (axpy -1 (mv a x1) (scal lambda1 x1)))

#RealBlockVector[double, n:2, offset: 0, stride:1] [ 0.00 0.00 ]

Yes, they are.

Perhaps you wonder why I haven't simply checked these two vectors for equality with =. Recall from part 1 of this tutorial that comparing floating-point numbers for equality is a tricky business. Even a tiny difference such as 0.00000001, which can appear due to inevitable rounding errors, would break such an equality check. Even those numbers, -0.89 and 0.45, are not very precise: they have many more digits, but Neanderthal rounds them to two decimals for readability. These are the actual values, at the maximum precision a 64-bit double provides:

(doall (seq x1))

-0.8944271909999159 0.4472135954999579
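If you do need an equality-like check, a common approach is to compare the norm of the difference against a small tolerance. Here is a minimal sketch; approx= is a hypothetical helper of mine, not part of Neanderthal:

```clojure
;; Hypothetical helper (not part of Neanderthal): treat two vectors as
;; equal when the norm of their difference is below a tolerance.
(defn approx= [x y eps]
  (< (nrm2 (axpy -1 x y)) eps))

;; usage: compare A*x1 against lambda1*x1
;; (approx= (mv a x1) (scal lambda1 x1) 1e-10)
```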

Another benefit of using a good numerical software (such as Neanderthal) is that the eigenvectors we get are normalized:

(nrm2 x1)

1.0
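That is easy to confirm for the base direction \((-2, 1)\): after dividing by \(\sqrt{5}\), the Euclidean norm is

\[
\left\| \tfrac{1}{\sqrt{5}}(-2, 1) \right\|_2 = \sqrt{\tfrac{4}{5} + \tfrac{1}{5}} = 1.
\]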

We can repeat the same procedure for \(\lambda_2\); it is a good idea to verify it as an exercise in your trusty Clojure REPL.

lambda2

2.0

x2

#RealBlockVector[double, n:2, offset: 2, stride:1] [ 0.71 -0.71 ]
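The residual check for the second pair mirrors the one we did for the first; it should also print a zero vector:

```clojure
(let [a (dge 2 2 [-4 3 -6 5])]
  (axpy -1 (mv a x2) (scal lambda2 x2)))
```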

Eigenvalues are not necessarily distinct. Consider example 2 from page 214:

(let [a (dge 3 3 [5 4 2 4 5 2 2 2 2]) ;; note: column-oriented!
      eigenvectors (dge 3 3)
      eigenvalues (ev! a nil eigenvectors)]
  [(col eigenvalues 0) eigenvectors])

'(#RealBlockVector[double, n:3, offset: 0, stride:1]
  [   1.00   10.00    1.00 ]
  #RealGEMatrix[double, mxn:3x3, layout:column, offset:0]
  ▥       ↓       ↓       ↓       ┓
  →      -0.75    0.67   -0.03
  →       0.60    0.67   -0.42
  →       0.30    0.33    0.91
  ┗                               ┛
)

\(\lambda = 1\) appears two times: it has multiplicity 2, and the dimension of its corresponding eigenspace is also 2. As an exercise, you might check whether any linear combination of these two eigenvectors (column 0 and column 2 of the result matrix) is indeed an eigenvector (it should be!).
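The exercise rests on linearity again: if \(A\mathbf{x_1} = \lambda\mathbf{x_1}\) and \(A\mathbf{x_2} = \lambda\mathbf{x_2}\) for the same \(\lambda\), then for any scalars \(c_1, c_2\):

\[
A(c_1\mathbf{x_1} + c_2\mathbf{x_2}) = c_1 A\mathbf{x_1} + c_2 A\mathbf{x_2} = \lambda(c_1\mathbf{x_1} + c_2\mathbf{x_2}).
\]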

You might also wonder why those eigenvalues are in the first column of the result matrix. Eigenvalues are, in general, complex numbers. That matrix has \(m \times 2\) dimensions: the first column contains the real parts, and the second the imaginary parts. These examples from the textbook are well designed to be easily computed by hand, so I knew that the imaginary parts were zero, and didn't bother to clutter the introductory code. In general, I guess that eigenvalues will have an imaginary component more often than not (IANM).