Not only can we construct matrices that represent transformations; it turns out that every matrix defines a transformation!

According to the textbook definition: let \(A\) be an \(m\times{n}\) matrix, and \(\mathbf{x}\) an element of \(R^n\). Then \(A\) defines a matrix transformation \(T(\mathbf{x})=A\mathbf{x}\) of \(R^n\) (the domain) into \(R^m\) (the codomain). The resulting vector \(A\mathbf{x}\) is the image of \(\mathbf{x}\) under the transformation.

Note that the matrix dimensions \(m\) and \(n\) correspond to the dimensions of the codomain (\(m\), the number of rows) and the domain (\(n\), the number of columns).
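As a quick illustration of this dimension rule, a non-square matrix changes the dimension of its input. Here is a minimal sketch, assuming Neanderthal's `dge`, `dv`, and `mv` are available from `uncomplicate.neanderthal.native` and `uncomplicate.neanderthal.core`, that uses a \(3\times{2}\) matrix to send a vector from \(R^2\) into \(R^3\):

```clojure
(require '[uncomplicate.neanderthal.native :refer [dge dv]]
         '[uncomplicate.neanderthal.core :refer [mv]])

;; A is 3x2: n = 2 columns (the domain, R^2), m = 3 rows (the codomain, R^3).
;; The data is column-major: first column (1 0 0), second column (0 1 0).
(let [a (dge 3 2 [1 0 0 0 1 0])]
  (mv a (dv 2 5)))
;; the image is the 3-dimensional vector (2.0 5.0 0.0)
```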

Matrix transformations have the following geometrical properties:

They map line segments into line segments (or points);

If \(A\) is invertible, they also map parallel lines into parallel lines.

Example 1 from page 248 illustrates this. Let \(T:R^2\rightarrow{R^2}\) be the transformation defined by the matrix \(A=\begin{bmatrix}4&2\\2&3\end{bmatrix}\). Determine the image of the unit square under this transformation.

The code:

(let [a (dge 2 2 [4 2 2 3])
      p (dv 1 0)
      q (dv 1 1)
      r (dv 0 1)
      o (dv 0 0)]
  [(mv a p) (mv a q) (mv a r) (mv a o)])

'(#RealBlockVector(double n:2 offset: 0 stride:1) (4.00 2.00)
  #RealBlockVector(double n:2 offset: 0 stride:1) (6.00 5.00)
  #RealBlockVector(double n:2 offset: 0 stride:1) (2.00 3.00)
  #RealBlockVector(double n:2 offset: 0 stride:1) (0.00 0.00))

And we got a parallelogram, since \(A\) is invertible. Check that as an exercise; a matrix is invertible if its determinant is non-zero (you can use the det function).
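Here is a sketch of that exercise, assuming Neanderthal's `trf` and `det` from `uncomplicate.neanderthal.linalg` (the determinant is read off a triangular factorization of the matrix):

```clojure
(require '[uncomplicate.neanderthal.native :refer [dge]]
         '[uncomplicate.neanderthal.linalg :refer [trf det]])

;; trf computes an LU factorization; det then reads the determinant off it.
(let [a (dge 2 2 [4 2 2 3])]
  (det (trf a)))
;; the determinant is 4*3 - 2*2 = 8, which is non-zero, so A is invertible
```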

Something bugs the programmer in me, though: what if we wanted to transform many points (vectors)? Do we use this pedestrian approach, or do we put those points in a sequence and use good old Clojure higher-order functions such as map, reduce, filter, etc.? Let's try the latter.

(let [a (dge 2 2 [4 2 2 3])
      points [(dv 1 0) (dv 1 1) (dv 0 1) (dv 0 0)]]
  (map (partial mv a) points))

'(#RealBlockVector(double n:2 offset: 0 stride:1) (4.00 2.00)
  #RealBlockVector(double n:2 offset: 0 stride:1) (6.00 5.00)
  #RealBlockVector(double n:2 offset: 0 stride:1) (2.00 3.00)
  #RealBlockVector(double n:2 offset: 0 stride:1) (0.00 0.00))

I could be pleased with this code. But I am not. I am not, because we only picked up the low-hanging fruit, and left a lot of simplicity and performance on the table. Consider this:

(let [a (dge 2 2 [4 2 2 3])
      square (dge 2 4 [1 0 1 1 0 1 0 0])]
  (mm a square))

#RealGEMatrix[double, mxn:2x4, layout:column, offset:0]
   ▥       ↓       ↓       ↓       ↓       ┓
   →       4.00    6.00    2.00    0.00
   →       2.00    5.00    3.00    0.00
   ┗                                       ┛

By multiplying matrix \(A\) by the matrix \((\vec{p},\vec{q},\vec{r},\vec{o})\), whose columns are the four corner points, we performed the same operation as transforming each vector separately.

I like this approach much more.

It's simpler. Instead of maintaining disparate points of a unit square, we can treat them as one entity. If we still want access to individual points, we just call the col function.

It's faster. Instead of calling mv four times, we call mm once. In addition, our data sits next to each other in memory, in a cache-friendly structure. This can give huge performance gains when we work with large matrices.
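For instance, pulling an individual image point back out of the transformed matrix might look like this sketch, assuming `col` is available from `uncomplicate.neanderthal.core`:

```clojure
(require '[uncomplicate.neanderthal.native :refer [dge]]
         '[uncomplicate.neanderthal.core :refer [mm col]])

(let [a (dge 2 2 [4 2 2 3])
      square (dge 2 4 [1 0 1 1 0 1 0 0])
      image (mm a square)]
  ;; the second column of the image matrix is the image of q = (1, 1)
  (col image 1))
;; a 2-dimensional vector: (6.0 5.0)
```

Note that `col` returns a view into the matrix's memory rather than a copy, which is exactly why this representation stays cheap.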

This might be obvious in graphics programming, but I've seen so much data-crunching code that uses matrices and vectors as dumb data structures that I think this point is worth reiterating again and again.