Recall from the Vector Spaces post that a vector space has two operations: addition and scalar multiplication. Consider these matrix transformations: \(T(\mathbf{u} + \mathbf{v}) = A(\mathbf{u} + \mathbf{v}) = A\mathbf{u}+A\mathbf{v} = T(\mathbf{u})+T(\mathbf{v})\), and \(T(c\mathbf{u}) = A(c\mathbf{u}) = cA\mathbf{u} = cT(\mathbf{u})\). From this, it's easy to understand the textbook definition:

Let \(\mathbf{u}\text{ and }\mathbf{v}\) be vectors in \(R^n\) and \(c\) a scalar. A transformation \(T:R^n\rightarrow{R^m}\) is a linear transformation if \(T(\mathbf{u} + \mathbf{v}) = T(\mathbf{u})+T(\mathbf{v})\) and \(T(c\mathbf{u}) = cT(\mathbf{u})\).

These properties tell us that all linear transformations preserve addition and scalar multiplication. Every matrix transformation is linear, but translations and affine transformations are not (I'll show you later that there is a workaround).
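We can check both properties numerically with Neanderthal before moving on. The matrix and vectors below are arbitrary values picked for illustration (this check is my addition, not a textbook example):

```clojure
(require '[uncomplicate.neanderthal
           [native :refer [dv dge]]
           [core :refer [mv axpy scal]]])

(let [a (dge 2 2 [1 2 3 4])  ;; an arbitrary 2x2 matrix
      u (dv 1 2)
      v (dv 3 4)
      c 2.5]
  ;; T(u + v) = T(u) + T(v), with T(x) = Ax
  [(= (mv a (axpy u v)) (axpy (mv a u) (mv a v)))
   ;; T(cu) = cT(u)
   (= (mv a (scal c u)) (scal c (mv a u)))])
```

With these small integer-valued entries, both comparisons hold exactly even in floating point.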

We can refer to a transformation whose domain and codomain are the same (such as \(R^n\rightarrow{R^n}\)) as an operator.

In the previous post, I (and the textbook author) used ad hoc ways of arriving at matrices that describe transformations such as dilations, rotations, and reflections. Now, we will learn a formal method.

Let's start with the example, but first require the namespaces that we're going to need:

```clojure
(require '[uncomplicate.neanderthal
           [native :refer [dv dge]]
           [core :refer [mv mm col axpy copy]]
           [linalg :refer [ev! sv! trf tri]]
           [math :refer [cos sin pi]]])
```

Example 3 on page 259 of the textbook finds a matrix that describes the following transformation: \(T\left(\begin{bmatrix}x\\y\\\end{bmatrix}\right)=\begin{bmatrix}2x+y\\3y\end{bmatrix}\).

Here's what we'll do in Clojure: first we find the effect of \(T\) on the standard basis of the domain \(R^n\), and then form a matrix whose columns are those images. Easy!

```clojure
(let [t! (fn [v]
           (v 0 (+ (* 2 (v 0)) (v 1)))
           (v 1 (* 3 (v 1))))
      a (dge 2 2 [1 0 0 1])]
  (t! (col a 0))
  (t! (col a 1))
  a)
```

```
#RealGEMatrix[double, mxn:2x2, layout:column, offset:0]
   ▥       ↓       ↓       ┓
   →       2.00    1.00
   →       0.00    3.00
   ┗                       ┛
```

Excuse the imperative code in the function `t!`, but we've found the matrix that represents the linear transformation. Keep in mind that Neanderthal vectors and matrices are functions that can retrieve or update the value at a specific index, but that these calls are usually boxed. In tight loops, you'd want to use `entry` and `entry!` from the `uncomplicate.neanderthal.real` namespace, or some other operation optimized for performance.
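As a sanity check (my own addition, not a step from the textbook), we can apply the matrix we just found to an arbitrary vector and compare against the formula. With \(x=4, y=5\), the formula gives \([2\cdot4+5, 3\cdot5] = [13, 15]\):

```clojure
(require '[uncomplicate.neanderthal
           [native :refer [dv dge]]
           [core :refer [mv]]
           [real :refer [entry]]])

;; the standard matrix we found, entered column by column
(let [a (dge 2 2 [2 0 1 3])
      y (mv a (dv 4 5))]
  [(entry y 0) (entry y 1)])  ;; => [13.0 15.0]
```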

Let \(T\) be a linear transformation on \(R^n\), \(\left\{\mathbf{e_1}, \mathbf{e_2}, \dots , \mathbf{e_n}\right\}\) the standard basis (see the Vector Spaces post), and \(\mathbf{u}\) an arbitrary vector in \(R^n\). \(\mathbf{e_1}=\begin{bmatrix}1\\0\\\vdots\\0\end{bmatrix}\), \(\mathbf{e_2}=\begin{bmatrix}0\\1\\\vdots\\0\end{bmatrix}\), …, \(\mathbf{e_n}=\begin{bmatrix}0\\0\\\vdots\\1\end{bmatrix}\), and \(\mathbf{u}=\begin{bmatrix}c_1\\c_2\\\vdots\\c_n\end{bmatrix}\).

We can express \(\mathbf{u}\) as a linear combination of \({\mathbf{e_1}, \mathbf{e_2}, \dots , \mathbf{e_n}}\): \(\mathbf{u} = c_1\mathbf{e_1}+c_2\mathbf{e_2}+\dots+c_n\mathbf{e_n}\). If we substitute that into \(T(\mathbf{u})\) and apply the properties of linear transformations (look this up in the textbook, or do your own pen and paper exercise), we find that \(A=[T(\mathbf{e_1})\cdots T(\mathbf{e_n})]\). \(A\) is called the standard matrix of \(T\).
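That recipe is mechanical enough to capture as a small helper. This is my own sketch (`standard-matrix` is not part of Neanderthal): given a pure function `t` that implements a linear transformation on \(R^n\), build \(A\) by applying `t` to each standard basis vector and storing the images as the columns:

```clojure
(require '[uncomplicate.neanderthal
           [native :refer [dv dge]]
           [core :refer [col transfer!]]
           [real :refer [entry entry!]]])

(defn standard-matrix
  "Builds the standard matrix of a linear transformation t on R^n
  by collecting the images t(e_1), ..., t(e_n) as columns."
  [t n]
  (let [a (dge n n)]
    (dotimes [j n]
      ;; (entry! (dv n) j 1.0) constructs the basis vector e_(j+1)
      (transfer! (t (entry! (dv n) j 1.0)) (col a j)))
    a))

;; a pure version of T from the example: [x y] -> [2x+y, 3y]
(standard-matrix
  (fn [v] (dv (+ (* 2 (entry v 0)) (entry v 1))
              (* 3 (entry v 1))))
  2)
```

This produces the same matrix as before, with columns \([2\;0]\) and \([1\;3]\).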

The previous definition of linear transformations in \(R^n\) extends to any vector space, not only \(R^n\).