Just to expand on the comment of @Myself above...

Adjoining algebraics as matrix maths

Sometimes in mathematics or computation you can get away with adjoining an algebraic number $\alpha$ to some simpler ring $R$ of numbers, like the integers or rationals $\mathbb Q$, and these extended numbers characterize all of your solutions. If this number obeys the algebraic equation $$\alpha^n = \sum_{k=0}^{n-1} q_k \alpha^k$$ for some $q_k \in R,$ we call the corresponding polynomial equation $Q(\alpha) = 0$, and then we can adjoin this number by working with polynomials of degree at most $n - 1$ with coefficients from $R$, evaluated at $\alpha$: the ring is formally denoted $R[\alpha]/Q(\alpha),$ "the quotient ring of polynomials with coefficients in $R$ in some parameter $\alpha$, taken modulo polynomial division by $Q(\alpha).$"
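To make this concrete, here is a minimal sketch of the polynomial picture for the golden-ratio algebra $\mathbb Q[x]/(x^2 - x - 1)$, i.e. $\alpha^2 = 1 + \alpha$; the helper name `mul_mod` is mine:

```python
# Elements of Q[x]/(x^2 - x - 1) as coefficient lists [c0, c1] for c0 + c1*x.
from fractions import Fraction

Q = [Fraction(1), Fraction(1)]  # reduction rule: x^2 = q0 + q1*x = 1 + x

def mul_mod(p, r):
    """Multiply two degree-<2 polynomials and reduce modulo x^2 - x - 1."""
    # Full product has coefficients of 1, x, and x^2.
    c0 = p[0] * r[0]
    c1 = p[0] * r[1] + p[1] * r[0]
    c2 = p[1] * r[1]
    # Replace every x^2 by q0 + q1*x.
    return [c0 + c2 * Q[0], c1 + c2 * Q[1]]

alpha = [Fraction(0), Fraction(1)]   # the class of x itself
assert mul_mod(alpha, alpha) == [1, 1]   # alpha^2 = 1 + alpha, as required
```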

If you write down the action of multiplication by $\alpha$ on the vector in $R^n$ corresponding to such a polynomial in this ring, it looks like the matrix $$\alpha \leftrightarrow A = \begin{bmatrix}0 & 0 & 0 & \dots & 0 & q_0\\ 1 & 0 & 0 & \dots & 0 & q_1 \\ 0 & 1 & 0 & \dots & 0 & q_2 \\ \dots & \dots & \dots & \dots & \dots & \dots \\ 0 & 0 & 0 & \dots & 0 & q_{n-2} \\ 0 & 0 & 0 & \dots & 1 & q_{n-1} \\ \end{bmatrix},$$ and putting such a matrix in row-reduced echelon form is so simple that we can immediately see that $A$ is invertible whenever $q_0 \ne 0$; more strongly, the ring $R[\alpha]/Q(\alpha)$ is a field when $R$ is a field and $Q$ is irreducible over $R$. The matrix $\sum_k p_k A^k$ is then a matrix representation of the polynomial $\sum_k p_k \alpha^k$ which implements all of the required operations as matrix operations.
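As a quick sanity check of the matrix picture (a minimal sketch; `matmul` is my own helper), the companion matrix for $\alpha^2 = 5$, i.e. $\alpha = \sqrt 5$, satisfies the same equation that $\alpha$ does:

```python
# Companion matrix for alpha^2 = 5 (so q0 = 5, q1 = 0), acting on the
# coordinate vector (c0, c1) of the element c0 + c1*alpha.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 5],
     [1, 0]]

assert matmul(A, A) == [[5, 0], [0, 5]]   # A^2 = 5*I, exactly Q(alpha) = 0
```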

A simple example before \$#!& gets real.

For example, there is a famous "O(1)" solution for generating the $k^\text{th}$ Fibonacci number, which comes from observing that the recurrence $F_k = F_{k-1} + F_{k-2}$ can be solved by other functions for boundary conditions other than $F_{0,1} = 0,1$, and one very special set of solutions looks like $F_k = \phi^k$ for some $\phi$. Plugging and chugging, we get the algebraic equation $\phi^2 = \phi + 1,$ which we can solve for the golden ratio $\varphi = (1 + \sqrt{5})/2$ and its negative reciprocal $\bar\varphi = -\varphi^{-1}= (1-\sqrt{5})/2.$ However, since the Fibonacci recurrence relation is linear, any linear combination $$F_n = A \varphi^n + B \bar\varphi^n$$ obeys the Fibonacci recurrence, and we can choose $A = \sqrt{1/5},\; B = -\sqrt{1/5}$ to get the standard $F_{0,1} = 0,1$ starting points: this is the Fibonacci sequence defined purely in terms of exponentiation.
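A quick numerical check of this closed form (in double precision, so it only matches the integer sequence exactly for smallish $n$; the function name `binet` is mine):

```python
from math import sqrt

def binet(n):
    """Closed-form Fibonacci F_n = (phi^n - phibar^n)/sqrt(5), in doubles."""
    phi = (1 + sqrt(5)) / 2
    phibar = (1 - sqrt(5)) / 2
    return (phi**n - phibar**n) / sqrt(5)

assert round(binet(10)) == 55
assert round(binet(20)) == 6765
# For large n the rounding error eventually exceeds 1/2 and the rounded
# result drifts off the true sequence -- hence the exact-arithmetic
# approaches in the text.
```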

But there is a problem with using this on a computer: the Double type that a computer has access to has only finite precision, and the above expressions will round off wildly. What we really want is to use our arbitrary-precision Integer type to calculate this. We can do this with matrix exponentiation in a couple of different ways. The first would be to adjoin the number $\sqrt{5}$ to the integers, solving $\alpha^2 = 5.$ Then our ring consists of the numbers $a + b \sqrt{5},$ which are the matrices: $$a + b\sqrt 5 \leftrightarrow \begin{bmatrix}a & 5b\\b & a\end{bmatrix}.$$ And that's easy-peasy to program. However, your "unit vectors" can similarly be chosen as $1$ and $\varphi$, and that leads to the "no-nonsense" matrix $$a + b \varphi \leftrightarrow \begin{bmatrix}a & b\\ b & a + b\end{bmatrix},$$ which I'm calling "no-nonsense" because for $a = 0, b = 1$ this is actually the Fibonacci recurrence relation on vectors $[F_{n-1}, F_{n}],$ which is a way to get to this result without going through the above hoops. There is also an interesting "symmetric" version where $\varphi$ and $\bar\varphi$ are our "unit vectors" and the matrix is $$a \varphi + b \bar\varphi \leftrightarrow \begin{bmatrix}2a-b & -a+b\\ a-b & -a+2b\end{bmatrix}.$$
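Putting the "no-nonsense" matrix together with exponentiation-by-squaring gives an exact Fibonacci in integer arithmetic; a minimal sketch (helper names are mine, Python ints are arbitrary-precision):

```python
# Fibonacci via exponentiation-by-squaring of the "no-nonsense" matrix
# [[0, 1], [1, 1]], i.e. multiplication by phi with a = 0, b = 1.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def fib(n):
    """Return F_n with F_0 = 0, F_1 = 1, using exact integer arithmetic."""
    result = [[1, 0], [0, 1]]          # identity
    base = [[0, 1], [1, 1]]
    while n:
        if n & 1:
            result = matmul(result, base)
        base = matmul(base, base)      # square
        n >>= 1
    return result[0][1]                # [[F_{n-1}, F_n], [F_n, F_{n+1}]]

assert fib(10) == 55
assert fib(100) == 354224848179261915075
```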

(In any case, it turns out that the supposedly "O(1)" algorithm is not: even when we exponentiate-by-squaring, we have to perform $\log_2(k)$ multiplications of numbers $m_i = F_{2^i}$ which grow asymptotically like $F_k \approx \varphi^k/\sqrt{5},$ taking some $O(k^2)$ time, just like adding up the numbers directly. The big speed gain is that the adding-bignums code will "naturally" allocate new memory for each bignum and will therefore write something like $O(k^2)$ bits to memory if you don't specially make it intelligent; the exponentiation improves this to $O(k~\log(k))$ and possibly even to $O(k)$, since the worst of these multiplications happen only at the very end.)

Going off of the algebraics into complex numbers

Interestingly, we don't need to restrict ourselves to real numbers when we do the above. We know that in $\mathbb R$ there is no $x$ satisfying $x^2 = -1,$ so the above prescribes that we extend our field to the field $$ a + b \sqrt{-1} \leftrightarrow \begin{bmatrix}a & -b\\b & a\end{bmatrix}.$$ When we replace $a$ with $r\cos\theta$ and $b$ with $r\sin\theta$, we find out that in fact these "complex numbers" are all just scaled rotation matrices: $$ r (\cos\theta + i~\sin\theta) = r \begin{bmatrix}\cos\theta & -\sin\theta\\\sin\theta & \cos\theta\end{bmatrix} = r~R_\theta,$$ giving us an immediate geometric understanding of a complex number as a scaled rotation (and then analytic functions are just the ones which locally look like a scaled rotation).
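One can check the matrix representation of $a + bi$ directly against a language's built-in complex arithmetic; a minimal sketch (helper names are mine):

```python
# Represent a + b*i as the 2x2 matrix [[a, -b], [b, a]] and verify that
# matrix multiplication agrees with Python's built-in complex product.
def to_matrix(z):
    return [[z.real, -z.imag], [z.imag, z.real]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

z, w = 1 + 2j, 3 - 1j
assert matmul(to_matrix(z), to_matrix(w)) == to_matrix(z * w)
```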

Going way off the algebraics into infinitesimals.

Another interesting way to go with this is to consider adjoining a term $\epsilon$ which is not zero, but which squares to zero. This idea formalizes the idea of an "infinitesimal" with no real effort, although as mentioned before, the resulting algebra is doomed to be a ring. (We could adjoin an inverse $\infty = \epsilon^{-1}$ too, but presumably we'd then have $\infty^2 = \infty,$ which breaks associativity, $(\epsilon\cdot\infty)\cdot\infty \ne \epsilon\cdot(\infty\cdot\infty),$ unless we insert more infinities to push the problem out to infinity.)

Anyway, we then have the matrix: $$a + b \epsilon \leftrightarrow a I + b E = \begin{bmatrix}a & 0\\ b & a\end{bmatrix}.$$ It's precisely the transpose of what you were looking at. Following the rules, $(a + b \epsilon)^n = a^n + n~a^{n-1}~b~\epsilon,$ with all of the other terms vanishing. Expanding out the Taylor series we find that $f(x + \epsilon) = \sum_n f^{(n)}(x)\,\epsilon^n/n! = f(x) + f'(x)\,\epsilon,$ and this is the property that you have seen in your own examination.
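This is exactly forward-mode differentiation with dual numbers; a minimal sketch under the rule $\epsilon^2 = 0$ (the class and names are mine):

```python
# Dual numbers a + b*eps with eps^2 = 0, stored as (a, b).
class Dual:
    def __init__(self, a, b=0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a1 + b1 eps)(a2 + b2 eps) = a1*a2 + (a1*b2 + b1*a2) eps
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def f(x):
    return x * x * x + 2 * x       # f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2

y = f(Dual(2, 1))                  # evaluate at x = 2 + eps
assert (y.a, y.b) == (12, 14)      # f(2) = 12, f'(2) = 14
```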

We can similarly keep infinitesimals out to second order with a $3\times 3$ matrix $$a + b \epsilon + c \epsilon^2 \leftrightarrow \begin{bmatrix}a & 0 & 0\\ b & a & 0\\ c & b & a\end{bmatrix}.$$ Then $f(x I + E) = f(x)\, I + f'(x)\, E + f''(x)\, E^2 / 2$ straightforwardly.
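The second-order version can be sketched just as easily with truncated triples (the helper name is mine); multiplying out $(x + \epsilon)^3$ recovers $f(x)$, $f'(x)$, and $f''(x)/2$ in one shot:

```python
# Second-order infinitesimals as triples (a, b, c) ~ a + b*eps + c*eps^2,
# multiplied with the rule eps^3 = 0.
def mul(p, q):
    a1, b1, c1 = p
    a2, b2, c2 = q
    return (a1 * a2,
            a1 * b2 + b1 * a2,
            a1 * c2 + b1 * b2 + c1 * a2)

# f(x) = x^3 at x = 2 + eps: expect (f(2), f'(2), f''(2)/2) = (8, 12, 6).
x = (2, 1, 0)
assert mul(mul(x, x), x) == (8, 12, 6)
```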