Jul 19

## Action of a matrix in its root subspace

The purpose of the following discussion is to reveal the matrix form of $A$ in $N_{\lambda }^{(p)}.$

Definition 1. Nonzero elements of $N_{\lambda }^{(p)}$ are called root vectors. This definition can be detailed as follows:

Elements of $N_{\lambda }^{(1)}\setminus \{0\}$ are eigenvectors.

Elements of $N_{\lambda }^{(2)}\setminus N_{\lambda }^{(1)}$ are called root vectors of 1st order.

...

Elements of $N_{\lambda }^{(p)}\setminus N_{\lambda }^{(p-1)}$ are called root vectors of order $p-1$.

Thus, root vectors belong to

$N_{\lambda }^{(p)}\setminus \{0\}=\left( N_{\lambda }^{(p)}\setminus N_{\lambda }^{(p-1)}\right) \cup ...\cup \left( N_{\lambda }^{(2)}\setminus N_{\lambda }^{(1)}\right) \cup \left( N_{\lambda }^{(1)}\setminus \{0\}\right)$

where the sets of root vectors of different orders do not intersect.
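The decomposition above can be checked on a small concrete example. The following sketch uses a hypothetical $3\times 3$ matrix with a single eigenvalue $\lambda =2$, chosen so that $N_{\lambda }^{(1)}\subset N_{\lambda }^{(2)}\subset N_{\lambda }^{(3)}$ is a strict chain; the matrix and the vectors tested are illustrative assumptions, not part of the text.

```python
import numpy as np

# Hypothetical 3x3 example with a single eigenvalue lam = 2.
lam = 2.0
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 2.0]])
B = A - lam * np.eye(3)          # B = A - lam*I

e1, e2, e3 = np.eye(3)

# e1 is in N^{(1)} \ {0}: an eigenvector
assert np.allclose(B @ e1, 0)

# e2 is in N^{(2)} \ N^{(1)}: a root vector of order 1
assert np.allclose(B @ (B @ e2), 0) and not np.allclose(B @ e2, 0)

# e3 is in N^{(3)} \ N^{(2)}: a root vector of order 2
assert np.allclose(np.linalg.matrix_power(B, 3) @ e3, 0)
assert not np.allclose(B @ (B @ e3), 0)
```

Here the three sets of root vectors of different orders are visibly disjoint: each basis vector is annihilated by a different minimal power of $A-\lambda I$.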

Exercise 1. For $k\geq 2$ (with the convention $N_{\lambda }^{(0)}=\{0\}$), $(A-\lambda I)\left( N_{\lambda }^{(k)}\setminus N_{\lambda }^{(k-1)}\right) \subset \left( N_{\lambda }^{(k-1)}\setminus N_{\lambda}^{(k-2)}\right) .$

Proof. Suppose $x\in N_{\lambda }^{(k)}\setminus N_{\lambda }^{(k-1)},$ that is, $(A-\lambda I)^{k}x=0$ and $(A-\lambda I)^{k-1}x\neq 0.$ Denoting $y=(A-\lambda I)x,$ we have $(A-\lambda I)^{k-1}y=(A-\lambda I)^{k}x=0$ and $(A-\lambda I)^{k-2}y=(A-\lambda I)^{k-1}x\neq 0,$ which means that $y\in N_{\lambda }^{(k-1)}\setminus N_{\lambda }^{(k-2)}.$ Hence, $A-\lambda I$ maps $N_{\lambda }^{(k)}\setminus N_{\lambda }^{(k-1)}$ into $N_{\lambda }^{(k-1)}\setminus N_{\lambda}^{(k-2)}.$
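A numerical sketch of Exercise 1, on an assumed $4\times 4$ example with a single eigenvalue $\lambda =5$ and $p=4$: applying $A-\lambda I$ to a root vector lowers its order by exactly one. The helper `order` and the starting vector are illustrative choices.

```python
import numpy as np

# Assumed example: A = lam*I + (superdiagonal shift) in dimension 4.
lam, n = 5.0, 4
A = lam * np.eye(n) + np.diag(np.ones(n - 1), k=1)
B = A - lam * np.eye(n)

def order(x, tol=1e-12):
    """Smallest k with B^k x = 0, i.e. the k for which x lies in
    N^{(k)} \\ N^{(k-1)}; returns 0 for the zero vector."""
    k = 0
    while np.linalg.norm(x) > tol:
        x = B @ x
        k += 1
    return k

x = np.array([1.0, 2.0, 3.0, 4.0])   # a vector in N^{(4)} \ N^{(3)}
assert order(x) == 4
assert order(B @ x) == 3             # (A - lam*I) lowers the order by one
assert order(B @ (B @ x)) == 2
```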

Now, starting from some $x_{p}\in N_{\lambda }^{(p)}\setminus N_{\lambda}^{(p-1)},$ we build a chain of root vectors descending all the way to an eigenvector. By Exercise 1, the vector $x_{p-1}=(A-\lambda I)x_{p}$ belongs to $N_{\lambda }^{(p-1)}\setminus N_{\lambda }^{(p-2)}.$ From the definition of $x_{p-1}$ we see that

(1) $Ax_{p}=\lambda x_{p}+x_{p-1},$   $x_{p-1}\in N_{\lambda}^{(p-1)}\setminus N_{\lambda }^{(p-2)}$

($x_{p}$ is an "eigenvector" up to a root vector of lower order). Similarly, denoting $x_{p-2}=(A-\lambda I)x_{p-1},$ we have

(2) $Ax_{p-1}=\lambda x_{p-1}+x_{p-2},$   $x_{p-2}\in N_{\lambda}^{(p-2)}\setminus N_{\lambda }^{(p-3)}.$

...

Continuing in the same way, we get $x_{1}=(A-\lambda I)x_{2}\in N_{\lambda}^{(1)}\setminus \{0\},$

(3) $Ax_{2}=\lambda x_{2}+x_{1},$   $x_{1}\in N_{\lambda}^{(1)}\setminus \{0\},$

(4) $Ax_{1}=\lambda x_{1},$   $x_{1}\neq 0.$
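The construction (1)-(4) can be sketched numerically. In the following, the matrix (taken to be $\lambda I+J$ with $p=4$, $\lambda =3$) and the starting vector $x_{p}$ are assumed for illustration; the loop is exactly the recursion $x_{k}=(A-\lambda I)x_{k+1}$, and the assertions restate (1)-(4).

```python
import numpy as np

# Assumed example: A = lam*I + J in dimension p = 4.
lam, p = 3.0, 4
A = lam * np.eye(p) + np.diag(np.ones(p - 1), k=1)
B = A - lam * np.eye(p)

x = [None] * (p + 1)                      # x[1], ..., x[p]
x[p] = np.array([0.0, 0.0, 0.0, 1.0])     # some x_p in N^{(p)} \ N^{(p-1)}
for k in range(p - 1, 0, -1):
    x[k] = B @ x[k + 1]                   # x_k = (A - lam*I) x_{k+1}

# (1)-(3): A x_k = lam*x_k + x_{k-1} for k = 2, ..., p
for k in range(2, p + 1):
    assert np.allclose(A @ x[k], lam * x[k] + x[k - 1])

# (4): x_1 is an eigenvector
assert np.allclose(A @ x[1], lam * x[1]) and np.linalg.norm(x[1]) > 0
```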

Exercise 2. The vectors $x_{1},...,x_{p}$ defined above are linearly independent.

Proof. Suppose $\sum_{j=1}^{p}a_{j}x_{j}=0$ and assume $a_{p}\neq 0.$ Then $a_{p}x_{p}=-\sum_{j=1}^{p-1}a_{j}x_{j},$ where the left side belongs to $N_{\lambda}^{(p)}\setminus N_{\lambda }^{(p-1)}$ (a nonzero multiple of a root vector has the same order) while the right side belongs to $N_{\lambda }^{(p-1)}$ because of the inclusions $N_{\lambda }^{(1)}\subset ...\subset N_{\lambda }^{(p-1)}.$ This contradiction shows $a_{p}=0.$ Repeating the argument for $a_{p-1},...,a_{1},$ we see that all coefficients are zero.
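Exercise 2 can be verified numerically: the matrix whose columns are the chain $x_{1},...,x_{p}$ must have full rank $p$. The $4\times 4$ example and the starting vector below are assumed for illustration.

```python
import numpy as np

# Assumed example: A = lam*I + J in dimension p = 4.
lam, p = 3.0, 4
A = lam * np.eye(p) + np.diag(np.ones(p - 1), k=1)
B = A - lam * np.eye(p)

xp = np.array([1.0, 1.0, 1.0, 1.0])       # x_p in N^{(p)} \ N^{(p-1)}
chain = [xp]
for _ in range(p - 1):
    chain.append(B @ chain[-1])           # x_{p-1}, ..., x_1

P = np.column_stack(chain[::-1])          # columns x_1, ..., x_p
assert np.linalg.matrix_rank(P) == p      # linear independence
```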

By Exercise 2, the vectors $x_{1},...,x_{p}$ form a basis of $L=\operatorname{span}(x_{1},...,x_{p}).$

Exercise 3. The transformation $A$ in $L$ is given by the matrix

(5) $A=\left(\begin{array}{ccccc}\lambda & 1 & 0 & ... & 0 \\0 & \lambda & 1 & ... & 0 \\ ... & ... & ... & ... & ... \\0 & 0 & 0 & ... & 1 \\0 & 0 & 0 & ... & \lambda\end{array} \right) =\lambda I+J$

where $J$ is the matrix with ones on the first superdiagonal and zeros everywhere else.

Proof. Since $x_{1},...,x_{p}$ is taken as the basis, $x_{i}$ can be identified with the unit column-vector $e_{i}.$ The equations (1)-(4) take the form

$Ae_{1}=\lambda e_{1},$ $Ae_{j}=\lambda e_{j}+e_{j-1},$ $j=2,...,p.$

Putting these equations side by side we get

$AI=A\left( e_{1},...,e_{p}\right) =\left( Ae_{1},...,Ae_{p}\right) =\left(\lambda e_{1},\lambda e_{2}+e_{1},...,\lambda e_{p}+e_{p-1}\right) =\left(\begin{array}{ccccc}\lambda & 1 & 0 & ... & 0 \\0 & \lambda & 1 & ... & 0 \\ ... & ... & ... & ... & ... \\0 & 0 & 0 & ... & 1 \\0 & 0 & 0 & ... & \lambda\end{array}\right) .$
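Exercise 3 can be illustrated on a matrix that is not already in the form (5). The following sketch (all specific choices assumed: dimension $p=4$, $\lambda =2$, and a random similarity $S$) builds the chain from a vector in $N_{\lambda }^{(p)}\setminus N_{\lambda }^{(p-1)}$ and checks that the change of basis recovers $\lambda I+J$.

```python
import numpy as np

# Assumed example: conjugate lam*I + J by a generic invertible S, so A
# is a full 4x4 matrix whose root-subspace structure is hidden.
rng = np.random.default_rng(0)
lam, p = 2.0, 4
J = np.diag(np.ones(p - 1), k=1)
S = rng.normal(size=(p, p))               # generic S is invertible
A = S @ (lam * np.eye(p) + J) @ np.linalg.inv(S)
B = A - lam * np.eye(p)

xp = S[:, p - 1]                          # a vector in N^{(p)} \ N^{(p-1)}
cols = [xp]
for _ in range(p - 1):
    cols.append(B @ cols[-1])             # x_{p-1}, ..., x_1
P = np.column_stack(cols[::-1])           # basis x_1, ..., x_p as columns

# In this basis, A is represented by the Jordan cell lam*I + J, eq. (5)
assert np.allclose(np.linalg.solve(P, A @ P), lam * np.eye(p) + J)
```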

Definition 2. The matrix in (5) is called a Jordan cell.