
Applications of the diagonal representation I

When A is symmetric, we can use the representation

(1) A=UA_UU^T, where U is an orthogonal matrix of eigenvectors of A and A_U=diag[\lambda_1,...,\lambda_n] is the diagonal matrix of the corresponding eigenvalues.

1. Matrix positivity in terms of eigenvalues

Exercise 1. Let A be a symmetric matrix. Then A is positive (non-negative) if and only if all its eigenvalues are positive (non-negative).

Proof. By definition, we have to consider the quadratic form x^TAx=x^TUA_UU^Tx=(U^Tx)^TA_U(U^Tx). Denote y=U^Tx and recall that an orthogonal matrix preserves the norm: \|y\|=\|x\|. This allows us to bound the quadratic form from below:

x^TAx=y^TA_Uy=\sum\lambda_iy_i^2\geq\min_i\lambda_i\sum y_i^2=\min_i\lambda_i\|x\|^2.

Hence, if all eigenvalues are positive (non-negative), the form is positive for x\neq 0 (non-negative). Conversely, if the form is positive (non-negative), take x equal to the i-th column of U; then y=U^Tx=e_i, the i-th unit vector, and x^TAx=\lambda_i, so every eigenvalue is positive (non-negative).
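
Here is a quick numerical illustration of Exercise 1, a sketch in Python assuming numpy is available; the matrix A below is an arbitrary example, not taken from the text:

```python
import numpy as np

# Hypothetical example: B^T B + I is symmetric and positive definite.
B = np.array([[1.0, 2.0],
              [3.0, 4.0]])
A = B.T @ B + np.eye(2)

# eigvalsh is numpy's eigenvalue routine for symmetric matrices.
lam = np.linalg.eigvalsh(A)
print("eigenvalues:", lam)              # all positive

# The quadratic form obeys the lower bound from the proof.
rng = np.random.default_rng(0)
x = rng.standard_normal(2)
print(x @ A @ x >= lam.min() * (x @ x)) # True
```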

2. Analytic functions of a matrix

Definition 1. For a square matrix A, all non-negative integer powers A^m are defined. This allows us to define an analytic function of a matrix f(A)=\sum_{m=0}^\infty\frac{f^{(m)}(0)}{m!}A^m whenever the function f has the Taylor expansion f(t)=\sum_{m=0}^\infty\frac{f^{(m)}(0)}{m!}t^m, t\in R.
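
To see the definition in action, here is a small Python sketch (assuming scipy is available; the matrix A is a made-up example) that compares the partial sums of the series for f(t)=e^t, treated in detail in Example 1 below, with scipy's matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])     # a hypothetical symmetric matrix

# Accumulate partial sums of sum_m A^m / m! for f(t) = e^t.
S = np.zeros_like(A)
term = np.eye(2)               # A^0 / 0!
for m in range(20):
    S += term
    term = term @ A / (m + 1)  # A^{m+1}/(m+1)! from A^m/m!

print(np.allclose(S, expm(A))) # True: the series converges to e^A
```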

Example 1. The exponential function f(t)=e^t has the expansion e^t=\sum_{m=0}^\infty\frac{1}{m!}t^m. Hence, e^{At}=\sum_{m=0}^\infty\frac{1}{m!}A^mt^m. Differentiating this series term by term gives

\frac{de^{At}}{dt}=\sum_{m=1}^\infty\frac{1}{m!}A^mmt^{m-1} (the constant term disappears)

=A\sum_{m=1}^\infty\frac{1}{(m-1)!}A^{m-1}t^{m-1}=Ae^{At}.

This means that e^{At} solves the differential equation x^\prime(t)=Ax(t). To satisfy the initial condition x(t_0)=x_0, instead of e^{At} we consider x(t)=e^{A(t-t_0)}x_0. This function solves the initial value problem

(2) x^\prime(t)=Ax(t), x(t_{0})=x_0.
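
A numerical sanity check of (2), sketched in Python under the assumption that scipy is available; A, t_0 and x_0 below are arbitrary illustrative choices:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0],
              [2.0, -1.0]])        # hypothetical symmetric matrix
t0, x0 = 0.0, np.array([1.0, 1.0]) # hypothetical initial data

def x(t):
    return expm(A * (t - t0)) @ x0

# Check x'(t) = A x(t) with a central finite difference.
t, h = 0.7, 1e-6
deriv = (x(t + h) - x(t - h)) / (2 * h)
print(np.allclose(deriv, A @ x(t)))  # True up to discretization error
print(np.allclose(x(t0), x0))        # the initial condition holds
```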

Calculating all powers of a matrix directly can be a time-consuming business. The process is greatly facilitated by the diagonal representation.

Exercise 2. If A is symmetric, then A^m=UA_U^mU^{-1} for all non-negative integer m (here U^{-1}=U^T because U is orthogonal). Hence, f(A)=U\left(\sum_{m=0}^\infty\frac{f^{(m)}(0)}{m!}A_U^m\right)U^{-1}=Uf(A_U)U^{-1}.

Proof. The equation

(3) A^2=UA_UU^{-1}UA_UU^{-1}=UA_UA_UU^{-1}=UA_U^2U^{-1}=Udiag[\lambda_1^2,...,\lambda_n^2]U^{-1}

shows that to square A it is enough to square A_U. By induction, the same computation yields all non-negative integer powers of A.
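
A quick check of Exercise 2 in Python: numpy's eigh returns exactly the ingredients of (1); the matrix is again a made-up example.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])    # hypothetical symmetric matrix
lam, U = np.linalg.eigh(A)    # A = U diag(lam) U^T

m = 5
A_m = U @ np.diag(lam**m) @ U.T   # U^{-1} = U^T since U is orthogonal
print(np.allclose(A_m, np.linalg.matrix_power(A, m)))  # True
```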

Example 2. e^{At}=U\left(\sum_{m=0}^\infty\frac{1}{m!}diag[\lambda_1^m,...,\lambda_n^m]t^m\right)U^{-1}=Udiag[e^{\lambda_1t},...,e^{\lambda_nt}]U^{-1}.
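
The same check for the exponential: the formula of Example 2 agrees with scipy.linalg.expm (a sketch with the same hypothetical matrix as above):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])    # hypothetical symmetric matrix
lam, U = np.linalg.eigh(A)

t = 0.5
exp_At = U @ np.diag(np.exp(lam * t)) @ U.T   # Example 2 with U^{-1} = U^T
print(np.allclose(exp_At, expm(A * t)))       # True
```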

3. Linear differential equation

Example 1 about the exponential matrix function involves a bit of guessing. With the diagonal representation at hand, we can obtain the same result in a more logical way.

One-dimensional case. The equation \frac{dx}{dt}=ax is equivalent to \frac{dx}{x}=adt (for x\neq 0). Upon integration this gives

\log x(t)-\log x(t_0)=\int_{t_0}^td(\log x)=\int_{t_0}^tadt=a(t-t_0).

Exponentiating and using the initial condition x(t_0)=x_0, we obtain

(4) x(t)=e^{a(t-t_0)}x_0.
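
The one-dimensional computation is easy to confirm symbolically; a minimal sympy sketch:

```python
import sympy as sp

t, t0, a, x0 = sp.symbols('t t0 a x0')
x = sp.exp(a * (t - t0)) * x0              # candidate solution (4)

print(sp.simplify(sp.diff(x, t) - a * x))  # 0: x' = a x holds
print(x.subs(t, t0))                       # x0: the initial condition holds
```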

General case. In the case of a 2\times 2 matrix the first equation of (2) is \frac{dx_1}{dt}=a_{11}x_1+a_{12}x_2. The fact that the components of x are mixed up on the right-hand side makes a direct solution difficult. The idea is to decouple the system and separate the components.

Premultiplying (2) by U^{-1} (a constant matrix, so it can be taken under the derivative) we have \frac{d(U^{-1}x)}{dt}=U^{-1}Ax=U^{-1}AUU^{-1}x or, denoting y=U^{-1}x and using (1), \frac{dy}{dt}=A_Uy. The last system is a collection of one-dimensional equations \frac{dy_i}{dt}=\lambda_iy_i, i=1,...,n. Let y_0=U^{-1}x_0 be the initial vector. By (4), y_i=e^{\lambda_i(t-t_0)}y_{0i}. In matrix form this amounts to y=e^{A_U(t-t_0)}y_0. Hence, as in Exercise 2, x=Uy=Ue^{A_U(t-t_0)}U^{-1}Uy_0=e^{A(t-t_0)}x_0.

This is the same solution as the one obtained above. The difference is that here we assume symmetry of A and, as a consequence, can use Example 2.
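
The decoupling procedure translates directly into code. Below is a Python sketch (A, t_0, x_0 are hypothetical choices) that solves (2) by passing to y=U^{-1}x and compares the result with a black-box ODE solver:

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])            # hypothetical symmetric matrix
t0, x0 = 0.0, np.array([1.0, 0.0])    # hypothetical initial data

lam, U = np.linalg.eigh(A)
y0 = U.T @ x0                         # y_0 = U^{-1} x_0

def x(t):
    y = np.exp(lam * (t - t0)) * y0   # y_i = e^{lam_i (t - t0)} y_{0i}
    return U @ y                      # back to the original coordinates

# Compare with a numerical solver at t = 1.
sol = solve_ivp(lambda t, v: A @ v, (t0, 1.0), x0, rtol=1e-10, atol=1e-12)
print(np.allclose(x(1.0), sol.y[:, -1]))  # True
```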