
Cramer's rule and invertibility criterion

Consequences of multilinearity

For a fixed j, \det A is a linear function of the column A^{(j)}. Such a linear function is represented by a row vector L_{j} by way of the formula (see Exercise 3)

(1) \det A=L_jA^{(j)}.
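As a concrete illustration (not part of the original argument), here is a minimal numerical sketch of (1) in Python/NumPy. The i-th entry of L_j is the cofactor of a_{ij}; the helper name cofactor_row, the 0-based indexing, and the sample 3x3 matrix are all assumptions of this sketch.

import numpy as np

def cofactor_row(A, j):
    # Row vector L_j from (1): its i-th entry is the cofactor of a_{ij},
    # i.e. (-1)**(i+j) times the determinant of A with row i and column j
    # deleted. Indices are 0-based here, unlike the 1-based text.
    n = A.shape[0]
    L_j = np.empty(n)
    for i in range(n):
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
        L_j[i] = (-1) ** (i + j) * np.linalg.det(minor)
    return L_j

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
j = 1
print(np.isclose(cofactor_row(A, j) @ A[:, j], np.linalg.det(A)))  # True: checks (1)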

Exercise 1. In addition to (1), we have

(2) L_jA^{(k)}=0 for any k\neq j.

Proof. Here and in what follows it is useful to introduce the coordinate representation L_j=(l_{j1},...,l_{jn}) and put

(3) L=\left(\begin{array}{c}L_1 \\... \\L_n\end{array}\right).

Then we can write (1) as \det A=\sum_{i=1}^n l_{ji}a_{ij}. The coefficient l_{ji} does not involve a_{ij}, and therefore, by the different-columns-different-rows rule, it involves no element of the column A^{(j)} at all. Hence, the vector L_j does not depend on the elements of the column A^{(j)}.

Let A^{\prime} denote the matrix obtained from A by replacing the column A^{(j)} with the column A^{(k)}. The vector L_j for the matrix A^{\prime} is the same as for A, because both vectors depend only on the elements of the columns other than the j-th. Since A^{\prime} contains linearly dependent (in fact, two identical) columns, \det A^{\prime}=0. Using A^{\prime} in place of A in (1), we get 0=\det A^{\prime}=L_jA^{(k)}, as required.
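Continuing the numerical sketch above (same A, j, and cofactor_row), one can also check (2) directly:

# L_j is orthogonal to every column of A other than the j-th, as in (2).
for k in range(A.shape[1]):
    if k != j:
        print(k, np.isclose(cofactor_row(A, j) @ A[:, k], 0.0))  # True for both k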

After reading the next two sections, come back and read this statement again to appreciate its power and originality.

Cramer's rule

Exercise 2. Suppose \det A\neq 0. For any y\in R^n, let B_j denote the matrix formed by replacing the j-th column of A with the column vector y. Then the solution of the system Ax=y exists, and its components are given by

x_j=\frac{\det B_j}{\det A},\ j=1,...,n.

Proof. Premultiply Ax=y by L_{j}:

(4) L_jy=L_jAx=\left(L_jA^{(1)},...,L_jA^{(n)}\right)x=(0,...,\det A,...,0)x.

Here we applied (1) and (2) (the j-th component of the vector (0,...,\det A,...,0) is \det A and all the others are zero). From (4) it follows that (\det A)x_{j}=L_{j}y. On the other hand, from (1) we have \det B_j=L_jy (the vector L_j for B_j is the same as for A; see the proof of Exercise 1). The last two equations prove the statement.
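As a sanity check of the formula, here is a continuation of the NumPy sketch above; the name cramer_solve is an assumption of this illustration, and the method is shown for clarity, not efficiency.

def cramer_solve(A, y):
    # Cramer's rule: x_j = det(B_j) / det(A), where B_j is A with its j-th
    # column replaced by y. Assumes det(A) != 0; it costs O(n) determinant
    # evaluations, so it is illustrative rather than practical.
    d = np.linalg.det(A)
    x = np.empty(A.shape[0])
    for j in range(A.shape[0]):
        B_j = A.copy()
        B_j[:, j] = y
        x[j] = np.linalg.det(B_j) / d
    return x

y = np.array([1.0, 2.0, 3.0])
print(np.allclose(cramer_solve(A, y), np.linalg.solve(A, y)))  # True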

Invertibility criterion

Exercise 3. A is invertible if and only if \det A\neq 0.

Proof. If A is invertible, then AA^{-1}=I. By multiplicativity of the determinant and Axiom 3, this implies \det A\det(A^{-1})=1. Thus, \det A\neq 0.

Conversely, suppose \det A\neq 0. Then (1), (2) and (3) imply

(5) LA=\left(\begin{array}{c}L_1 \\ ... \\ L_n\end{array}\right)(A^{(1)},...,A^{(n)})=\left(\begin{array}{ccc}L_1A^{(1)} & ... & L_1A^{(n)} \\ ... & ... & ... \\ L_nA^{(1)} & ... & L_nA^{(n)}\end{array}\right)

=\left(\begin{array}{ccc}\det A & ... & 0 \\ ... & ... & ... \\ 0 & ... & \det A\end{array}\right)=\det A\times I.

This means that the matrix \frac{1}{\det A}L is a left inverse of A. Recall that for square matrices the existence of a left inverse implies that of a right inverse, so A is invertible.

Definition 1. The matrix L is more than a transient technical twist: it is called the adjugate matrix of A, and property (5), correspondingly, is called the adjugate identity.
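To close the loop, a last fragment of the same sketch verifies the adjugate identity (5) and the inverse formula from the proof of Exercise 3; the function name adjugate and the reuse of the earlier cofactor_row are, again, assumptions of the illustration.

def adjugate(A):
    # The matrix L of (3): its j-th row is the cofactor row L_j,
    # so that L @ A = det(A) * I, which is the adjugate identity (5).
    return np.array([cofactor_row(A, j) for j in range(A.shape[0])])

L_mat = adjugate(A)
print(np.allclose(L_mat @ A, np.linalg.det(A) * np.eye(A.shape[0])))  # (5) holds
print(np.allclose(L_mat / np.linalg.det(A), np.linalg.inv(A)))        # inverse of A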