
Geometry of linear equations: orthogonal complement and equation solvability

From orthogonality of vectors to orthogonality of subspaces

Definition 1. Let L be a linear subspace of R^{n}. Its orthogonal complement L^{\perp} is defined as the set of all vectors orthogonal to all elements of L: L^{\perp}=\{ x\in R^n: x\cdot y=0 \text{ for all } y\in L\}.

Exercise 1. Let L=\{ (x_1,0):x_1\in R\} be the x axis on the plane. Find L^{\perp}.

Solution. The condition x\cdot y=x_1y_1=0 for all x=(x_1,0)\in L implies y_1=0. The component y_2 is free. Thus, L^{\perp}=\{ (0,y_2):y_2\in R\} is the y axis.

Similarly, the orthogonal complement of the y axis, (L^{\perp})^{\perp}, is the x axis, that is, (L^{\perp})^{\perp}=L. This fact is generalized as follows:

Theorem (on second orthocomplement). For any linear subspace L\subset R^n, one has (L^{\perp})^{\perp}=L.

This statement is geometrically simple, but the proof is rather involved and will be omitted (see Akhiezer & Glazman, Theory of Linear Operators in Hilbert Space, Dover Publications, 1993, Chapter 1, Sections 6 and 7). It is worth keeping in mind what makes the equality possible. For any set A\subset R^n, the orthogonal complement A^{\perp} can be defined in the same way: A^{\perp}=\{ x\in R^n: x\cdot y=0 \text{ for all } y\in A\}. As in Exercise 1, you can show that if A=\{(1,0)\} contains just the unit vector of the x axis, then A^{\perp} is the y axis and therefore (A^{\perp})^{\perp} is the whole x axis. Thus, in this example we have the strict inclusion A\subset (A^{\perp})^{\perp}. The equality (A^{\perp})^{\perp}=A is achieved only when A is a linear subspace.
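To make this example concrete, here is a minimal numerical sketch (not part of the original argument; it assumes NumPy and SciPy are available). The orthogonal complement of a set of vectors is the null space of the matrix whose rows are those vectors, so scipy.linalg.null_space computes both A^{\perp} and (A^{\perp})^{\perp}:

```python
# Check the strict inclusion A ⊂ (A^⊥)^⊥ for the set A = {(1, 0)}.
import numpy as np
from scipy.linalg import null_space

A_set = np.array([[1.0, 0.0]])      # the single vector of A, as a row

A_perp = null_space(A_set)          # columns form a basis of A^⊥
print(A_perp.ravel())               # ≈ (0, 1) up to sign: the y axis

A_perp_perp = null_space(A_perp.T)  # complement of A^⊥
print(A_perp_perp.ravel())          # ≈ (1, 0): the whole x axis, strictly larger than A
```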

Exercise 2. In R^3, consider the x axis L=\{(x_1,0,0):x_1\in R\}. Can you tell what L^{\perp} is?

Exercise 3. Show that \{ 0\}^{\perp}=R^n and (R^{n})^{\perp}=\{ 0\}.

Link between image of a matrix and null space of its transpose

Exercise 4. The rule for the transpose of a product, (AB)^T=B^TA^T, implies that for any n\times k matrix A and any x\in R^k, y\in R^n,

(1) (Ax)\cdot y=(Ax)^Ty=x^TA^Ty=x\cdot (A^Ty).
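The identity (1) is easy to verify numerically; the following is a quick sketch (with an arbitrarily chosen matrix size) rather than a substitute for the algebra above:

```python
# Numerical check of identity (1): (Ax)·y = x·(A^T y) for an n×k matrix A.
import numpy as np

rng = np.random.default_rng(0)
n, k = 4, 3
A = rng.standard_normal((n, k))
x = rng.standard_normal(k)
y = rng.standard_normal(n)

lhs = (A @ x) @ y        # (Ax)·y
rhs = x @ (A.T @ y)      # x·(A^T y)
assert np.isclose(lhs, rhs)
```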

Exercise 5 (second characterization of matrix image). \text{Img}(A)=N(A^T)^{\perp} (and, consequently, \text{Img}(A^T)=N(A)^{\perp}).

Proof. Let us prove that

(2) \text{Img}(A)^{\perp}=N(A^T).

To this end, we first prove the inclusion \text{Img}(A)^{\perp}\subset N(A^T). Let y\in \text{Img}(A)^{\perp}. For an arbitrary x, we have Ax\in \text{Img}(A). Hence by (1) 0=(Ax)\cdot y=x\cdot(A^Ty). Since x is arbitrary, we can put x=A^Ty and obtain \|A^Ty\|^2=0. This implies A^Ty=0 and y\in N(A^T).

Conversely, to prove N(A^T)\subset\text{Img}(A)^{\perp}, suppose y\in N(A^T). Then A^Ty=0 and for an arbitrary x we have 0=x\cdot(A^Ty)=(Ax)\cdot y. Since Ax runs over \text{Img}(A), this means that y\in\text{Img}(A)^{\perp}.

Passing to orthogonal complements on both sides of (2) and applying the theorem on the second orthocomplement, we obtain \text{Img}(A)=N(A^T)^{\perp}, as required.
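As an illustration of (2) (again a sketch, assuming scipy.linalg.orth and scipy.linalg.null_space, which return orthonormal bases of the image and the null space, respectively), one can check that the two subspaces are orthogonal and that their dimensions add up to n:

```python
# Numerical illustration of (2): Img(A)^⊥ = N(A^T).
import numpy as np
from scipy.linalg import orth, null_space

rng = np.random.default_rng(1)
n, k = 5, 3
A = rng.standard_normal((n, k))
A[:, 2] = A[:, 0] + A[:, 1]      # force rank 2 so N(A^T) is nontrivial

img_A   = orth(A)                # columns: orthonormal basis of Img(A)
null_At = null_space(A.T)        # columns: orthonormal basis of N(A^T)

# Every basis vector of Img(A) is orthogonal to every basis vector of N(A^T) ...
assert np.allclose(img_A.T @ null_At, 0.0)
# ... and the dimensions add up to n, so N(A^T) is the whole complement Img(A)^⊥.
assert img_A.shape[1] + null_At.shape[1] == n
```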

Definition 2. Let L_{1}, L_{2} be two subspaces. We say that L=L_{1}\oplus L_{2} is their orthogonal sum if 1) every element l\in L can be decomposed as l=l_{1}+l_{2} with l_{i}\in L_{i}, i=1,2, and 2) every element of L_{1} is orthogonal to every element of L_{2}. Orthogonality of L_{1} to L_{2} implies L_{1}\cap L_{2}=\{0\}, which, in turn, guarantees uniqueness of the representation l=l_{1}+l_{2}.

Exercise 6. Assume that A is of size n\times k. Exercise 5 and Exercise 3 imply R^n=\text{Img}(A)\oplus N(A^T) and R^k=\text{Img}(A^T)\oplus N(A).
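The decomposition R^n=\text{Img}(A)\oplus N(A^T) can be computed explicitly: projecting y onto \text{Img}(A) with an orthonormal basis Q (the projection is QQ^Ty) leaves a remainder in N(A^T). A minimal sketch, again assuming NumPy and SciPy:

```python
# Decompose y ∈ R^n as y = y_img + y_null with y_img ∈ Img(A), y_null ∈ N(A^T).
import numpy as np
from scipy.linalg import orth

rng = np.random.default_rng(2)
n, k = 5, 3
A = rng.standard_normal((n, k))
y = rng.standard_normal(n)

Q = orth(A)              # orthonormal basis of Img(A)
y_img  = Q @ (Q.T @ y)   # orthogonal projection of y onto Img(A)
y_null = y - y_img       # remainder lies in Img(A)^⊥ = N(A^T) by Exercise 5

assert np.allclose(A.T @ y_null, 0.0)    # y_null ∈ N(A^T)
assert np.allclose(y_img + y_null, y)    # the two pieces recover y
```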

The importance of Exercise 5 is explained by the fact that the null space of a matrix is easier to describe analytically than the image. Before reading the summary below, you might want to review the earlier conclusion on the role of the null space.

Summary. 1) To see whether Ax=y has solutions, check whether y is orthogonal to N(A^T); a numerical sketch of this test follows the summary. In particular, if A^T is one-to-one, then N(A^T)=\{0\} and Ax=y has solutions for all y (see Exercise 3 above).

2) If A is one-to-one, then Ax=y has at most one solution: if a solution exists, it is unique.

3) If both A and A^T are one-to-one, then Ax=y has a unique solution for any y.
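The test from item 1) is straightforward to implement. The sketch below (the function name and tolerance are illustrative choices, not from the original text) checks whether y is orthogonal to a computed basis of N(A^T):

```python
# Solvability of Ax = y via orthogonality to N(A^T).
import numpy as np
from scipy.linalg import null_space

def is_solvable(A, y, tol=1e-10):
    """Return True if Ax = y has a solution, i.e. y ⊥ N(A^T)."""
    N = null_space(A.T)   # orthonormal basis of N(A^T); empty if A^T is one-to-one
    return N.shape[1] == 0 or np.allclose(N.T @ y, 0.0, atol=tol)

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])    # rank 2, so Img(A) is a plane in R^3

print(is_solvable(A, np.array([1.0, 2.0, 3.0])))  # True: (1, 2, 3) = A(1, 2)
print(is_solvable(A, np.array([1.0, 1.0, 0.0])))  # False: this y is off the plane
```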
