Linear dependence of vectors: definition and principal result
This is a topic most students have trouble with. It's because there is a lot of logic involved, so skimming it is not a good idea. We start with the most common definition.
Definition 1. Let $x_1, \dots, x_n$ be some vectors. They are called linearly dependent if there exist numbers $a_1, \dots, a_n$, not all of which are zero, such that

$a_1x_1 + a_2x_2 + \dots + a_nx_n = 0.$    (1)
The sheer length of this definition scares some people, and all they remember is equation (1). Stating just (1) instead of the whole definition is like skinning an animal: what gets thrown away, the requirement that not all of the $a_i$ are zero, is precisely what matters.
Seizing the bull by the horns
The first order of business is to shorten the definition and relate it to what we already know.
It's better to join the set of numbers into a vector $a = (a_1, \dots, a_n)^T$. Then the requirement that "not all of $a_1, \dots, a_n$ are zero" is equivalent to the single condition $a \neq 0$.
Further, let us write the vectors $x_1, \dots, x_n$ as columns and put them into a matrix $A = (x_1, \dots, x_n)$. Using multiplication of partitioned matrices, we see that (1) is equivalent to

$Aa = 0,$    (2)

and therefore Definition 1 is equivalent to the following.
Definition 2. Vectors $x_1, \dots, x_n$ are called linearly dependent if the homogeneous equation (2) has nonzero solutions $a \neq 0$, that is, the null space of $A$ is not $\{0\}$: $N(A) \neq \{0\}$.
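To see the equivalence of (1) and (2) in action, here is a small numerical illustration (the vectors and coefficients are made up for this sketch; numpy is used for the matrix algebra):

```python
import numpy as np

# Made-up column vectors x1, x2, x3 and a coefficient vector a.
x1 = np.array([1.0, 2.0, 3.0])
x2 = np.array([4.0, 5.0, 6.0])
x3 = np.array([7.0, 8.0, 9.0])
a = np.array([1.0, -2.0, 1.0])     # not all zero

A = np.column_stack([x1, x2, x3])  # A = (x1, x2, x3)

# Partitioned multiplication: A @ a is exactly the combination in (1).
combo = a[0] * x1 + a[1] * x2 + a[2] * x3
print(np.allclose(A @ a, combo))   # True: (1) and (2) say the same thing
print(A @ a)                       # [0. 0. 0.]
```

Here $a \neq 0$ and $Aa = 0$, so by Definition 2 these three vectors are linearly dependent.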
Negating Definition 2 gives the definition of linear independence; it is easier to negate than Definition 1.
Definition 3. Vectors $x_1, \dots, x_n$ are called linearly independent if $N(A) = \{0\}$ (for any nonzero $a$ we have $Aa \neq 0$ or, alternatively, $Aa = 0$ only for $a = 0$).
Exercise 1. For any matrix $A$ one has

$N(A) = N(A^TA).$    (3)
Proof. 1) Proving that $N(A) \subseteq N(A^TA)$. If $a \in N(A)$, then $Aa = 0$ and $A^TAa = A^T(Aa) = 0$, so $a \in N(A^TA)$.

2) Proving that $N(A^TA) \subseteq N(A)$. If $a \in N(A^TA)$, then $A^TAa = 0$ and $\|Aa\|^2 = (Aa)^T(Aa) = a^TA^TAa = 0$, so $Aa = 0$ and $a \in N(A)$.
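A numerical spot-check of (3) (my own illustration, not part of the proof): since $N(A) \subseteq N(A^TA)$ always holds, by rank-nullity the two null spaces coincide exactly when the ranks agree.

```python
import numpy as np

# A made-up rectangular matrix with linearly dependent columns.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0],
              [1.0, 1.0, 1.0]])

# Equal ranks plus the inclusion N(A) ⊆ N(A^T A) force N(A) = N(A^T A),
# since both matrices act on the same 3-dimensional space.
print(np.linalg.matrix_rank(A))        # 2
print(np.linalg.matrix_rank(A.T @ A))  # 2, in agreement with (3)
```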
Exercise 2. $A^TA$ is a square symmetric matrix, for any $A$.

Proof. If $A$ is of size $n \times k$, then $A^TA$ is $k \times k$. It is symmetric: $(A^TA)^T = A^T(A^T)^T = A^TA$.

By the way, some students write $AA$ instead of $A^TA$. You cannot do this if $A$ is not square.
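A two-line check of these facts (the matrix here is arbitrary):

```python
import numpy as np

A = np.arange(12.0).reshape(4, 3)   # deliberately non-square: 4 x 3

G = A.T @ A
print(G.shape)               # (3, 3): square, of size k x k
print(np.allclose(G, G.T))   # True: symmetric
# Note: A @ A would raise a ValueError here, because A is not square.
```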
Criterion of linear independence. Vectors $x_1, \dots, x_n$ are linearly independent if and only if $\det(A^TA) \neq 0$.

Proof. We are going to use (3). By Exercise 2, $A^TA$ is a square matrix, and for a square matrix $B$ we have the equivalence $N(B) = \{0\} \Longleftrightarrow \det B \neq 0$. Applying this with $B = A^TA$ and combining with (3), we get: the vectors are linearly independent $\Longleftrightarrow N(A) = \{0\} \Longleftrightarrow N(A^TA) = \{0\} \Longleftrightarrow \det(A^TA) \neq 0$.
Direct application of Definition 1 can be problematic. To prove that some vectors are linearly dependent, you have to produce coefficients $a_1, \dots, a_n$ such that Definition 1 is satisfied. This usually involves some guesswork; see the exercises below. The criterion above doesn't require guessing and can be implemented on a computer. The linear independence requirement is common in multiple regression analysis, but not all econometricians know this criterion.
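Here is a minimal sketch of such an implementation (the function name and the tolerance are my choices, not from the text; in floating point, an exact zero test on the determinant is unreliable):

```python
import numpy as np

def linearly_independent(*vectors, tol=1e-10):
    """Criterion: the vectors are independent iff det(A^T A) != 0."""
    A = np.column_stack(vectors)           # the vectors become columns of A
    return abs(np.linalg.det(A.T @ A)) > tol

print(linearly_independent([1, 0, 0], [0, 1, 0]))             # True
print(linearly_independent([1, 2, 3], [2, 4, 6]))             # False: proportional
print(linearly_independent([1, 2, 3], [4, 5, 6], [7, 8, 9]))  # False: x1 - 2 x2 + x3 = 0
```

In serious numerical work one would rather compare np.linalg.matrix_rank(A) with the number of vectors, since $\det(A^TA)$ can underflow or overflow for long vectors; the determinant form is used here only because it matches the criterion as stated.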
Putting some flesh on the bones
These are simple facts you need to know in addition to the above criterion.
Exercise 3. 1) Why do we exclude the case when all $a_i$ are zero?

2) What happens if among $x_1, \dots, x_n$ there are zero vectors?

3) Show that in the case of two non-zero vectors Definition 1 is equivalent to simple proportionality of one vector to the other.
Solution. 1) If all $a_i$ are zero, (1) is trivially satisfied, no matter what the vectors are.
2) If one of the vectors is zero, the coefficient of that vector can be set to one and all others to zero, so such vectors will be linearly dependent by Definition 1.
3) Consider two non-zero vectors $x_1, x_2$. If they are linearly dependent, then

$a_1x_1 + a_2x_2 = 0,$    (4)

where at least one of $a_1, a_2$ is not zero. Suppose $a_1 \neq 0$. Then $a_2 \neq 0$, because otherwise $x_1$ would be zero. Hence

$x_1 = cx_2, \quad c = -a_2/a_1,$    (5)

where the proportionality coefficient $c$ is not zero. Conversely, (5) implies (4) (you have to produce the coefficients; $a_1 = 1$, $a_2 = -c$ work).
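Part 3 in numbers (the vectors are invented for the illustration):

```python
import numpy as np

x2 = np.array([1.0, -2.0, 3.0])
x1 = 2.0 * x2                     # (5) holds with c = 2

A = np.column_stack([x1, x2])
print(np.linalg.det(A.T @ A))             # 0.0: dependent, as (4) predicts
print(np.allclose(x1 + (-2.0) * x2, 0))   # True: a1 = 1, a2 = -c satisfy (4)
```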
Exercise 4. Prove that if to the system of unit vectors $e_1, \dots, e_n$ (where $e_j$ has a one in the $j$-th position and zeros elsewhere) we add any vector $x$, the resulting system will be linearly dependent.
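As a hint, here is a numerical version of the claim in $\mathbb{R}^3$ with an arbitrary $x$ (my own illustration); the proof itself should of course produce the coefficients required by Definition 1.

```python
import numpy as np

n = 3
E = np.eye(n)                      # columns are the unit vectors e_1, ..., e_n
x = np.array([2.0, -1.0, 5.0])     # any vector
A = np.column_stack([E, x])        # n x (n + 1): more columns than rows

print(np.linalg.det(A.T @ A))      # 0.0: dependent by the criterion above
```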