13 Jun 18

## Vector and matrix multiplication

Don't jump to conclusions, and you will be fine.

### Vector multiplication

Intuition. When I ask students how they would multiply two vectors, by analogy with vector summation they suggest componentwise multiplication:

(1) $DC=\left(\begin{array}{ccccc}c_1d_1&...&c_jd_j&...&c_nd_n\end{array}\right).$

This is a viable definition, and here is a situation in which it can be used. Let $d_j$ denote the initial deposit (the principal) of client $j$ of a bank. Assuming that the clients deposited their money at different times and are paid different interest rates, the bank now owes client $j$ the amount $c_jd_j$, where the factor $c_j$ depends on the interest rate and on the length of time the money has been held on deposit. Thus, (1) describes the amounts owed to the clients on their deposits.
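The bank example above can be sketched in a few lines of Python; the principals and factors below are made-up illustrative numbers.

```python
# Componentwise product (1): the amount owed to each individual client.
d = [1000.0, 500.0, 2000.0]   # hypothetical principals d_j of clients 1..3
c = [1.05, 1.10, 1.02]        # hypothetical factors c_j (interest and time)

owed = [cj * dj for cj, dj in zip(c, d)]   # the vector (c_1 d_1, ..., c_n d_n)
print(owed)
```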

However, one might ask how much in total the bank owes on all deposits, and the answer will be given by

(2) $D\cdot C=c_1d_1+...+c_jd_j+...+c_nd_n.$

Definition. (2) is called a dot product (because a dot is used as the multiplication sign) or a scalar product (because the result is a scalar). Although we have given a real-life situation in which (1) is useful, there are in fact deep mathematical reasons for preferring (2) to (1).
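Continuing the bank illustration, the total owed on all deposits is the dot product (2), i.e. the sum of the componentwise products; the numbers are the same hypothetical ones as before.

```python
# Dot product (2): total amount the bank owes on all deposits.
d = [1000.0, 500.0, 2000.0]   # hypothetical principals
c = [1.05, 1.10, 1.02]        # hypothetical factors

total = sum(cj * dj for cj, dj in zip(c, d))   # c_1 d_1 + ... + c_n d_n
print(total)
```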

Exercise 1. Do you think that $A\cdot B=B\cdot A$ for any two vectors $A,B$ (symmetry of the scalar product)?

### Matrix multiplication

We start with a special case that gave rise to the whole matrix algebra.

Global idea 2. We want to write a linear system of equations

(3) $\left\{\begin{array}{c}ax_1+bx_2=e\\cx_1+dx_2=f\end{array}\right.$

in the form $Ax=B$.

Put $A=\left(\begin{array}{cc}a&b\\c&d\end{array}\right),$ $x=\left(\begin{array}{c} x_1\\x_2\end{array}\right),$ $B=\left(\begin{array}{c}e\\f\end{array}\right).$ Define the product $Ax$ by

(4) $Ax=\left(\begin{array}{c}ax_1+bx_2\\cx_1+dx_2\end{array}\right).$

Then, taking into account the definition of equality of two matrices, we see that (3) is equivalent to

(5) $Ax=B.$
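Definition (4) can be checked numerically: with arbitrary illustrative values for the entries of $A$ and for $x_1,x_2$, the product $Ax$ reproduces the left-hand sides of system (3).

```python
# Definition (4): the product Ax collects the left-hand sides of system (3).
a, b, c, d = 2.0, 1.0, 1.0, 3.0   # illustrative entries of A
x1, x2 = 1.0, 2.0                 # an illustrative vector x

Ax = [a * x1 + b * x2,            # first equation of (3)
      c * x1 + d * x2]            # second equation of (3)
# With e = Ax[0] and f = Ax[1], the pair (x1, x2) solves (3), i.e. Ax = B.
print(Ax)
```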

Digesting definition (4). Denote by $A_1=\left(\begin{array}{cc}a&b\end{array}\right)$ and $A_2=\left(\begin{array}{cc}c&d\end{array}\right)$ the rows of $A.$ Then $A$ can be written as $A=\left(\begin{array}{c}A_1\\A_2\end{array}\right).$ Such a representation of a large matrix in terms of smaller submatrices is called partitioning. Now (4) shows that the elements of the product are dot products of vectors:

(6) $Ax=\left(\begin{array}{c}A_1\cdot x\\A_2\cdot x\end{array}\right).$

Note that this definition makes sense because the number of elements of $A_1$ and $A_2$ equals the number of elements of $x;$ equivalently, the number of columns of $A$ equals the number of rows of $x.$
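The partitioned form (6) translates directly into code: each entry of $Ax$ is the dot product of one row of $A$ with $x$. The matrix entries below are the same illustrative numbers as in the previous sketch.

```python
# Formula (6): Ax is the column of dot products of the rows of A with x.
def dot(u, v):
    assert len(u) == len(v)       # compatibility: equal numbers of elements
    return sum(ui * vi for ui, vi in zip(u, v))

A1 = [2.0, 1.0]                   # first row of A (illustrative values)
A2 = [1.0, 3.0]                   # second row of A
x = [1.0, 2.0]                    # the vector x

Ax = [dot(A1, x), dot(A2, x)]     # definition (6)
print(Ax)
```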

### General definition

It consists of two parts:

Part 1 (compatibility rule, or rule for counting dimensions) $A_{n\times m}B_{m\times k}=C_{n\times k}$ (the number of columns of $A$ must equal the number of rows of $B$; both are $m$).

Part 2 (rows by columns rule, or rule for multiplication) Let us partition $A$ into rows and $B$ into columns: $A=\left(\begin{array}{c}A_1\\...\\A_n\end{array}\right),$ $B=\left(B^1\ ...\ B^k\right).$

Then the elements of the product $C$ are found as dot products of the rows of $A$ with the columns of $B$: $C=\left(\begin{array}{ccc}A_1\cdot B^1&...&A_1\cdot B^k\\...&...&...\\A_n\cdot B^1&...& A_n\cdot B^k\end{array}\right).$
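Both parts of the general definition can be sketched in one short function; the $2\times 2$ matrices at the end are arbitrary illustrative examples.

```python
# Rows-by-columns rule: C[i][j] is the dot product of row i of A
# with column j of B, where A is n x m and B is m x k.
def matmul(A, B):
    n, m = len(A), len(A[0])
    assert m == len(B)            # Part 1: compatibility rule
    k = len(B[0])
    # Part 2: each entry is a dot product A_i . B^j
    return [[sum(A[i][t] * B[t][j] for t in range(m)) for j in range(k)]
            for i in range(n)]

A = [[1, 2], [3, 4]]              # illustrative 2x2 matrices
B = [[5, 6], [7, 8]]
print(matmul(A, B))
```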

Cayley appears to have been the inventor of this rule, although many other parts of matrix algebra had been discovered before him.