Determinant

$\spadesuit$Cramer's Rule $\spadesuit$

Before defining the determinant, we consider a system of linear equations with 2 unknowns.

$\displaystyle \left \{ \begin{array}{rrl}
a_{11}x_{1} + a_{12}x_{2} &=& b_{1} \\
a_{21}x_{1} + a_{22}x_{2} &=& b_{2}
\end{array}\right. $

Solve for $x_{1},x_{2}$. Multiply the first equation by $a_{22}$ and the second equation by $-a_{12}$. Then we add these two equations. We have

$\displaystyle (a_{11}a_{22} - a_{12}a_{21})x_{1} = b_{1}a_{22} - a_{12}b_{2} $

For $a_{11}a_{22} - a_{12}a_{21} \neq 0$, we have

$\displaystyle x _{1} = \frac{b_{1}a_{22} - a_{12}b_{2}}{a_{11}a_{22} - a_{12}a_{21}}$

Similarly, we have

$\displaystyle x_{2} = \frac{a_{11}b_{2} - b_{1}a_{21}}{a_{11}a_{22} - a_{12}a_{21}}$

Now denote

$\displaystyle a_{11}a_{22} - a_{12}a_{21} = \left \vert \begin{array}{lr}
a_{11}&a_{12}\\
a_{21}&a_{22}
\end{array}\right \vert $

This expression is called the determinant of the coefficient matrix $(a_{ij})$ and denoted by $\det(a_{ij})$ or $\vert(a_{ij})\vert$. Using this notation, we can write

$\displaystyle x_{1} = \frac{\left \vert \begin{array}{lr}
b_{1}&a_{12}\\
b_{2}&a_{22}
\end{array}\right \vert}{\left \vert\begin{array}{lr}
a_{11}&a_{12}\\
a_{21}&a_{22}
\end{array}\right \vert}, \qquad x_{2} = \frac{\left \vert \begin{array}{lr}
a_{11}&b_{1}\\
a_{21}&b_{2}
\end{array}\right \vert}{\left \vert\begin{array}{lr}
a_{11}&a_{12}\\
a_{21}&a_{22}
\end{array}\right \vert} $

This method is called Cramer's rule.
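The $2 \times 2$ formulas above translate directly into code. A minimal sketch in Python (the function name `cramer_2x2` and the sample system are illustrative, not from the text):

```python
# Solve a 2x2 system by Cramer's rule: both unknowns share the
# denominator a11*a22 - a12*a21, the determinant of the coefficients.
def cramer_2x2(a11, a12, a21, a22, b1, b2):
    d = a11 * a22 - a12 * a21
    if d == 0:
        raise ValueError("coefficient determinant is zero")
    x1 = (b1 * a22 - a12 * b2) / d
    x2 = (a11 * b2 - b1 * a21) / d
    return x1, x2

# Example system: x1 + 2*x2 = 5, 3*x1 + 4*x2 = 11  ->  x1 = 1, x2 = 2
print(cramer_2x2(1, 2, 3, 4, 5, 11))  # (1.0, 2.0)
```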

$\spadesuit$Cofactor Expansions $\spadesuit$

Definition 2..5  

Let $A = (a_{ij})$ be a square matrix of order $n$.
$(a)$ For $n = 1$, $\det(A) = a_{11}$
$(b)$ For $n = 2$, $\det(A) = a_{11}a_{22} - a_{12}a_{21}$
$(c)$ For $n \geq 2$, the determinant of the matrix obtained by deleting the $i$th row and the $j$th column is called a minor and denoted by $M_{ij}$. The cofactor of $a_{ij}$ is defined as follows:

$\displaystyle A_{ij} = (-1)^{i+j}M_{ij}.$

The determinant of A denoted by $\det(A)$ is defined as follows:

$\displaystyle \det{A} = \sum_{j=1}^{n}a_{ij}A_{ij}. $

This way of finding the determinant is called a cofactor expansion using the $i$th row. Similarly, the following way of finding the determinant is called a cofactor expansion using the $j$th column:

$\displaystyle \det{A} = \sum_{i=1}^{n}a_{ij}A_{ij}. $

For a square matrix of order $n$, there are $n$ cofactor expansions using rows and, similarly, $n$ cofactor expansions using columns. Surprisingly, it does not matter which row or column is used: they all give the same value.
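Definition 2.5 is directly recursive, so a short routine can compute determinants this way. A sketch (expansion along the first row; the helper name `det` is our choice):

```python
# Determinant by cofactor expansion along the first row (Definition 2.5).
# The minor is built by deleting row 1 and one column via list slicing.
def det(M):
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in M[1:]]  # delete row 1, column j+1
        total += (-1) ** j * M[0][j] * det(minor)       # sign (-1)^{1+(j+1)} = (-1)^j
    return total

print(det([[0, -2, 0], [-1, 3, 1], [4, 2, 1]]))  # -10, as in Example 2.13
```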

Theorem 2..13  

For a square matrix $A$, all cofactor expansions give the same result.

Example 2..13  

Find the determinant of the following: $\det \left(\begin{array}{rrr}
0&-2&0\\
-1&3&1\\
4&2&1
\end{array}\right) $

Answer Apply the cofactor expansion along the $1$st row.

$\displaystyle \left \vert\begin{array}{rrr}
0&-2&0\\
-1&3&1\\
4&2&1
\end{array}\right\vert = 0 + (-2)(-1)^{1+2}\left\vert\begin{array}{rr}
-1&1\\
4&1
\end{array}\right\vert + 0 = 2(-1-4) = -10 .
\ensuremath{ \blacksquare}
$

Permutation

A one-to-one mapping $\sigma$ of the set $\{1,2,\ldots,n\}$ onto itself is called a permutation. We denote the permutation $\sigma$ by

$\displaystyle \sigma = \left(\begin{array}{llll}
1 & 2 & \ldots & n\\
j_1 & j_2 & \ldots & j_n
\end{array}\right) \quad \mbox{or} \quad \sigma = j_1 j_2 \ldots j_n, \quad \mbox{where } j_i = \sigma(i)$

Note that since $\sigma$ is one-to-one and onto, the sequence $j_1 j_2 \ldots j_n$ is simply a rearrangement of the numbers $1,2,\ldots ,n$. Note also that the number of such permutations is $n!$, and that the set of them is usually denoted by $S_n$. We also note that if $\sigma \in S_n$, then the inverse mapping $\sigma^{-1} \in S_n$; and if $\sigma , \tau \in S_n$, then the composition mapping $\sigma \circ \tau \in S_n$. In particular, the identity mapping $\varepsilon$ satisfies

$\displaystyle \varepsilon = \sigma \circ \sigma^{-1} = \sigma^{-1} \circ \sigma$
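The closure of $S_n$ under composition and inversion can be checked concretely in code. A small sketch (the tuple encoding and the helper names `compose` and `inverse` are our choices):

```python
from itertools import permutations

# Represent a permutation of {1,...,n} as the tuple (sigma(1),...,sigma(n)).
def compose(sigma, tau):
    # (sigma o tau)(i) = sigma(tau(i)); tuples are 0-indexed internally
    return tuple(sigma[tau[i] - 1] for i in range(len(sigma)))

def inverse(sigma):
    inv = [0] * len(sigma)
    for i, j in enumerate(sigma, start=1):
        inv[j - 1] = i
    return tuple(inv)

S3 = list(permutations(range(1, 4)))
print(len(S3))                         # 6 = 3!
sigma = (2, 3, 1)
print(compose(sigma, inverse(sigma)))  # (1, 2, 3), the identity
```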

Determinant

Let $A = (a_{ij})$ be a square matrix of the order $n$. Then consider a product of $n$ elements of $A$ such that one and only one element comes from each row and one and only one element comes from each column. Such a product can be written in the form

$\displaystyle a_{1j_1}a_{2j_2}\ldots a_{nj_n}$

Now since the factors come from different columns, the sequence of second subscripts forms a permutation $\sigma = j_1 j_2 \ldots j_n$ in $S_n$. Conversely, each permutation in $S_n$ determines a product of the above form. Thus the matrix $A$ contains $n!$ such products.

Definition 2..6  

The determinant of the matrix $A$ of the order $n$, denoted by ${\rm det}(A)$ or $\vert A\vert$, is the following sum which is summed over all permutations $\sigma = j_1 j_2 \ldots j_n$ in $S_n$:

$\displaystyle \vert A\vert = \sum_{\sigma \in S_n}({\rm sgn}\sigma) a_{1j_1}a_{2j_2}\ldots a_{nj_n}$

We next explain how to determine ${\rm sgn}\sigma$. We say $\sigma$ is even or odd according to whether there is an even or odd number of pairs $(i,k)$ for which

$\displaystyle i > k \mbox{ but } i \mbox{ precedes } k \mbox{ in } \sigma$

We then define the sign of $\sigma$, written ${\rm sgn}\sigma$ by

$\displaystyle {\rm sgn}\sigma = \left\{\begin{array}{ll}
1 & \mbox{if } \sigma \mbox{ is even}\\
-1 & \mbox{if } \sigma \mbox{ is odd}
\end{array}\right.$

For example, $(1432)$ can be transformed into the identity by transpositions:

$\displaystyle (1432) \longrightarrow (1423) \longrightarrow (1243) \longrightarrow (1234) $

or

$\displaystyle (1432) \longrightarrow (1234)$

In this case, the numbers of transpositions are different, but both are odd. So we have ${\rm sgn}(1432) = -1$.
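The sign can also be computed directly from the inversion-count definition above, which avoids choosing a chain of transpositions. A sketch (the function name `sgn` is our choice):

```python
# sgn(sigma) via the inversion count: sigma is even exactly when the
# number of pairs (i, k) with i < k in position but seq[i] > seq[k] is even.
def sgn(seq):
    inversions = sum(1 for i in range(len(seq))
                       for k in range(i + 1, len(seq))
                       if seq[i] > seq[k])
    return 1 if inversions % 2 == 0 else -1

print(sgn((1, 4, 3, 2)))        # -1, matching sgn(1432) = -1
print(sgn((1, 5, 4, 6, 3, 2)))  # 1, matching Example 2.14
```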

Example 2..14  

Find $sgn(154632)$.

Answer

$\displaystyle (154632) \longrightarrow (124635) \longrightarrow (123645) \longrightarrow (123465) \longrightarrow (123456) $

Thus, ${\rm sgn}(154632) = +1 .$ $ \blacksquare$

$\spadesuit$Properties of Determinants $\spadesuit$

We now list the basic properties of the determinant.

Theorem 2..14  

Suppose that $B = A^{t}$. Then $\det{B} = \det{A}.$

Proof Suppose $A = (a_{ij})$. Then $A^{t} = (b_{ij})$ where $b_{ij} = a_{ji}$. Hence

$\displaystyle \vert A^t\vert$ $\displaystyle =$ $\displaystyle \sum_{\sigma \in S_n}({\rm sgn}\sigma) b_{1\sigma(1)}b_{2\sigma(2)}\ldots b_{n\sigma(n)}$  
  $\displaystyle =$ $\displaystyle \sum_{\sigma \in S_n}({\rm sgn}\sigma)a_{\sigma(1),1}a_{\sigma(2),2}\ldots a_{\sigma(n),n}$  

Let $\tau = \sigma^{-1}$. Then ${\rm sgn}\tau = {\rm sgn}\sigma$, and

$\displaystyle a_{\sigma(1),1}a_{\sigma(2),2}\ldots a_{\sigma(n),n} = a_{1\tau(1)}a_{2\tau(2)}\ldots a_{n\tau(n)}$

Therefore,

$\displaystyle \vert A^t\vert = \sum_{\sigma \in S_n}({\rm sgn}\tau)a_{1\tau(1)}a_{2\tau(2)}\ldots a_{n\tau(n)}$

However, as $\sigma$ runs through all the elements of $S_n$, $\tau = \sigma^{-1}$ runs through all the elements of $S_n$. Thus $\vert A^t\vert = \vert A\vert$.

Proof by cofactor expansion The cofactor expansion of $A^{t}$ using the $j$th column is the same as the cofactor expansion of $A$ using the $j$th row. Thus, $\det{A} = \det{B}$. $ \blacksquare$

With this theorem, all properties true for the rows are true for columns.

Theorem 2..15  

Suppose that the matrix $B$ is obtained by multiplying one row of $A$ by a constant $\alpha$. Then $\det{B} = \alpha \det{A}$.

Proof Let $B = (b_{ij})$ be the matrix obtained by multiplying the $k$th row of $A$ by $\alpha$. Then

$\displaystyle \vert B\vert = \sum_{\sigma \in S_{n}}({\rm sgn}\sigma)b_{1j_1}b_{2j_2}\ldots b_{nj_n}$

Each product contains exactly one factor from the $k$th row, with $b_{kj_k} = \alpha a_{kj_k}$ and $b_{ij_i} = a_{ij_i}$ for $i \neq k$. Hence

$\displaystyle \vert B\vert = \sum_{\sigma \in S_{n}}({\rm sgn}\sigma) a_{1j_1} \ldots (\alpha a_{kj_k}) \ldots a_{nj_n} = \alpha \vert A\vert$

Alternate proof Let $B$ be the matrix so that the $k$th row of $A$ is multiplied by $\alpha$. Now using the cofactor expansion on the $k$th row, we have,

$\displaystyle \det{B} = \sum_{i=1}^{n}(-1)^{k+i}b_{ki}M_{ki} $

Here, $b_{ki} = \alpha{a_{ki}}$. Since $M_{ki}$ is the same for $B$ and $A$, we have

$\displaystyle \det{B} = \sum_{i=1}^{n}(-1)^{k+i}\alpha a_{ki}M_{ki} = \alpha \sum_{i=1}^{n}(-1)^{k+i}a_{ki}M_{ki} = \alpha \det{A}.
\ensuremath{ \blacksquare}
$

Theorem 2..16  

Suppose that $B$ is the matrix obtained by interchanging two rows (columns) of $A$. Then we have

$\displaystyle \det{B} = -\det{A}.$

Proof We prove the theorem for the case that two columns are interchanged. Let $\tau$ be the transposition which interchanges the two numbers corresponding to the two columns of $A$ that are interchanged. If $A = (a_{ij})$ and $B = (b_{ij})$, then $b_{ij} = a_{i\tau(j)}$. Hence, for any permutation $\sigma$,

$\displaystyle b_{1\sigma(1)}b_{2\sigma(2)}\ldots b_{n\sigma(n)} = a_{1\tau \sigma(1)}a_{2\tau\sigma(2)}\ldots a_{n\tau\sigma(n)}$

Thus
$\displaystyle \vert B\vert$ $\displaystyle =$ $\displaystyle \sum_{\sigma \in S_{n}}({\rm sgn}\sigma)b_{1\sigma(1)}b_{2\sigma(2)}\ldots b_{n\sigma(n)}$  
  $\displaystyle =$ $\displaystyle \sum_{\sigma \in S_{n}}({\rm sgn}\sigma) a_{1\tau \sigma(1)}a_{2\tau\sigma(2)}\ldots a_{n\tau\sigma(n)}$  

Since the transposition $\tau$ is an odd permutation, ${\rm sgn}\tau \sigma = {\rm sgn}\tau \cdot {\rm sgn}\sigma = - {\rm sgn}\sigma$. Thus ${\rm sgn}\sigma = -{\rm sgn}\tau \sigma$, and so

$\displaystyle \vert B\vert = -\sum_{\sigma \in S_n}({\rm sgn}\tau \sigma)a_{1\tau\sigma(1)}a_{2\tau\sigma(2)}\ldots a_{n\tau\sigma(n)}$

But as $\sigma$ runs through all the elements of $S_n$, $\tau \sigma$ also runs through all the elements of $S_n$. Therefore, $\vert B\vert = -\vert A\vert$.

Theorem 2..17  

Suppose that $B$ is obtained by adding a multiple of one row of $A$ to another row. Then $\det{B} =\det{A}$.

Proof Suppose $\alpha$ times the $k$th row is added to the $j$th row of $A$. Then

$\displaystyle \vert B\vert$ $\displaystyle =$ $\displaystyle \sum_{\sigma}({\rm sgn}\sigma) a_{1i_1}a_{2i_2}\ldots (a_{ji_j}+\alpha a_{ki_j})\ldots a_{ni_n}$  
  $\displaystyle =$ $\displaystyle \sum_{\sigma}({\rm sgn}\sigma)a_{1i_1}a_{2i_2}\ldots a_{ji_j}\ldots a_{ni_n} + \alpha\sum_{\sigma}({\rm sgn}\sigma) a_{1i_1}a_{2i_2}\ldots a_{ki_j}\ldots a_{ni_n}$  

The second sum is the determinant of a matrix whose $j$th row equals its $k$th row, hence it is zero. The first sum is the determinant of $A$. Thus, $\vert B\vert = \vert A\vert + \alpha \cdot 0 = \vert A\vert$.
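Theorem 2.17 is easy to spot-check numerically. A sketch using the rule of Sarrus for $3\times 3$ determinants (the sample matrix and the multiplier are arbitrary choices, not from the text):

```python
# Check Theorem 2.17: adding a multiple of one row to another row
# leaves the determinant unchanged (3x3 case, rule of Sarrus).
def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

A = [[2, -1, 3], [1, 0, 2], [4, 1, -1]]
B = [row[:] for row in A]
alpha = 5
B[2] = [b + alpha * a for a, b in zip(A[0], A[2])]  # R3 <- R3 + 5*R1

print(det3(A), det3(B))  # -10 -10
```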

By the three theorems above, the matrix $B$ obtained by applying an elementary row operation to $A$ is the product of the corresponding elementary matrix and $A$.

Theorem 2..18  

$\det(EA) = \det(E)\det(A)$, where $E$ is an elementary matrix.

Proof Consider the $3$ elementary row operations $L_{1},L_{2},L_{3}$:
$L_{1}$ interchanges two rows of $A$; $L_{2}$ multiplies a row of $A$ by a scalar $\alpha \neq 0$; $L_{3}$ adds a multiple of one row of $A$ to another. Let $E_{1},E_{2},E_{3}$ be the corresponding elementary matrices. Then by the theorems 2.16, 2.15, and 2.17,

$\displaystyle \vert E_{1}\vert = -1, \  \vert E_{2}\vert = \alpha, \  \vert E_{3}\vert = 1 . $

Since $E_{i}A$ is obtained by applying the elementary operation $L_{i}$ to $A$,

$\displaystyle \vert E_{1}A\vert = -\vert A\vert = \vert E_{1}\Vert A\vert, \  \vert E_{2}A\vert = \alpha \vert A\vert = \vert E_{2}\Vert A\vert, \  \vert E_{3}A\vert = \vert A\vert = \vert E_{3}\Vert A\vert . $

Thus, $\det(EA) = \det(E)\det(A)$. $ \blacksquare$

Theorem 2..19  

If $A$ has any of the following properties, then $\det{A} = 0$.
(1) $A$ has a row of zeros.
(2) $A$ has two identical rows.
(3) $A$ has a row which is a constant multiple of another row.

Proof

(1) In the theorem 2.15, take $\alpha = 0$.
(2) Let $B$ be the matrix obtained by interchanging the two identical rows of $A$. Then by the theorem 2.16, $\det{B} = -\det{A}$. But the matrices $A$ and $B$ are the same. Thus, $\det{A} = -\det{A}$, which implies that $\det{A} = 0$.
(3) Let the $i$th row of $A$ be $\alpha$ times the $k$th row. If $\alpha = 0$, then $\det{A} = 0$ by (1). Thus assume that $\alpha \neq 0$. Let $B$ be the matrix obtained by multiplying the $i$th row of $A$ by $\frac{1}{\alpha}$. Then by the theorem 2.15, $\displaystyle{\det{B} = \frac{1}{\alpha}\det{A}}$. Also, $B$ has two identical rows, so by (2), $\det{B} = 0$. Thus, $\det{A} = 0$.

Example 2..15  

Calculate the following determinants:
$(a) \ \det \left(\begin{array}{rrr}
2&-1&2\\
-4&3&-3\\
0&1&1
\end{array}\right) \quad (b) \ \det \left(\begin{array}{rrr}
1&-2&0\\
-1&3&1\\
2&-3&1
\end{array}\right)$

Answer $(a)$ Since the $3$rd row satisfies $R_{3} = 2R_{1} + R_{2}$, subtracting $2R_{1} + R_{2}$ from $R_{3}$ produces a row of zeros, so by the theorem 2.19(1) the determinant is $0$.
$(b) \ \left\vert\begin{array}{rrr}
1&-2&0\\
-1&3&1\\
2&-3&1
\end{array}\right \vert \stackrel{2C_{1}+C_{2}}{=} \left\vert\begin{array}{rrr}
1&0&0\\
-1&1&1\\
2&1&1
\end{array}\right \vert \stackrel{\mbox{theorem 2.19}(2)}{=} 0 .$
Here $2C_{1} + C_{2}$ means adding double the $1$st column to the $2$nd column. $ \blacksquare$

Example 2..16  

Find the determinant of the following matrices.
$(a) \  A = \left(\begin{array}{rrr}
-4&-1&6\\
1&2&3\\
2&-3&4
\end{array}\right), \quad (b) \  B = \left(\begin{array}{rrr}
2&-3&4\\
1&2&3\\
-4&-1&6
\end{array}\right). $

Answer (a)

$\displaystyle \left\vert\begin{array}{rrr}
-4&-1&6\\
1&2&3\\
2&-3&4
\end{array}\right\vert$ $\displaystyle \stackrel{R_{1} \leftrightarrow R_{2}}{=}$ $\displaystyle - \left\vert\begin{array}{rrr}
1&2&3\\
-4&-1&6\\
2&-3&4
\end{array}\right\vert \stackrel{\begin{array}{cc}
{}^{4R_{1}+R_{2}}\\
{}^{-2R_{1}+R_{3}}
\end{array}}{=} - \left\vert\begin{array}{rrr}
1&2&3\\
0&7&18\\
0&-7&-2
\end{array}\right\vert$  
  $\displaystyle \stackrel{\begin{array}{cc}
{}^{R_{2}+R_{3}}\\
{}^{\frac{R_{2}}{7}}
\end{array}}{=}$ $\displaystyle -7 \left\vert\begin{array}{rrr}
1&2&3\\
0&1&\frac{18}{7}\\
0&0&16
\end{array}\right\vert = -7 \cdot 16 = -112 .$  

(b) The matrix $B$ is obtained by interchanging the $1$st row and the $3$rd row of the matrix $A$. Thus by the theorem 2.16, we have $\det B = 112.$ $ \blacksquare$
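The reduction strategy of part (a) generalizes to an algorithm: reduce to upper triangular form, track sign flips from row swaps, then multiply the diagonal entries. A sketch using exact rational arithmetic (the function name is our choice):

```python
from fractions import Fraction

# Determinant by row reduction to upper triangular form: row swaps flip
# the sign (Theorem 2.16), row additions change nothing (Theorem 2.17).
def det_by_elimination(M):
    A = [[Fraction(x) for x in row] for row in M]
    n, sign = len(A), 1
    for col in range(n):
        pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)       # no pivot: a dependent column, det = 0
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            sign = -sign             # a row swap flips the sign
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            A[r] = [a - factor * b for a, b in zip(A[r], A[col])]
    result = Fraction(sign)
    for i in range(n):
        result *= A[i][i]            # product of the diagonal entries
    return result

print(det_by_elimination([[-4, -1, 6], [1, 2, 3], [2, -3, 4]]))  # -112
```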

Example 2..17  

Using the theorems above, factor the following determinant.

$\displaystyle \left\vert \begin{array}{rrr}
1&a&a^3\\
1&b&b^3\\
1&c&c^3
\end{array}\right\vert $

Answer

$\displaystyle \left\vert \begin{array}{rrr}
1&a&a^3\\
1&b&b^3\\
1&c&c^3
\end{array}\right\vert $ $\displaystyle \stackrel{\begin{array}{ll}
{}^{-R_{1}+R_{2}}\\
{}^{-R_{1}+R_{3}}
\end{array}}{=}$ $\displaystyle \left \vert \begin{array}{rrr}
1&a&a^3\\
0&b-a&b^3-a^3\\
0&c-a&c^3-a^3
\end{array}\right \vert$  
  $\displaystyle \stackrel{\begin{array}{ll}
{}^{\frac{R_{2}}{b-a}}\\
{}^{\frac{R_{3}}{c-a}}
\end{array}}{=}$ $\displaystyle (b-a)(c-a)\left \vert \begin{array}{rrr}
1&a&a^3\\
0&1&b^2+ba+a^2\\
0&1&c^2+ca+a^2
\end{array}\right \vert$  
  $\displaystyle \stackrel{-R_{2}+R_{3}}{=}$ $\displaystyle (b-a)(c-a)\left \vert \begin{array}{rrr}
1&a&a^3\\
0&1&b^2+ba+a^2\\
0&0&c^2+ca-b^2-ba
\end{array}\right \vert$  
  $\displaystyle =$ $\displaystyle (b-a)(c-a)(c-b)(a+b+c) .
\ensuremath{ \blacksquare}$  
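The factorization can be spot-checked at sample values of $a,b,c$ by comparing the determinant against the product $(b-a)(c-a)(c-b)(a+b+c)$. A sketch (the sample triples are arbitrary):

```python
# Spot-check Example 2.17: det equals (b-a)(c-a)(c-b)(a+b+c)
# for several sample values, using the rule of Sarrus for 3x3.
def det3(m):
    (p, q, r), (s, t, u), (v, w, x) = m
    return p*t*x + q*u*v + r*s*w - r*t*v - q*s*x - p*u*w

for a, b, c in [(1, 2, 3), (2, -1, 5), (0, 4, 7)]:
    lhs = det3([[1, a, a**3], [1, b, b**3], [1, c, c**3]])
    rhs = (b - a) * (c - a) * (c - b) * (a + b + c)
    print(lhs == rhs)  # True each time
```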

We introduce two of the most important theorems about the determinant.

$\spadesuit$Product of determinants $\spadesuit$

Theorem 2..20  

$\det{AB} = \det{A}\det{B}.$

Proof The matrix $A$ can be written, for suitable elementary matrices $E_{i}$, as $A = E_{k}E_{k-1} \cdots E_{1}A_{R}$, where $A_{R}$ is the reduced form of $A$. Thus by the theorem 2.18, we have

$\displaystyle \vert A\vert = \vert E_{k}\Vert E_{k-1}\vert \cdots \vert E_{1}\Vert A_{R}\vert. $

Suppose that $\vert A\vert = 0$. Then $\vert A_{R}\vert = 0$ and some row vector of $A_{R}$ must be the zero vector. Then the corresponding row of $A_{R}B$ is also the zero vector, so $\vert AB\vert = \vert E_{k}\vert \cdots \vert E_{1}\Vert A_{R}B\vert = 0 = \vert A\Vert B\vert$. Suppose that $\vert A\vert \neq 0$. Then $\vert A_{R}\vert \neq 0$. Thus by the theorem 2.3, we have $A_{R} = I$. Hence,

$\displaystyle \vert A\vert = \vert E_{k}\Vert E_{k-1}\vert \cdots \vert E_{1}\vert, $

$\displaystyle AB = E_{k}E_{k-1} \cdots E_{1}B. $

which implies that

$\displaystyle \vert AB\vert = \vert E_{k}E_{k-1} \cdots E_{1}B\vert = \vert E_{...
...ots \vert E_{1}\Vert B\vert = \vert A\Vert B\vert.
\ensuremath{ \blacksquare}
$
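Theorem 2.20 is easy to check numerically on small matrices. A sketch (the two matrices are arbitrary; `matmul` is hard-coded to the $3\times 3$ case for brevity):

```python
# Numerical check of det(AB) = det(A)det(B) for two 3x3 matrices.
def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

def matmul(A, B):
    # standard 3x3 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[1, 2, 0], [0, 1, 3], [2, 1, 1]]
B = [[2, 0, 1], [1, 1, 0], [0, 2, 1]]
print(det3(matmul(A, B)) == det3(A) * det3(B))  # True
```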

Theorem 2..21  

Suppose that $A$ is a square matrix of order $n$. Then the following are equivalent:
$1)$ $A$ is regular
$2)$ $\vert A\vert \neq 0$
$3)$ $A^{-1}$ exists and $A^{-1} = \frac{1}{\vert A\vert}\left(\begin{array}{rrr}
A_{11}&\cdots&A_{1n}\\
\vdots&\vdots&\vdots\\
A_{n1}&\cdots&A_{nn}
\end{array}\right )^{t}$. Here, $A_{ij}$ is a cofactor of $A$.
$4)$ $A{\mathbf x} = {\bf b}$ has a unique solution and the solution is given by the following equation.

$\displaystyle x_{j} = \frac{1}{\vert A\vert} \left\vert\begin{array}{rrrrr}
a_{11}&\cdots&b_{1}&\cdots&a_{1n}\\
\vdots&&\vdots&&\vdots\\
a_{n1}&\cdots&b_{n}&\cdots&a_{nn}
\end{array}\right \vert = \frac{\vert[A_{j}:{\bf b}]\vert}{\vert A\vert}. $

This is called Cramer's rule.
$5)$ ${\rm rank}(A) = n$
$6)$ $A_{R} = I$

Before proving this theorem, note that the matrix $\left(\begin{array}{rrr}
A_{11}&\cdots&A_{1n}\\
\vdots&\vdots&\vdots\\
A_{n1}&\cdots&A_{nn}
\end{array}\right )^{t}$ in this theorem is called the adjoint of $A$ and denoted by ${\rm adj}A$.

Also, the matrix

$\displaystyle \left(\begin{array}{ccccc}
a_{11}&\cdots&b_{1}&\cdots&a_{1n}\\
\vdots& &\vdots&&\vdots\\
a_{n1}&\cdots&b_{n}&\cdots&a_{nn}
\end{array}\right)$

is obtained by replacing the $j$th column of $A$ by the ${\bf b} = \left(\begin{array}{c}
b_{1}\\
b_{2}\\
\vdots\\
b_{n}
\end{array}\right)$ and denoted by $[A_{j}:{\bf b}]$.

Proof 1) $\Rightarrow$ 2)
Suppose that $A$ is regular. Then $AA^{-1} = I$ and by the theorem 2.20, $\vert A\Vert A^{-1}\vert = \vert AA^{-1}\vert =\vert I\vert = 1$. Thus $\vert A\vert \neq 0$.
2) $\Rightarrow$ 3)
Since $\vert A\vert \neq 0$, let $X = \frac{1}{\vert A\vert}\left(\begin{array}{rrr}
A_{11}&\cdots&A_{1n}\\
\vdots&\vdots&\vdots\\
A_{n1}&\cdots&A_{nn}
\end{array}\right )^{t}$. Then

$\displaystyle XA$ $\displaystyle =$ $\displaystyle \frac{1}{\vert A\vert}\left(\begin{array}{rrr}
A_{11}&\cdots&A_{1n}\\
\vdots&\vdots&\vdots\\
A_{n1}&\cdots&A_{nn}
\end{array}\right )^{t}\left(\begin{array}{rrr}
a_{11}& \cdots &a_{1n}\\
\vdots&&\vdots\\
a_{n1}&\cdots&a_{nn}
\end{array}\right )$  
  $\displaystyle =$ $\displaystyle \frac{1}{\vert A\vert}\left(\begin{array}{rrr}
\vert A\vert&\cdots&0\\
\vdots&&\vdots\\
0&\cdots&\vert A\vert
\end{array}\right ) = I .$  

Hence, $XA = I$ and $X = A^{-1}$.
3) $\Rightarrow$ 4)
Multiply $A^{-1} = \frac{1}{\vert A\vert}\left(\begin{array}{rrr}
A_{11}&\cdots&A_{1n}\\
\vdots&\vdots&\vdots\\
A_{n1}&\cdots&A_{nn}
\end{array}\right )^{t}$ to the equation $A{\mathbf x} = {\bf b}$ from the left. Then we have
$\displaystyle {\mathbf x} = A^{-1}{\bf b}$ $\displaystyle =$ $\displaystyle \frac{1}{\vert A\vert}\left(\begin{array}{rrr}
A_{11}&\cdots&A_{1n}\\
\vdots&\vdots&\vdots\\
A_{n1}&\cdots&A_{nn}
\end{array}\right )^{t}\left(\begin{array}{c}
b_{1}\\
\vdots\\
b_{n}
\end{array}\right)$  
  $\displaystyle =$ $\displaystyle \frac{1}{\vert A\vert}\left(\begin{array}{rcr}
b_{1}A_{11}&+ \cdots + &b_{n}A_{n1}\\
\vdots&\vdots&\vdots\\
b_{1}A_{1n}&+ \cdots + &b_{n}A_{nn}
\end{array}\right )$  

The component $b_{1}A_{1j}+b_{2}A_{2j}+\cdots+b_{n}A_{nj}$ is the cofactor expansion of $[A_{j}:{\bf b}]$, where

$\displaystyle [A_{j}:{\bf b}] = \left(\begin{array}{rrrrr}
a_{11}&\cdots&b_{1}&\cdots&a_{1n}\\
\vdots&&\vdots&&\vdots\\
a_{n1}&\cdots&b_{n}&\cdots&a_{nn}
\end{array}\right ) $

Then,

$\displaystyle x_{j} = \frac{1}{\vert A\vert} \left\vert\begin{array}{rrrrr}
a_{11}&\cdots&b_{1}&\cdots&a_{1n}\\
\vdots&&\vdots&&\vdots\\
a_{n1}&\cdots&b_{n}&\cdots&a_{nn}
\end{array}\right \vert = \frac{\vert[A_{j}:{\bf b}]\vert}{\vert A\vert}. $

4) $\Rightarrow$ 5)
Suppose that the equation $A{\mathbf x} = {\bf b}$ has the unique solution ${\bf p}$, and let ${\bf C}$ be a solution of $A{\mathbf x} = {\bf0}$. By the theorem 2.3, ${\bf p}+{\bf C}$ is also a solution of $A{\mathbf x} = {\bf b}$. Since ${\bf p} = {\bf p}+{\bf C}$, we have ${\bf C} = {\bf0}$. Thus by the theorem 2.3, we have $0 = n - {\rm rank}(A)$. Hence, ${\rm rank}(A) = n$.

5) $\Rightarrow$ 6) and 6) $\Rightarrow$ 1) follow from the theorem 2.3. $ \blacksquare$
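Part 3) of the theorem, the adjoint formula for the inverse, can be implemented directly. A sketch with exact arithmetic (the helper names `det`, `cof`, and `adjugate_inverse` are our choices):

```python
from fractions import Fraction

# Inverse via the adjoint (Theorem 2.21(3)): A^{-1} = adj(A) / det(A),
# where adj(A) is the transposed matrix of cofactors.
def det(M):
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(n))

def adjugate_inverse(M):
    n = len(M)
    d = Fraction(det(M))
    def cof(i, j):
        minor = [row[:j] + row[j+1:] for r, row in enumerate(M) if r != i]
        return (-1) ** (i + j) * det(minor)
    # entry (i, j) of the inverse is the cofactor A_{ji} divided by det(A)
    return [[cof(j, i) / d for j in range(n)] for i in range(n)]

A = [[2, 1], [5, 3]]               # det(A) = 1
inv = adjugate_inverse(A)
print(inv == [[3, -1], [-5, 2]])   # True
```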

We introduce some useful ideas for finding determinants. The first is called a Vandermonde determinant.

Suppose some solution of a differential equation is given by

$\displaystyle y = c_1e^{ax} + c_2e^{bx} + c_3e^{cx}$

Suppose also that the solution satisfies the initial condition $y(0) = 1, y'(0) = 1, y''(0) = 1$. Then we have the following system of linear equations:
$\displaystyle 1$ $\displaystyle =$ $\displaystyle c_1 + c_2 + c_3$  
$\displaystyle 1$ $\displaystyle =$ $\displaystyle c_1 a + c_2 b + c_3 c$  
$\displaystyle 1$ $\displaystyle =$ $\displaystyle c_1 a^2 + c_2 b^2 + c_3 c^2$  

Rewrite this equation using the matrix. Then we have

$\displaystyle \begin{pmatrix}1 & 1 & 1\\
a & b & c\\
a^2 & b^2 & c^2
\end{pmatrix}\begin{pmatrix}c_1\\ c_2\\ c_3\end{pmatrix} = \begin{pmatrix}1 \\ 1\\ 1
\end{pmatrix}$

To find the solution of this equation by Cramer's rule, we need the determinant of the coefficient matrix in the denominator. Now we find that determinant.

$\displaystyle \det \begin{pmatrix}1 & 1 & 1\\
a & b & c\\
a^2 & b^2 & c^2
\end{pmatrix} = \det \begin{pmatrix}1 & 1 & 1\\
0 & b-a & c-a\\
0 & b^2-a^2 & c^2-a^2
\end{pmatrix} = (c-a)(c-b)(b-a)$

For a matrix of this form, the determinant can be found by subtracting $a$ times the first row from the second row and $a^2$ times the first row from the third row. This technique applies to a square matrix of order $n$, and such a determinant is called a Vandermonde determinant.

Another useful technique is block matrices. Consider a matrix $A$ such that

$\displaystyle A = \begin{pmatrix}
a_{11} & a_{12} & a_{13} & a_{14} & a_{15}\\
a_{21} & a_{22} & a_{23} & a_{24} & a_{25}\\
a_{31} & a_{32} & a_{33} & a_{34} & a_{35}\\
a_{41} & a_{42} & a_{43} & a_{44} & a_{45}\\
a_{51} & a_{52} & a_{53} & a_{54} & a_{55}
\end{pmatrix}$

Cut the matrix with a vertical line between the 3rd and 4th columns, and with a horizontal line between the 4th and 5th rows. Then we have

$\displaystyle A_{11} = \begin{pmatrix}
a_{11} & a_{12} & a_{13} \\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33} \\
a_{41} & a_{42} & a_{43}
\end{pmatrix}, A_{12} = \begin{pmatrix}
a_{14} & a_{15} \\
a_{24} & a_{25} \\
a_{34} & a_{35} \\
a_{44} & a_{45}
\end{pmatrix}, A_{21} = \begin{pmatrix}
a_{51} & a_{52} & a_{53}\end{pmatrix}, A_{22} = \begin{pmatrix}
a_{54} & a_{55}\end{pmatrix}$

Then we can write the matrix $A$ as the following block matrices:

$\displaystyle A = \begin{pmatrix}A_{11} & A_{12}\\ A_{21} & A_{22}\end{pmatrix}$

Now consider a matrix $X$ built from blocks $A$, $B$, and $O$, where $A$ is $n \times n$, $B$ is $m\times m$, and $O$ is a zero matrix. Then we have the following:

$\displaystyle 1. \det \begin{pmatrix}A & O\\ O & B\end{pmatrix} = \det(A)\det(B)$      
$\displaystyle 2. \det \begin{pmatrix}A & C\\ O & B\end{pmatrix} = \det(A)\det(B)$      

Proof Note that $\det \begin{pmatrix}A & O\\ O & I\end{pmatrix} = \det(A)$ and $\det \begin{pmatrix}I & O\\ O & B\end{pmatrix} = \det(B)$. Now note that

$\displaystyle \det \begin{pmatrix}A & O\\ O & B\end{pmatrix} = \det \begin{pmatrix}A & O\\ O & I\end{pmatrix} \det \begin{pmatrix}I & O\\ O & B\end{pmatrix}.$

This shows 1. Next we study the determinant of the upper triangular block matrix $\begin{pmatrix}A & C\\ O & B\end{pmatrix}$.

$\displaystyle \begin{pmatrix}A & C\\ O & B \end{pmatrix} = \begin{pmatrix}I & O\\ O&B\end{pmatrix}\begin{pmatrix}A & C\\ O & I\end{pmatrix}$

Here, we show

$\displaystyle \det \begin{pmatrix}A & C\\ O & I\end{pmatrix} = \det(A)$

Proof. Let $A$ be a square matrix of order $n$ and $I$ the identity matrix of order $m$. Let $X = \begin{pmatrix}A & C\\ O & I\end{pmatrix}$. Then

$\displaystyle \det(X) = \sum_{\sigma \in S_{n+m}}{\rm sgn}(\sigma)x_{1\sigma(1)}x_{2\sigma(2)}\cdots x_{n\sigma(n)}\cdots x_{n+m\sigma(n+m)}$

Note that for $n+1 \leq i \leq n+m$ and $\sigma(i) \leq n$, we have $x_{i\sigma(i)} = 0$, since these entries lie in the zero block. Also for $n+1 \leq i \leq n+m$ and $\sigma(i) > n$, we have

$\displaystyle x_{i\sigma(i)} = \left\{\begin{array}{l} 1 \ , \  i = \sigma(i)\\
0 \ , \  i \neq \sigma(i)
\end{array}\right.$

Now let

$\displaystyle \tau = \left(\begin{array}{lllll}1 & 2 & 3 & \ldots & n\\
\sigma(1) & \sigma(2) & \sigma(3) & \ldots & \sigma(n)
\end{array}\right)$

$\displaystyle \rho = \left(\begin{array}{lllll}n+1 & n+2 & n+3 & \ldots & n+m\\
\sigma(n+1) & \sigma(n+2) & \sigma(n+3) & \ldots & \sigma(n+m)
\end{array}\right).$

Then $\sigma = \tau\rho$, where $\tau \in S_{n}$, $\rho \in S_{m}$, and

$\displaystyle {\rm sgn}(\sigma) = {\rm sgn}(\tau \rho) = {\rm sgn}(\tau) {\rm sgn}(\rho)$

Thus, we have
    $\displaystyle \det(X) = \sum_{\tau \rho}{\rm sgn}(\tau \rho)x_{1\tau(1)}x_{2\tau(2)}\ldots x_{n\tau(n)}x_{n+1\rho(n+1)}x_{n+2\rho(n+2)}\ldots x_{n+m\rho(n+m)}$  
    $\displaystyle = \left(\sum_{\tau}{\rm sgn}(\tau)x_{1\tau(1)}x_{2\tau(2)}\ldots x_{n\tau(n)}\right)\left(\sum_{\rho}{\rm sgn}(\rho)x_{n+1\rho(n+1)}x_{n+2\rho(n+2)}\ldots x_{n+m\rho(n+m)}\right)$  
    $\displaystyle = \det(A)\cdot 1 = \det(A)$  
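Both block rules can be checked numerically on small integer blocks. A sketch (the blocks $A$, $B$, $C$ are arbitrary; `det` is a generic cofactor-expansion determinant):

```python
# Check the block-triangular rule det([[A, C], [O, B]]) = det(A) det(B)
# by assembling a 4x4 matrix X from 2x2 blocks.
def det(M):
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(n))

A = [[1, 2], [3, 5]]          # det = -1
B = [[2, 1], [1, 1]]          # det = 1
X = [[1, 2, 7, 8],            # upper-left block A, upper-right block C
     [3, 5, 9, 4],
     [0, 0, 2, 1],            # lower-left block O, lower-right block B
     [0, 0, 1, 1]]
print(det(X) == det(A) * det(B))  # True
```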

Exercise 2-8

1. Find the determinant of the following matrices:

(a) $\left \vert \begin{array}{rrr}
2&-3&1\\
1&0&2\\
1&-1&1
\end{array}\right\vert $ (b) $\left \vert \begin{array}{rrrr}
2&4&0&5\\
1&-2&-1&3\\
1&2&3&0\\
3&3&-4&-4
\end{array}\right\vert $ (c) $\left \vert \begin{array}{ccccc}
0&0&0&1&0\\
0&1&0&0&0\\
0&0&0&0&1\\
1&0&0&0&0\\
0&0&1&0&0
\end{array}\right\vert$ (d) $\left\vert\begin{array}{rrrrr}
3 & 5 & 1 & 2 & -1\\
2 & 6 & 0 & 9 & 1\\
0 & 0 & 7 & 1 & 2\\
0 & 0 & 3 & 2 & 5\\
0 & 0 & 0 & 0 & -6
\end{array}\right\vert$

2. Factor the following determinants:

(a) $\left\vert\begin{array}{rrr}
1&a^2&(b+c)^2\\
1&b^2&(c+a)^2\\
1&c^2&(a+b)^2
\end{array}\right\vert $ (b) $\left\vert\begin{array}{rrr}
b+c&b&c\\
a&c+a&c\\
a&b&a+b
\end{array}\right\vert $ (c) $\left\vert\begin{array}{rrrr}
1&1&1&1\\
a & b & c & d\\
a^2 & b^2 & c^2 & d^2\\
a^3 & b^3 & c^3 & d^3
\end{array}\right\vert $

3. Solve the following equation: $\left\vert\begin{array}{rrr}
1-x&2&2\\
2&2-x&1\\
2&1&2-x
\end{array}\right\vert = 0$

4. Show that the equation of the straight line going through two points $(a_{1},a_{2})$ and $(b_{1},b_{2})$ is given by

$\displaystyle \left\vert\begin{array}{rrr}
x&y&1\\
a_{1}&a_{2}&1\\
b_{1}&b_{2}&1
\end{array}\right\vert = 0$

5. Show that the equation of the plane going through 3 points $(a_{1},a_{2},a_{3}),(b_{1},b_{2},b_{3}),(c_{1},c_{2},c_{3})$ is given by

$\displaystyle \left\vert\begin{array}{cccc}
x&y&z&1\\
a_{1}&a_{2}&a_{3}&1\\
b_{1}&b_{2}&b_{3}&1\\
c_{1}&c_{2}&c_{3}&1
\end{array}\right\vert = 0$


6. Suppose that a system of linear equation $A{\mathbf x} = {\bf0}$ has a fundamental solution ${\mathbf x} \neq {\bf0}$. Then show that $\vert A\vert = 0$.

7. Solve the following system of linear equations using Cramer's rule.

(a) $\left\{\begin{array}{rrr}
x-3y&=&5\\
3x-5y&=&7
\end{array}\right . $

(b) $\left\{\begin{array}{rrr}
x+y+z&=&3\\
x+2y+2z&=&5\\
x+2y+3z&=&6
\end{array}\right . $