Diagonalization of Matrices

Theorem 4.1  

Suppose that the matrices $A$ and $B$ are similar. Then their characteristic polynomials are the same and so are the eigenvalues.

Proof

Since $A$ and $B$ are similar, $B = P^{-1}AP$ for some regular matrix $P$. Then

$\displaystyle \Phi_{P^{-1}AP}(t)$ $\displaystyle =$ $\displaystyle \vert P^{-1}AP - t I\vert = \vert P^{-1}(A - t I)P\vert$  
  $\displaystyle =$ $\displaystyle \vert P^{-1}\vert\cdot \vert A - t I\vert\cdot\vert P\vert = \vert A - t I\vert = \Phi_{A}(t). \blacksquare$  
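As a numerical illustration (a minimal sketch in numpy; the matrices $A$ and $P$ below are arbitrary choices, not taken from the text), the characteristic polynomial coefficients of $A$ and $P^{-1}AP$ coincide:

import numpy as np

# Arbitrary illustrative choices; P must be regular (invertible).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])

B = np.linalg.inv(P) @ A @ P   # a matrix similar to A

# np.poly gives the coefficients of det(tI - A), which agrees with
# Phi_A(t) = |A - tI| up to the sign (-1)^n.
print(np.poly(A))   # [ 1. -5.  6.], i.e. t^2 - 5t + 6
print(np.poly(B))   # the same coefficients, as Theorem 4.1 predicts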

A matrix $A$ is called diagonalizable if there exists a regular matrix $P$ so that $P^{-1}AP$ is a diagonal matrix.
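In floating-point practice, numpy.linalg.eig returns the eigenvalues together with a matrix whose columns are eigenvectors, and that matrix can serve as $P$ when $A$ is diagonalizable (a minimal sketch; the example matrix is an arbitrary choice):

import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])              # an arbitrary diagonalizable matrix

vals, P = np.linalg.eig(A)              # columns of P are eigenvectors of A
D = np.linalg.inv(P) @ A @ P            # P^{-1} A P

print(np.allclose(D, np.diag(vals)))    # True: P^{-1} A P is diagonal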

Theorem 4.2  

For an $n$-square matrix $A$, the following conditions are equivalent.
$(1)$ $A$ is diagonalizable.
$(2)$ $A$ has $n$ linearly independent eigenvectors.
$(3)$ Let the distinct eigenvalues of $A$ be $\lambda_{1},\lambda_{2},\ldots,\lambda_{p}$ and the corresponding eigenspaces be $V(\lambda_{1}), V(\lambda_{2}),\ldots,V(\lambda_{p})$. Then,

$\displaystyle n = \dim V(\lambda_{1}) + \dim V(\lambda_{2}) + \cdots + \dim V(\lambda_{p}) $

Proof $(1) \Rightarrow (2)$
Let $P$ be a regular matrix so that $P^{-1}AP$ is diagonal. Then

$\displaystyle P^{-1}AP = \left(\begin{array}{cccc}
\lambda_{1}&0&\cdots&0\\
0&\lambda_{2}&\cdots&0\\
\vdots&\vdots&\ddots&\vdots\\
0&0&\cdots&\lambda_{n}
\end{array}\right) $

Now multiply on the left by $P =({\bf p}_{1},{\bf p}_{2},\ldots,{\bf p}_{n})$. Then

$\displaystyle P(P^{-1}AP) = A({\bf p}_{1},{\bf p}_{2},\ldots,{\bf p}_{n}) = ({\bf p}_{1},{\bf p}_{2},\ldots,{\bf p}_{n})\left(\begin{array}{cccc}
\lambda_{1}&0&\cdots&0\\
0&\lambda_{2}&\cdots&0\\
\vdots&\vdots&\ddots&\vdots\\
0&0&\cdots&\lambda_{n}
\end{array}\right) $

and

$\displaystyle (A{\bf p}_{1},A{\bf p}_{2},\ldots,A{\bf p}_{n}) = (\lambda_{1}{\bf p}_{1},\lambda_{2}{\bf p}_{2},\ldots,\lambda_{n}{\bf p}_{n}) $

Comparing the column vectors on both sides, we see that $\lambda_{1},\lambda_{2},\ldots,\lambda_{n}$ are eigenvalues of $A$ and ${\bf p}_{1},{\bf p}_{2},\ldots,{\bf p}_{n}$ are corresponding eigenvectors. Also, $P$ is regular, so by Theorem 2.5, ${\bf p}_{1},{\bf p}_{2},\ldots,{\bf p}_{n}$ are linearly independent.
$(2) \Rightarrow (3)$
First we show that the sum $V(\lambda_{1})+V(\lambda_{2}) + \cdots + V(\lambda_{p})$ is direct. By Exercise 4.1, it suffices to show

$\displaystyle V(\lambda_{i}) \cap \{V(\lambda_{1}) + V(\lambda_{2}) + \cdots + V(\lambda_{i-1})\} = \{{\bf0}\}  (i = 2,3,\ldots,p) $

Suppose

$\displaystyle {\mathbf x}_{i} \in V(\lambda_{i}) \cap \{V(\lambda_{1}) + V(\lambda_{2}) + \cdots + V(\lambda_{i-1})\}. $

Then ${\mathbf x}_{i}$ can be written as

$\displaystyle {\mathbf x}_{i} = {\mathbf x}_{1} + {\mathbf x}_{2} + \cdots + {\mathbf x}_{i-1},\quad {\mathbf x}_{j} \in V(\lambda_{j})\ \ (j = 1,2,\ldots,i-1) $

For any scalars $\lambda, \mu$, we have $(A - \lambda I)(A - \mu I) = (A - \mu I)(A - \lambda I)$, and $(A - \lambda I){\mathbf x}_{j} = A{\mathbf x}_{j} - \lambda {\mathbf x}_{j} = (\lambda_{j} - \lambda){\mathbf x}_{j}$ for ${\mathbf x}_{j} \in V(\lambda_{j})$. Thus,
    $\displaystyle (A - \lambda_{1} I)(A - \lambda_{2} I)\cdots(A - \lambda_{i-1} I){\mathbf x}_{i}$  
  $\displaystyle =$ $\displaystyle (\lambda_{i} - \lambda_{1})(\lambda_{i} - \lambda_{2})\cdots(\lambda_{i} - \lambda_{i-1}){\mathbf x}_{i} = {\bf0} + {\bf0} + \cdots + {\bf0} = {\bf0},$  

where the last equality follows by applying the same product to ${\mathbf x}_{1} + {\mathbf x}_{2} + \cdots + {\mathbf x}_{i-1}$, each factor annihilating the corresponding term. Since the eigenvalues are distinct, $(\lambda_{i} - \lambda_{1})\cdots(\lambda_{i} - \lambda_{i-1}) \neq 0$, and so ${\mathbf x}_{i} = {\bf0}$. Hence,

$\displaystyle V(\lambda_{i}) \cap \{V(\lambda_{1})+\cdots+V(\lambda_{i-1})\} = \{{\bf0}\}, $

so the sum is direct. Since the $n$ linearly independent eigenvectors of $(2)$ lie in this direct sum, its dimension is $n$, which proves $(3)$.


$(3) \Rightarrow (1)$
Let $\{{\bf p}_{1},{\bf p}_{2},\ldots,{\bf p}_{q}\},\{{\bf p}_{q+1},{\bf p}_{q+2},\ldots,{\bf p}_{r}\},\ldots,\{{\bf p}_{s+1},{\bf p}_{s+2},\ldots,{\bf p}_{n}\}$ be bases of

$\displaystyle V(\lambda_{1}), V(\lambda_{2}), \ldots, V(\lambda_{p})$

Then since

$\displaystyle n = \dim V(\lambda_{1}) + \dim V(\lambda_{2}) + \cdots + \dim V(\lambda_{p}) ,$

$\displaystyle V(\lambda_{i}) \cap V(\lambda_{j}) = \{{\bf0}\}  (i \neq j). $

Hence the vectors

$\displaystyle \{{\bf p}_{1},{\bf p}_{2},\ldots,{\bf p}_{q},{\bf p}_{q+1},{\bf p}_{q+2},\ldots,{\bf p}_{r},\ldots,{\bf p}_{s+1},{\bf p}_{s+2},\ldots,{\bf p}_{n}\}$

are linearly independent. Let $P =({\bf p}_{1},{\bf p}_{2},\ldots,{\bf p}_{n})$. Then
$\displaystyle AP$ $\displaystyle =$ $\displaystyle (A{\bf p}_{1},A{\bf p}_{2},\ldots,A{\bf p}_{n}) = (\lambda_{1}{\bf p}_{1},\ldots,\lambda_{1}{\bf p}_{q},\lambda_{2}{\bf p}_{q+1},\ldots,\lambda_{p}{\bf p}_{n})$  
  $\displaystyle =$ $\displaystyle ({\bf p}_{1},{\bf p}_{2},\ldots,{\bf p}_{n})\left(\begin{array}{ccc}
\lambda_{1}&&0\\
&\ddots&\\
0&&\lambda_{p}
\end{array}\right),$  

where each $\lambda_{i}$ appears $\dim V(\lambda_{i})$ times on the diagonal.

Thus, $P^{-1}AP$ is diagonal. $ \blacksquare$

From this theorem, if $A$ is diagonalizable, then the diagonal entries of $P^{-1}AP$ are the eigenvalues of $A$, and each eigenvalue appears as many times as the dimension of its eigenspace.
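Condition $(3)$ can also be checked numerically: $\dim V(\lambda)$ equals $n$ minus the rank of $A - \lambda I$. Below is a rough sketch (the rounding used to merge nearly equal computed eigenvalues is a crude heuristic, adequate only for small, well-behaved examples):

import numpy as np

def is_diagonalizable(A, decimals=6):
    """Test condition (3): the eigenspace dimensions must sum to n."""
    n = A.shape[0]
    # Round to merge computed eigenvalues that should coincide exactly.
    eigenvalues = np.unique(np.round(np.linalg.eigvals(A), decimals))
    total = sum(n - np.linalg.matrix_rank(A - lam * np.eye(n), tol=1e-6)
                for lam in eigenvalues)
    return total == n

print(is_diagonalizable(np.array([[0.0, 1.0, 1.0],
                                  [1.0, 0.0, 1.0],
                                  [1.0, 1.0, 0.0]])))   # True (Example 4.1)
print(is_diagonalizable(np.array([[3.0, 1.0],
                                  [-1.0, 1.0]])))       # False (Example 4.3 below)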

Example 4.1  

Diagonalize the following matrix.

$\displaystyle A = \left(\begin{array}{rrr}
0&1&1\\
1&0&1\\
1&1&0
\end{array}\right) $

Answer By Example 3.2, the eigenvalues of $A$ are $\lambda = 2, -1$, and the eigenspaces are

$\displaystyle V(2) = \{\alpha \left(\begin{array}{r}
1\\
1\\
1
\end{array}\right)\}, \quad V(-1) = \{\beta \left(\begin{array}{r}
-1\\
1\\
0
\end{array}\right) + \gamma \left(\begin{array}{r}
-1\\
0\\
1
\end{array}\right)\}.$

Then, taking $P$ to be the matrix whose columns are these eigenvectors,

$\displaystyle P = \left(\begin{array}{rrr}
1&-1&-1\\
1&1&0\\
1&0&1
\end{array}\right), \quad P^{-1}AP = \left(\begin{array}{rrr}
2&0&0\\
0&-1&0\\
0&0&-1
\end{array}\right) $

$ \blacksquare$
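A quick numpy check of this computation (a sketch reproducing the example):

import numpy as np

A = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
P = np.array([[1.0, -1.0, -1.0],
              [1.0,  1.0,  0.0],
              [1.0,  0.0,  1.0]])

# Should print diag(2, -1, -1) up to rounding error.
print(np.linalg.inv(P) @ A @ P)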

A necessary and sufficient condition for a square matrix to be diagonalizable is that, for every eigenvalue, the dimension of the eigenspace equals the multiplicity of the eigenvalue.

$\spadesuit$Triangular Matrix $\spadesuit$

Given a square matrix $A$, if we can find a regular matrix $P$ so that $P^{-1}AP$ is an upper triangular matrix, then we say $A$ is triangularized by $P$. For an $n$-square matrix $A = (a_{ij})$, $A^{*} = (\bar{a_{ij}})^{t} = (\bar{a_{ji}})$ is called the conjugate transpose of $A$. A matrix satisfying $A = A^{*}$ is called a Hermitian matrix. When $A$ is a real matrix, $A^{*}$ is the same as $A^{t}$, and a Hermitian matrix is the same as a symmetric matrix.

Example 4.2  

For $A = \left(\begin{array}{cc}
2&1 + 2i\\
1 - 2i&1
\end{array}\right)$, find $A^{*}$.

Answer $A^{*} = \left(\begin{array}{cc}
2&1 + 2i\\
1 - 2i&1
\end{array}\right) = A$. Hence $A$ is a Hermitian matrix. $ \blacksquare$
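In numpy the conjugate transpose is written A.conj().T, so the example can be verified directly (a minimal sketch):

import numpy as np

A = np.array([[2, 1 + 2j],
              [1 - 2j, 1]])

A_star = A.conj().T                 # the conjugate transpose A*

print(np.array_equal(A_star, A))    # True: A = A*, so A is Hermitian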

If $U^{*}U = UU^{*} = I$ for an $n$-square complex matrix $U$, then $U$ is called a unitary matrix. If $A^{t}A = AA^{t} = I$ for an $n$-square real matrix $A$, then $A$ is called an orthogonal matrix. It follows that if $U$ is a unitary matrix, then $U^{*} = U^{-1}$, and if $A$ is an orthogonal matrix, then $A^{t} = A^{-1}$.
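The defining condition translates directly into a numerical test (a sketch; the sample matrix is an arbitrary unitary example):

import numpy as np

def is_unitary(U):
    """U is unitary when U* U = I."""
    n = U.shape[0]
    return np.allclose(U.conj().T @ U, np.eye(n))

# An arbitrary unitary example.
U = np.array([[1, 1j],
              [1j, 1]]) / np.sqrt(2)
print(is_unitary(U))    # True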

Theorem 4.3  

Let the eigenvalues of an $n$-square matrix $A$ be $\lambda_{1},\lambda_{2},\ldots,\lambda_{n}$. Then $A$ can be transformed to an upper triangular matrix by a suitable unitary matrix $U$:

$\displaystyle U^{-1}AU = U^{*}AU = \left(\begin{array}{rrrrr}
\lambda_{1}&&&&\\
&\lambda_{2}&&*&\\
&&\ddots&&\\
&0&&&\\
&&&&\lambda_{n}
\end{array}\right)
$

Proof We use mathematical induction on the size $n$ of $A$. For $n = 1$, $A = (a_{11})$ itself is upper triangular. Assume the theorem holds for $(n-1)$-square matrices; we show it holds for $n$-square matrices.

Let $\lambda_{1}$ be an eigenvalue of $A$ and ${\bf u}_{1}$ a corresponding unit eigenvector. Extend ${\bf u}_{1}$ by unit vectors ${\bf u}_{2}, \ldots, {\bf u}_{n}$ to an orthonormal basis of ${\mathcal C}^{n}$. Then by Exercise 4.1,

$\displaystyle U_{1} = ({\bf u}_{1},{\bf u}_{2},\ldots,{\bf u}_{n}) $

is a unitary matrix, and

$\displaystyle U_{1}^{-1}AU_{1}$ $\displaystyle =$ $\displaystyle U_{1}^{-1}(\lambda_{1}{\bf u}_{1},A{\bf u}_{2},\ldots,A{\bf u}_{n})$  
  $\displaystyle =$ $\displaystyle (\lambda_{1}{\bf e}_{1},U_{1}^{-1}A{\bf u}_{2},\ldots,U_{1}^{-1}A{\bf u}_{n})$  
  $\displaystyle =$ $\displaystyle \left(\begin{array}{cc}
\lambda_{1}&*\\
{\bf0}&B
\end{array}\right)$  

Here $B$ is an $(n-1)$-square matrix. By Theorem 4.1, $A$ and $U_{1}^{-1}AU_{1}$ have the same eigenvalues, so the eigenvalues of $B$ are the eigenvalues $\lambda_{2},\lambda_{3},\ldots,\lambda_{n}$ of $A$ other than $\lambda_{1}$. By the induction hypothesis, for the $(n-1)$-square matrix $B$ there exists an $(n-1)$-square unitary matrix $U_{2}$ such that $U_{2}^{-1}BU_{2}$ is an upper triangular matrix:

$\displaystyle U_{2}^{-1}BU_{2} = \left(\begin{array}{rrrrr}
\lambda_{2}&&&&\\
&\lambda_{3}&&*&\\
&&\ddots&&\\
&0&&&\\
&&&&\lambda_{n}
\end{array}\right) $

Now let $U = U_{1}\left(\begin{array}{rr}
1&0\\
0&U_{2}
\end{array}\right) $. Then $U$ is unitary and
$\displaystyle U^{-1}AU$ $\displaystyle =$ $\displaystyle \left(\begin{array}{rr}
1&0\\
0&U_{2}
\end{array}\right)^{-1}U_{1}^{-1}AU_{1}\left(\begin{array}{rr}
1&0\\
0&U_{2}
\end{array}\right)$  
  $\displaystyle =$ $\displaystyle \left(\begin{array}{rr}
1&0\\
0&U_{2}^{-1}
\end{array}\right)\left(\begin{array}{rr}
\lambda_{1}&*\\
0&B
\end{array}\right)\left(\begin{array}{rr}
1&0\\
0&U_{2}
\end{array}\right)$  
  $\displaystyle =$ $\displaystyle \left(\begin{array}{rr}
\lambda_{1}&*\\
0&U_{2}^{-1}BU_{2}
\end{array}\right)$  
  $\displaystyle =$ $\displaystyle \left(\begin{array}{rrrrr}
\lambda_{1}&&&&\\
&\lambda_{2}&&*&\\
&&\ddots&&\\
&0&&&\\
&&&&\lambda_{n}
\end{array}\right) .$  

Thus $U^{-1}AU$ is an upper triangular matrix. $ \blacksquare$
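This decomposition is known as the Schur decomposition, and scipy.linalg.schur computes it; with output='complex' the transforming matrix is unitary and the result is upper triangular even when a real matrix has complex eigenvalues (a minimal sketch):

import numpy as np
from scipy.linalg import schur

A = np.array([[3.0, 1.0],
              [-1.0, 1.0]])                   # the matrix of Example 4.3 below

T, U = schur(A, output='complex')             # A = U T U*, T upper triangular

print(np.allclose(U.conj().T @ A @ U, T))     # True
print(np.diag(T))                             # eigenvalues on the diagonal: 2, 2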

Example 4.3  

Determine whether $A = \left(\begin{array}{rr}
3&1\\
-1&1
\end{array}\right)$ is diagonalizable. If not, triangularize it.

Answer $\Phi_{A}(t) = \left\vert \begin{array}{rr}
3-t&1\\
-1&1-t
\end{array}\right\vert = (t - 2)^{2}$ implies that the only eigenvalue is $2$. Also,

$\displaystyle A - 2I = \left(\begin{array}{rr}
1&1\\
-1&-1
\end{array}\right) \longrightarrow \left(\begin{array}{rr}
1&1\\
0&0
\end{array}\right) $

implies that the corresponding eigenvector is

$\displaystyle {\mathbf x} = \alpha \left(\begin{array}{r}
1\\
-1
\end{array}\right) . $

Thus, the eigenspace is $\{\alpha \left(\begin{array}{r}
1\\
-1
\end{array}\right) \}$ and $\dim V(2) = 1 < 2$. Thus, by Theorem 4.2, it is impossible to diagonalize $A$. So, we will try to obtain an upper triangular matrix.

From the eigenvector $\left(\begin{array}{r}
1\\
-1
\end{array}\right)$, we obtain the unit eigenvector $\frac{1}{\sqrt{2}} \left(\begin{array}{r}
1\\
-1
\end{array}\right)$. Now using the Gram-Schmidt orthonormalization, we create the orthonormal basis $\{\frac{1}{\sqrt{2}}\left(\begin{array}{c}
1\\
-1
\end{array}\right), \frac{1}{\sqrt{2}}\left(\begin{array}{c}
1\\
1
\end{array}\right) \}$. Let $U = \frac{1}{\sqrt{2}}\left(\begin{array}{rr}
1&1\\
-1&1
\end{array}\right)$. Then $U$ is an orthogonal matrix and

$\displaystyle U^{-1}AU = U^{t}AU = \left(\begin{array}{rr}
2&2\\
0&2
\end{array}\right)$

$ \blacksquare$
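Checking the triangularization numerically (a sketch reproducing the example):

import numpy as np

A = np.array([[3.0, 1.0],
              [-1.0, 1.0]])
U = np.array([[1.0, 1.0],
              [-1.0, 1.0]]) / np.sqrt(2)

# U is orthogonal, so U^{-1} = U^t; should print [[2, 2], [0, 2]].
print(U.T @ A @ U)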

Theorem 4.4  

Suppose that an $n$-square matrix $A$ has $n$ distinct real eigenvalues. Then there exists an orthogonal matrix $P$ so that $P^{-1}AP$ is an upper triangular matrix.
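When the eigenvalues are real and distinct, the real Schur form delivers such a $P$ (a minimal sketch; the test matrix is an arbitrary choice with distinct real eigenvalues):

import numpy as np
from scipy.linalg import schur

A = np.array([[2.0, 1.0],
              [3.0, 4.0]])               # distinct real eigenvalues 1 and 5

T, P = schur(A, output='real')           # P orthogonal, T upper triangular here

print(np.allclose(P.T @ P, np.eye(2)))   # True: P is orthogonal
print(np.allclose(P.T @ A @ P, T))       # True: P^{-1} A P = T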

Exercise 4-2

1. Determine whether the following matrices are diagonalizable. If so, find a regular matrix $P$ and diagonalize. If not, find an upper triangular matrix.

(a) $\left(\begin{array}{rr}
1&2\\
0&-1
\end{array}\right) $ (b) $\left(\begin{array}{rrr}
2&1&1\\
1&2&1\\
0&0&1
\end{array}\right) $ (c) $\left(\begin{array}{rrr}
1&1&6\\
-1&3&6\\
1&-1&-1
\end{array}\right) $

2. Suppose $U,W$ are subspaces of the vector space $V$. Show that $U + W$ is a direct sum if and only if $U \cap W = \{\bf0\}$.

3. Let $U,W$ be finite dimensional. Then show the following is true.

$\displaystyle \dim (U \oplus W) = \dim U + \dim W $

4. For the $3$-dimensional vector space ${\mathcal R}^{3}$, let

$\displaystyle U = \{(x_{1},x_{2},x_{3}) : x_{1}+x_{2}+x_{3} = 0\}, W = \{(x_{1},x_{2},x_{3}) : x_{1} = x_{2} = x_{3} \}. $

Then show that ${\mathcal R}^{3} = U \oplus W$.

5. Show that the absolute value of an eigenvalue $\lambda$ of an orthogonal matrix is $1$.

6. Suppose that the column vectors of $U$ form an orthonormal basis. Then show that $U$ is a unitary matrix.