Solutions

Chapter 1
Exercise 1.2.1

1.

(a) Add the corresponding components:

${\bf A} + {\bf B} = (2,-1,3) + (-1,1,4) = (2-1,-1+1,3+4) = (1,0,7).$

(b) Apply the scalar multiplication to each component:

$3{\bf B} = 3(-1,1,4) = (-3,3,12).$

(c) $\Vert{\bf A}\Vert = \sqrt{a_{1}^2 + a_{2}^2 + a_{3}^2}$. Then

$\Vert{\bf A}\Vert = \Vert(2,-1,3)\Vert = \sqrt{2^2 + (-1)^2 + 3^2} = \sqrt{14}.$

(d) First apply the scalar multiplications, then combine the vectors:

$3{\bf B} - 2{\bf A} = 3(-1,1,4) - 2(2,-1,3) = (-3,3,12) - (4,-2,6) = (-7,5,6).$
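These componentwise operations are easy to check numerically; the following is a small sketch, assuming NumPy is available:

```python
import numpy as np

# Vectors from problem 1.
A = np.array([2, -1, 3])
B = np.array([-1, 1, 4])

sum_AB = A + B               # (a) componentwise addition
three_B = 3 * B              # (b) scalar multiplication
norm_A = np.linalg.norm(A)   # (c) Euclidean norm, sqrt(14)
combo = 3 * B - 2 * A        # (d) scalar multiples, then subtraction
```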

2.

(a) ${\bf B} - {\bf A} = (3,0,1) - (-2,-1,3) = (5,1,-2) = 5{\bf i} + {\bf j} - 2{\bf k}.$

(b) $3{\bf A} + {\bf B} = 3(-2,-1,3) + (3,0,1) = (-6,-3,9) + (3,0,1) = (-3,-3,10) = -3{\bf i} - 3{\bf j} + 10{\bf k}.$

(c) $3{\bf B} - 2{\bf A} = 3(3,0,1) - 2(-2,-1,3) = (9,0,3) + (4,2,-6) = (13,2,-3) = 13{\bf i} + 2{\bf j} - 3{\bf k}.$

(d) $\displaystyle{\frac{{\bf A} + {\bf B}}{2} = \frac{(-2,-1,3) + (3,0,1)}{2} = \frac{(1,-1,4)}{2} = \frac{{\bf i}}{2} - \frac{{\bf j}}{2} + 2{\bf k}}.$

3. Geometric vectors can be treated as space vectors; see Exercise 1.2.1, problem 5.

4. Geometric vectors can be treated as space vectors; see Exercise 1.2.1, problem 5.

5. For ${\bf A} = (a_{1},a_{2},a_{3}),  {\bf B} = (b_{1},b_{2},b_{3}),  {\bf C} = (c_{1},c_{2},c_{3}) \in R^{3}$,

(1)

\begin{align*}
{\bf A} + {\bf B} &= (a_{1},a_{2},a_{3}) + (b_{1},b_{2},b_{3})\\
&= (a_{1}+b_{1},a_{2}+b_{2},a_{3}+b_{3}) \in R^{3}.
\end{align*}

(2)

\begin{align*}
{\bf A} + {\bf B} &= (a_{1}+b_{1},a_{2}+b_{2},a_{3}+b_{3})\\
&= (b_{1}+a_{1},b_{2}+a_{2},b_{3}+a_{3})\\
&= {\bf B} + {\bf A}.
\end{align*}

(3)

\begin{align*}
({\bf A} + {\bf B}) + {\bf C} &= (a_{1}+b_{1},a_{2}+b_{2},a_{3}+b_{3}) + (c_{1},c_{2},c_{3})\\
&= ((a_{1}+b_{1})+c_{1},(a_{2}+b_{2})+c_{2},(a_{3}+b_{3})+c_{3})\\
&= (a_{1}+(b_{1}+c_{1}),a_{2}+(b_{2}+c_{2}),a_{3}+(b_{3}+c_{3}))\\
&= (a_{1},a_{2},a_{3})+(b_{1}+c_{1},b_{2}+c_{2},b_{3}+c_{3})\\
&= {\bf A} + ({\bf B} + {\bf C}).
\end{align*}

(4)

Let ${\bf 0} = (0,0,0)$. Then ${\bf A} + {\bf 0} = (a_{1}+0,a_{2}+0,a_{3}+0) = (a_{1},a_{2},a_{3}) = {\bf A}.$

(5)

Let ${\bf A}^{*} = (-a_{1},-a_{2},-a_{3})$. Then

\begin{align*}
{\bf A} + {\bf A}^{*} &= (a_{1}+(-a_{1}),a_{2}+(-a_{2}),a_{3}+(-a_{3}))\\
&= (0,0,0) = {\bf 0}.
\end{align*}

(6)

$\displaystyle \alpha {\bf A} = (\alpha a_{1},\alpha a_{2}, \alpha a_{3}) \in R^{3}.$

(7)

\begin{align*}
\alpha(\beta {\bf A}) &= \alpha(\beta a_{1},\beta a_{2},\beta a_{3})\\
&= (\alpha\beta a_{1},\alpha\beta a_{2},\alpha\beta a_{3})\\
&= \alpha\beta(a_{1},a_{2},a_{3}) = (\alpha\beta){\bf A}.
\end{align*}

(8)

\begin{align*}
(\alpha + \beta){\bf A} &= (\alpha + \beta)(a_{1},a_{2},a_{3})\\
&= ((\alpha + \beta)a_{1},(\alpha + \beta)a_{2},(\alpha + \beta)a_{3})\\
&= (\alpha a_{1} + \beta a_{1},\alpha a_{2} + \beta a_{2},\alpha a_{3} + \beta a_{3})\\
&= (\alpha a_{1},\alpha a_{2},\alpha a_{3}) + (\beta a_{1},\beta a_{2},\beta a_{3})\\
&= \alpha {\bf A} + \beta {\bf A}.
\end{align*}

(9)

\begin{align*}
1\,{\bf A} &= 1\,(a_{1},a_{2},a_{3}) = (1\,a_{1},1\,a_{2},1\,a_{3}) = (a_{1},a_{2},a_{3}) = {\bf A},\\
0\,{\bf A} &= 0\,(a_{1},a_{2},a_{3}) = (0\,a_{1},0\,a_{2},0\,a_{3}) = (0,0,0) = {\bf 0},\\
\alpha\,{\bf 0} &= \alpha\,(0,0,0) = (\alpha\,0,\alpha\,0,\alpha\,0) = (0,0,0) = {\bf 0}.
\end{align*}

6. If $\Vert{\bf A}\Vert = \Vert{\bf B}\Vert$, then ${\bf A} + {\bf B}$ is the diagonal of the rhombus with sides ${\bf A}$ and ${\bf B}$, and it bisects the angle between ${\bf A}$ and ${\bf B}$. If $\Vert{\bf A}\Vert \neq \Vert{\bf B}\Vert$, multiply ${\bf B}$ by a positive scalar $\alpha$ so that $\Vert{\bf A}\Vert = \Vert\alpha {\bf B}\Vert$. Then ${\bf A} + \alpha {\bf B}$ bisects the angle between ${\bf A}$ and ${\bf B}$.

7.

$\displaystyle x(1,1,1) + y(1,1,0) + z(1,0,0) = (x,x,x) + (y,y,0) + (z,0,0) = (x+y+z,\,x+y,\,x) = (2,-2,4)$

implies that $x+y+z = 2,\ x+y = -2,\ x = 4$. Solving these equations by back-substitution gives $x = 4,\ y = -6,\ z = 4.$
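The triangular system can also be solved numerically; a minimal sketch, assuming NumPy:

```python
import numpy as np

# Coefficient matrix of x+y+z = 2, x+y = -2, x = 4 (problem 7).
M = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0]])
rhs = np.array([2.0, -2.0, 4.0])

x, y, z = np.linalg.solve(M, rhs)
```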

Exercise 1.4.1

1.

(a) $\Vert{\bf B}\Vert = ({\bf B} \cdot {\bf B})^{1/2} = (2^2 + 4^2 + (-3)^2)^{1/2} = \sqrt{29}.$

(b) ${\bf A} \cdot {\bf B} = (-1,3,1) \cdot (2,4,-3) = -2 + 12 -3 = 7.$

(c) ${\bf A} \cdot {\bf B} = \Vert{\bf A}\Vert\,\Vert{\bf B}\Vert \cos{\theta}$ implies

$\displaystyle \cos{\theta} = \frac{{\bf A}\cdot{\bf B}}{\Vert{\bf A}\Vert\,\Vert{\bf B}\Vert} = \frac{7}{\sqrt{(-1)^2 + 3^2 + 1^2}\,\sqrt{2^2 + 4^2 + (-3)^2}} = \frac{7}{\sqrt{11}\sqrt{29}}. $

Thus ${\theta} = \cos^{-1}{\left(\frac{7}{\sqrt{319}}\right)}.$

(d) A unit vector is obtained by dividing a vector by its own magnitude.

$\displaystyle \frac{{\bf A}}{\Vert{\bf A}\Vert} = \frac{(-1,3,1)}{\sqrt{11}} .$
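The norm, angle, and unit vector of problem 1 can be verified with a short sketch, assuming NumPy:

```python
import numpy as np

A = np.array([-1.0, 3.0, 1.0])
B = np.array([2.0, 4.0, -3.0])

cos_theta = (A @ B) / (np.linalg.norm(A) * np.linalg.norm(B))
theta = np.arccos(cos_theta)       # angle between A and B, in radians
unit_A = A / np.linalg.norm(A)     # unit vector in the direction of A
```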

2. In an orthogonal system, every pair of distinct elements is orthogonal.

In an orthonormal system, every element is a unit vector and every pair of distinct elements is orthogonal.

(a) Since $(1,3) \cdot (6,-2) = 6 - 6 = 0$, we have $(1,3) \perp (6, -2)$.
Converting to an orthonormal system, we have $\{\frac{(1,3)}{\sqrt{10}}, \frac{(6,-2)}{2\sqrt{10}} \}.$

(b)

$\displaystyle (1,2,2) \cdot (-2,2,-1) = -2 + 4 - 2 = 0, $

$\displaystyle (1,2,2) \cdot (2,1,-2) = 2 + 2 -4 = 0, $

$\displaystyle (-2,2,-1) \cdot (2,1,-2) = -4 + 2 + 2 = 0. $

Thus, $(1,2,2) \perp (-2,2,-1)$, $(1,2,2) \perp (2,1,-2)$, and $(-2,2,-1) \perp (2,1,-2)$. Converting to an orthonormal system, we have $\{\frac{(1,2,2)}{3},\frac{(-2,2,-1)}{3},\frac{(2,1,-2)}{3} \}.$
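Orthonormality amounts to the Gram matrix of the system being the identity; a quick check of part (b), assuming NumPy:

```python
import numpy as np

vectors = [np.array([1, 2, 2]) / 3,
           np.array([-2, 2, -1]) / 3,
           np.array([2, 1, -2]) / 3]

# Gram matrix of all pairwise inner products; it equals the identity
# exactly when the system is orthonormal.
gram = np.array([[u @ v for v in vectors] for u in vectors])
is_orthonormal = np.allclose(gram, np.eye(3))
```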

(c) $({\bf i} - 2 {\bf j} + 3{\bf k}) \cdot (2{\bf i} - \frac{1}{2}{\bf j} - \frac{1}{3}{\bf k}) = 2 + 1 - 1 = 2 \neq 0$, so the vectors are not orthogonal and the system is not orthonormal.

3. Let $(x,y,z)$ be an arbitrary point on the plane. Consider the vector with initial point $(5,-1,3)$ and endpoint $(x,y,z)$. This vector $(x-5,y+1,z-3)$ lies in the plane and is therefore orthogonal to the vector $(2,1,-1)$, so their inner product is 0. Hence,

\begin{align*}
(x-5,y+1,z-3) \cdot (2,1,-1) &= 2(x-5) + (y+1) - (z-3)\\
&= 2x + y - z - 6 = 0.
\end{align*}

4. For all $\theta$, $\vert\cos{\theta}\vert \leq 1$ implies

$\displaystyle \vert{\bf A}\cdot{\bf B}\vert = \Vert{\bf A}\Vert  \Vert{\bf B}\Vert \vert\cos{\theta}\vert \leq \Vert{\bf A}\Vert  \Vert{\bf B}\Vert. $

5.

\begin{align*}
\Vert{\bf A} - {\bf B}\Vert^{2} &= \Vert {\bf A} - {\bf C} + {\bf C} - {\bf B}\Vert^{2}\\
&= ({\bf A} - {\bf C} + {\bf C} - {\bf B}) \cdot ({\bf A} - {\bf C} + {\bf C} - {\bf B})\\
&= ({\bf A} - {\bf C}) \cdot ({\bf A} - {\bf C}) + 2({\bf A} - {\bf C}) \cdot ({\bf C} - {\bf B}) + ({\bf C} - {\bf B}) \cdot ({\bf C} - {\bf B})\\
&= \Vert{\bf A} - {\bf C}\Vert^{2} + 2\Vert{\bf A} - {\bf C}\Vert\,\Vert{\bf C} - {\bf B}\Vert\cos{\theta} + \Vert{\bf C} - {\bf B}\Vert^{2}\\
&\leq \Vert{\bf A} - {\bf C}\Vert^{2} + 2\Vert{\bf A}-{\bf C}\Vert\,\Vert{\bf C} - {\bf B}\Vert + \Vert{\bf C} - {\bf B}\Vert^{2} \quad \mbox{(by problem 4)}\\
&= (\Vert{\bf A} - {\bf C}\Vert + \Vert{\bf C} - {\bf B}\Vert)^{2}.
\end{align*}

Thus,

$\displaystyle \Vert{\bf A} - {\bf B}\Vert \leq \Vert{\bf A} - {\bf C}\Vert + \Vert{\bf C} - {\bf B}\Vert .$

6.

$\displaystyle 0 \leq \Vert f - \lambda g\Vert^{2} = (f - \lambda g, f - \lambda g) = \Vert f\Vert^{2} - 2\lambda (f,g) + \lambda^{2}\Vert g\Vert^{2}.$

This is a quadratic in $\lambda$ that is nonnegative for every $\lambda$. Thus its discriminant $\Delta$ is less than or equal to 0. Hence,

$\displaystyle \Delta = \vert(f,g)\vert^{2} - \Vert f\Vert^{2}  \Vert g\Vert^{2} \leq 0. $

Therefore,

$\displaystyle \vert(f,g)\vert \leq \Vert f\Vert \Vert g\Vert .$

7. (a) $\Vert f\Vert = \{\int_{0}^{2}x^{2}dx\}^{1/2} = \{\frac{x^3}{3}\vert _{0}^{2}\}^{1/2} = \sqrt{\frac{8}{3}}.$

(b)

\begin{align*}
\Vert f\Vert &= \left\{\int_{0}^{2}[\sin{\pi x}]^{2}dx\right\}^{1/2} = \left\{\int_{0}^{2}\frac{1 - \cos{2\pi x}}{2}dx \right\}^{1/2}\\
&= \left\{\frac{x}{2} - \frac{\sin{2\pi x}}{4\pi} \Big\vert_{0}^{2} \right\}^{1/2} = 1.
\end{align*}

(c)

\begin{align*}
\Vert f\Vert &= \left\{\int_{0}^{2}[\cos{\pi x}]^{2}dx\right\}^{1/2} = \left\{\int_{0}^{2}\frac{1 + \cos{2\pi x}}{2}dx \right\}^{1/2}\\
&= \left\{\frac{x}{2} + \frac{\sin{2\pi x}}{4\pi} \Big\vert_{0}^{2} \right\}^{1/2} = 1.
\end{align*}

8.

\begin{align*}
(P_{0},P_{1}) &= \int_{-1}^{1}x\,dx = \frac{x^2}{2}\Big\vert_{-1}^{1} = 0,\\
(P_{0},P_{2}) &= \int_{-1}^{1}\frac{3x^2 - 1}{2}\,dx = \frac{x^3 - x}{2}\Big\vert_{-1}^{1} = 0,\\
(P_{1},P_{2}) &= \int_{-1}^{1}x\,\frac{3x^2 - 1}{2}\,dx = \left(\frac{3x^4}{8} - \frac{x^2}{4}\right)\Big\vert_{-1}^{1} = 0.
\end{align*}
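Since $P_{0}, P_{1}, P_{2}$ are polynomials, these inner products can be evaluated exactly with NumPy's polynomial class; a sketch, with `inner` a helper name chosen here for illustration:

```python
import numpy as np
from numpy.polynomial import Polynomial

# Legendre polynomials P0 = 1, P1 = x, P2 = (3x^2 - 1)/2,
# written with ascending coefficients.
P0 = Polynomial([1.0])
P1 = Polynomial([0.0, 1.0])
P2 = Polynomial([-0.5, 0.0, 1.5])

def inner(f, g):
    """Inner product (f, g) = integral of f*g over [-1, 1]."""
    h = (f * g).integ()   # antiderivative of the product
    return h(1.0) - h(-1.0)

pairs = [inner(P0, P1), inner(P0, P2), inner(P1, P2)]
```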

Exercise 1.6.1

1.

(a)

\begin{align*}
{\bf A} \times {\bf B} &= (1,2,-3) \times (2,-1,1)\\
&= \left\vert\begin{array}{rrr}
{\bf i} & {\bf j} & {\bf k}\\
1&2&-3\\
2&-1&1
\end{array}\right\vert\\
&= (2-3,\,-6-1,\,-1-4) = (-1,-7,-5).
\end{align*}

(b)

\begin{align*}
{\bf C} \times ({\bf A} \times {\bf B}) &= (4,2,2) \times (-1,-7,-5)\\
&= \left\vert\begin{array}{rrr}
{\bf i} & {\bf j} & {\bf k}\\
4&2&2\\
-1&-7&-5
\end{array}\right\vert\\
&= (-10+14,\,-2+20,\,-28+2) = (4,18,-26).
\end{align*}

(c) ${\bf C} \cdot ({\bf A} \times {\bf B}) = (4,2,2) \cdot (-1,-7,-5) = -4 -14 -10 = -28.$
or

\begin{align*}
{\bf C} \cdot ({\bf A} \times {\bf B}) &= (4,2,2) \cdot ((1,2,-3) \times (2,-1,1))\\
&= \left\vert\begin{array}{rrr}
4&2&2\\
1&2&-3\\
2&-1&1
\end{array}\right\vert = -28.
\end{align*}
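The cross products and the scalar triple product above can be checked with a short sketch, assuming NumPy:

```python
import numpy as np

A = np.array([1, 2, -3])
B = np.array([2, -1, 1])
C = np.array([4, 2, 2])

AxB = np.cross(A, B)       # (a)
CxAxB = np.cross(C, AxB)   # (b)
triple = C @ AxB           # (c) scalar triple product
# The triple product also equals the 3x3 determinant with rows C, A, B.
det = np.linalg.det(np.array([C, A, B], dtype=float))
```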

2. Let $\Gamma$ be the plane with sides ${\bf i} + {\bf j} - {\bf k}$ and $2{\bf i} + 3{\bf j} + 2{\bf k}$. Then the normal vector ${\bf n}_{\Gamma}$ of $\Gamma$ is orthogonal to both ${\bf i} + {\bf j} - {\bf k}$ and $2{\bf i} + 3{\bf j} + 2{\bf k}$. Therefore,

$\displaystyle {\bf n}_{\Gamma} = (1,1,-1) \times (2,3,2) = \left\vert\begin{array}{rrr}
{\bf i} & {\bf j} & {\bf k}\\
1&1&-1\\
2&3&2
\end{array}\right\vert = (5,-4,1). $

Next, the required plane is parallel to $\Gamma$, so ${\bf n}_{\Gamma}$ is also orthogonal to the required plane. Let $(x,y,z)$ be an arbitrary point on that plane. Then $(x-1,y,z-1)$ is orthogonal to ${\bf n}_{\Gamma}$. Thus, the equation of the plane is

$\displaystyle {\bf n}_{\Gamma} \cdot (x-1,y,z-1) = (5,-4,1) \cdot (x-1,y,z-1) = 5x - 4y + z - 6 = 0. $

3. Let $\Gamma$ be the required plane, which is perpendicular to the plane $x - 2y + 3z - 4 = 0$. Then the normal vector ${\bf n} = (1,-2,3)$ of the plane $x - 2y + 3z - 4 = 0$ can be regarded as lying in $\Gamma$. Also, $\Gamma$ passes through the points $(2,0,-1)$ and $(3,2,1)$, so the vector $(3-2,2-0,1+1) = (1,2,2)$ lies in $\Gamma$. Taking the cross product of these two vectors gives the normal vector of $\Gamma$:

$\displaystyle {\bf n}_{\Gamma} = (1,-2,3) \times (1,2,2) = \left\vert\begin{array}{rrr}
{\bf i} & {\bf j} & {\bf k}\\
1&-2&3\\
1&2&2
\end{array}\right\vert = (-10,1,4). $

Now take an arbitrary point $(x,y,z)$ on $\Gamma$ and form the vector $(x-2,y,z+1)$. This vector is orthogonal to ${\bf n}_{\Gamma}$, so the equation of the required plane is

\begin{align*}
{\bf n}_{\Gamma} \cdot (x-2,y,z+1) &= (-10,1,4) \cdot (x-2,y,z+1)\\
&= -10x + y + 4z + 24 = 0.
\end{align*}

4. The area of the triangle is half the area of the parallelogram with sides ${\bf A}$ and ${\bf B}$:

$\displaystyle \frac{1}{2}\Vert(1,3,-1) \times (2,1,1)\Vert = \frac{1}{2}\Vert(4,-3,-5)\Vert = \frac{1}{2}\sqrt{16 + 9 + 25} = \frac{5\sqrt{2}}{2}. $
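This area computation can be verified with a sketch, assuming NumPy:

```python
import numpy as np

A = np.array([1, 3, -1])
B = np.array([2, 1, 1])

# Half the norm of the cross product gives the triangle's area.
area = np.linalg.norm(np.cross(A, B)) / 2
```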

5. $m = {\bf r} \times {\bf F} = (-2,1,-1) \times (1,3,1) = (4,1,-7).$

6. ${\bf v} = (1,2,2) \times (1,-2,3) = (10,-1,-4).$

7. The area of the parallelogram with sides ${\bf B}$ and ${\bf C}$ is given by $\Vert{\bf B} \times {\bf C}\Vert$. Let $\theta$ be the angle between ${\bf B} \times {\bf C}$ and ${\bf A}$. Then the height of the parallelepiped with edges ${\bf A},{\bf B},{\bf C}$ is $\Vert{\bf A}\Vert\cos{\theta}$. Thus, the volume of the parallelepiped is given by

$\displaystyle \Vert{\bf B} \times {\bf C}\Vert\,\Vert{\bf A}\Vert\cos{\theta} = \Vert{\bf A}\Vert\cos{\theta}\,\Vert{\bf B} \times {\bf C}\Vert = {\bf A} \cdot ({\bf B} \times {\bf C}). $

8. The cross product of a vector with itself is ${\bf 0}$, and interchanging the order of the factors changes the sign.


\begin{align*}
{\bf A} \times {\bf B} &= (2{\bf e}_{1} + 5{\bf e}_{2} - {\bf e}_{3}) \times ({\bf e}_{1} - 2{\bf e}_{2} - 4{\bf e}_{3})\\
&= (2{\bf e}_{1} + 5{\bf e}_{2} - {\bf e}_{3}) \times {\bf e}_{1} - (2{\bf e}_{1} + 5{\bf e}_{2} - {\bf e}_{3}) \times 2{\bf e}_{2} - 4(2{\bf e}_{1} + 5{\bf e}_{2} - {\bf e}_{3}) \times {\bf e}_{3}\\
&= 5({\bf e}_{2} \times {\bf e}_{1}) - ({\bf e}_{3} \times {\bf e}_{1}) - 4({\bf e}_{1} \times {\bf e}_{2}) + 2({\bf e}_{3} \times {\bf e}_{2}) - 8({\bf e}_{1} \times {\bf e}_{3}) - 20({\bf e}_{2} \times {\bf e}_{3})\\
&= 5({\bf j} - {\bf i}) + {\bf j} + {\bf k} - ({\bf i} - {\bf j}) - 2({\bf i} + {\bf k}) - 8({\bf j} + {\bf k}) - 20({\bf i} + {\bf k})\\
&= -36{\bf i} - {\bf j} - 29{\bf k}.
\end{align*}

9. Use the scalar triple product:

\begin{align*}
(4,-3,1) \cdot ((10,-3,0) \times (2,-6,3)) &= \left\vert\begin{array}{rrr}
4&-3&1\\
10&-3&0\\
2&-6&3
\end{array}\right\vert\\
&= 4(-9) + 3(30) + (-54) = 0.
\end{align*}

Thus, by Theorem 1.3, the vectors are linearly dependent.

10. (a) Set $c_{1} + c_{2}x + c_{3}x^2 = 0$ and differentiate twice with respect to $x$. Then we have

$\displaystyle c_{2} + 2c_{3}x = 0 ,  2c_{3} = 0 $

Thus, $c_{1} = c_{2} = c_{3} = 0$ and $\{1, x, x^2\}$ is linearly independent.

(b) Set $c_{1}\sin{x} + c_{2}\cos{x} = 0$ and differentiate with respect to $x$. Then we have

$\displaystyle c_{1}\cos{x} - c_{2}\sin{x} = 0. $

Together with $c_{1}\sin{x} + c_{2}\cos{x} = 0$, the coefficient determinant of this pair of equations is $-\sin^{2}{x} - \cos^{2}{x} = -1 \neq 0$, so $c_{1} = 0, c_{2} = 0$. Thus the set is linearly independent.

11. Suppose ${\bf A},{\bf B}$ are linearly independent. We show ${\bf A} \times {\bf B} \neq {\bf0}$. (By contraposition: assuming ${\bf A} \times {\bf B} = {\bf0}$, we show ${\bf A},{\bf B}$ are linearly dependent.)

Suppose that ${\bf A} \times {\bf B} = {\bf0}$. Then A and B are parallel. In other words, there exists some real number $\lambda \neq 0$ so that ${\bf A} = \lambda {\bf B}$. Thus,

$\displaystyle {\bf A} - \lambda {\bf B} = 0 $

Therefore, ${\bf A},{\bf B}$ are linearly dependent.

Next we show that if ${\bf A} \times {\bf B} \neq {\bf0}$, then ${\bf A},{\bf B}$ are linearly independent. (By contraposition: assuming ${\bf A},{\bf B}$ are linearly dependent, we show ${\bf A} \times {\bf B} = {\bf0}$.)

If ${\bf A},{\bf B}$ are linearly dependent, then there exist $c_{1}, c_{2}$, not both zero, such that

$\displaystyle c_{1}{\bf A} + c_{2}{\bf B} = {\bf0} $

Then, assuming $c_{1} \neq 0$ (otherwise exchange the roles of ${\bf A}$ and ${\bf B}$), ${\bf A} = -\frac{c_{2}}{c_{1}}{\bf B}$. Thus, ${\bf A}$ and ${\bf B}$ are parallel. Therefore, ${\bf A} \times {\bf B} = {\bf0}$.

Exercise 1.8.1

1. Let $w_{1},w_{2}$ be elements of $W$. Then we can write $w_{1} = (x_{1},y_{1},1), w_{2} = (x_{2},y_{2},1)$. To be a subspace, $W$ must be closed under addition and scalar multiplication. We first check addition: $w_{1} + w_{2} = (x_{1},y_{1},1) + (x_{2},y_{2},1) = (x_{1}+x_{2},y_{1}+y_{2},2)$. Since the $z$-component is $2$, not $1$, $w_{1} + w_{2}$ is not an element of $W$. Therefore $W$ is not a subspace of $R^{3}$.

2.Suppose $w_{1},w_{2} \in W$. Then

$\displaystyle w_{1} = (x_{1},y_{1},-3x_{1}+2y_{1}), w_{2} = (x_{2},y_{2},-3x_{2}+2y_{2}) $

Thus,
\begin{align*}
w_{1}+w_{2} &= (x_{1},y_{1},-3x_{1}+2y_{1}) + (x_{2},y_{2},-3x_{2}+2y_{2})\\
&= (x_{1}+x_{2},\,y_{1}+y_{2},\,-3(x_{1}+x_{2})+2(y_{1}+y_{2})) \in W.
\end{align*}

Also,

$\displaystyle \alpha w_{1} = (\alpha x_{1},\alpha y_{1},-3\alpha x_{1}+2 \alpha y_{1}) \in W . $

Therefore $W$ is a subspace of $R^{3}$.

3. Let $w$ be an arbitrary element of $W$. Then $w = (x,y,-3x+2y)$. Now express $w$ using ${\bf i}, {\bf j}, {\bf k}$. Then we have

\begin{align*}
w = (x,y,-3x+2y) &= x{\bf i} + y{\bf j} + (-3x+2y){\bf k}\\
&= x{\bf i} - 3x{\bf k} + y{\bf j} + 2y{\bf k}\\
&= x({\bf i} - 3{\bf k}) + y({\bf j} + 2{\bf k}).
\end{align*}

Thus, every element of $W$ is a linear combination of ${\bf i} - 3{\bf k}$ and ${\bf j} + 2{\bf k}$. Also, ${\bf i} - 3{\bf k}$ and ${\bf j} + 2{\bf k}$ are linearly independent. Thus, $\{{\bf i} - 3{\bf k},\ {\bf j} + 2{\bf k}\}$ is a basis of $W$. Therefore, $\dim W = 2$.

4.Put $c_{1}({\bf i}+{\bf j}) + c_{2}{\bf k} + c_{3}({\bf i}+{\bf k}) = {\bf0}$ . Then

$\displaystyle (c_{1}+c_{3}){\bf i} + c_{1}{\bf j} + (c_{2}+c_{3}){\bf k} = {\bf0} .$

Thus, $c_{1}+c_{3} = 0$, $c_{1} = 0$, and $c_{2}+c_{3} = 0$, so $c_{1} = c_{2} = c_{3} = 0$; hence the vectors are linearly independent.

Next let $(a_{1},a_{2},a_{3}) \in R^{3}$. Then

$\displaystyle (a_{1},a_{2},a_{3}) = a_{2}({\bf i}+{\bf j}) + (a_{3}+a_{2}-a_{1}){\bf k} + (a_{1}-a_{2})({\bf i}+{\bf k}) $

Thus, $\langle {\bf i}+{\bf j}, {\bf k}, {\bf i}+{\bf k} \rangle = R^{3}$.

5. Let $V$ be the subspace generated by $\{3,x-2,x+3,x^2+1\}$. Then

\begin{align*}
V &= \langle 3,\,x-2,\,x+3,\,x^2+1 \rangle\\
&= \{3c_{1}+c_{2}(x-2) + c_{3}(x+3) +c_{4}(x^2 + 1) : c_{i} \mbox{ real}\}.
\end{align*}

Here, let $v$ be an element of $V$. Then

\begin{align*}
v &= 3c_{1}+c_{2}(x-2) + c_{3}(x+3) +c_{4}(x^2 + 1)\\
&= (3c_{1} - 2c_{2} + 3c_{3} + c_{4}) + (c_{2} + c_{3})x + c_{4}x^2.
\end{align*}

We have shown that $\{1, x, x^2\}$ is linearly independent in Exercise 1.3. Thus, $\{1, x, x^2\}$ is a basis of $V$. Hence, $\dim V = 3$.

6.

$\displaystyle {\bf v}_{1} = \frac{{\mathbf x}_{1}}{\Vert{\mathbf x}_{1}\Vert} = \frac{(1,1,1)}{\sqrt{3}}. $

Now
\begin{align*}
{\mathbf x}_{2} - ({\mathbf x}_{2},{\bf v}_{1}){\bf v}_{1} &= (0,1,1) - \left((0,1,1) \cdot \frac{(1,1,1)}{\sqrt{3}}\right)\frac{(1,1,1)}{\sqrt{3}}\\
&= (0,1,1) - \frac{2}{3}(1,1,1) = \frac{1}{3}(-2,1,1)
\end{align*}

implies that

\begin{align*}
{\bf v}_{2} &= \frac{{\mathbf x}_{2} - ({\mathbf x}_{2},{\bf v}_{1}){\bf v}_{1}}{\Vert{\mathbf x}_{2} - ({\mathbf x}_{2},{\bf v}_{1}){\bf v}_{1}\Vert}\\
&= \frac{\frac{1}{3}(-2,1,1)}{\Vert\frac{1}{3}(-2,1,1)\Vert} = \frac{\frac{1}{3}(-2,1,1)}{\frac{\sqrt{6}}{3}} = \frac{1}{\sqrt{6}}(-2,1,1).
\end{align*}

Lastly,
\begin{align*}
{\mathbf x}_{3} - ({\mathbf x}_{3},{\bf v}_{1}){\bf v}_{1} - ({\mathbf x}_{3},{\bf v}_{2}){\bf v}_{2} &= (0,0,1) - \left((0,0,1) \cdot \frac{(1,1,1)}{\sqrt{3}}\right)\frac{(1,1,1)}{\sqrt{3}} - \left((0,0,1) \cdot \frac{(-2,1,1)}{\sqrt{6}}\right)\frac{(-2,1,1)}{\sqrt{6}}\\
&= (0,0,1) - \frac{(1,1,1)}{3} - \frac{(-2,1,1)}{6}\\
&= \left(0,-\frac{1}{2},\frac{1}{2}\right) = \frac{1}{2}(0,-1,1)
\end{align*}

implies that

\begin{align*}
{\bf v}_{3} &= \frac{{\mathbf x}_{3} - ({\mathbf x}_{3},{\bf v}_{1}){\bf v}_{1} - ({\mathbf x}_{3},{\bf v}_{2}){\bf v}_{2}}{\Vert{\mathbf x}_{3} - ({\mathbf x}_{3},{\bf v}_{1}){\bf v}_{1} - ({\mathbf x}_{3},{\bf v}_{2}){\bf v}_{2}\Vert}\\
&= \frac{\frac{1}{2}(0,-1,1)}{\Vert\frac{1}{2}(0,-1,1)\Vert} = \frac{\frac{1}{2}(0,-1,1)}{\frac{\sqrt{2}}{2}} = \frac{1}{\sqrt{2}}(0,-1,1).
\end{align*}

Therefore the required orthonormal system is

$\displaystyle \{{\bf v}_{1} = \frac{(1,1,1)}{\sqrt{3}}, {\bf v}_{2} = \frac{(-2,1,1)}{\sqrt{6}}, {\bf v}_{3} = \frac{(0,-1,1)}{\sqrt{2}}\} . $
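The Gram–Schmidt procedure carried out above can be written generically; a sketch assuming NumPy, with `gram_schmidt` a helper name chosen here for illustration:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors, one at a time."""
    basis = []
    for x in vectors:
        # Remove the components along the orthonormal vectors built so far.
        w = x - sum((x @ v) * v for v in basis)
        basis.append(w / np.linalg.norm(w))
    return basis

xs = [np.array([1.0, 1.0, 1.0]),
      np.array([0.0, 1.0, 1.0]),
      np.array([0.0, 0.0, 1.0])]
v1, v2, v3 = gram_schmidt(xs)
```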

7. By Example 1.4, $U + W$ and $U \cap W$ are subspaces of $V$. Then set

$\displaystyle \dim U = n, \dim W = m, \dim (U \cap W) = r, $

$\{v_{1},v_{2},\ldots,v_{r}\}$ a basis of $U \cap W$,
$\{v_{1},v_{2},\ldots,v_{r},u_{1},\ldots,u_{n-r}\}$ a basis of $U$, and
$\{v_{1},v_{2},\ldots,v_{r},w_{1},\ldots,w_{m-r}\}$ a basis of $W$. If we can show that

$\displaystyle \{v_{1},\ldots,v_{r},u_{1},\ldots,u_{n-r},w_{1},\ldots,w_{m-r}\} $

is the basis of $U + W$, then

$\displaystyle \dim(U+W) = r + n-r + m-r = n+m-r$

and

$\displaystyle \dim (U + W) = \dim U + \dim W - \dim(U \cap W) $

So, set $c_{1}v_{1} + \cdots + c_{r}v_{r} + c_{1}^{*}u_{1} + \cdots + c_{n-r}^{*}u_{n-r} +c_{1}^{**}w_{1} + \cdots + c_{m-r}^{**}w_{m-r} = 0$. Then

$\displaystyle c_{1}v_{1} + \cdots + c_{r}v_{r} + c_{1}^{*}u_{1} + \cdots + c_{n-r}^{*}u_{n-r} = -(c_{1}^{**}w_{1} + \cdots + c_{m-r}^{**}w_{m-r}) . $

Note that the left-hand side is an element of $U$ and the right-hand side is an element of $W$. Thus both sides lie in $U \cap W$, and hence can be written as linear combinations of $\{v_{1},v_{2},\ldots,v_{r}\}$. In other words,

$\displaystyle c_{1}^{*}u_{1} + \cdots + c_{n-r}^{*}u_{n-r} = d_{1}v_{1} + \ldots + d_{r}v_{r} $

$\displaystyle c_{1}^{**}w_{1} + \cdots + c_{m-r}^{**}w_{m-r} = e_{1}v_{1} + \ldots + e_{r}v_{r} $

Since $\{v_{1},\ldots,v_{r},u_{1},\ldots,u_{n-r}\}$ and $\{v_{1},\ldots,v_{r},w_{1},\ldots,w_{m-r}\}$ are linearly independent, these relations force $c_{1}^{*} = \cdots = c_{n-r}^{*} = 0$ and $c_{1}^{**} = \cdots = c_{m-r}^{**} = 0$. The original relation then reduces to $c_{1}v_{1} + \cdots + c_{r}v_{r} = 0$, so $c_{1} = \cdots = c_{r} = 0$. Therefore

$\displaystyle \{v_{1},v_{2},\ldots,v_{r},u_{1},\ldots,u_{n-r},w_{1},\ldots,w_{m-r}\} $

is linearly independent. Also, $\{v_{1},v_{2},\ldots,v_{r},u_{1},\ldots,u_{n-r}\}$ spans $U$ and
$\{v_{1},v_{2},\ldots,v_{r},w_{1},\ldots,w_{m-r}\}$ spans $W$. Thus,

$\displaystyle \{v_{1},\ldots,v_{r},u_{1},\ldots,u_{n-r},w_{1},\ldots,w_{m-r}\} $

spans $U + W$.

8. Let $(a_{11},a_{12},a_{13}),(a_{21},a_{22},a_{23}),(a_{31},a_{32},a_{33}),(a_{41},a_{42},a_{43})$ be four vectors in the 3-dimensional vector space. Take a linear combination of these vectors and set it equal to ${\bf 0}$. We have

$\displaystyle c_{1}(a_{11},a_{12},a_{13}) + c_{2}(a_{21},a_{22},a_{23}) + c_{3}(a_{31},a_{32},a_{33}) + c_{4}(a_{41},a_{42},a_{43}) = {\bf0} $

Write this equation using components:

\begin{align*}
c_{1}a_{11} + c_{2}a_{21} + c_{3}a_{31} + c_{4}a_{41} &= 0\\
c_{1}a_{12} + c_{2}a_{22} + c_{3}a_{32} + c_{4}a_{42} &= 0\\
c_{1}a_{13} + c_{2}a_{23} + c_{3}a_{33} + c_{4}a_{43} &= 0
\end{align*}

This homogeneous system has 3 equations in 4 unknowns, so it has a nontrivial solution; that is, $c_{1},c_{2},c_{3},c_{4}$ need not all be 0. In other words,

$\displaystyle \{(a_{11},a_{12},a_{13}),(a_{21},a_{22},a_{23}),(a_{31},a_{32},a_{33}),(a_{41},a_{42},a_{43})\} $

is linearly dependent.
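The underlying rank argument can be illustrated numerically; a sketch assuming NumPy, with the four vectors generated at random for the demonstration:

```python
import numpy as np

# Any four vectors in R^3, stacked as rows of a 4x3 matrix.
rng = np.random.default_rng(0)
vectors = rng.integers(-5, 6, size=(4, 3))

# The rank is at most 3, so the four rows are always linearly dependent.
rank = np.linalg.matrix_rank(vectors)
```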

Chapter 2
Exercise 2.2.1

1.

(a) A sum of matrices is taken componentwise:

$A + B = \left(\begin{array}{rr}
2&-3\\
4&2
\end{array}\right) + \left(\begin{array}{rr}
-1&2\\
3&0
\end{array}\right) = \left(\begin{array}{rr}
1&-1\\
7&2
\end{array}\right ) . $

(b) A scalar multiple of a matrix multiplies every component:

\begin{align*}
2A - 3B &= 2\left(\begin{array}{rr}
2&-3\\
4&2
\end{array}\right) - 3\left(\begin{array}{rr}
-1&2\\
3&0
\end{array}\right) = \left(\begin{array}{rr}
4&-6\\
8&4
\end{array}\right) - \left(\begin{array}{rr}
-3&6\\
9&0
\end{array}\right)\\
&= \left(\begin{array}{rr}
4+3&-6-6\\
8-9&4-0
\end{array}\right) = \left(\begin{array}{rr}
7&-12\\
-1&4
\end{array}\right) .
\end{align*}

(c) A product of matrices is formed by taking inner products of the rows of the first factor with the columns of the second:

$AB = \left(\begin{array}{rr}
2&-3\\
4&2
\end{array}\right)\left(\begin{array}{rr}
-1&2\\
3&0
\end{array}\right) = \left(\begin{array}{rr}
-11&4\\
2&8
\end{array}\right ) . $

$BA = \left(\begin{array}{rr}
-1&2\\
3&0
\end{array}\right)\left(\begin{array}{rr}
2&-3\\
4&2
\end{array}\right) = \left(\begin{array}{rr}
6&7\\
6&-9
\end{array}\right ) . $

2.

\begin{align*}
AB &= \left(\begin{array}{rrr}
3&1&7\\
5&2&-4
\end{array}\right )\left(\begin{array}{rr}
2&-3\\
3&6\\
4&1
\end{array}\right )\\
&= \left(\begin{array}{rr}
3(2) + 1(3) + 7(4) & 3(-3) + 1(6) + 7(1)\\
5(2) + 2(3) - 4(4) & 5(-3) + 2(6) - 4(1)
\end{array}\right ) = \left(\begin{array}{rr}
37&4\\
0&-7
\end{array}\right ).
\end{align*}

\begin{align*}
BA &= \left(\begin{array}{rr}
2&-3\\
3&6\\
4&1
\end{array}\right )\left(\begin{array}{rrr}
3&1&7\\
5&2&-4
\end{array}\right )\\
&= \left(\begin{array}{rrr}
2(3) - 3(5) & 2(1) - 3(2) & 2(7) - 3(-4)\\
3(3) + 6(5) & 3(1) + 6(2) & 3(7) + 6(-4)\\
4(3) + 1(5) & 4(1) + 1(2) & 4(7) + 1(-4)
\end{array}\right )\\
&= \left(\begin{array}{rrr}
-9&-4&26\\
39&15&-3\\
17&6&24
\end{array}\right ) .
\end{align*}

3.

\begin{align*}
A^2 - 5A + 6I &= (A - 2I)(A - 3I)\\
&= \left(\begin{array}{rrr}
0&3&0\\
1&2&1\\
2&0&-1
\end{array}\right )\left(\begin{array}{rrr}
-1&3&0\\
1&1&1\\
2&0&-2
\end{array}\right )\\
&= \left(\begin{array}{rrr}
3&3&3\\
3&5&0\\
-4&6&2
\end{array}\right ) .
\end{align*}
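This matrix arithmetic can be checked directly; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([[2, 3, 0],
              [1, 4, 1],
              [2, 0, 1]])
I = np.eye(3, dtype=int)

# Both sides of the factorization A^2 - 5A + 6I = (A - 2I)(A - 3I).
lhs = A @ A - 5 * A + 6 * I
rhs = (A - 2 * I) @ (A - 3 * I)
```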

4. $A$ and $B$ are symmetric matrices. Then we have $A^{t} = A$ and $B^{t} = B$. To show $A + B$ is symmetric, it is enough to show $(A + B)^{t} = A + B$. Now

$\displaystyle (A + B)^{t} = A^{t} +B^{t} = A + B . $

5. For $n \times n$ symmetric matrices $A,B$, $(AB)^{t} = B^{t}A^{t} = BA$. So to show $AB$ is symmetric, we would have to show $AB = BA$. In general, $AB \neq BA$, so $AB$ is not always symmetric. In fact, let $A = \left(\begin{array}{rr}
1&0\\
0&-1
\end{array}\right ), B = \left(\begin{array}{rr}
0&1\\
1&-1
\end{array}\right )$. Then $A,B$ are symmetric matrices. But

$\displaystyle AB = \left(\begin{array}{rr}
1&0\\
0&-1
\end{array}\right )\left(\begin{array}{rr}
0&1\\
1&-1
\end{array}\right ) = \left(\begin{array}{rr}
0 & 1\\
-1 & 1
\end{array}\right ) . $

Thus, $(AB)^{t} \neq AB$. Therefore, $AB$ is not symmetric.

Next we find the necessary and sufficient condition so that $AB$ is always symmetric.

Since for $n \times n$ symmetric matrices $A,B$ we have $(AB)^{t} = B^{t}A^{t} = BA$, the matrix $AB$ is symmetric if and only if $AB = BA$. Suppose first that $AB$ is symmetric. Then $(AB)^{t} = AB$ implies that $AB = BA$.

Suppose that $AB = BA$. Then since $(AB)^{t} = B^{t}A^{t} = BA$, we have $(AB)^{t} = AB$. Thus, $AB$ is symmetric.

6. $(A^{2})^{t} = (A  A)^{t} = A^{t}  A^{t} = (-A)(-A) = A^{2} . $

7. Let $A = \left(\begin{array}{rrr}
a_{1}&0&0\\
0&a_{2}&0\\
0&0&a_{3}
\end{array}\right )$ and let $B = \left(\begin{array}{rrr}
b_{11}&b_{12}&b_{13}\\
b_{21}&b_{22}&b_{23}\\
b_{31}&b_{32}&b_{33}
\end{array}\right )$ be a matrix that commutes with $A$. Then $AB = BA$ implies that

$\displaystyle \left(\begin{array}{rrr}
a_{1}&0&0\\
0&a_{2}&0\\
0&0&a_{3}
\end{array}\right )\left(\begin{array}{rrr}
b_{11}&b_{12}&b_{13}\\
b_{21}&b_{22}&b_{23}\\
b_{31}&b_{32}&b_{33}
\end{array}\right ) = \left(\begin{array}{rrr}
b_{11}&b_{12}&b_{13}\\
b_{21}&b_{22}&b_{23}\\
b_{31}&b_{32}&b_{33}
\end{array}\right )\left(\begin{array}{rrr}
a_{1}&0&0\\
0&a_{2}&0\\
0&0&a_{3}
\end{array}\right ), $

that is,

$\displaystyle \left(\begin{array}{rrr}
a_{1}b_{11}&a_{1}b_{12}&a_{1}b_{13}\\
a_{2}b_{21}&a_{2}b_{22}&a_{2}b_{23}\\
a_{3}b_{31}&a_{3}b_{32}&a_{3}b_{33}
\end{array}\right ) = \left(\begin{array}{rrr}
a_{1}b_{11}&a_{2}b_{12}&a_{3}b_{13}\\
a_{1}b_{21}&a_{2}b_{22}&a_{3}b_{23}\\
a_{1}b_{31}&a_{2}b_{32}&a_{3}b_{33}
\end{array}\right ) . $

Note that the corresponding components are equal. Then

$\displaystyle a_{1}b_{12} = a_{2}b_{12}, a_{1}b_{13} = a_{3}b_{13}, $

$\displaystyle a_{2}b_{21} = a_{1}b_{21}, a_{2}b_{23} = a_{3}b_{23}, $

$\displaystyle a_{3}b_{31} = a_{1}b_{31}, a_{3}b_{32} = a_{2}b_{32} $

We also note that $a_{1},a_{2},a_{3}$ are distinct real numbers. Thus

$\displaystyle b_{12} = b_{13} = b_{21} = b_{23} = b_{31} = b_{32} = 0 $

Thus, $B$ is

$\displaystyle \left(\begin{array}{rrr}
b_{11}&0&0\\
0&b_{22}&0\\
0&0&b_{33}
\end{array}\right ) $

and it is a diagonal matrix.

8. $(A - A^{t})^{t} = A^{t} - (A^{t})^{t} = A^{t} - A = - (A - A^{t})$. Thus $A - A^{t}$ is skew-symmetric. Also, $(A + A^{t})^{t} = A^{t} + (A^{t})^{t} = A^{t} + A = A + A^{t}$. Thus, $A + A^{t}$ is symmetric. Now let

$\displaystyle A = \frac{A + A^{t}}{2} + \frac{A - A^{t}}{2}$

Then $A$ is the sum of a symmetric matrix and a skew-symmetric matrix. Next suppose that $B$ is symmetric, $C$ is skew-symmetric, and $A = B + C$. Then

$\displaystyle A^{t} = B^{t} + C^{t} = B - C. $

Thus $A + A^{t} = 2B,  A - A^{t} = 2C$. Therefore,

$\displaystyle B = \frac{A + A^{t}}{2},  C = \frac{A - A^{t}}{2}. $
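This decomposition is easy to verify for any square matrix; a sketch assuming NumPy, with a random matrix used for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))   # an arbitrary square matrix

B = (A + A.T) / 2   # symmetric part
C = (A - A.T) / 2   # skew-symmetric part
```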

9.

$\displaystyle A = \left(\begin{array}{cc\vert c}
1 & 1 & 1\\
2 & -1 & 0\\ \hline
-1 & 0 & 2
\end{array}\right) = \left(\begin{array}{cc}
A_{11} & A_{12}\\
A_{21} & A_{22}
\end{array}\right),$

$\displaystyle B = \left(\begin{array}{cc\vert cc}
1 & 2 & 3 & -1\\
3 & -1 & 1 & 0\\ \hline
0 & 0 & -2 & 1
\end{array}\right) = \left(\begin{array}{cc}
B_{11} & B_{12}\\
B_{21} & B_{22}
\end{array}\right) $

implies that

$\displaystyle AB = \left(\begin{array}{cc}
A_{11}B_{11} + A_{12}B_{21} & A_{11}B_{12} + A_{12}B_{22}\\
A_{21}B_{11} + A_{22}B_{21} & A_{21}B_{12} + A_{22}B_{22}
\end{array}\right) = \left(\begin{array}{cccc}
4&1&2&0\\
-1&5&5&-2\\
-1&-2&-7&3
\end{array}\right)$

Exercise 2.4.1

1.

\begin{align*}
&\left(\begin{array}{rrrr}
1&-2&3&-1\\
2&-1&2&2\\
3&0&2&3
\end{array}\right) \stackrel{-2R_{1}+R_{2}}{\rightarrow} \left(\begin{array}{rrrr}
1&-2&3&-1\\
0&3&-4&4\\
3&0&2&3
\end{array}\right) \stackrel{-3R_{1}+R_{3}}{\rightarrow} \left(\begin{array}{rrrr}
1&-2&3&-1\\
0&3&-4&4\\
0&6&-7&6
\end{array}\right)\\
&\stackrel{\frac{1}{3}R_{2}}{\rightarrow} \left(\begin{array}{rrrr}
1&-2&3&-1\\
0&1&-\frac{4}{3}&\frac{4}{3}\\
0&6&-7&6
\end{array}\right) \stackrel{-6R_{2}+R_{3}}{\rightarrow} \left(\begin{array}{rrrr}
1&-2&3&-1\\
0&1&-\frac{4}{3}&\frac{4}{3}\\
0&0&1&-2
\end{array}\right)\\
&\stackrel{\frac{4}{3}R_{3}+R_{2},\ -3R_{3}+R_{1}}{\rightarrow} \left(\begin{array}{rrrr}
1&-2&0&5\\
0&1&0&-\frac{4}{3}\\
0&0&1&-2
\end{array}\right) \stackrel{2R_{2}+R_{1}}{\rightarrow} \left(\begin{array}{rrrr}
1&0&0&\frac{7}{3}\\
0&1&0&-\frac{4}{3}\\
0&0&1&-2
\end{array}\right) .
\end{align*}

2.

(a)

\begin{align*}
\left(\begin{array}{rrrr}
2&4&1&-2\\
-3&-6&2&-4
\end{array}\right) &\stackrel{\frac{1}{2}R_{1}}{\longrightarrow} \left(\begin{array}{rrrr}
1&2&\frac{1}{2}&-1\\
-3&-6&2&-4
\end{array}\right) \stackrel{3R_{1}+R_{2}}{\longrightarrow} \left(\begin{array}{rrrr}
1&2&\frac{1}{2}&-1\\
0&0&\frac{7}{2}&-7
\end{array}\right)\\
&\stackrel{\frac{2}{7}R_{2}}{\longrightarrow} \left(\begin{array}{rrrr}
1&2&\frac{1}{2}&-1\\
0&0&1&-2
\end{array}\right).
\end{align*}

Therefore, ${\rm rank} = 2.$

(b)

\begin{align*}
A &= \left(\begin{array}{rrr}
2&-1&3\\
1&2&-3\\
3&-4&9
\end{array}\right) \stackrel{R_{1} \leftrightarrow R_{2}}{\longrightarrow} \left(\begin{array}{rrr}
1&2&-3\\
2&-1&3\\
3&-4&9
\end{array}\right) \stackrel{-2R_{1}+R_{2}}{\longrightarrow} \left(\begin{array}{rrr}
1&2&-3\\
0&-5&9\\
3&-4&9
\end{array}\right)\\
&\stackrel{-3R_{1}+R_{3}}{\longrightarrow} \left(\begin{array}{rrr}
1&2&-3\\
0&-5&9\\
0&-10&18
\end{array}\right) \stackrel{-\frac{1}{5}R_{2}}{\longrightarrow} \left(\begin{array}{rrr}
1&2&-3\\
0&1&-\frac{9}{5}\\
0&-10&18
\end{array}\right) \stackrel{10R_{2}+R_{3}}{\longrightarrow} \left(\begin{array}{rrr}
1&2&-3\\
0&1&-\frac{9}{5}\\
0&0&0
\end{array}\right) .
\end{align*}

Therefore, ${\rm rank} = 2.$
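The rank found by row reduction in part (b) can be confirmed numerically; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([[2.0, -1.0, 3.0],
              [1.0, 2.0, -3.0],
              [3.0, -4.0, 9.0]])

# matrix_rank computes the rank via the singular values of A.
rank = np.linalg.matrix_rank(A)
```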

(c) By Exercise2-4-1.1, we have ${\rm rank} = 3.$

3. An elementary matrix is obtained by applying a single elementary row operation to the identity matrix.

  $\displaystyle \left(\begin{array}{rr}
1&0\\
0&1
\end{array}\right )$ $\displaystyle \stackrel{-R_{2} + R_{1}}{\rightarrow} \left(\begin{array}{rr}
1&-1\\
0&1
\end{array}\right ) = I,$  
  $\displaystyle \left(\begin{array}{rr}
1&0\\
0&1
\end{array}\right )$ $\displaystyle \stackrel{- 1 \times R_{1}}{\rightarrow}
\left(\begin{array}{rr}
-1&0\\
0&1
\end{array}\right ) = II,$  
  $\displaystyle \left(\begin{array}{rr}
1&0\\
0&1
\end{array}\right )$ $\displaystyle \stackrel{-3 \times R_{1} + R_{2}}{\rightarrow} \left(\begin{array}{rr}
1&0\\
-3&1
\end{array}\right ) = III,$  
  $\displaystyle \left(\begin{array}{rr}
1&0\\
0&1
\end{array}\right )$ $\displaystyle \stackrel{-R_{2} + R_{1}}{\rightarrow}
\left(\begin{array}{rr}
1&-1\\
0&1
\end{array}\right ) = IV.$  

Next, to express the identity matrix as a product of the matrix $A$ and elementary matrices, multiply the elementary matrices $I$, $II$, $III$, $IV$ to $A$ from the left. Then
$\displaystyle I_{2}$ $\displaystyle =$ $\displaystyle \left(\begin{array}{rr}
1&0\\
0&1
\end{array}\right )$  
  $\displaystyle =$ $\displaystyle \left(\begin{array}{rr}
1&-1\\
0&1
\end{array}\right )\left(\begin{array}{rr}
1&0\\
-3&1
\end{array}\right )\left(\begin{array}{rr}
-1&0\\
0&1
\end{array}\right )\left(\begin{array}{rr}
1&-1\\
0&1
\end{array}\right )\left(\begin{array}{rr}
2&3\\
3&4
\end{array}\right ) .$  
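The four elementary matrices $I$–$IV$ can be checked by applying them to $A$ in the order the operations were performed; a small NumPy sketch (an added cross-check, not part of the original solution):

```python
import numpy as np

A = np.array([[2, 3], [3, 4]], dtype=float)
E1 = np.array([[1, -1], [0, 1]], dtype=float)   # I:   -R2 + R1
E2 = np.array([[-1, 0], [0, 1]], dtype=float)   # II:  -1 x R1
E3 = np.array([[1, 0], [-3, 1]], dtype=float)   # III: -3 x R1 + R2
E4 = np.array([[1, -1], [0, 1]], dtype=float)   # IV:  -R2 + R1
# Applying the operations in order I, II, III, IV corresponds to
# multiplying from the left in reverse order.
print(np.allclose(E4 @ E3 @ E2 @ E1 @ A, np.eye(2)))  # True
```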

4. The matrix $P$ satisfying $PA = I$ can be found by the following steps:

    $\displaystyle \left(\begin{array}{rrrrrr}
2&-3&1&1&0&0\\
1&2&-3&0&1&0\\
3&2&-1&0&0&1
\end{array}\right ) \stackrel{R_{1} \leftrightarrow R_{2}}{\rightarrow} \left(\begin{array}{rrrrrr}
1&2&-3&0&1&0\\
2&-3&1&1&0&0\\
3&2&-1&0&0&1
\end{array}\right )$  
  $\displaystyle {\stackrel{\begin{array}{cc}
{}^{-2 R_{1} + R_{2}}\\
{}^{-3 R_{1} + R_{3}}
\end{array}}{\rightarrow}}$ $\displaystyle \left(\begin{array}{rrrrrr}
1&2&-3&0&1&0\\
0&-7&7&1&-2&0\\
0&-4&8&0&-3&1
\end{array}\right )$  
  $\displaystyle {\stackrel{\begin{array}{cc}
{}^{\frac{-1}{7} \times R_{2}}\\
{}^{4 R_{2} + R_{3}}
\end{array}}{\rightarrow}}$ $\displaystyle \left(\begin{array}{rrrrrr}
1&2&-3&0&1&0\\
0&1&-1&\frac{-1}{7}&\frac{2}{7}&0\\
0&0&4&\frac{-4}{7}&\frac{-13}{7}&1
\end{array}\right )$  
  $\displaystyle {\stackrel{\frac{1}{4} \times R_{3}}{\rightarrow}}$ $\displaystyle \left(\begin{array}{rrrrrr}
1&2&-3&0&1&0\\
0&1&-1&\frac{-1}{7}&\frac{2}{7}&0\\
0&0&1&\frac{-1}{7}&\frac{-13}{28}&\frac{1}{4}
\end{array}\right )$  
  $\displaystyle {\stackrel{\begin{array}{cc}
{}^{3 R_{3} + R_{1}}\\
{}^{R_{3} + R_{2}}
\end{array}}{\rightarrow}}$ $\displaystyle \left(\begin{array}{rrrrrr}
1&2&0&\frac{-3}{7}&\frac{-11}{28}&\frac{3}{4}\\
0&1&0&\frac{-2}{7}&\frac{-5}{28}&\frac{1}{4}\\
0&0&1&\frac{-1}{7}&\frac{-13}{28}&\frac{1}{4}
\end{array}\right )$  
  $\displaystyle {\stackrel{-2 R_{2} + R_{1}}{\rightarrow}}$ $\displaystyle \left(\begin{array}{rrrrrr}
1&0&0&\frac{1}{7}&\frac{-1}{28}&\frac{1}{4}\\
0&1&0&\frac{-2}{7}&\frac{-5}{28}&\frac{1}{4}\\
0&0&1&\frac{-1}{7}&\frac{-13}{28}&\frac{1}{4}
\end{array}\right ) .$  

Thus,

$\displaystyle P = \left(\begin{array}{rrr}
\frac{1}{7}&-\frac{1}{28}&\frac{1}{4}\\
-\frac{2}{7}&-\frac{5}{28}&\frac{1}{4}\\
-\frac{1}{7}&-\frac{13}{28}&\frac{1}{4}
\end{array}\right ) = \frac{1}{28}\left(\begin{array}{rrr}
4&-1&7\\
-8&-5&7\\
-4&-13&7
\end{array}\right ) . $
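Since $PA = I$ means $P = A^{-1}$, the result can be verified numerically; a minimal NumPy check (an addition, not part of the original solution):

```python
import numpy as np

A = np.array([[2, -3, 1],
              [1, 2, -3],
              [3, 2, -1]], dtype=float)
# P as read off the right half of the reduced augmented matrix.
P = np.array([[4, -1, 7],
              [-8, -5, 7],
              [-4, -13, 7]], dtype=float) / 28
print(np.allclose(P @ A, np.eye(3)))  # True
```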

5. By Theorem 2.2, the dimension of the row space equals the rank of the matrix, so it is enough to find the rank of the matrix whose row vectors are ${\bf v}_{1}, {\bf v}_{2},{\bf v}_{3},{\bf v}_{4}$.

$\displaystyle \left(\begin{array}{r}
{\bf v}_{1}\\
{\bf v}_{2}\\
{\bf v}_{3}\\
{\bf v}_{4}
\end{array}\right)$ $\displaystyle =$ $\displaystyle \left(\begin{array}{rrrr}
2&-1&1&3\\
-1&1&0&1\\
4&-1&3&11\\
-2...
...\
\!\!2&\!\!-1&1&3\\
\!\!4&\!\!-1&3&11\\
\!\!-2&\!\!3&1&1
\end{array}\right)$  
  $\displaystyle \stackrel{-1 \times R_{1} }{\longrightarrow}$ $\displaystyle \left(\begin{array}{rrrr}
\!\!1&\!\!-1&0&\!\!-1\\
\!\!2&\!\!-1&1...
...-1\\
0&\!\!1&1&\!\!5\\
0&\!\!3&3&\!\!15\\
0&\!\!2&2&\!\!4
\end{array}\right)$  
  $\displaystyle \stackrel{\begin{array}{cc}
{}^{-3R_{2}+R_{3}}\\
{}^{-2R_{2}+R_{4}}
\end{array}}{\longrightarrow}$ $\displaystyle \left(\begin{array}{rrrr}
1&-1&0&-1\\
0&1&1&5\\
0&0&0&0\\
0&0&0&1
\end{array}\right) .$  

Thus $\dim = 3.$
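The dimension can be cross-checked by computing the rank of the matrix of row vectors numerically (a NumPy sketch, using the row vectors as displayed in the text; not part of the original solution):

```python
import numpy as np

# Rows v1, v2, v3, v4 from the exercise.
V = np.array([[2, -1, 1, 3],
              [-1, 1, 0, 1],
              [4, -1, 3, 11],
              [-2, 3, 1, 1]])
print(np.linalg.matrix_rank(V))  # 3
```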

Exercise2.6.1

1.

(a)

$\displaystyle [A: {\bf b}]$ $\displaystyle =$ $\displaystyle \left(\begin{array}{rrr}
1&-3&5\\
3&-5&7
\end{array}\right ) \stackrel{-3R_{1}+R_{2}}{\longrightarrow}
\left(\begin{array}{rrr}
1&-3&5\\
0&4&-8
\end{array}\right )$  
  $\displaystyle \stackrel{\frac{1}{4} \times R_{2}}{\longrightarrow}$ $\displaystyle \left(\begin{array}{rrr}
1&-3&5\\
0&1&-2
\end{array}\right ) \stackrel{3R_{2}+R_{1}}{\longrightarrow}
\left(\begin{array}{rrr}
1&0&-1\\
0&1&-2
\end{array}\right ) .$  

Thus,

$\displaystyle \left(\begin{array}{c}
x\\
y
\end{array}\right ) =
\left(\begin{array}{c}
-1\\
-2
\end{array}\right ). $

(b)

$\displaystyle [A: {\bf b}]$ $\displaystyle =$ $\displaystyle \left(\begin{array}{rrrr}
1&1&1&3\\
1&2&2&5\\
1&2&3&6
\end{array}\right ) \stackrel{\begin{array}{cc}
{}^{-R_{1}+R_{2}}\\
{}^{-R_{1}+R_{3}}
\end{array}}{\longrightarrow}
\left(\begin{array}{rrrr}
1&1&1&3\\
0&1&1&2\\
0&1&2&3
\end{array}\right )$  
  $\displaystyle \stackrel{- R_{2} + R_{3}}{\longrightarrow}$ $\displaystyle \left(\begin{array}{rrrr}
1&1&1&3\\
0&1&1&2\\
0&0&1&1
\end{array}\right ) \stackrel{\begin{array}{cc}
{}^{-R_{3}+R_{2}}\\
{}^{-R_{3}+R_{1}}
\end{array}}{\longrightarrow}
\left(\begin{array}{rrrr}
1&1&0&2\\
0&1&0&1\\
0&0&1&1
\end{array}\right )$  
  $\displaystyle \stackrel{-R_{2} + R_{1}}{\longrightarrow}$ $\displaystyle \left(\begin{array}{rrrr}
1&0&0&1\\
0&1&0&1\\
0&0&1&1
\end{array}\right ) .$  

Thus,

$\displaystyle \left(\begin{array}{c}
x\\
y\\
z
\end{array}\right ) = \left(\begin{array}{c}
1\\
1\\
1
\end{array}\right ) . $

(c)

$\displaystyle [A: {\bf b}]$ $\displaystyle =$ $\displaystyle \left(\begin{array}{rrrrr}
1&1&1&1&1\\
1&2&3&4&2\\
1&4&9&16&6
\end{array}\right ) \stackrel{\begin{array}{cc}
{}^{-R_{1}+R_{2}}\\
{}^{-R_{1}+R_{3}}
\end{array}}{\longrightarrow} \left(\begin{array}{rrrrr}
1&1&1&1&1\\
0&1&2&3&1\\
0&3&8&15&5
\end{array}\right )$  
  $\displaystyle \stackrel{-3 R_{2} + R_{3}}{\longrightarrow}$ $\displaystyle \left(\begin{array}{rrrrr}
1&1&1&1&1\\
0&1&2&3&1\\
0&0&2&6&2
\end{array}\right ) \stackrel{\frac{1}{2} \times R_{3}}{\longrightarrow} \left(\begin{array}{rrrrr}
1&1&1&1&1\\
0&1&2&3&1\\
0&0&1&3&1
\end{array}\right )$  
  $\displaystyle \stackrel{\begin{array}{cc}
{}^{-2R_{3} + R_{2}}\\
{}^{ -R_{3} + R_{1}}
\end{array}}{\longrightarrow}$ $\displaystyle \left(\begin{array}{rrrrr}
1&1&0&\!\!-2&\!\!0\\
0&1&0&\!\!-3&\!\!-1\\
0&0&1&\!\!3&\!\!1
\end{array}\right ) \stackrel{-R_{2}+R_{1}}{\longrightarrow} \left(\begin{array}{rrrrr}
1&0&0&\!\!1&\!\!1\\
0&1&0&\!\!-3&\!\!-1\\
0&0&1&\!\!3&\!\!1
\end{array}\right ) .$  

Note that ${\rm rank}(A) = {\rm rank}([A : {\bf b}])$. Thus, the equation has a solution. Rewriting $[A:{\bf b}]_{R}$ as a system of linear equations:

$\displaystyle \left\{\begin{array}{rc}
x_{1} + x_{4} =& 1\\
x_{2} - 3x_{4}=& -1\\
x_{3} + 3x_{4} =& 1
\end{array}\right. $

Here the degree of freedom is $4 - 3 = 1$. Thus we let $x_{4} = \alpha$. Then

$\displaystyle \left(\begin{array}{r}
x_{1}\\
x_{2}\\
x_{3}\\
x_{4}
\end{array}\right ) = \left(\begin{array}{c}
1\\
-1\\
1\\
0
\end{array}\right ) + \alpha \left(\begin{array}{c}
-1\\
3\\
-3\\
1
\end{array}\right ) . $

2. $A{\mathbf x} = {\bf b}$ has a solution if and only if ${\rm rank}(A) = {\rm rank}([A : {\bf b}])$.

$\displaystyle [A: {\bf b}]$ $\displaystyle =$ $\displaystyle \left(\begin{array}{rrrr}
1&2&3&7\\
3&2&5&9\\
5&2&7&k
\end{array}\right) \stackrel{\begin{array}{cc}
{}^{-3R_{1}+R_{2}}\\
{}^{-5R_{1}+R_{3}}
\end{array}}{\longrightarrow} \left(\begin{array}{rrrr}
1&2&3&7\\
0&-4&-4&-12\\
0&-8&-8&k-35
\end{array}\right)$  
  $\displaystyle \stackrel{-\frac{1}{4}R_{2}}{\longrightarrow}$ $\displaystyle \left(\begin{array}{rrrr}
1&2&3&7\\
0&1&1&3\\
0&-8&-8&k-35
\end{array}\right) \stackrel{8R_{2}+R_{3}}{\longrightarrow} \left(\begin{array}{rrrr}
1&2&3&\!\!7\\
0&1&1&\!\!3\\
0&0&0&\!\!k-11
\end{array}\right)$  
  $\displaystyle \stackrel{-2R_{2}+R_{1}}{\longrightarrow}$ $\displaystyle \left(\begin{array}{rrrr}
1&0&1&1\\
0&1&1&3\\
0&0&0&k-11
\end{array}\right) .$  

Note that ${\rm rank}(A) = 2$, so a solution exists exactly when ${\rm rank}([A: {\bf b}])$ is also 2. Thus, if $k-11$ is not 0, then this system of linear equations has no solution; hence $k = 11$. Moreover, for $k = 11$, $x_{1} + x_{3} = 1, x_{2}+x_{3} = 3$. Therefore, let $x_{3} = \alpha$. Then

$\displaystyle \left(\begin{array}{c}
x_{1}\\
x_{2}\\
x_{3}
\end{array}\right ) = \left(\begin{array}{c}
1\\
3\\
0
\end{array}\right ) + \alpha \left(\begin{array}{c}
-1\\
-1\\
1
\end{array}\right ). $
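For $k = 11$ the consistency condition and the particular solution with $\alpha = 0$ can be checked numerically; a NumPy sketch (an added cross-check, not part of the original solution):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [3, 2, 5],
              [5, 2, 7]], dtype=float)
b = np.array([7, 9, 11], dtype=float)  # right-hand side with k = 11
# rank(A) == rank([A : b]) holds exactly when k = 11.
print(np.linalg.matrix_rank(A))                        # 2
print(np.linalg.matrix_rank(np.column_stack([A, b])))  # 2
x = np.array([1, 3, 0], dtype=float)                   # alpha = 0
print(np.allclose(A @ x, b))  # True
```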

3. Let $A$ be an $n$-square matrix. Then $A$ is regular $\Leftrightarrow {\rm rank}(A) = n$ $\Leftrightarrow A_{R} = I_{n}$.

(a)

$\displaystyle [A:I]$ $\displaystyle =$ $\displaystyle \left(\begin{array}{rrrrrr}
2&3&4&1&0&0\\
1&2&3&0&1&0\\
-1&1&4&0&0&1
\end{array}\right) \stackrel{R_{1} \leftrightarrow R_{2}}{\longrightarrow} \left(\begin{array}{rrrrrr}
1&2&3&0&1&0\\
2&3&4&1&0&0\\
-1&1&4&0&0&1
\end{array}\right)$  
$\displaystyle  $ $\displaystyle {\stackrel{\begin{array}{cc}
{}^{-2R_{1} + R_{2}}\\
{}^{R_{1}+R_{3}}
\end{array}}{\longrightarrow}}$ $\displaystyle \left(\begin{array}{rrrrrr}
1&\!\!2&\!\!3&0&\!\!1&0\\
0&\!\!-1&\!\!-2&1&\!\!-2&0\\
0&\!\!3&\!\!7&0&\!\!1&1
\end{array}\right) \stackrel{\begin{array}{cc}
{}^{-1 \times R_{2}}\\
{}^{-3R_{2}+R_{3}}
\end{array}}{\longrightarrow} \left(\begin{array}{rrrrrr}
1&2&3&\!\!0&\!\!1&0\\
0&1&2&\!\!-1&\!\!2&0\\
0&0&1&\!\!3&\!\!-5&1
\end{array}\right)$  
$\displaystyle  $ $\displaystyle {\stackrel{\begin{array}{cc}
{}^{-2R_{3} + R_{2}}\\
{}^{-3R_{3}+R_{1}}
\end{array}}{\longrightarrow}}$ $\displaystyle \left(\begin{array}{rrrrrr}
1&2&0&\!\!-9&\!\!16&\!\!-3\\
0&1&0&\!\!-7&\!\!12&\!\!-2\\
0&0&1&\!\!3&\!\!-5&\!\!1
\end{array}\right) \stackrel{-2R_{2}+R_{1}}{\longrightarrow} \left(\begin{array}{rrrrrr}
1&0&0&\!\!5&\!\!-8&\!\!1\\
0&1&0&\!\!-7&\!\!12&\!\!-2\\
0&0&1&\!\!3&\!\!-5&\!\!1
\end{array}\right).$  

Thus, $A$ is regular and $A^{-1} = \left(\begin{array}{ccc}
5&-8&1\\
-7&12&-2\\
3&-5&1
\end{array}\right ) .$
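The computed inverse is easy to confirm numerically; a minimal NumPy check (added, not part of the original solution):

```python
import numpy as np

A = np.array([[2, 3, 4],
              [1, 2, 3],
              [-1, 1, 4]], dtype=float)
A_inv = np.array([[5, -8, 1],
                  [-7, 12, -2],
                  [3, -5, 1]], dtype=float)
print(np.allclose(A @ A_inv, np.eye(3)))     # True
print(np.allclose(np.linalg.inv(A), A_inv))  # True
```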

(b)

$\displaystyle [A:I]$ $\displaystyle =$ $\displaystyle \left(\begin{array}{rrrrrrrr}
0&0&0&1&1&0&0&0\\
0&0&1&0&0&1&0&0\\
0&1&0&0&0&0&1&0\\
1&0&0&0&0&0&0&1
\end{array}\right)$  
  $\displaystyle \stackrel{\begin{array}{cc}
{}^{R_{1} \leftrightarrow R_{4}}\\
{}^{R_{2} \leftrightarrow R_{3}}
\end{array}}{\longrightarrow}$ $\displaystyle \left(\begin{array}{rrrrrrrr}
1&0&0&0&0&0&0&1\\
0&1&0&0&0&0&1&0\\
0&0&1&0&0&1&0&0\\
0&0&0&1&1&0&0&0
\end{array}\right) .$  

Thus $A$ is regular and $A^{-1} = \left(\begin{array}{cccc}
0&0&0&1\\
0&0&1&0\\
0&1&0&0\\
1&0&0&0
\end{array}\right ) .$

4. An $n$-square matrix $A$ is regular $\Leftrightarrow {\rm rank}(A) = n$. Thus, we choose $a$ so that ${\rm rank}(A) = 3$.

$\displaystyle A$ $\displaystyle =$ $\displaystyle \left(\begin{array}{rrr}
2&0&-3\\
1&-1&a\\
5&3&4
\end{array}\right) \stackrel{\begin{array}{cc}
{}^{\frac{1}{2}R_{1}}\\
{}^{R_{2} \leftrightarrow R_{3}}
\end{array}}{\longrightarrow} \left(\begin{array}{rrr}
1&0&-\frac{3}{2}\\
5&3&4\\
1&-1&a
\end{array}\right)$  
  $\displaystyle \stackrel{\begin{array}{cc}
{}^{-5R_{1}+R_{2}}\\
{}^{-R_{1}+R_{3}}
\end{array}}{\longrightarrow}$ $\displaystyle \left(\begin{array}{rrr}
1&0&-\frac{3}{2}\\
0&3&\frac{23}{2}\\
0&-1&a+\frac{3}{2}
\end{array}\right) \stackrel{\frac{1}{3}R_{2}}{\longrightarrow} \left(\begin{array}{rrr}
1&0&-\frac{3}{2}\\
0&1&\frac{23}{6}\\
0&-1&a+\frac{3}{2}
\end{array}\right)$  
  $\displaystyle \stackrel{R_{2}+R_{3}}{\longrightarrow}$ $\displaystyle \left(\begin{array}{rrr}
1&0&-\frac{3}{2}\\
0&1&\frac{23}{6}\\
0&0&a+\frac{32}{6}
\end{array}\right) .$  

Note that ${\rm rank}(A) = 3$ if and only if $\displaystyle{a + \frac{32}{6} \neq 0}$. Thus, $\displaystyle{a \neq -\frac{16}{3}}.$

5. To express $A$ as a product of elementary matrices, we first reduce $A$ to the identity matrix by elementary row operations. We then multiply together the elementary matrices coming from the inverses of these operations.

$\displaystyle A$ $\displaystyle =$ $\displaystyle \left(\begin{array}{rrr}
2&-1&0\\
4&3&2\\
3&0&1
\end{array}\right) \stackrel{\begin{array}{cc}
{}^{\frac{1}{2}R_{1}}\\
{}^{-4R_{1}+R_{2}}
\end{array}}{\longrightarrow} \left(\begin{array}{rrr}
1&\frac{-1}{2}&0\\
0&5&2\\
3&0&1
\end{array}\right)$  
  $\displaystyle \stackrel{-3R_{1}+R_{3}}{\longrightarrow}$ $\displaystyle \left(\begin{array}{rrr}
1&\frac{-1}{2}&0\\
0&5&2\\
0&\frac{3}{2}&1
\end{array}\right) \stackrel{\begin{array}{cc}
{}^{\frac{1}{5}R_{2}}\\
{}^{-\frac{3}{2}R_{2}+R_{3}}
\end{array}}{\longrightarrow} \left(\begin{array}{rrr}
1&\!\!\frac{-1}{2}&0\\
0&\!\!1&\frac{2}{5}\\
0&\!\!0&\frac{2}{5}
\end{array}\right)$  
  $\displaystyle \stackrel{\frac{5}{2}R_{3}}{\longrightarrow}$ $\displaystyle \left(\begin{array}{rrr}
1&\!\!\frac{-1}{2}&0\\
0&\!\!1&\frac{2}{5}\\
0&\!\!0&1
\end{array}\right) \stackrel{\begin{array}{cc}
{}^{-\frac{2}{5}R_{3}+R_{2}}\\
{}^{\frac{1}{2}R_{2}+R_{1}}
\end{array}}{\longrightarrow}\!\!
\left(\begin{array}{rrr}
1&0&0\\
0&1&0\\
0&0&1
\end{array}\right).$  

Thus, $A$ is regular. Next we reverse the elementary operations and multiply the elementary matrices corresponding to the inverse operations.


$\displaystyle A$ $\displaystyle =$ $\displaystyle \left(\begin{array}{ccc}
2&0&0\\
0&1&0\\
0&0&1
\end{array}\right )\left(\begin{array}{ccc}
1&0&0\\
4&1&0\\
0&0&1
\end{array}\right )\left(\begin{array}{ccc}
1&0&0\\
0&1&0\\
3&0&1
\end{array}\right )\left(\begin{array}{ccc}
1&0&0\\
0&5&0\\
0&0&1
\end{array}\right )$  
  $\displaystyle \cdot$ $\displaystyle \left(\begin{array}{ccc}
1&0&0\\
0&1&0\\
0&\frac{3}{2}&1
\end{array}\right )\left(\begin{array}{ccc}
1&0&0\\
0&1&0\\
0&0&\frac{2}{5}
\end{array}\right )\left(\begin{array}{ccc}
1&0&0\\
0&1&\frac{2}{5}\\
0&0&1
\end{array}\right )\left(\begin{array}{ccc}
1&-\frac{1}{2}&0\\
0&1&0\\
0&0&1
\end{array}\right ) .$  

6. Suppose $A = \left(\begin{array}{ccc}
*& &*\\
& * &\\
0&\cdots&0
\end{array}\right )$. Then for any matrix $X$,

$\displaystyle AX = \left(\begin{array}{ccc}
* & &* \\
& * &\\
0&\cdots&0
\end{array}\right )X = \left(\begin{array}{ccc}
*& &* \\
& * &\\
0&\cdots&0
\end{array}\right ) \neq I_{n} $

Thus $A$ is not regular.

Alternate Solution Let $A$ be an $n$-square matrix. If every element of some row of $A$ is 0, then every element of one row of $A_{R}$ is 0. Then ${\rm rank}(A) \neq n$ and by Theorem 2.3, $A$ is not regular.

7.

$\displaystyle (AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AIA^{-1} = AA^{-1} = I $

Thus, $AB$ is regular and $(AB)^{-1} = B^{-1}A^{-1}$.

Exercise2.7.1

1. Let $L{\bf y} = {\bf b}$. Here ${\bf y} = \left(\begin{array}{ccc}
2 & 3 & -1\\
0&-2&1\\
0&0&3\end{array}\right)\left(\begin{array}{c}x_1\\ x_2\\ x_3\end{array}\right)$. Then

$\displaystyle L{\bf y} = \left(\begin{array}{ccc}
1 & 0 & 0\\
2&1&0\\
-1&0&1
\end{array}\right)\left(\begin{array}{c}y_1\\ y_2\\ y_3\end{array}\right) = \begin{pmatrix}2\\ -1\\ 1\end{pmatrix}$

Thus, $y_1 = 2, y_2 = -5, y_3 = 3$. Therefore,

$\displaystyle \left(\begin{array}{ccc}
2 & 3 & -1\\
0&-2&1\\
0&0&3
\end{array}\right)\left(\begin{array}{c}x_1\\ x_2\\ x_3\end{array}\right) = \begin{pmatrix}2\\ -5\\ 3\end{pmatrix}$

which implies $x_{1} = -3,\hskip 0.5cm x_{2} = 3,\hskip 0.5cm x_{3} = 1$.
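The two triangular solves can be reproduced numerically; a NumPy sketch in which `np.linalg.solve` plays the role of the forward and back substitutions (added check, not part of the original solution):

```python
import numpy as np

L = np.array([[1, 0, 0],
              [2, 1, 0],
              [-1, 0, 1]], dtype=float)
U = np.array([[2, 3, -1],
              [0, -2, 1],
              [0, 0, 3]], dtype=float)
b = np.array([2, -1, 1], dtype=float)

y = np.linalg.solve(L, b)  # forward substitution: L y = b
x = np.linalg.solve(U, y)  # back substitution:    U x = y
print(y)  # [ 2. -5.  3.]
print(x)  # [-3.  3.  1.]
```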

2. (a)

Using Gaussian elimination, we have

$\displaystyle \left(\begin{array}{ccc}
2 & -1 &1\\
3 & 3 & 9\\
3 & 3 & 5
\end{array}\right)$   $\displaystyle \stackrel{\begin{array}{c} -3/2R_1 + R_2 \\ -3/2R_1 +R_3\end{array}}{\longrightarrow} \left(\begin{array}{ccc}
2 & -1 &1\\
0 & 9/2 & 15/2\\
0 & 9/2 & 7/2
\end{array}\right) \stackrel{-R_2 + R_3}{\longrightarrow} \left(\begin{array}{ccc}
2 & -1 &1\\
0 & 9/2 & 15/2\\
0 & 0 & -4
\end{array}\right) = U$  

Now reversing Gaussian elimination, we obtain
$\displaystyle \left(\begin{array}{ccc}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{array}\right)$   $\displaystyle \stackrel{R_2 + R_3}{\longrightarrow} \left(\begin{array}{ccc}
1 & 0 & 0\\
0 & 1 & 0\\
0 & 1 & 1
\end{array}\right) \stackrel{\begin{array}{c} 3/2R_1 + R_2 \\ 3/2R_1 +R_3\end{array}}{\longrightarrow} \left(\begin{array}{ccc}
1 & 0 & 0\\
3/2 & 1 & 0\\
3/2 & 1 & 1
\end{array}\right) = L$  
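As a numerical check that the factors multiply back to $A$ (note the pivot $9/2$ in the second row of $U$, which is forced by $LU = A$); a NumPy sketch, not part of the original solution:

```python
import numpy as np

A = np.array([[2, -1, 1],
              [3, 3, 9],
              [3, 3, 5]], dtype=float)
L = np.array([[1, 0, 0],
              [1.5, 1, 0],
              [1.5, 1, 1]], dtype=float)
U = np.array([[2, -1, 1],
              [0, 4.5, 7.5],
              [0, 0, -4]], dtype=float)
print(np.allclose(L @ U, A))  # True
```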

(b) Using Gaussian elimination, we have

$\displaystyle \begin{pmatrix}
2&0&0&0\\
1&1.5&0&0\\
0&-3&0.5&0\\
2&-2&1&1\end{pmatrix} \longrightarrow \begin{pmatrix}
2&0&0&0\\
0&1.5&0&0\\
0&0&0.5&0\\
0&0&0&1\end{pmatrix}= U$

Reversing Gaussian elimination, we have

$\displaystyle \begin{pmatrix}
1&0&0&0\\
0&1&0&0\\
0&0&1&0\\
0&0&0&1 \end{pmatrix} \longrightarrow \begin{pmatrix}
1&0&0&0\\
1/2&1&0&0\\
0&-2&1&0\\
1&-4/3&2&1\end{pmatrix} = L$

Exercise2.8.1

1.

(a) We use the cofactor expansion with the 2nd row.

$\displaystyle \left\vert\begin{array}{rrr}
2&-3&1\\
1&0&2\\
1&-1&1
\end{array}\right\vert$ $\displaystyle =$ $\displaystyle (-1)^{2+1}\left\vert\begin{array}{rr}
-3&1\\
-1&1
\end{array}\ri...
... + (-1)^{2+3}(2)\left\vert\begin{array}{rr}
2&-3\\
1&-1
\end{array}\right\vert$  
  $\displaystyle =$ $\displaystyle -(-3 +1) - 2(-2+3) = 0 .$  

(b) We use the cofactor expansion with the 1st row.

$\left \vert \begin{array}{rrrr}
2&4&0&5\\
1&-2&-1&3\\
1&2&3&0\\
3&3&-4&-4
\end{array}\right\vert $

$= 2\left\vert\begin{array}{rrr}
-2&-1&3\\
2&3&0\\
3&-4&-4
\end{array}\right\vert
- 4\left\vert\begin{array}{rrr}
1&-1&3\\
1&3&0\\
3&-4&-4
\end{array}\right\vert
+ 0
- 5\left\vert\begin{array}{rrr}
1&-2&-1\\
1&2&3\\
3&3&-4
\end{array}\right\vert $

Expanding the first two minors along their 3rd columns and the last minor along its 1st row,

$= 2\{3(-17) -4(-4)\} - 4\{3(-13) -4(4)\} - 5\{-17 + 2(-13) - (-3)\} $

$= 2(-35) -4(-55) -5(-40) = 350 .$
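The value 350 can be cross-checked numerically; a one-line NumPy sketch (added, not part of the original solution):

```python
import numpy as np

A = np.array([[2, 4, 0, 5],
              [1, -2, -1, 3],
              [1, 2, 3, 0],
              [3, 3, -4, -4]], dtype=float)
# np.linalg.det returns a float; round it to compare with the exact value.
print(round(np.linalg.det(A)))  # 350
```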

(c)

  $\displaystyle {}$ $\displaystyle \left\vert\begin{array}{rrrrr}
0&0&0&1&0\\
0&1&0&0&0\\
0&0&0&0&1\\
1&0&0&0&0\\
0&0&1&0&0
\end{array}\right\vert$  
  $\displaystyle =$ $\displaystyle sgn(4,2,5,1,3)1\cdot 1\cdot 1\cdot 1 \cdot 1 = - sgn(1,2,5,4,3) = sgn(1,2,3,4,5) = +1$  

(d) $\left\vert\begin{array}{rrrrr}
3 & 5 & 1 & 2 & -1\\
2 & 6 & 0 & 9 & 1\\
0 & 0...
... & 2
\end{array}\right\vert \left\vert-6\right\vert = (18-10)(14-3)(-6) = -528$

2.

(a) $\left\vert\begin{array}{rrr}
1&a^2&(b+c)^2\\
1&b^2&(c+a)^2\\
1&c^2&(a+b)^2
\end{array}\right\vert \stackrel{\begin{array}{c}
{}^{-R_{1}+R_{2}}\\
{}^{-R_{1}+R_{3}}
\end{array}}{=} \left\vert\begin{array}{rrr}
1&a^2&(b+c)^2\\
0&b^2-a^2&(c+a)^2-(b+c)^2\\
0&c^2-a^2&(a+b)^2-(b+c)^2
\end{array}\right\vert $

$= \left\vert\begin{array}{rrr}
1&a^2&(b+c)^2\\
0&(b+a)(b-a)&(a-b)(a+b+2c)\\
0&(c-a)(c+a)&(a-c)(a+2b+c)
\end{array}\right\vert $

$ \stackrel{\mbox{Theorem 2.14}}{=}
(a-b)(c-a)\left\vert\begin{array}{rrr}
1&a^2&(b+c)^2\\
0&-(a+b)&(a+b+2c)\\
0&c+a&-(a+2b+c)
\end{array}\right\vert$

$\stackrel{R_{2}+R_{3}}{=} (a-b)(c-a)\left\vert\begin{array}{rrr}
1&a^2&(b+c)^2\\
0&-(a+b)&(a+b+2c)\\
0&c-b&c-b
\end{array}\right\vert $

$= (a-b)(c-a)(c-b)\left\vert\begin{array}{rrr}
1&a^2&(b+c)^2\\
0&-(a+b)&(a+b+2c)\\
0&1&1
\end{array}\right\vert $

$\stackrel{R_{2} \leftrightarrow R_{3}}{=} -(a-b)(c-a)(c-b)\left\vert\begin{array}{rrr}
1&a^2&(b+c)^2\\
0&1&1\\
0&-(a+b)&(a+b+2c)
\end{array}\right\vert $

$\stackrel{(a+b)R_{2}+R_{3}}{=}
(a-b)(c-a)(b-c)\left\vert\begin{array}{rrr}
1&a^2&(b+c)^2\\
0&1&1\\
0&0&2(a+b+c)
\end{array}\right\vert $

$= 2(a-b)(c-a)(b-c)(a+b+c) .$
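The factorization can also be confirmed symbolically; a minimal sketch assuming SymPy is available (added check, not part of the original solution):

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
M = sp.Matrix([[1, a**2, (b + c)**2],
               [1, b**2, (c + a)**2],
               [1, c**2, (a + b)**2]])
# The difference expands to zero exactly when the hand factorization
# 2(a-b)(c-a)(b-c)(a+b+c) is correct.
diff = sp.expand(M.det() - 2*(a - b)*(c - a)*(b - c)*(a + b + c))
print(diff)  # 0
```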

(b)

We use column operations, where $L_{j}$ denotes the $j$-th column.

    $\displaystyle \left\vert\begin{array}{rrr}
b+c&b&c\\
a&c+a&c\\
a&b&a+b
\end{array}\right\vert$ $\displaystyle \stackrel{-L_{2}+L_{1}}{=}$ $\displaystyle \left\vert\begin{array}{rrr}
c&b&c\\
-c&c+a&c\\
a-b&b&a+b
\end{array}\right\vert$  
  $\displaystyle \stackrel{-L_{3}+L_{1}}{=}$ $\displaystyle \left\vert\begin{array}{rrr}
0&b&c\\
-2c&c+a&c\\
-2b&b&a+b
\end{array}\right\vert = -2\left\vert\begin{array}{rrr}
0&b&c\\
c&c+a&c\\
b&b&a+b
\end{array}\right\vert$  
  $\displaystyle \stackrel{-L_{1}+L_{3}}{=}$ $\displaystyle -2\left\vert\begin{array}{rrr}
0&b&c\\
c&c+a&0\\
b&b&a
\end{array}\right\vert = -2\{-b(ca - 0) + c(cb - b(c+a))\} = -2(-2abc) = 4abc .$  

(c) By Vandermonde, $\left\vert\begin{array}{rrrr}
1&1&1&1\\
a & b & c & d\\
a^2 & b^2 & c^2 & d^2\\
a^3 & b^3 & c^3 & d^3
\end{array}\right\vert = (d-a)(d-b)(d-c)(c-a)(c-b)(b-a)$

3.

    $\displaystyle \left\vert\begin{array}{rrr}
1-x&2&2\\
2&2-x&1\\
2&1&2-x
\end{array}\right\vert$ $\displaystyle \stackrel{R_{2}+R_{3}+R_{1}}{=}$ $\displaystyle \left\vert\begin{array}{rrr}
5-x&5-x&5-x\\
2&2-x&1\\
2&1&2-x
\end{array}\right\vert$  
  $\displaystyle \stackrel{2.14}{=}$ $\displaystyle (5-x)\left\vert\begin{array}{rrr}
1&1&1\\
2&2-x&1\\
2&1&2-x
\end{array}\right\vert \stackrel{\begin{array}{cc}
{}^{-2R_{1}+R_{2}}\\
{}^{-2R_{1}+R_{3}}
\end{array}}{=} (5-x)\left\vert\begin{array}{rrr}
1&1&1\\
0&-x&-1\\
0&-1&-x
\end{array}\right\vert$  
  $\displaystyle =$ $\displaystyle (5-x)(x^2-1) = 0$  

Thus, $x = -1,1,5.$

4. The vectors $(x - a_{1},y - a_{2})$ and $(b_{1}-a_{1},b_{2}-a_{2})$ are parallel. Thus their cross product is ${\bf0}$.

Figure A.1: a line goes through two points
\includegraphics[width=6.6cm]{LALG/2-8-1-4.eps}

Thus,

$\displaystyle (x - a_{1},y - a_{2},0) \times (b_{1}-a_{1},b_{2}-a_{2},0) = \left\vert \begin{array}{ccc}
{\bf i}&{\bf j}&{\bf k}\\
x - a_{1} & y - a_{2}&0 \\
b_{1}-a_{1} & b_{2}-a_{2}&0
\end{array}\right \vert = {\bf0} . $

Also,

$\displaystyle \left\vert \begin{array}{ccc}
{\bf i}&{\bf j}&{\bf k}\\
x - a_{1} & y - a_{2}&0\\
b_{1} - a_{1} & b_{2} - a_{2}&0
\end{array}\right\vert = \left\vert\begin{array}{cc}
x - a_{1} & y - a_{2}\\
b_{1} - a_{1} & b_{2} - a_{2}
\end{array}\right\vert {\bf k} = {\bf0} $

implies that

$\displaystyle \left\vert\begin{array}{rrr}
x - a_{1} & y - a_{2} & 0\\
a_{1} & a_{2} & 1\\
b_{1} - a_{1} & b_{2} - a_{2} & 0
\end{array}\right \vert = \left\vert\begin{array}{rrr}
x & y & 1\\
a_{1} & a_{2} & 1\\
b_{1} & b_{2} & 1
\end{array}\right \vert = 0 . $

5. The normal vector ${\bf N} = (b_{1}-a_{1},b_{2}-a_{2},b_{3}-a_{3}) \times (c_{1}-a_{1},c_{2}-a_{2},c_{3}-a_{3})$ and the vector $(x-a_{1},y-a_{2},z-a_{3})$ in the plane are orthogonal, so their inner product is 0. Thus the scalar triple product is

$\displaystyle \left\vert\begin{array}{ccc}
x-a_{1} & y-a_{2} & z - a_{3}\\
b_{1}-a_{1} & b_{2}-a_{2} & b_{3}-a_{3}\\
c_{1}-a_{1} & c_{2}-a_{2} & c_{3}-a_{3}
\end{array}\right \vert$ $\displaystyle =$ $\displaystyle \left\vert\begin{array}{cccc}
x-a_{1} & y-a_{2} & z - a_{3} & 0\\
a_{1} & a_{2} & a_{3} & 1\\
b_{1}-a_{1} & b_{2}-a_{2} & b_{3}-a_{3} & 0\\
c_{1}-a_{1} & c_{2}-a_{2} & c_{3}-a_{3} & 0
\end{array}\right \vert$  
  $\displaystyle =$ $\displaystyle \left\vert\begin{array}{cccc}
x & y & z & 1\\
a_{1} & a_{2} & a_{3} & 1\\
b_{1} & b_{2} & b_{3} & 1\\
c_{1} & c_{2} & c_{3} & 1
\end{array}\right \vert = 0 .$  

6. Suppose that $\vert A\vert \neq 0$. Then the inverse matrix $A^{-1}$ exists. $A{\mathbf x} = {\bf0}$ implies that

$\displaystyle A^{-1}(A{\mathbf x}) = A^{-1}{\bf0} \Rightarrow (A^{-1}A){\mathbf x} = A^{-1}{\bf0} = {\bf0} .$

Therefore, ${\mathbf x} = {\bf0}.$

7.

(a)

$\displaystyle x = \frac{\left\vert\begin{array}{rr}
5&-3\\
7&-5
\end{array}\right\vert}{\left\vert\begin{array}{rr}
1&-3\\
3&-5
\end{array}\right\vert} = \frac{-4}{4} = -1, $

$\displaystyle y = \frac{\left\vert\begin{array}{rr}
1&5\\
3&7
\end{array}\right\vert}{\left\vert\begin{array}{rr}
1&-3\\
3&-5
\end{array}\right\vert} = \frac{-8}{4} = -2 .$

(b)

$\displaystyle x = \frac{\left\vert\begin{array}{rrr}
3&1&1\\
5&2&2\\
6&2&3
\end{array}\right\vert}{\left\vert\begin{array}{rrr}
1&1&1\\
1&2&2\\
1&2&3
\end{array}\right\vert} = \frac{1}{1} = 1, $

$\displaystyle y = \frac{\left\vert\begin{array}{rrr}
1&3&1\\
1&5&2\\
1&6&3
\end{array}\right\vert}{\left\vert\begin{array}{rrr}
1&1&1\\
1&2&2\\
1&2&3
\end{array}\right\vert} = \frac{1}{1} = 1, $

$\displaystyle z = \frac{\left\vert\begin{array}{rrr}
1&1&3\\
1&2&5\\
1&2&6
\end{array}\right\vert}{\left\vert\begin{array}{rrr}
1&1&1\\
1&2&2\\
1&2&3
\end{array}\right\vert} = \frac{1}{1} = 1 .$
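Cramer's rule for 7(b) can be sketched directly in NumPy: replace the $i$-th column of $A$ by ${\bf b}$ and take determinant ratios (an added illustration, not part of the original solution):

```python
import numpy as np

A = np.array([[1, 1, 1],
              [1, 2, 2],
              [1, 2, 3]], dtype=float)
b = np.array([3, 5, 6], dtype=float)
detA = np.linalg.det(A)
x = []
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = b            # column i replaced by the right-hand side
    x.append(np.linalg.det(Ai) / detA)
print(np.round(x, 6))  # [1. 1. 1.]
```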

Chapter 3
Exercise3.2.1

1. For ${\bf v},{\bf w} \in R^{3}$, we check whether

\begin{displaymath}\begin{array}{l}
T({\bf v} + {\bf w}) = T({\bf v}) + T({\bf w}) ,\\
T(\alpha {\bf v}) = \alpha T({\bf v})
\end{array} \end{displaymath}

Let ${\bf v},{\bf w} \in R^{3}$. Then we can write ${\bf v} = \left(\begin{array}{c}
x_{1}\\
x_{2}\\
x_{3}
\end{array}\right),  {\bf w} = \left(\begin{array}{c}
x_{1}^{\prime}\\
x_{2}^{\prime}\\
x_{3}^{\prime}
\end{array}\right)$. Then

$\displaystyle T_{1}[\left(\begin{array}{c}
x_{1}\\
x_{2}\\
x_{3}
\end{array}\right) + \left(\begin{array}{c}
x_{1}^{\prime}\\
x_{2}^{\prime}\\
x_{3}^{\prime}
\end{array}\right)]$ $\displaystyle =$ $\displaystyle T_{1}\left(\begin{array}{c}
x_{1}+x_{1}^{\prime}\\
x_{2}+x_{2}^{\prime}\\
x_{3}+x_{3}^{\prime}
\end{array}\right) = \left(\begin{array}{c}
x_{3}+x_{3}^{\prime}\\
x_{1}+x_{1}^{\prime} + x_{2}+x_{2}^{\prime}
\end{array}\right)$  
  $\displaystyle =$ $\displaystyle \left(\begin{array}{c}
x_{3}\\
x_{1} + x_{2}
\end{array}\right) + \left(\begin{array}{c}
x_{3}^{\prime}\\
x_{1}^{\prime}+x_{2}^{\prime}
\end{array}\right)$  
  $\displaystyle =$ $\displaystyle T_{1}\left(\begin{array}{c}
x_{1}\\
x_{2}\\
x_{3}
\end{array}\right) + T_{1}\left(\begin{array}{c}
x_{1}^{\prime}\\
x_{2}^{\prime}\\
x_{3}^{\prime}
\end{array}\right).$  

Also,
$\displaystyle T_{1}(\alpha \left(\begin{array}{c}
x_{1}\\
x_{2}\\
x_{3}
\end{array}\right))$ $\displaystyle =$ $\displaystyle T_{1}\left(\begin{array}{c}
\alpha x_{1}\\
\alpha x_{2}\\
\alpha x_{3}
\end{array}\right) = \left(\begin{array}{c}
\alpha x_{3}\\
\alpha x_{1} + \alpha x_{2}
\end{array}\right)$  
  $\displaystyle =$ $\displaystyle \alpha\left( \begin{array}{c}
x_{3}\\
x_{1} + x_{2}
\end{array}\right) = \alpha T_{1}\left(\begin{array}{c}
x_{1}\\
x_{2}\\
x_{3}
\end{array}\right) .$  

Thus, $T_{1}$ is a linear mapping.

Next we check to see $T_{2}$. Suppose ${\bf v} = \left(\begin{array}{c}
1\\
2\\
3
\end{array}\right), {\bf w} = \left(\begin{array}{c}
1\\
0\\
0
\end{array}\right)$. Then

$\displaystyle T_{2}(\alpha {\bf v} + \beta {\bf w}) = T_{2}\left(\begin{array}{c}
\alpha + \beta\\
2 \alpha\\
3 \alpha
\end{array}\right) = \left(\begin{array}{c}
\alpha + \beta + 1\\
2 \alpha + 3 \alpha
\end{array}\right) . $

Also,
$\displaystyle T_{2}(\alpha {\bf v}) + T_{2}(\beta {\bf w})$ $\displaystyle =$ $\displaystyle T_{2}\left(\begin{array}{c}
\alpha \\
2 \alpha\\
3 \alpha
\end{array}\right) + T_{2}\left(\begin{array}{c}
\beta \\
0\\
0
\end{array}\right)$  
  $\displaystyle =$ $\displaystyle \left(\begin{array}{c}
\alpha + 1 \\
2 \alpha + 3 \alpha
\end{array}\right) + \left(\begin{array}{c}
\beta + 1 \\
0
\end{array}\right) = \left(\begin{array}{c}
\alpha + \beta + 2 \\
2 \alpha + 3 \alpha
\end{array}\right) .$  

Thus, $T_{2}(\alpha {\bf v} + \beta {\bf w}) \neq T_{2}(\alpha {\bf v}) + T_{2}(\beta {\bf w})$. Therefore $T_{2}$ is not a linear mapping.

2. Let ${\bf v} \in V$. Since $\{{\bf v}_{1},{\bf v}_{2},\ldots,{\bf v}_{n}\}$ is a basis of $V$, we can express ${\bf v}$ uniquely as

$\displaystyle {\bf v} = \alpha_{1}{\bf v}_{1} + \cdots + \alpha_{n}{\bf v}_{n} $

Here, the image of $T$, $T({\bf v})$, is an element of $R^{n}$. Then we can write

$\displaystyle T({\bf v}) = \beta_{1}{\bf e}_{1} + \cdots + \beta_{n}{\bf e}_{n} $

But $T({\bf v}_{i}) = {\bf e}_{i}$ implies that

$\displaystyle T({\bf v}_{i}) = \beta_{1}{\bf e}_{1} + \cdots + \beta_{n}{\bf e}_{n} = {\bf e}_{i} . $

Thus for $i = 1,2,\ldots,n$, $\alpha_{i} = 0$ implies $\beta_{i} = 0$. Also, $\alpha_{i} = 1$ implies $\beta_{i} = 1$. Thus, $\alpha_{i} = \beta_{i}$. Therefore,

$\displaystyle T({\bf v}) = \alpha_{1}{\bf e}_{1} + \cdots + \alpha_{n}{\bf e}_{n}. $

Next we show $T$ is a linear mapping.

Suppose ${\bf v},{\bf w} \in V$. Then

$\displaystyle {\bf v} = \alpha_{1}{\bf v}_{1} + \cdots + \alpha_{n}{\bf v}_{n}, $

$\displaystyle {\bf w} = \beta_{1}{\bf v}_{1} + \cdots + \beta_{n}{\bf v}_{n}. $

Thus,
$\displaystyle T(\alpha{\bf v} + \beta {\bf w})$ $\displaystyle =$ $\displaystyle T(\alpha \alpha_{1}{\bf v}_{1} + \cdots + \alpha\alpha_{n}{\bf v}_{n} + \beta\beta_{1}{\bf v}_{1} + \cdots + \beta\beta_{n}{\bf v}_{n})$  
  $\displaystyle =$ $\displaystyle T((\alpha\alpha_{1} + \beta\beta_{1}){\bf v}_{1} + \cdots + (\alpha\alpha_{n} + \beta\beta_{n}){\bf v}_{n})$  
  $\displaystyle =$ $\displaystyle (\alpha\alpha_{1} + \beta\beta_{1}){\bf e}_{1} + \cdots + (\alpha\alpha_{n} + \beta\beta_{n}){\bf e}_{n}$  
  $\displaystyle =$ $\displaystyle (\alpha\alpha_{1}{\bf e}_{1} + \cdots + \alpha\alpha_{n}{\bf e}_{n}) + (\beta\beta_{1}{\bf e}_{1} + \cdots + \beta\beta_{n}{\bf e}_{n})$  
  $\displaystyle =$ $\displaystyle \alpha(\alpha_{1}{\bf e}_{1} + \cdots + \alpha_{n}{\bf e}_{n}) + \beta(\beta_{1}{\bf e}_{1} + \cdots + \beta_{n}{\bf e}_{n})$  
  $\displaystyle =$ $\displaystyle \alpha T({\bf v}) + \beta T({\bf w}) .$  

Hence, $T$ is a linear mapping.

3. $(a) \Rightarrow (b)$
If $T$ is an isomorphism, then by Theorem 3.1 there exists an isomorphism $S = T^{-1}$ such that $T \circ S = 1.$
$(b) \Rightarrow (a)$
Suppose that $T(S(x)) = T(S(y))$. Then $T \circ S = 1$ implies that $x = y$, and $x = y$ implies $S(x) = S(y)$. Thus $T$ is injective. Next we show $T$ is surjective. Since $T \circ S = 1$, for $y \in R^{n}$ there exists $z \in R^{n}$ such that $y = T(S(z))$. Also, $S$ is a mapping from $R^{n}$ to $R^{n}$, so putting $x = S(z)$ we have $y = T(x)$.

4. $\ker(T) = \{{\bf v} \in V : T({\bf v}) = {\bf0}\}$ implies that for ${\bf v}_{1},{\bf v}_{2} \in \ker(T)$, we have $T({\bf v}_{1}) = {\bf0}, T({\bf v}_{2}) = {\bf0}$. Thus for any real numbers $\alpha, \beta$, we need to show that $\alpha{\bf v}_{1} + \beta{\bf v}_{2} \in \ker(T)$. In other words, we have to show $T(\alpha{\bf v}_{1} + \beta{\bf v}_{2}) = {\bf0}$. Note that

$\displaystyle T(\alpha{\bf v}_{1} + \beta{\bf v}_{2}) = \alpha T({\bf v}_{1}) + \beta T({\bf v}_{2}) = {\bf0} $

Thus, $\alpha{\bf v}_{1} + \beta{\bf v}_{2} \in \ker(T)$ which shows that $\ker(T)$ is a subspace.

Next, $Im(T) = \{{\bf w} \in W : {\bf w} = T({\bf v}), {\bf v} \in V \}$. For ${\bf w}_{1},{\bf w}_{2} \in Im(T)$, there exist ${\bf v}_{1},{\bf v}_{2} \in V$ with $T({\bf v}_{1}) = {\bf w}_{1}, T({\bf v}_{2}) = {\bf w}_{2}$. Then for any real numbers $\alpha, \beta$, we need to show $\alpha{\bf w}_{1} + \beta{\bf w}_{2} \in Im(T)$. In other words, we have to show the existence of some ${\bf v} \in V$ so that $T({\bf v}) = \alpha{\bf w}_{1} + \beta{\bf w}_{2}$. Note that $V$ is a vector space, so $\alpha{\bf v}_{1} + \beta{\bf v}_{2} \in V$. Also,

$\displaystyle T(\alpha{\bf v}_{1} + \beta{\bf v}_{2}) = \alpha T({\bf v}_{1}) + \beta T({\bf v}_{2}) = \alpha{\bf w}_{1} + \beta{\bf w}_{2} $

Thus, $\alpha{\bf w}_{1} + \beta{\bf w}_{2} \in Im(T)$. Hence, $Im(T)$ is a subspace.

5. Let $m(T)$ be a matrix representation of $T$. Then

$\displaystyle m(T) = \left(\begin{array}{rrr}
1&2&2\\
2&1&3\\
2&2&1
\end{array}\right).$

By Theorem 3.1, $\dim R^{3} = \dim \ker(T) + \dim Im(T)$ and $\dim Im(T) = {\rm rank}(T)$. Since ${\rm rank}(A) = 3$, $\dim \ker(T) = 3 - {\rm rank}(A) = 0.$

Exercise3.4.1

1.

$\displaystyle {\bf w}_{1}$ $\displaystyle =$ $\displaystyle \left(\begin{array}
{r}
3\\
1
\end{array}\right) = \left(\begin{array}
{r}
1\\
-1
\end{array}\right) + 2\left(\begin{array}
{r}
1\\
1
\end{array}\right) = {\bf v}_{1} + 2{\bf v}_{2},$  
$\displaystyle {\bf w}_{2}$ $\displaystyle =$ $\displaystyle \left(\begin{array}
{r}
-1\\
2
\end{array}\right) = -\frac{3}{2}\left(\begin{array}
{r}
1\\
-1
\end{array}\right) + \frac{1}{2}\left(\begin{array}
{r}
1\\
1
\end{array}\right) = -\frac{3}{2}{\bf v}_{1} + \frac{1}{2}{\bf v}_{2}.$  

Then the transition matrix $P$ is

$\displaystyle P = \left(\begin{array}{cc}
1&-\frac{3}{2}\\
2&\frac{1}{2}
\end{array}\right ). $

2. Let ${\bf w}_{j} = p_{1j}{\bf v}_{1} + \cdots + p_{nj}{\bf v}_{n}$. Then $P = (p_{ij})$ is a transition matrix from $\{{\bf v}_{i}\}$ to $\{{\bf w}_{j}\}$. Also, let ${\bf v}_{j} = q_{1j}{\bf w}_{1} + \cdots + q_{nj}{\bf w}_{n}$. Then $Q = (q_{ij})$ is a transition matrix from $\{{\bf w}_{i}\}$ to $\{{\bf v}_{j}\}$.

$\displaystyle PQ \!\!$ $\displaystyle =$ $\displaystyle \left(\begin{array}{ccc}
p_{11}&\cdots&p_{1n}\\
\vdots&\vdots&\vdots\\
p_{n1}&\cdots&p_{nn}
\end{array}\right)\left(\begin{array}{ccc}
q_{11}&\cdots&q_{1n}\\
\vdots&\vdots&\vdots\\
q_{n1}&\cdots&q_{nn}
\end{array}\right)$  
$\displaystyle  $ $\displaystyle =$ $\displaystyle \left(\begin{array}{ccc}
p_{11}q_{11}+\cdots+p_{1n}q_{n1}&\cdots&p_{11}q_{1n}+\cdots+p_{1n}q_{nn}\\
\vdots&\vdots&\vdots\\
p_{n1}q_{11}+\cdots+p_{nn}q_{n1}&\cdots&p_{n1}q_{1n}+\cdots+p_{nn}q_{nn}
\end{array}\right)$  
$\displaystyle  $ $\displaystyle =$ $\displaystyle \left(\begin{array}{cccc}
1&0&\cdots&0\\
0&1&\cdots&0\\
\vdots&\vdots&\ddots&\vdots\\
0&0&\cdots&1
\end{array}\right).$  

3.

(a)

$\displaystyle \Phi_{A}(t) = \vert A - t I\vert = \left\vert\begin{array}{rr}
3-t & -1\\
1 & 1 - t
\end{array}\right\vert = t^2 - 4t + 4 = 0 . $

Then the eigenvalue of $A$ is $\lambda = 2$.

The eigenvector ${\mathbf x}$ corresponding to $\lambda = 2$ satisfies $(A - 2I){\mathbf x} = {\bf0}$ with ${\mathbf x} \neq {\bf0}$. Solving the system of linear equations, we have

$\displaystyle A - 2I = \left(\begin{array}{rr}
1 & -1\\
1 & -1
\end{array}\right) \longrightarrow \left(\begin{array}{rr}
1 & -1\\
0 & 0
\end{array}\right) $

Thus,

$\displaystyle {\mathbf x} = \alpha \left(\begin{array}{r}
1\\
1
\end{array}\right)  (\alpha \neq 0) . $

Therefore, the eigenspace is

$\displaystyle V(2) = \{\alpha \left(\begin{array}{c}
1\\
1
\end{array}\right ) \}. $

(b)

$\displaystyle \Phi_{A}(t) = \vert A - t I\vert = \left\vert\begin{array}{rrr}
2-t & 1 & 0\\
0 & 1 - t & -1\\
0 & 2& 4 - t
\end{array}\right\vert = (2 - t)(t^2 - 5t + 6) = 0 . $

Then the eigenvalues of $A$ are $\lambda = 2,3$.

We find the eigenvector corresponding to $\lambda = 2$.

$\displaystyle A - 2I = \left(\begin{array}{rrr}
0&1 & 0\\
0& -1 & -1\\
0&2&2
\end{array}\right) \longrightarrow \left(\begin{array}{rrr}
0&1 & 0\\
0&0&1\\
0 & 0&0
\end{array}\right)$

Thus,

$\displaystyle {\mathbf x} = \alpha \left(\begin{array}{r}
1\\
0\\
0
\end{array}\right)  (\alpha \neq 0) . $

Next we find the eigenvector corresponding to $\lambda = 3$. Then

$\displaystyle A - 3I = \left(\begin{array}{rrr}
-1&1 & 0\\
0& -2 & -1\\
0&2&1
\end{array}\right) \longrightarrow \left(\begin{array}{rrr}
1&0 & \frac{1}{2}\\
0&1&\frac{1}{2}\\
0 & 0&0
\end{array}\right)$

Thus,

$\displaystyle {\mathbf x} = \beta \left(\begin{array}{r}
-1\\
-1\\
2
\end{array}\right)  (\beta \neq 0) . $

Therefore,

$\displaystyle V(2) = \{\alpha \left(\begin{array}{r}
1\\
0\\
0
\end{array}\right ) \}, V(3) = \{\beta \left(\begin{array}{r}
-1\\
-1\\
2
\end{array}\right ) \}.$

(c)

$\displaystyle \Phi_{A}(t)$ $\displaystyle =$ $\displaystyle \vert A - t I\vert = \left\vert\begin{array}{rrr}
1-t & 4 & -4\\
-1&-3 - t & 2\\
0 & 2& -1 - t
\end{array}\right\vert$  
  $\displaystyle =$ $\displaystyle (1 - t)(t^2 + 4t - 1) + (-4t + 4)
= (1 - t)(t^2 + 4t + 3)$  
  $\displaystyle =$ $\displaystyle (1 - t)(t + 1)(t + 3) .$  

Then the eigenvalues of $A$ are $\lambda = -3,-1,1$.

We find the eigenvector corresponding to $\lambda = -3$. Then

$\displaystyle A + 3I = \left(\begin{array}{rrr}
4&4&-4\\
-1&0&2\\
0&2&2
\end{array}\right) \longrightarrow \left(\begin{array}{rrr}
1&0&-2\\
0&1&1\\
0&0&0
\end{array}\right)$

Thus,

$\displaystyle {\mathbf x} = \alpha \left(\begin{array}{r}
2\\
-1\\
1
\end{array}\right)  (\alpha \neq 0) . $

We find the eigenvector corresponding to $\lambda = -1$. Then

$\displaystyle A + I = \left(\begin{array}{rrr}
2&4&-4\\
-1&-2&2\\
0&2&0
\end{array}\right) \longrightarrow \left(\begin{array}{rrr}
1&0&-2\\
0&1&0\\
0 & 0&0
\end{array}\right)$

Thus,

$\displaystyle {\mathbf x} = \beta \left(\begin{array}{r}
2\\
0\\
1
\end{array}\right)  (\beta \neq 0) . $

Finally, we find the eigenvector corresponding to $\lambda = 1$. Then

$\displaystyle A - I = \left(\begin{array}{rrr}
0&4&-4\\
-1&-4&2\\
0&2&-2
\end{array}\right) \longrightarrow \left(\begin{array}{rrr}
1&0 & 2\\
0&1&-1\\
0 & 0&0
\end{array}\right)$

Thus,

$\displaystyle {\mathbf x} = \gamma \left(\begin{array}{r}
-2\\
1\\
1
\end{array}\right)  (\gamma \neq 0) . $

Therefore, the eigenspaces are

$\displaystyle V(-3) = \{\alpha \left(\begin{array}{r}
2\\
-1\\
1
\end{array}\right ) \},  V(-1) = \{\beta \left(\begin{array}{r}
2\\
0\\
1
\end{array}\right ) \},  V(1) = \{\gamma \left(\begin{array}{r}
-2\\
1\\
1
\end{array}\right ) \} . $
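The eigenpairs found above can be sanity-checked numerically. This is a quick sketch, not part of the original solution, assuming NumPy is available:

```python
import numpy as np

# Matrix from part (c) together with the eigenpairs found above
A = np.array([[1.0, 4.0, -4.0],
              [-1.0, -3.0, 2.0],
              [0.0, 2.0, -1.0]])
pairs = [(-3.0, [2.0, -1.0, 1.0]),
         (-1.0, [2.0, 0.0, 1.0]),
         (1.0, [-2.0, 1.0, 1.0])]
for lam, v in pairs:
    v = np.array(v)
    # an eigenpair must satisfy A v = lambda v
    assert np.allclose(A @ v, lam * v)
```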

4. Let $\lambda$ be an eigenvalue of $A$. Since $A^{2} = A$,

$\displaystyle \lambda {\mathbf x} = A{\mathbf x} = A^{2}{\mathbf x} = A(A{\mathbf x}) = A(\lambda {\mathbf x}) = \lambda A{\mathbf x} = \lambda^{2}{\mathbf x} . $

Thus we have $\lambda = \lambda^2$. Therefore, $\lambda = 0,1$.
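As a numerical illustration (our own example, not from the text), an idempotent matrix indeed has only 0 and 1 as eigenvalues:

```python
import numpy as np

# A sample idempotent matrix: A @ A == A
A = np.array([[1.0, 1.0],
              [0.0, 0.0]])
assert np.allclose(A @ A, A)

# every eigenvalue of an idempotent matrix is 0 or 1
vals = np.sort(np.linalg.eigvals(A).real)
assert np.allclose(vals, [0.0, 1.0])
```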

5. Let $\lambda_{i}$ be an eigenvalue of $A$. Then $A{\mathbf x} = \lambda_{i}{\mathbf x}$ implies that

$\displaystyle A^{m}{\mathbf x} = A^{m-1}(A{\mathbf x}) = A^{m-1}(\lambda_{i} {\mathbf x}) = \cdots = \lambda_{i}^{m} {\mathbf x} . $
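The relation $A^{m}{\mathbf x} = \lambda_{i}^{m}{\mathbf x}$ can be checked numerically; the matrix below is our own sample (a symmetric matrix with eigenvalues 1 and 3), not taken from the text:

```python
import numpy as np

# sample diagonalizable matrix with eigenvalues 1 and 3
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, V = np.linalg.eig(A)
m = 4
Am = np.linalg.matrix_power(A, m)
# if A x = lam x, then A^m x = lam^m x
for i in range(2):
    assert np.allclose(Am @ V[:, i], lam[i]**m * V[:, i])
```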

6. $A = \left(\begin{array}{rr}
3&1\\
-1&1
\end{array}\right)$ implies that

$\displaystyle \Phi_{A}(t) = \det(A - t I) = \left\vert\begin{array}{rr}
3 - t& 1\\
-1&1 - t
\end{array}\right\vert = t^2 - 4 t + 4 = 0 . $

Thus by the Cayley-Hamilton theorem,

$\displaystyle \Phi_{A}(A) = A^2 - 4A + 4I = 0. $

Here, since $A^4 = (A^2 - 4A + 4I)(A^2 + 4A + 12I) + 32A - 48I$, we have
$\displaystyle A^4$ $\displaystyle =$ $\displaystyle 32A - 48I = 32\left(\begin{array}{rr}
3&1\\
-1&1
\end{array}\right) - 48\left(\begin{array}{rr}
1&0\\
0&1
\end{array}\right)$  
  $\displaystyle =$ $\displaystyle \left(\begin{array}{rr}
96&32\\
-32&32
\end{array}\right) - \left(\begin{array}{rr}
48&0\\
0&48
\end{array}\right) = \left(\begin{array}{rr}
48&32\\
-32&-16
\end{array}\right) .$  

Note that $A^2 - 4A + 4I = 0$ implies that $A - 4I + 4A^{-1} = 0$. Thus,
$\displaystyle A^{-1}$ $\displaystyle =$ $\displaystyle \frac{1}{4}(4I - A) = \frac{1}{4}(\left(\begin{array}{rr}
4&0\\
0&4
\end{array}\right) - \left(\begin{array}{rr}
3&1\\
-1&1
\end{array}\right))$  
  $\displaystyle =$ $\displaystyle \frac{1}{4}\left(\begin{array}{cc}
1&-1\\
1&3
\end{array}\right) .$  
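The Cayley-Hamilton computations above admit a quick numerical check (a sketch, not part of the original solution):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [-1.0, 1.0]])
I = np.eye(2)

# Cayley-Hamilton: A^2 - 4A + 4I = 0
assert np.allclose(A @ A - 4*A + 4*I, 0)

# hence A^4 = 32A - 48I, from dividing t^4 by t^2 - 4t + 4
A4 = np.linalg.matrix_power(A, 4)
assert np.allclose(A4, 32*A - 48*I)
assert np.allclose(A4, [[48.0, 32.0], [-32.0, -16.0]])

# and A^{-1} = (1/4)(4I - A)
assert np.allclose(np.linalg.inv(A), (4*I - A)/4)
```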

7. Note that $X^2 - 3X + 2I = 0$ implies that $(X - I)(X - 2I) = 0$. Then $X = I$ or $X = 2I$ satisfies the equation. Next let $X = \left(\begin{array}{rr}
a&b\\
c&d
\end{array}\right)$. Then by the Cayley-Hamilton theorem, $\Phi_{X}(X) = 0$. Thus, we look for $X$ whose characteristic polynomial is $\Phi_{X}(t) = t^2 - 3t + 2$.

$\displaystyle \Phi_{X}(t) = \det(X - tI) = \left\vert\begin{array}{rr}
a-t&b\\
c&d-t
\end{array}\right\vert = t^2 -(a+d)t + ad -bc .$

Thus the required $X$ satisfies the condition $a+d = 3,  ad - bc = 2$.
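Any matrix meeting the condition $a+d=3$, $ad-bc=2$ satisfies the equation; here is a numerical spot check with a sample choice of entries (our own, not from the text):

```python
import numpy as np

# sample X with a + d = 3 and ad - bc = 2:
# a=0, d=3, b=1, c=-2 gives ad - bc = 0 - (-2) = 2
X = np.array([[0.0, 1.0],
              [-2.0, 3.0]])
I = np.eye(2)
assert np.allclose(X @ X - 3*X + 2*I, 0)
```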

Chapter 4

Exercise4.2.1

1.

(a) $A = \left(\begin{array}{rr}
1&2\\
0&-1
\end{array}\right)$. Then

$\displaystyle \Phi_{A}(t) = \left\vert\begin{array}{rr}
1 - t&2\\
0&-1-t
\end{array}\right\vert = t^2 - 1 . $

Thus the eigenvalues are $\lambda = \pm 1$.

For $\lambda = 1$, we have to solve the equation $(A - I){\mathbf x} = {\bf0}$ for ${\mathbf x} \neq 0$.

$\displaystyle A - I = \left(\begin{array}{rr}
0&2\\
0&-2
\end{array}\right) \longrightarrow \left(\begin{array}{rr}
0&1\\
0&0
\end{array}\right)$

Thus, we have

$\displaystyle {\mathbf x} = \alpha \left(\begin{array}{r}
1\\
0
\end{array}\right)  (\alpha \neq 0) . $

We next find the eigenvector corresponding to $\lambda = -1$.

$\displaystyle A + I = \left(\begin{array}{rr}
2&2\\
0&0
\end{array}\right) \longrightarrow \left(\begin{array}{rr}
1&1\\
0&0
\end{array}\right)$

Thus, we have

$\displaystyle {\mathbf x} = \beta \left(\begin{array}{r}
-1\\
1
\end{array}\right)  (\beta \neq 0) . $

Therefore, the eigenspaces are

$\displaystyle V(1) = \{\alpha \left(\begin{array}{r}
1\\
0
\end{array}\right ) \}, V(-1) = \{\beta \left(\begin{array}{r}
-1\\
1
\end{array}\right ) \} . $

Now the number of linearly independent eigenvectors is equal to the size of the matrix. By Theorem 4.1, it is diagonalizable, and for $P = \left(\begin{array}{rr}
1&-1\\
0&1
\end{array}\right)$, we have

$\displaystyle P^{-1}AP = \left(\begin{array}{cc}
1&0\\
0&-1
\end{array}\right) . $
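A numerical check of this diagonalization (a sketch, not part of the original solution):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, -1.0]])
P = np.array([[1.0, -1.0],
              [0.0, 1.0]])
# the columns of P are the eigenvectors, so P^{-1} A P is diagonal
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag([1.0, -1.0]))
```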

(b) $A = \left(\begin{array}{rrr}
2&1&1\\
1&2&1\\
0&0&1
\end{array}\right)$ implies that

$\displaystyle \Phi_{A}(t) = \left\vert\begin{array}{rrr}
2 - t&1&1\\
1&2-t&1\\
0&0&1-t
\end{array}\right\vert = (1-t)(t^2 -4t+3) . $

Thus the eigenvalues are $\lambda = 1,3$.

We find the eigenvector corresponding to $\lambda = 1$. Then

$\displaystyle A - I = \left(\begin{array}{rrr}
1&1&1\\
1&1&1\\
0&0&0
\end{array}\right) \longrightarrow \left(\begin{array}{rrr}
1&1&1\\
0&0&0\\
0&0&0
\end{array}\right)$

Thus, we have

$\displaystyle {\mathbf x} = \left(\begin{array}{c}
-\alpha - \beta\\
\beta\\
\alpha
\end{array}\right) = \alpha\left(\begin{array}{r}
-1\\
0\\
1
\end{array}\right) + \beta\left(\begin{array}{r}
-1\\
1\\
0
\end{array}\right) . $

We find the eigenvector corresponding to $\lambda = 3$. Then

$\displaystyle A - 3I = \left(\begin{array}{rrr}
-1&1&1\\
1&-1&1\\
0&0&-2
\end{array}\right) \longrightarrow \left(\begin{array}{rrr}
1&-1&-1\\
0&0&1\\
0&0&0
\end{array}\right)$

Thus,

$\displaystyle {\mathbf x} = \gamma \left(\begin{array}{r}
1\\
1\\
0
\end{array}\right)  (\gamma \neq 0) . $

Therefore, the eigenspaces are

$\displaystyle V(1) = \{\alpha \left(\begin{array}{r}
-1\\
0\\
1
\end{array}\right ) + \beta \left(\begin{array}{r}
-1\\
1\\
0
\end{array}\right ) \},  V(3) = \{\gamma \left(\begin{array}{r}
1\\
1\\
0
\end{array}\right ) \} . $

Since the number of linearly independent eigenvectors is equal to the size of the matrix, by Theorem 4.1 it is diagonalizable, and for $P = \left(\begin{array}{rrr}
-1&-1&1\\
0&1&1\\
1&0&0
\end{array}\right)$, we have

$\displaystyle P^{-1}AP = \left(\begin{array}{ccc}
1&0&0\\
0&1&0\\
0&0&3
\end{array}\right) . $
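Again, this diagonalization can be verified numerically (a sketch, not part of the original solution):

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [0.0, 0.0, 1.0]])
P = np.array([[-1.0, -1.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 0.0]])
# columns of P: two eigenvectors for lambda = 1, one for lambda = 3
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag([1.0, 1.0, 3.0]))
```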

(c) $A = \left(\begin{array}{rrr}
1&1&6\\
-1&3&6\\
1&-1&-1
\end{array}\right)$. Then

$\displaystyle \Phi_{A}(t)$ $\displaystyle =$ $\displaystyle \left\vert\begin{array}{rrr}
1 - t&1&6\\
-1&3-t&6\\
1&-1&-1-t
\end{array}\right\vert$  
  $\displaystyle =$ $\displaystyle (1-t)(t^2 -2t+3)-(t-5)+6(t-2)$  
  $\displaystyle =$ $\displaystyle -t^3 + 3t^2 - 5t + 3 + 5t -7 = -(t^3 - 3t^2 + 4)$  
  $\displaystyle =$ $\displaystyle -(t+1)(t-2)^2 .$  

Thus the eigenvalues are $\lambda = -1,2$.

We find the eigenvector corresponding to $\lambda = -1$.

$\displaystyle A + I$ $\displaystyle =$ $\displaystyle \left(\begin{array}{rrr}
2&1&6\\
-1&4&6\\
1&-1&0
\end{array}\right) \longrightarrow \left(\begin{array}{rrr}
1&-1&0\\
0&3&6\\
0&3&6
\end{array}\right)$  
  $\displaystyle \longrightarrow$ $\displaystyle \left(\begin{array}{rrr}
1&-1&0\\
0&1&2\\
0&0&0
\end{array}\right) \longrightarrow \left(\begin{array}{rrr}
1&0&2\\
0&1&2\\
0&0&0
\end{array}\right)$  

Thus,

$\displaystyle {\mathbf x} = \alpha\left(\begin{array}{r}
-2\\
-2\\
1
\end{array}\right) . $

We next find the eigenvector corresponding to $\lambda = 2$.

$\displaystyle A - 2I = \left(\begin{array}{rrr}
-1&1&6\\
-1&1&6\\
1&-1&-3
\end{array}\right) \longrightarrow \left(\begin{array}{rrr}
1&-1&0\\
0&0&0\\
0&0&1
\end{array}\right)$

Thus,

$\displaystyle {\mathbf x} = \beta \left(\begin{array}{r}
1\\
1\\
0
\end{array}\right)  (\beta \neq 0) . $

Therefore, the eigenspaces are

$\displaystyle V(-1) = \{\alpha \left(\begin{array}{r}
-2\\
-2\\
1
\end{array}\right ) \},  V(2) = \{\beta \left(\begin{array}{r}
1\\
1\\
0
\end{array}\right ) \} . $

Thus,

$\displaystyle \dim V(-1) + \dim V(2) = 2 < 3 $

and by Theorem 4.1, it is not diagonalizable. Now using

$\displaystyle {\mathbf x}_{1} = \left(\begin{array}{r}
-2\\
-2\\
1
\end{array}\right ),  {\mathbf x}_{2} = \left(\begin{array}{r}
1\\
1\\
0
\end{array}\right) $

we create an orthonormal basis $\{{\bf u}_{1},{\bf u}_{2},{\bf u}_{3}\}$. Then

$\displaystyle {\bf u}_{1} = \frac{{\mathbf x}_{1}}{\Vert{\mathbf x}_{1}\Vert} = \frac{1}{3}\left(\begin{array}{r}
-2\\
-2\\
1
\end{array}\right) , $

$\displaystyle {\bf u}_{2} = \frac{{\mathbf x}_{2} - ({\mathbf x}_{2},{\bf u}_{1}){\bf u}_{1}}{\Vert{\mathbf x}_{2} - ({\mathbf x}_{2},{\bf u}_{1}){\bf u}_{1}\Vert} = \frac{1}{3\sqrt{2}}\left(\begin{array}{r}
1\\
1\\
4
\end{array}\right) , $

$\displaystyle {\bf u}_{3} = \frac{{\bf u}_{2} \times {\bf u}_{1}}{\Vert{\bf u}_{2} \times {\bf u}_{1}\Vert} = \frac{1}{\sqrt{2}}\left(\begin{array}{r}
1\\
-1\\
0
\end{array}\right) $

Therefore, letting $U = \left(\begin{array}{rrr}
-\frac{2}{3}&\frac{1}{3\sqrt{2}}&\frac{1}{\sqrt{2}}\\
-\frac{2}{3}&\frac{1}{3\sqrt{2}}&-\frac{1}{\sqrt{2}}\\
\frac{1}{3}&\frac{4}{3\sqrt{2}}&0
\end{array}\right)$, $U$ is a unitary matrix and

$\displaystyle U^{-1}AU = U^{t}AU = \left(\begin{array}{ccc}
-1&-6\sqrt{2}&\frac{5\sqrt{2}}{3}\\
0&2&\frac{2}{3}\\
0&0&2
\end{array}\right), $

which is upper triangular with the eigenvalues $-1,2,2$ on the diagonal.
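A numerical check (our own sketch, not part of the original solution) that $U$ is orthogonal and that $U^{t}AU$ comes out upper triangular with the eigenvalues on the diagonal:

```python
import numpy as np

A = np.array([[1.0, 1.0, 6.0],
              [-1.0, 3.0, 6.0],
              [1.0, -1.0, -1.0]])
s = 3*np.sqrt(2)
U = np.array([[-2/3, 1/s,  1/np.sqrt(2)],
              [-2/3, 1/s, -1/np.sqrt(2)],
              [ 1/3, 4/s,  0.0]])
assert np.allclose(U.T @ U, np.eye(3))            # U is orthogonal
T = U.T @ A @ U
assert np.allclose(np.tril(T, -1), 0.0)           # strictly lower part vanishes
assert np.allclose(np.diag(T), [-1.0, 2.0, 2.0])  # eigenvalues on the diagonal
```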

2. Suppose first that $U + W$ is a direct sum; we show $U \cap W = \{{\bf0}\}$. Let ${\bf a} \in U \cap W$. Then ${\bf a} \in U$ and ${\bf a} \in W$. Thus, ${\bf a} = {\bf a} + {\bf0} = {\bf0} + {\bf a}$. Since $U + W$ is a direct sum, this expression is unique, which implies that ${\bf a} = {\bf0}$. Thus, $U \cap W = \{{\bf0}\}$. Conversely, suppose $U \cap W = \{{\bf0}\}$ and that ${\bf a} \in U + W$ is expressed as

$\displaystyle {\bf a} = u_{1} + w_{1} = u_{2} + w_{2}   (u_{1},u_{2} \in U,  w_{1},w_{2} \in W) $

Then,

$\displaystyle u_{1} - u_{2} = w_{2} - w_{1} \in U \cap W = \{{\bf0}\} $

implies that $u_{1} - u_{2} = w_{2} - w_{1} = {\bf0}$, that is, $u_{1} = u_{2}$ and $w_{1} = w_{2}$. This shows that the expression above is unique. Thus, $U + W$ is a direct sum.

3. By Theorem 1.4, $\dim (U + W) =\dim U + \dim W - \dim (U \cap W)$. Also, if $U + W$ is a direct sum, then $U \cap W = \{{\bf0}\}$ and $\dim(U \cap W) = 0$. Thus, $\dim (U \oplus W) = \dim U + \dim W.$

4. We first show that $U + W$ is a direct sum. By Exercise4.1, it is enough to show $U \cap W = \{{\bf0}\}$. Let $(x_{1},x_{2},x_{3}) \in U \cap W$. Then

$\displaystyle x_{1} + x_{2} + x_{3} = 0,  x_{1} = x_{2} = x_{3}. $

Then $x_{1} = x_{2} = x_{3} = 0$ and $U \cap W = \{(0,0,0)\}$.

We next show that $R^3 = U \oplus W$. Since $U \subset R^3,  W \subset R^3$, $U \oplus W \subset R^3$. Also, $\dim U = 2, \dim W = 1$ implies that

$\displaystyle \dim (U \oplus W) = \dim U + \dim W = 2 + 1 = 3 $

Thus, $R^3 = U \oplus W$.

5. Let $\lambda$ be an eigenvalue of the orthogonal matrix $A$. Then since $A = A^{-1}$, we have

$\displaystyle \lambda {\mathbf x} = A{\mathbf x} = A^{-1}{\mathbf x} = \lambda^{-1}{\mathbf x} . $

Thus, $\lambda^{2} = 1$.

6. Since $U = ({\bf u}_{1},{\bf u}_{2},\ldots,{\bf u}_{n})$, we have

$\displaystyle U^{*} = \left(\begin{array}{c}
\overline{{\bf u}_{1}}^{t} \\
\overline{{\bf u}_{2}}^{t}\\
\vdots\\
\overline{{\bf u}_{n}}^{t}
\end{array}\right)$

Thus,

$\displaystyle U^{*}U = \left(\begin{array}{cccc}
\overline{{\bf u}_{1}} \cdot {\bf u}_{1}&\overline{{\bf u}_{1}} \cdot {\bf u}_{2}&\cdots&\overline{{\bf u}_{1}} \cdot {\bf u}_{n}\\
\vdots&\vdots& &\vdots\\
\overline{{\bf u}_{n}} \cdot {\bf u}_{1}&\overline{{\bf u}_{n}} \cdot {\bf u}_{2}&\cdots&\overline{{\bf u}_{n}} \cdot {\bf u}_{n}
\end{array}\right) = I . $

Also, since $U^{*}U = I$, we have $U^{*} = U^{-1}$, which implies that $UU^{*} = I$.

Exercise4.4.1

1. Let $A = \left(\begin{array}{cc}
1&1-i\\
1+i&2
\end{array}\right)$. Then $A^{*} = \left(\begin{array}{cc}
1&1-i\\
1+i&2
\end{array}\right) = A$. Thus, $AA^{*} = A^{*}A$. Therefore, $A$ is diagonalizable by a unitary matrix.

$\displaystyle \Phi_{A}(t) = \det(A - tI) = \left\vert\begin{array}{cc}
1-t&1-i\\
1+i&2-t
\end{array}\right\vert = t^2 -3t + 2 - 2 = t(t-3) $

implies that the eigenvalues are $\lambda = 0,3$.

We find the eigenvector corresponding to $\lambda = 0$.

$\displaystyle \left(\begin{array}{cc}
1&1-i\\
1+i&2
\end{array}\right) \longrightarrow \left(\begin{array}{cc}
1&1-i\\
0&0
\end{array}\right) . $

Thus,

$\displaystyle V(0) = \{\alpha\left(\begin{array}{c}
1-i\\
-1
\end{array}\right) \}.$

We next find the eigenvector corresponding to $\lambda = 3$.

$\displaystyle \left(\begin{array}{cc}
-2&1-i\\
1+i&-1
\end{array}\right) \longrightarrow \left(\begin{array}{cc}
1&\frac{1-i}{-2}\\
0&0
\end{array}\right) . $

Thus,

$\displaystyle V(3) = \{\beta\left(\begin{array}{c}
1-i\\
2
\end{array}\right) \}. $

Now let the orthonormal bases of $V(0)$ and $V(3)$ be

$\displaystyle \{\left(\begin{array}{c}
\frac{1-i}{\sqrt{3}}\\
\frac{-1}{\sqrt{3}}
\end{array}\right) \},  \{\left(\begin{array}{c}
\frac{1-i}{\sqrt{6}}\\
\frac{2}{\sqrt{6}}
\end{array}\right) \} $

Then we can find the unitary matrix

$\displaystyle U = \left(\begin{array}{cc}
\frac{1-i}{\sqrt{3}}&\frac{1-i}{\sqrt{6}}\\
\frac{-1}{\sqrt{3}}&\frac{2}{\sqrt{6}}
\end{array}\right) $

and

$\displaystyle U^{-1}AU = U^{*}AU = \left(\begin{array}{cc}
0&0\\
0&3
\end{array}\right). $
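A numerical check of this unitary diagonalization (a sketch, not part of the original solution):

```python
import numpy as np

A = np.array([[1.0, 1-1j],
              [1+1j, 2.0]])
U = np.array([[(1-1j)/np.sqrt(3), (1-1j)/np.sqrt(6)],
              [-1/np.sqrt(3),      2/np.sqrt(6)]])
assert np.allclose(U.conj().T @ U, np.eye(2))   # U is unitary
D = U.conj().T @ A @ U
assert np.allclose(D, np.diag([0.0, 3.0]))      # U* A U = diag(0, 3)
```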

2. $A = \left(\begin{array}{rrr}
1&0&-1\\
0&-1&0\\
-1&0&1
\end{array}\right)$ implies that $A$ is a real symmetric matrix. Thus by Theorem 4.2, it is diagonalizable by a unitary matrix.

$\displaystyle \Phi_{A}(t)$ $\displaystyle =$ $\displaystyle \det(A - tI) = \left\vert\begin{array}{rrr}
1-t&0&-1\\
0&-1-t&0\\
-1&0&1-t
\end{array}\right\vert$  
  $\displaystyle =$ $\displaystyle -(1+t)(t^2-2t+1-1) = -t(1+t)(t-2)$  

implies that the eigenvalues are $\lambda = -1,0,2$.

We find the eigenvector corresponding to $\lambda = -1$.

$\displaystyle A + I$ $\displaystyle =$ $\displaystyle \left(\begin{array}{rrr}
2&0&-1\\
0&0&0\\
-1&0&2
\end{array}\right) \longrightarrow \left(\begin{array}{rrr}
1&0&-2\\
0&0&0\\
0&0&3
\end{array}\right)$  
  $\displaystyle \longrightarrow$ $\displaystyle \left(\begin{array}{rrr}
1&0&-2\\
0&0&0\\
0&0&1
\end{array}\right) \longrightarrow \left(\begin{array}{rrr}
1&0&0\\
0&0&0\\
0&0&1
\end{array}\right) .$  

Thus,

$\displaystyle V(-1) = \{\alpha\left(\begin{array}{c}
0\\
1\\
0
\end{array}\right) \}.$

We next find the eigenvector corresponding to $\lambda = 0$.

$\displaystyle \left(\begin{array}{rrr}
1&0&-1\\
0&-1&0\\
-1&0&1
\end{array}\right) \longrightarrow \left(\begin{array}{rrr}
1&0&-1\\
0&1&0\\
0&0&0
\end{array}\right) . $

Thus,

$\displaystyle V(0) = \{\beta\left(\begin{array}{c}
1\\
0\\
1
\end{array}\right) \}. $

Then we find the eigenvector corresponding to $\lambda = 2$.

$\displaystyle \left(\begin{array}{rrr}
-1&0&-1\\
0&-3&0\\
-1&0&-1
\end{array}\right) \longrightarrow \left(\begin{array}{rrr}
1&0&1\\
0&1&0\\
0&0&0
\end{array}\right) . $

Thus,

$\displaystyle V(2) = \{\gamma\left(\begin{array}{c}
-1\\
0\\
1
\end{array}\right) \}. $

Next let the orthonormal bases of $V(-1),V(0),V(2)$ be as follows:

$\displaystyle \{\left(\begin{array}{c}
0\\
1\\
0
\end{array}\right) \},  \{\left(\begin{array}{c}
\frac{1}{\sqrt{2}}\\
0\\
\frac{1}{\sqrt{2}}
\end{array}\right) \},  \{\left(\begin{array}{c}
\frac{-1}{\sqrt{2}}\\
0\\
\frac{1}{\sqrt{2}}
\end{array}\right) \} $

Then,

$\displaystyle P = \left(\begin{array}{rrr}
0&\frac{1}{\sqrt{2}}&\frac{-1}{\sqrt{2}}\\
1&0&0\\
0&\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}
\end{array}\right) $

is an orthogonal matrix and

$\displaystyle P^{-1}AP = P^{t}AP = \left(\begin{array}{rrr}
-1&0&0\\
0&0&0\\
0&0&2
\end{array}\right). $

3. $A = \left(\begin{array}{ll}
0&a_{1}\\
a_{2}&0
\end{array}\right)$ is diagonalizable by a unitary matrix if and only if $A$ is a normal matrix, by Theorem 4.2. In other words, $AA^{*} = A^{*}A$.

$\displaystyle AA^{*} = \left(\begin{array}{ll}
0&a_{1}\\
a_{2}&0
\end{array}\right)\left(\begin{array}{ll}
0&\bar{a_{2}}\\
\bar{a_{1}}&0
\end{array}\right) = \left(\begin{array}{ll}
0&\bar{a_{2}}\\
\bar{a_{1}}&0
\end{array}\right)\left(\begin{array}{ll}
0&a_{1}\\
a_{2}&0
\end{array}\right) = A^{*}A $

Thus,

$\displaystyle \left(\begin{array}{ll}
a_{1}\bar{a_{1}}&0\\
0&a_{2}\bar{a_{2}}
\end{array}\right) = \left(\begin{array}{ll}
a_{2}\bar{a_{2}}&0\\
0&a_{1}\bar{a_{1}}
\end{array}\right) $

This shows that $\vert a_{1}\vert = \vert a_{2}\vert$.

4. Express $x_{1}^2 + 2x_{2}^2 - 3x_{3}^2 + 2x_{1}x_{2}$ in matrix form. We have

$\displaystyle (x_{1},x_{2},x_{3})\left(\begin{array}{ccc}
1&1&0\\
1&2&0\\
0&0&-3
\end{array}\right)\left(\begin{array}{c}
x_{1}\\
x_{2}\\
x_{3}
\end{array}\right) . $

Here $A = \left(\begin{array}{ccc}
1&1&0\\
1&2&0\\
0&0&-3
\end{array}\right)$ is a real symmetric matrix. Thus by Theorem 4.2, it is diagonalizable by a unitary matrix.

$\displaystyle \Phi_{A}(t) = \left\vert\begin{array}{ccc}
1-t&1&0\\
1&2-t&0\\
0&0&-3-t
\end{array}\right\vert = (-3-t)(t^2 -3t + 1) $

implies that $\lambda = -3, \frac{3 \pm \sqrt{5}}{2}$. Thus,

$\displaystyle {\mathbf x}^{t}A{\mathbf x} = {\mathbf y}^{t}(P^{t}AP){\mathbf y} = -3y_{1}^2 + \frac{3 - \sqrt{5}}{2}y_{2}^2 + \frac{3 + \sqrt{5}}{2}y_{3}^2. $
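The eigenvalues of the quadratic-form matrix can be confirmed numerically (a sketch, not part of the original solution):

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, -3.0]])
# eigvalsh is appropriate for symmetric matrices
vals = np.sort(np.linalg.eigvalsh(A))
expected = np.sort([-3.0, (3 - np.sqrt(5))/2, (3 + np.sqrt(5))/2])
assert np.allclose(vals, expected)
```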

5.
Express $x_{1}\bar{x_{1}} + (1-i)x_{1}\bar{x_{2}} + (1+i)x_{2}\bar{x_{1}} + 2x_{2}\bar{x_{2}}$ in matrix form:

$\displaystyle (\bar{x_{1}},\bar{x_{2}})\left(\begin{array}{cc}
1&1-i\\
1+i&2
\end{array}\right)\left(\begin{array}{c}
x_{1}\\
x_{2}
\end{array}\right) $

Here $A$ is a Hermitian matrix. So, by Theorem 4.2, it can be diagonalized by a unitary matrix.

$\displaystyle \Phi_{A}(t) = \left\vert\begin{array}{cc}
1-t&1-i\\
1+i&2-t\\
\end{array}\right\vert = t(t-3) $

implies that $\lambda = 0,3$. Thus

$\displaystyle {\mathbf x}^{*}A{\mathbf x} = {\mathbf y}^{*}(U^{*}AU){\mathbf y} = 3\bar{y_{2}}y_{2}. $

Chapter 5

1. (a)

$\displaystyle \left(\begin{array}{ccc}
8&0&0\\
0&4&-1\\
0&0&2
\end{array}\ri...
...^{-1}AP = \left(\begin{array}{ccc}
0&1&0\\
0&0&0\\
0&0&0
\end{array}\right) $

(b)

$\displaystyle \left(\begin{array}{ccc}
0&6&1\\
0&0&-3\\
18&0&0
\end{array}\r...
...^{-1}AP = \left(\begin{array}{ccc}
0&1&0\\
0&0&1\\
0&0&0
\end{array}\right) $

2. (a)

$\displaystyle \left(\begin{array}{ccc}
-2&0&1\\
1&-1&-1\\
1&0&0
\end{array}\...
...^{-1}AP = \left(\begin{array}{ccc}
3&1&0\\
0&3&1\\
0&0&3
\end{array}\right) $

(b)

$\displaystyle \left(\begin{array}{ccc}
2&1&1\\
2&0&1\\
-1&0&-1
\end{array}\r...
...1}AP = \left(\begin{array}{ccc}
-1&1&0\\
0&-1&0\\
0&0&-3
\end{array}\right) $