Linear Mapping

$\spadesuit$ Mapping $\spadesuit$

In Chapters 1 and 2, we studied vector spaces. In this chapter, we study the relationship between two vector spaces. Before going into detail, we review mappings.

Consider two vector spaces $V$ and $W$. Suppose that to each element $v$ of $V$ there is assigned a unique element $w$ of $W$; the collection $T$ of such assignments is called a mapping from $V$ to $W$, and is written

$\displaystyle T : V \longrightarrow W.$

Using this idea, an $m \times n$ matrix represents a mapping which maps an $n$-dimensional column vector to an $m$-dimensional column vector. In other words, it is a mapping from ${\mathcal R}^{n}$ to ${\mathcal R}^{m}$.
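As a concrete sketch, the following Python/NumPy fragment treats an arbitrary $2 \times 3$ matrix as a mapping from ${\mathcal R}^{3}$ to ${\mathcal R}^{2}$; the matrix and vector values are arbitrary choices, not from the text.

```python
import numpy as np

# An arbitrary 2x3 matrix: as a mapping, it sends R^3 to R^2.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

x = np.array([1.0, 0.0, -1.0])   # a column vector in R^3
w = A @ x                        # its image, a column vector in R^2
print(w)                         # [-2. -2.]
```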

$\spadesuit$ Linear Mapping $\spadesuit$

In a vector space, an addition and a scalar multiplication are defined. So, a mapping between two vector spaces should preserve addition and scalar multiplication.

Definition 3.1

Let $V, W$ be vector spaces. Then the mapping $T : V \longrightarrow W$ is called a linear mapping if it satisfies the following two conditions.
$1. \ T({\bf v} + {\bf w}) = T({\bf v}) + T({\bf w}) \quad ({\bf v},{\bf w} \in V)$
$2. \ T(\alpha {\bf v}) = \alpha T({\bf v}) \quad ({\bf v} \in V, \alpha \in R)$
For $V = W$, we say the mapping $T$ is a linear transformation of $V$.

A linear mapping from $V$ to $W$ is a mapping which preserves the vector space operations. The image of $T$, written ${\rm Im}(T)$, is the set of image points in $W$:

$\displaystyle {\rm Im}(T) = T(V) = \{{\bf w} \in W : T({\bf v}) = {\bf w} \mbox{ for some } {\bf v} \in V\}.$

The kernel of $T$, written $\ker(T)$, is the set of elements in $V$ which map to ${\bf 0} \in W$:

$\displaystyle \ker(T) = \{{\bf v} \in V : T({\bf v}) = {\bf 0}\}.$

The following theorem is easily proven.

Theorem 3.1

Let $T : V \to W$ be a linear mapping. Then the image of $T$ is a subspace of $W$ and the kernel of $T$ is a subspace of $V$.
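For $T({\mathbf x}) = A{\mathbf x}$, these two subspaces can be computed concretely: ${\rm Im}(T)$ is the column space of $A$ and $\ker(T)$ is its null space. Below is a minimal numerical sketch using the singular value decomposition; the matrix and the tolerance are arbitrary assumptions of this sketch.

```python
import numpy as np

# For T(x) = A x, Im(T) is the column space of A and ker(T) is its null space.
A = np.array([[1.0, 1.0, 1.0],
              [2.0, 2.0, 2.0]])

U, s, Vt = np.linalg.svd(A)
tol = 1e-10
r = int((s > tol).sum())      # rank of A

im_basis = U[:, :r]           # orthonormal basis of Im(T), a subspace of R^2
ker_basis = Vt[r:, :].T       # orthonormal basis of ker(T), a subspace of R^3
print(r, im_basis.shape, ker_basis.shape)   # 1 (2, 1) (3, 2)
print(np.allclose(A @ ker_basis, 0))        # True: kernel vectors map to 0
```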

Example 3.1

Let $A$ be an $m \times n$ matrix, and define

$\displaystyle T : {\mathcal R}^{n} \longrightarrow {\mathcal R}^{m}$

by $T({\mathbf x}) = A{\mathbf x}$. Show that $T$ is a linear mapping.

Answer For any vectors ${\mathbf x}_{1},{\mathbf x}_{2} \in {\mathcal R}^{n}$ and any real number $\alpha$, we have

$\displaystyle T({\mathbf x}_{1} + {\mathbf x}_{2}) = A({\mathbf x}_{1} + {\mathbf x}_{2}) = A{\mathbf x}_{1} + A{\mathbf x}_{2} = T({\mathbf x}_{1}) + T({\mathbf x}_{2}),$

$\displaystyle T(\alpha {\mathbf x}_{1}) = A(\alpha {\mathbf x}_{1}) = \alpha A{\mathbf x}_{1} = \alpha T({\mathbf x}_{1}).$

Thus, $T$ is a linear mapping. $\blacksquare$
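This argument can also be checked numerically. A minimal Python/NumPy sketch, with an arbitrary random matrix standing in for $A$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))    # an arbitrary 3x4 matrix, so T : R^4 -> R^3
T = lambda x: A @ x

x1, x2 = rng.standard_normal(4), rng.standard_normal(4)
alpha = 2.5

print(np.allclose(T(x1 + x2), T(x1) + T(x2)))     # condition 1 holds
print(np.allclose(T(alpha * x1), alpha * T(x1)))  # condition 2 holds
```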

Suppose that the linear mapping $T : V \longrightarrow W$ satisfies the following condition:

$\displaystyle {\bf v}_{1} \neq {\bf v}_{2} \mbox{ implies } T({\bf v}_{1}) \neq T({\bf v}_{2}) \quad ({\bf v}_{1},{\bf v}_{2} \in V).$

Then $T$ is called one-to-one or injective.

If a linear mapping $T : V \longrightarrow W$ satisfies ${\rm Im}(T) = W$, then $T$ is called an onto mapping from $V$ to $W$, or surjective.
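For the matrix mappings of Example 3.1, both properties can be read off from the rank: $T({\mathbf x}) = A{\mathbf x}$ is injective exactly when ${\rm rank}(A) = n$ and surjective exactly when ${\rm rank}(A) = m$. A minimal sketch follows; the helper names, tolerance, and example matrix are assumptions of this sketch.

```python
import numpy as np

def is_injective(A, tol=1e-10):
    # T(x) = A x is one-to-one iff rank(A) equals the number of columns n.
    return np.linalg.matrix_rank(A, tol) == A.shape[1]

def is_surjective(A, tol=1e-10):
    # T(x) = A x is onto iff rank(A) equals the number of rows m.
    return np.linalg.matrix_rank(A, tol) == A.shape[0]

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])                 # maps R^2 into R^3
print(is_injective(A), is_surjective(A))   # True False
```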

Example 3.2

Suppose that $S : U \longrightarrow V, T : V \longrightarrow W$ are linear mappings. Then show that the composition mapping $T \circ S$ is a linear mapping.

Answer

$\displaystyle (T \circ S)(\alpha{\bf u}_{1} + \beta{\bf u}_{2}) = T(S(\alpha{\bf u}_{1} + \beta{\bf u}_{2})) = T(\alpha S({\bf u}_{1}) + \beta S({\bf u}_{2})) = \alpha T(S({\bf u}_{1})) + \beta T(S({\bf u}_{2})) = \alpha (T \circ S)({\bf u}_{1}) + \beta (T \circ S)({\bf u}_{2}).$

Thus the composition mapping $T \circ S : U \longrightarrow W$ is a linear mapping. $\blacksquare$
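A numerical sketch of this fact, with arbitrary random matrices standing in for $S$ and $T$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))   # S : R^3 -> R^4
B = rng.standard_normal((2, 4))   # T : R^4 -> R^2

S = lambda u: A @ u
T = lambda v: B @ v
TS = lambda u: T(S(u))            # the composition T o S : R^3 -> R^2

u1, u2 = rng.standard_normal(3), rng.standard_normal(3)
a, b = 2.0, -3.0
# Linearity of the composition in a single check:
print(np.allclose(TS(a*u1 + b*u2), a*TS(u1) + b*TS(u2)))   # True
```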

$\spadesuit$ Isomorphic Mapping $\spadesuit$

A one-to-one and onto linear mapping is called an isomorphism. If there is an isomorphism from a vector space $V$ to a vector space $W$, then we say $V$ and $W$ are isomorphic, denoted by $V \sim W$. If $T : V \longrightarrow W$ is an isomorphism and $T({\bf v}) = {\bf w}$, then setting $S({\bf w}) = {\bf v}$ defines a mapping from $W$ to $V$. This mapping is called the inverse mapping of $T$ and is denoted by $S = T^{-1}$.

Theorem 3.2

Let $T : V \longrightarrow {\mathcal R}^{n}$ be a linear mapping. Then the following conditions are equivalent.
$(1)$ $\dim V = n$.
$(2)$ $T$ is an isomorphism. In other words, $V \sim {\mathcal R}^{n}$.
$(3)$ $\ker(T) = \{{\bf 0}\}, \ {\rm Im}(T) = {\mathcal R}^{n}$.
$(4)$ $T^{-1} : {\mathcal R}^{n} \longrightarrow V$ is an isomorphism.
$(5)$ If $\{{\bf w}_{1},{\bf w}_{2}, \ldots , {\bf w}_{n}\}$ is a basis of ${\mathcal R}^{n}$, then the set of images under $T^{-1}$,

$\{T^{-1}({\bf w}_{1}),T^{-1}({\bf w}_{2}), \ldots , T^{-1}({\bf w}_{n})\}$, is a basis of $V$.

Proof $(1) \Rightarrow (2)$
Let $\{{\bf v}_{1},{\bf v}_{2},\ldots,{\bf v}_{n}\}$ be a basis of $V$ and $\{{\bf w}_{1},{\bf w}_{2}, \ldots , {\bf w}_{n}\}$ be a basis of ${\mathcal R}^{n}$. Now define $T : V \longrightarrow {\mathcal R}^{n}$ by $T(\alpha_{1}{\bf v}_{1} + \alpha_{2}{\bf v}_{2} + \cdots + \alpha_{n}{\bf v}_{n}) = \alpha_{1}{\bf w}_{1} + \alpha_{2}{\bf w}_{2} + \cdots + \alpha_{n}{\bf w}_{n}$. Then $T$ is a linear mapping (Exercise 3.1). Also let ${\bf a},{\bf b} \in V$. Then for some $c_{1},c_{2},\ldots,c_{n}, \ d_{1},d_{2},\ldots,d_{n} \in R$, we have

$\displaystyle {\bf a} = c_{1}{\bf v}_{1} + \cdots + c_{n}{\bf v}_{n},$

$\displaystyle {\bf b} = d_{1}{\bf v}_{1} + \cdots + d_{n}{\bf v}_{n}.$

Thus

$\displaystyle T({\bf a}) = T({\bf b}) \Longrightarrow T(c_{1}{\bf v}_{1} + \cdots + c_{n}{\bf v}_{n}) = T(d_{1}{\bf v}_{1} + \cdots + d_{n}{\bf v}_{n})$
$\displaystyle \Longrightarrow c_{1}T({\bf v}_{1}) + \cdots + c_{n}T({\bf v}_{n}) = d_{1}T({\bf v}_{1}) + \cdots + d_{n}T({\bf v}_{n})$
$\displaystyle \Longrightarrow c_{1}{\bf w}_{1} + \cdots + c_{n}{\bf w}_{n} = d_{1}{\bf w}_{1} + \cdots + d_{n}{\bf w}_{n}$
$\displaystyle \Longrightarrow c_{1} = d_{1}, \ldots , c_{n} = d_{n} \Longrightarrow {\bf a} = {\bf b},$

and $T$ is injective. Next suppose that ${\mathbf y} \in {\mathcal R}^{n}$. Then ${\mathbf y} = c_{1}{\bf w}_{1} + \cdots + c_{n}{\bf w}_{n}$ for some $c_{1},\ldots,c_{n} \in R$, so ${\mathbf y}$ is the image under $T$ of $c_{1}{\bf v}_{1} + \cdots + c_{n}{\bf v}_{n}$. Thus, $T$ is surjective. Therefore the linear mapping $T$ is bijective. This shows that $T$ is an isomorphism.
$(2) \Rightarrow (3)$
Suppose that $T({\bf a}) = {\bf 0}$. Since $T$ is linear, $T({\bf 0}) = {\bf 0} = T({\bf a})$. Since $T$ is injective, ${\bf a} = {\bf 0}$. Thus, $\ker(T) = \{{\bf 0}\}$. Since $T$ is a bijection, we have ${\rm Im}(T) = {\mathcal R}^{n}$.
$(3) \Rightarrow (4)$
Since $\ker(T) = \{{\bf 0}\}$ and ${\rm Im}(T) = {\mathcal R}^{n}$, the mapping $T$ is a bijection, so $T^{-1}$ exists. For any ${\bf w}_{i},{\bf w}_{j} \in {\mathcal R}^{n}$, there exist ${\bf v}_{i},{\bf v}_{j} \in V$ such that $T({\bf v}_{i}) = {\bf w}_{i}$ and $T({\bf v}_{j}) = {\bf w}_{j}$. By the linearity of $T$, we have

$\displaystyle T(\alpha{\bf v}_{i} + \beta{\bf v}_{j}) = \alpha T({\bf v}_{i}) + \beta T({\bf v}_{j}) = \alpha {\bf w}_{i} + \beta {\bf w}_{j}.$

Thus,

$\displaystyle T^{-1}(\alpha {\bf w}_{i} + \beta {\bf w}_{j}) = \alpha{\bf v}_{i} + \beta{\bf v}_{j} = \alpha T^{-1}({\bf w}_{i}) + \beta T^{-1}({\bf w}_{j}),$

so $T^{-1}$ is linear. Next we show that $T^{-1}$ is a bijection. First, we show that $T^{-1}({\bf w}_{i}) = T^{-1}({\bf w}_{j})$ implies ${\bf w}_{i} = {\bf w}_{j}$. If ${\bf v}_{i} = T^{-1}({\bf w}_{i}) = T^{-1}({\bf w}_{j}) = {\bf v}_{j}$, then

$\displaystyle {\bf w}_{i} - {\bf w}_{j} = T({\bf v}_{i}) - T({\bf v}_{j}) = T({\bf v}_{i} - {\bf v}_{j}) = T({\bf 0}) = {\bf 0}.$

Finally, every ${\bf v} \in V$ satisfies ${\bf v} = T^{-1}(T({\bf v}))$, so $T^{-1}$ is surjective. Thus, $T^{-1}$ is an isomorphism.
$(4) \Rightarrow (5)$
To show $\{T^{-1}({\bf w}_{1}),T^{-1}({\bf w}_{2}), \ldots , T^{-1}({\bf w}_{n})\}$ is a basis of $V$, it is enough to show this set is linearly independent and spans $V$. Now

$\displaystyle c_{1}T^{-1}({\bf w}_{1})+c_{2}T^{-1}({\bf w}_{2}) + \cdots + c_{n}T^{-1}({\bf w}_{n}) = {\bf 0}$

implies that

$\displaystyle T^{-1}(c_{1}{\bf w}_{1}+c_{2}{\bf w}_{2} + \cdots + c_{n}{\bf w}_{n}) = {\bf 0}.$

Note that $T^{-1}$ is an isomorphism, hence injective, and $T^{-1}({\bf 0}) = {\bf 0}$. Then

$\displaystyle c_{1}{\bf w}_{1} + c_{2}{\bf w}_{2} + \cdots + c_{n}{\bf w}_{n} = {\bf 0}.$

Here, $\{{\bf w}_{1},\ldots,{\bf w}_{n}\}$ is linearly independent. Thus, we have $c_{1} = c_{2} = \cdots = c_{n} = 0$. Hence,

$\displaystyle \{T^{-1}({\bf w}_{1}),T^{-1}({\bf w}_{2}), \ldots , T^{-1}({\bf w}_{n})\}$

is linearly independent. Next let ${\bf v} \in V$. Since $T({\bf v}) \in {\mathcal R}^{n}$, we can write $T({\bf v}) = c_{1}{\bf w}_{1} + \cdots + c_{n}{\bf w}_{n}$, and therefore

$\displaystyle {\bf v} = T^{-1}(c_{1}{\bf w}_{1} + \cdots + c_{n}{\bf w}_{n}) = c_{1}T^{-1}({\bf w}_{1}) + \cdots + c_{n}T^{-1}({\bf w}_{n}).$

Thus, $\langle T^{-1}({\bf w}_{1}),T^{-1}({\bf w}_{2}),\ldots,T^{-1}({\bf w}_{n})\rangle = V$.
$(5) \Rightarrow (1)$
Since $\{T^{-1}({\bf w}_{1}),T^{-1}({\bf w}_{2}), \ldots , T^{-1}({\bf w}_{n})\}$ is a basis of $V$ consisting of $n$ vectors, $\dim V = n$. $\blacksquare$

$\spadesuit$ Matrix Representation $\spadesuit$

To study a linear mapping between finite dimensional vector spaces, we represent the linear mapping by a matrix. Then the properties of the linear mapping can be read off directly as properties of the matrix. For example, consider the linear mapping $T$ which rotates a point $(x,y)$ of the $xy$ plane through the angle $\theta$ to a point $(X,Y)$. Consider the basis of $R^{2}$, $\{\left(\begin{array}{c} 1\\ 0 \end{array}\right), \left(\begin{array}{c} 0\\ 1 \end{array}\right)\}$. Then

$\displaystyle {\mathbf x} = \left(\begin{array}{c} x\\ y \end{array}\right) = x \left(\begin{array}{c} 1\\ 0 \end{array}\right) + y \left(\begin{array}{c} 0\\ 1 \end{array}\right),$


$\displaystyle {\mathbf y} = \left(\begin{array}{c} X\\ Y \end{array}\right) = T({\mathbf x}) = T(x \left(\begin{array}{c} 1\\ 0 \end{array}\right) + y \left(\begin{array}{c} 0\\ 1 \end{array}\right)) = x T(\left(\begin{array}{c} 1\\ 0 \end{array}\right)) + y T(\left(\begin{array}{c} 0\\ 1 \end{array}\right)) = x \left(\begin{array}{c} \cos{\theta}\\ \sin{\theta} \end{array}\right) + y \left(\begin{array}{c} -\sin{\theta}\\ \cos{\theta} \end{array}\right).$

Thus, we can express the linear mapping by the matrix equation

$\displaystyle \left(\begin{array}{c} X\\ Y \end{array}\right) = \left(\begin{array}{cc} \cos{\theta} & -\sin{\theta}\\ \sin{\theta} & \cos{\theta} \end{array}\right) \left(\begin{array}{c} x\\ y \end{array}\right).$
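A minimal Python sketch of this rotation matrix; the sample point and angle are arbitrary choices.

```python
import numpy as np

def rotate(x, theta):
    # Rotate a point of the xy plane through the angle theta about the origin.
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ x

p = np.array([1.0, 0.0])
print(rotate(p, np.pi / 2))   # approximately [0. 1.]
```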

In general, consider $T : {\mathcal R}^{n} \longrightarrow {\mathcal R}^{m}$. Let

$\displaystyle \{{\bf v}_{1},\ldots,{\bf v}_{n}\}, \ \{{\bf w}_{1},\ldots,{\bf w}_{m}\}$

be bases of ${\mathcal R}^{n}$ and ${\mathcal R}^{m}$, respectively. Let ${\mathbf x} \in {\mathcal R}^{n}$ and let its image be ${\mathbf y} = T({\mathbf x}) \in {\mathcal R}^{m}$, with

$\displaystyle {\mathbf x} = {\bf v}_{1}x_{1} + {\bf v}_{2}x_{2} + \cdots + {\bf v}_{n}x_{n},$

$\displaystyle {\mathbf y} = {\bf w}_{1}y_{1} + {\bf w}_{2}y_{2} + \cdots + {\bf w}_{m}y_{m}.$

Then by the linearity of $T$, we have

$\displaystyle {\mathbf y} = T({\mathbf x}) = T({\bf v}_{1}x_{1} + {\bf v}_{2}x_{2} + \cdots + {\bf v}_{n}x_{n}) = T({\bf v}_{1})x_{1} + T({\bf v}_{2})x_{2} + \cdots + T({\bf v}_{n})x_{n}.$

Here, since each $T({\bf v}_{i})$ is in ${\mathcal R}^{m}$, we can express

$\displaystyle T({\bf v}_{i}) = a_{1i}{\bf w}_{1} + a_{2i}{\bf w}_{2} + \cdots + a_{mi}{\bf w}_{m} \quad (i = 1,2,\ldots,n).$

Thus,

$\displaystyle {\mathbf y} = (a_{11}{\bf w}_{1} + a_{21}{\bf w}_{2} + \cdots + a_{m1}{\bf w}_{m})x_{1} + (a_{12}{\bf w}_{1} + a_{22}{\bf w}_{2} + \cdots + a_{m2}{\bf w}_{m})x_{2} + \cdots + (a_{1n}{\bf w}_{1} + a_{2n}{\bf w}_{2} + \cdots + a_{mn}{\bf w}_{m})x_{n}.$

Here, $\{{\bf w}_{1},{\bf w}_{2},\ldots,{\bf w}_{m}\}$ is a basis of ${\mathcal R}^{m}$, so the corresponding coefficients must be equal. Thus, we have the following relations:

$\displaystyle \left\{ \begin{array}{rcl} y_{1} & = & a_{11}x_{1} + a_{12}x_{2} + \cdots + a_{1n}x_{n}\\ & \vdots & \\ y_{m} & = & a_{m1}x_{1} + a_{m2}x_{2} + \cdots + a_{mn}x_{n}. \end{array}\right.$

Collect the coefficients into the matrix $A = (a_{ij})$. This matrix $A$ is called the matrix representation of $T$ relative to the bases

$\displaystyle \{{\bf v}_{1},\ldots,{\bf v}_{n}\}, \ \{{\bf w}_{1},\ldots,{\bf w}_{m}\}$

and is denoted by $[T]_{\bf v}^{\bf w}$. For $V = W$, we take the basis $\{{\bf v}_{1},\ldots,{\bf v}_{n}\}$ of $V$ and the basis $\{{\bf w}_{1},\ldots,{\bf w}_{n}\}$ of $W$ to be the same and write $[T]_{\bf v}$. Also, for $T: {\mathcal R}^{n} \longrightarrow {\mathcal R}^{n}$, the following usual basis

$\displaystyle \{{\bf e}_{1} = \left(\begin{array}{c} 1\\ 0\\ \vdots\\ 0 \end{array}\right), \ldots, {\bf e}_{n} = \left(\begin{array}{c} 0\\ 0\\ \vdots\\ 1 \end{array}\right)\}$

is used, and the matrix representation of $T$ is written $[T]_{\bf e}$ or simply $[T]$. To summarize, a linear mapping $T$ from ${\mathcal R}^{n}$ to ${\mathcal R}^{m}$ is represented by the $m \times n$ matrix which expresses the images of the basis vectors of ${\mathcal R}^{n}$ in terms of the basis of ${\mathcal R}^{m}$. Then it satisfies

$\displaystyle [T({\mathbf x})]_{\bf w} = [T]_{\bf v}^{\bf w}[{\mathbf x}]_{\bf v}.$
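Numerically, column $i$ of $[T]_{\bf v}^{\bf w}$ can be obtained by solving for the coordinates of $T({\bf v}_{i})$ in the basis $\{{\bf w}_{j}\}$. The helper matrix_representation below is a hypothetical illustration of this recipe, not a library routine; it is checked against the mapping of the next example in the standard basis.

```python
import numpy as np

def matrix_representation(T, V_basis, W_basis):
    # Column i of [T]_v^w holds the coordinates of T(v_i) in the basis {w_j}.
    W = np.column_stack(W_basis)
    cols = [np.linalg.solve(W, T(v)) for v in V_basis]
    return np.column_stack(cols)

# T(x, y) = (3x - 4y, x + 5y) relative to the standard basis {e1, e2}.
T = lambda x: np.array([3*x[0] - 4*x[1], x[0] + 5*x[1]])
e = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(matrix_representation(T, e, e))   # [[ 3. -4.] [ 1.  5.]]
```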

Example 3.3

Let $T : {\mathcal R}^{2} \longrightarrow {\mathcal R}^{2}$ be given by $T\left(\begin{array}{c} x\\ y \end{array}\right) = \left(\begin{array}{c} 3x - 4y\\ x + 5y \end{array}\right)$. Find the matrix representation $[T]$ of $T$ relative to the basis $\{{\bf e}_{1},{\bf e}_{2}\}$ and the matrix representation $[T]_{\bf w}$ relative to the basis $\{{\bf w}_{1} = \left(\begin{array}{c} 1\\ 3 \end{array}\right), {\bf w}_{2} = \left(\begin{array}{c} 2\\ 5 \end{array}\right)\}$. Furthermore, find the matrix representation $[T]_{\bf e}^{\bf w}$ of $T$ relative to $\{{\bf e}_{1},{\bf e}_{2}\}, \{{\bf w}_{1},{\bf w}_{2}\}$.

Answer

$\displaystyle T({\bf e}_{1}) = T(\left(\begin{array}{c} 1\\ 0 \end{array}\right)) = \left(\begin{array}{c} 3\\ 1 \end{array}\right) = 3{\bf e}_{1} + {\bf e}_{2}, \quad T({\bf e}_{2}) = T(\left(\begin{array}{c} 0\\ 1 \end{array}\right)) = \left(\begin{array}{c} -4\\ 5 \end{array}\right) = -4{\bf e}_{1} + 5{\bf e}_{2}$

implies that $[T] = \left(\begin{array}{cc} 3 & -4\\ 1 & 5 \end{array}\right)$.


$\displaystyle T({\bf w}_{1}) = T(\left(\begin{array}{c} 1\\ 3 \end{array}\right)) = \left(\begin{array}{c} -9\\ 16 \end{array}\right) = a{\bf w}_{1} + b{\bf w}_{2},$

$\displaystyle T({\bf w}_{2}) = T(\left(\begin{array}{c} 2\\ 5 \end{array}\right)) = \left(\begin{array}{c} -14\\ 27 \end{array}\right) = c{\bf w}_{1} + d{\bf w}_{2}.$

Then $a{\bf w}_{1} + b{\bf w}_{2} = \left(\begin{array}{c} -9\\ 16 \end{array}\right)$ and $c{\bf w}_{1} + d{\bf w}_{2} = \left(\begin{array}{c} -14\\ 27 \end{array}\right)$ imply that $a = 77, b = -43, c = 124, d = -69$. Thus, $[T]_{\bf w} = \left(\begin{array}{cc} 77 & 124\\ -43 & -69 \end{array}\right)$.
Also,
$\displaystyle T({\bf e}_{1}) = T(\left(\begin{array}{c} 1\\ 0 \end{array}\right)) = \left(\begin{array}{c} 3\\ 1 \end{array}\right) = -13{\bf w}_{1} + 8{\bf w}_{2},$

$\displaystyle T({\bf e}_{2}) = T(\left(\begin{array}{c} 0\\ 1 \end{array}\right)) = \left(\begin{array}{c} -4\\ 5 \end{array}\right) = 30{\bf w}_{1} - 17{\bf w}_{2}$

implies that $[T]_{\bf e}^{\bf w} = \left(\begin{array}{cc} -13 & 30\\ 8 & -17 \end{array}\right)$. $\blacksquare$
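The answer can be cross-checked numerically, assuming the standard change-of-basis identities $[T]_{\bf w} = P^{-1}[T]P$ and $[T]_{\bf e}^{\bf w} = P^{-1}[T]$, where $P$ has ${\bf w}_{1},{\bf w}_{2}$ as its columns (these identities are not derived in this section).

```python
import numpy as np

T_e = np.array([[3.0, -4.0],
                [1.0,  5.0]])       # [T] relative to {e1, e2}
P = np.array([[1.0, 2.0],
              [3.0, 5.0]])          # columns are w1 and w2

T_w = np.linalg.solve(P, T_e @ P)   # P^{-1} [T] P
print(T_w)                          # [[ 77. 124.] [-43. -69.]]

T_e_w = np.linalg.solve(P, T_e)     # P^{-1} [T]
print(T_e_w)                        # [[-13.  30.] [  8. -17.]]
```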

Let $T$ be a linear mapping $T : {\mathcal R}^{n} \longrightarrow {\mathcal R}^{m}$ with matrix representation $A = (a_{ij})$. Then $\ker(T)$ is the set of all elements of ${\mathcal R}^{n}$ whose image under $T$ is ${\bf 0}$. In other words, it is the same as the solution space of the system of linear equations

$\displaystyle \left\{ \begin{array}{rcl} a_{11}x_{1} + a_{12}x_{2} + \cdots + a_{1n}x_{n} & = & 0\\ & \vdots & \\ a_{m1}x_{1} + a_{m2}x_{2} + \cdots + a_{mn}x_{n} & = & 0. \end{array}\right.$

Also, by Theorem 2.3, the dimension of the solution space is

$\displaystyle \dim \ker(T) = n - {\rm rank}(A).$

What about ${\rm Im}(T)$? ${\rm Im}(T)$ is the set of all images of elements of ${\mathcal R}^{n}$, and $\dim {\rm Im}(T) = {\rm rank}(A)$. From this, we have the following theorem.

Theorem 3.3

Let $T : {\mathcal R}^{n} \longrightarrow {\mathcal R}^{m}$ be a linear mapping. Then the following is true.

$\displaystyle \dim {\mathcal R}^{n} = \dim \ker(T) + \dim {\rm Im}(T).$

Example 3.4

Let the matrix representation of $T : {\mathcal R}^{3} \longrightarrow {\mathcal R}^{3}$ be $A = \left(\begin{array}{rrr} 1&1&1\\ 2&2&2\\ 3&3&3 \end{array}\right)$. Find $\dim \ker(T)$.

Answer By Theorem 3.3, $\dim \ker(T) = \dim {\mathcal R}^{3} - \dim {\rm Im}(T)$, and $\dim {\rm Im}(T) = {\rm rank}(A)$. Thus, once we find ${\rm rank}(A)$, we can find $\dim \ker(T)$.

$\displaystyle A = \left(\begin{array}{rrr} 1&1&1\\ 2&2&2\\ 3&3&3 \end{array}\right) \longrightarrow \left(\begin{array}{rrr} 1&1&1\\ 0&0&0\\ 0&0&0 \end{array}\right)$

implies that ${\rm rank}(A) = 1$. Thus $\dim \ker(T) = 3 - 1 = 2$. $\blacksquare$
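The same computation can be confirmed numerically; a minimal sketch using NumPy's rank routine:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [2.0, 2.0, 2.0],
              [3.0, 3.0, 3.0]])

rank = np.linalg.matrix_rank(A)
print(rank)                  # 1  -> dim Im(T) = 1
print(A.shape[1] - rank)     # 2  -> dim ker(T) = 3 - 1 = 2
```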

Given vector spaces ${\mathcal R}^{l},{\mathcal R}^{n},{\mathcal R}^{m}$, suppose there are linear mappings $S$ and $T$ such that

$\displaystyle S : {\mathcal R}^{l} \longrightarrow {\mathcal R}^{n}, \ T : {\mathcal R}^{n} \longrightarrow {\mathcal R}^{m}.$

Then the composition mapping $T \circ S : {\mathcal R}^{l} \longrightarrow {\mathcal R}^{m}$ is also a linear mapping (see Example 3.2). Now take a basis for each vector space and let the matrix representations of the linear mappings $S, T$ be $A, B$. Then for ${\mathbf x} \in {\mathcal R}^{l}, {\mathbf y} \in {\mathcal R}^{n}, {\mathbf z} \in {\mathcal R}^{m}$, we have

$\displaystyle {\mathbf y} = A{\mathbf x}, \quad {\mathbf z} = B{\mathbf y} = BA{\mathbf x}.$

Thus the matrix representation of $T \circ S$ is $BA$.
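A quick numerical check of this fact, with arbitrary random matrices standing in for $A$ and $B$:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))   # representation of S : R^3 -> R^4
B = rng.standard_normal((2, 4))   # representation of T : R^4 -> R^2

x = rng.standard_normal(3)
z_direct = B @ (A @ x)            # apply S, then T
z_composed = (B @ A) @ x          # apply the single matrix BA
print(np.allclose(z_direct, z_composed))   # True
```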

Theorem 3.4

Let the matrix representation of $T : {\mathcal R}^{n} \longrightarrow {\mathcal R}^{n}$ relative to the basis $\{{\bf v}_{i}\}$ be $A$. Then the following are equivalent.
$(1)$ $T$ is an isomorphism.
$(2)$ The matrix $A$ is regular.

Proof $(1) \Rightarrow (2)$
Let the matrix representation of $T : {\mathcal R}^{n} \longrightarrow {\mathcal R}^{n}$ be $A$. Since $T$ is an isomorphism, $T^{-1}$ exists. Now let the matrix representation of $T^{-1}$ be $B$. Then $BA = I$ and $A$ is regular.
$(2) \Rightarrow (1)$
Suppose that $A$ is regular. Then there is a matrix $B$ such that $BA = I$. Now let $S$ be the linear mapping represented by $B$. Then $T \circ S = 1$. Thus, by Exercise 3.1, $T$ is an isomorphism. $\blacksquare$
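As a numerical sketch of this criterion, one can test regularity with the determinant; the test function and tolerance below are assumptions of this sketch, not the text's method.

```python
import numpy as np

def is_regular(A, tol=1e-10):
    # A square matrix is regular (invertible) iff its determinant is nonzero,
    # i.e. iff the linear transformation it represents is an isomorphism.
    return abs(np.linalg.det(A)) > tol

print(is_regular(np.array([[3.0, -4.0], [1.0, 5.0]])))   # True  (det = 19)
print(is_regular(np.array([[1.0, 2.0], [2.0, 4.0]])))    # False (det = 0)
```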

Exercise 3-2

1. Determine whether the following mappings are linear mappings.

$\displaystyle T_{1} : {\mathcal R}^{3} \longrightarrow {\mathcal R}^{2}, \ T_{1}\left(\begin{array}{c} x_{1}\\ x_{2}\\ x_{3} \end{array}\right) = \left(\begin{array}{c} x_{3}\\ x_{1} + x_{2} \end{array}\right).$

$\displaystyle T_{2} : {\mathcal R}^{3} \longrightarrow {\mathcal R}^{2}, \ T_{2}\left(\begin{array}{c} x_{1}\\ x_{2}\\ x_{3} \end{array}\right) = \left(\begin{array}{c} x_{1} + 1\\ x_{2} + x_{3} \end{array}\right).$

2. Let $V$ be an $n$-dimensional vector space and let $\{{\bf v}_{1},\ldots,{\bf v}_{n}\}$ be a basis of $V$. Define $T : V \longrightarrow {\mathcal R}^{n}$ by $T(\alpha_{1}{\bf v}_{1} + \alpha_{2}{\bf v}_{2} + \cdots + \alpha_{n}{\bf v}_{n}) = \alpha_{1}{\bf e}_{1} + \alpha_{2}{\bf e}_{2} + \cdots + \alpha_{n}{\bf e}_{n}$. Then show that $T$ is a linear mapping.

3. Let $T : {\mathcal R}^{n} \longrightarrow {\mathcal R}^{n}$ be a linear mapping. Show that the following are equivalent.

(a) $T$ is an isomorphism.
(b) There exists $S : {\mathcal R}^{n} \longrightarrow {\mathcal R}^{n}$ such that $T \circ S = 1$.

4. Suppose that $T : V \longrightarrow W$ is a linear mapping. Show that $\ker(T)$ and ${\rm Im}(T)$ are subspaces of $V$ and $W$, respectively.

5. Let $T : {\mathcal R}^{3} \longrightarrow {\mathcal R}^{3}$ be a linear transformation such that $T\left(\begin{array}{r} x_{1}\\ x_{2}\\ x_{3} \end{array}\right) = \left(\begin{array}{c} \cdots x_{3}\\ 2x_{1} + x_{2} + 3x_{3}\\ 2x_{1} + 2x_{2} + x_{3} \end{array}\right)$. Find the matrix representation $[T]$ of $T$ relative to the usual basis $\{{\bf e}_{1},{\bf e}_{2},{\bf e}_{3}\}$. Find also $[T]_{\bf w}$ relative to the basis $\{{\bf w}_{1} = \left(\begin{array}{c} 1\\ 2\\ 1 \end{array}\right), {\bf w}_{2} = \cdots, {\bf w}_{3} = \left(\begin{array}{c} 0\\ -1\\ 1 \end{array}\right)\}$. Also, find $\dim \ker(T)$.