1.
(a) Add the corresponding components:
(b) Apply scalar multiplication to each component:
(d) First apply scalar multiplication, then vector addition:
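As a minimal illustration of these component-wise rules (vectors chosen here, not those of the exercise):

$$
(1,2,3) + (4,5,6) = (5,7,9), \qquad
2\,(1,2,3) = (2,4,6), \qquad
2\,(1,2,3) + (4,5,6) = (6,9,12).
$$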
2.
3. Geometric vectors can be treated as space vectors; see Exercise 1-2-1.5.
4. Geometric vectors can be treated as space vectors; see Exercise 1-2-1.5.
6. For $|A| = |B|$, the sum $A + B$ is a diagonal vector bisecting the angle between $A$ and $B$. For $|A| \neq |B|$, multiply $B$ by some scalar $t > 0$ so that $|A| = |tB|$. Then $A + tB$ is a vector bisecting the angle between $A$ and $B$.
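For instance, with vectors chosen here for illustration: take $A = (3,0)$ and $B = (0,6)$, and scale $B$ by $t = \tfrac12$ so that $|tB| = 3 = |A|$. Then

$$
A + tB = (3,0) + (0,3) = (3,3),
$$

which makes equal angles of $45^\circ$ with both $A$ and $B$.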
1.
(d) A unit vector can be obtained by dividing the vector by its own magnitude.
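As a quick illustration (vector chosen here):

$$
v = (3,4), \qquad |v| = \sqrt{3^2 + 4^2} = 5, \qquad \frac{v}{|v|} = \left(\tfrac{3}{5}, \tfrac{4}{5}\right), \qquad \left|\frac{v}{|v|}\right| = 1.
$$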
2. In an orthogonal system, any two distinct elements are orthogonal to each other.
In an orthonormal system, every element is a unit vector and any two distinct elements are orthogonal to each other.
(a) Since the pairwise inner products are 0, the system is orthogonal. Dividing each vector by its magnitude, we obtain an orthonormal system.
(c) It follows that the system is not orthonormal.
3. Let $P$ be an arbitrary point on the plane. Consider the vector with initial point $P_0$, a given point of the plane, and endpoint $P$. This vector lies in the plane; thus it is orthogonal to the normal vector, and their inner product is 0. Hence,
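A minimal sketch of the resulting equation, writing $n = (a, b, c)$ for the normal vector and $P_0 = (x_0, y_0, z_0)$ (names chosen here for illustration):

$$
n \cdot \overrightarrow{P_0P} = a(x - x_0) + b(y - y_0) + c(z - z_0) = 0.
$$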
1.
2. Let the plane contain the sides $A$ and $B$. Then the normal vector of the plane is orthogonal to both $A$ and $B$. Therefore,
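The natural candidate is the cross product; for example, with sides chosen here for illustration:

$$
A = (1,0,2),\quad B = (0,1,1):\qquad n = A \times B = (-2,\,-1,\,1),
$$

and indeed $n \cdot A = n \cdot B = 0$.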
3. Let the required plane be perpendicular to the given plane. Then the normal vector of the given plane can be thought of as lying in the required plane. Also, the given line passes through the required plane, so its direction vector lies in the required plane. Now, taking the cross product of these two vectors, we have the following normal vector of the required plane:
4. The area of the triangle is half the area of the parallelogram with sides $A$ and $B$, that is, $\tfrac12 |A \times B|$.
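For instance, with sides chosen here for illustration:

$$
A = (2,0,0),\quad B = (0,3,0):\qquad A \times B = (0,0,6), \qquad \text{area} = \tfrac12 |A \times B| = 3.
$$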
7. The area of the parallelogram with sides $B$ and $C$ is given by $|B \times C|$. Let the angle between the vector $B \times C$ and $A$ be $\theta$. Then the height of the parallelepiped is $|A|\,|\cos\theta|$. Thus, the volume of the parallelepiped with sides $A$, $B$, $C$ is given by $|B \times C|\,|A|\,|\cos\theta| = |A \cdot (B \times C)|$.
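A quick check with vectors chosen here: for the rectangular box with sides

$$
A = (1,0,0),\quad B = (0,2,0),\quad C = (0,0,3),
$$

we get $B \times C = (6,0,0)$ and $|A \cdot (B \times C)| = 6$, the expected volume.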
8. The cross product of a vector with itself is 0, and interchanging the order of the factors changes the sign.
9. We use the scalar triple product.
10.
(a) Set up the expression and differentiate it twice with respect to the variable; then we have the result.
(b) Set up the expression and differentiate it with respect to the variable; then we have the result.
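Such computations typically rest on differentiating dot products with the product rule; as a general identity (notation chosen here), for a differentiable vector function $A(t)$:

$$
\frac{d}{dt}\bigl(A(t) \cdot A(t)\bigr) = A'(t) \cdot A(t) + A(t) \cdot A'(t) = 2\,A(t) \cdot A'(t).
$$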
11. Suppose $\{A, B\}$ is linearly independent. Then show $A \times B \neq 0$. (Using contraposition: for $A \times B = 0$, show that $\{A, B\}$ is linearly dependent.)
Suppose that $A \times B = 0$. Then $A$ and $B$ are parallel. In other words, there exists some real number $t$ so that $B = tA$. Thus, $tA - B = 0$ is a nontrivial linear relation, and $\{A, B\}$ is linearly dependent.
Next we show that if $A \times B \neq 0$, then $\{A, B\}$ is linearly independent. (Using contraposition: we show that if $\{A, B\}$ is linearly dependent, then $A \times B = 0$.)
If $A, B$ are linearly dependent, then there exists $t$ with $A = tB$ or $B = tA$, so that $A \times B = tB \times B = 0$ or $A \times B = A \times tA = 0$.
1. Let $u$ and $v$ be elements of the given set $W$. Then we can write them in the stated form. To be a subspace, $W$ must be closed under addition and scalar multiplication. We first check closure under addition: in the sum $u + v$, the relevant component no longer satisfies the defining condition, since it is not zero. Therefore $u + v$ is not an element of $W$, and $W$ is not a subspace.
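As a sketch of this type of failure with a set chosen here (not necessarily the one in the exercise), take $W = \{(a, b, c) \in \mathbb{R}^3 : ab = 0\}$. Then $(1,0,0)$ and $(0,1,0)$ are in $W$, but

$$
(1,0,0) + (0,1,0) = (1,1,0) \notin W,
$$

since the product of the first two components is $1 \neq 0$, so $W$ is not closed under addition.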
3. Let $v$ be an arbitrary element of the space. Now express $v$ using $\mathbf{i}$, $\mathbf{j}$, $\mathbf{k}$. Then we have
Next, consider the reverse inclusion. Then
5. Let $W$ be the subspace generated by the given vectors. Then
7. By Example 1.4, the given sets are subspaces of the ambient space. So, set the elements as stated. Then
8. Let the given vectors be vectors in 3-dimensional space. Take a linear combination of these vectors and set it equal to $0$. We have
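In general, writing the vectors here as $v_1, v_2, v_3$ (names ours), the resulting condition is the homogeneous system

$$
c_1 v_1 + c_2 v_2 + c_3 v_3 = 0,
$$

and the vectors are linearly independent exactly when its only solution is $c_1 = c_2 = c_3 = 0$.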
1.
(a) The sum of matrices is taken entry-wise: each component of the sum is the sum of the corresponding components.
(b) Scalar multiplication of a matrix multiplies every component by the scalar.
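For instance, with matrices chosen here for illustration:

$$
\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} + \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} = \begin{pmatrix} 6 & 8 \\ 10 & 12 \end{pmatrix}, \qquad
3 \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 3 & 6 \\ 9 & 12 \end{pmatrix}.
$$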
(c) Each entry of a product of matrices is the inner product of the corresponding row of the first factor with the corresponding column of the second.
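For example, with matrices chosen here:

$$
\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix}
= \begin{pmatrix} 1\cdot5 + 2\cdot7 & 1\cdot6 + 2\cdot8 \\ 3\cdot5 + 4\cdot7 & 3\cdot6 + 4\cdot8 \end{pmatrix}
= \begin{pmatrix} 19 & 22 \\ 43 & 50 \end{pmatrix}.
$$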
4. $A$ and $B$ are symmetric matrices. Then we have ${}^t\!A = A$ and ${}^t\!B = B$. To show that the matrix in question is symmetric, it is enough to show that it equals its own transpose.
Now the claim follows by expanding the transpose and using ${}^t\!A = A$ and ${}^t\!B = B$.
5. For $n$-square symmetric matrices $A$, $B$, we have ${}^t\!A = A$ and ${}^t\!B = B$. Then, to show that $AB$ is symmetric, we have to show ${}^t(AB) = AB$. In general, ${}^t(AB) = {}^t\!B\,{}^t\!A = BA$. So the answer to the question of whether $AB$ is always symmetric is no: there are symmetric matrices $A$, $B$ whose product is not symmetric, as sketched below.
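One concrete pair (matrices chosen here for illustration):

$$
A = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix},\quad
B = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}:\qquad
AB = \begin{pmatrix} 1 & 2 \\ 1 & 0 \end{pmatrix} \neq \begin{pmatrix} 1 & 1 \\ 2 & 0 \end{pmatrix} = BA = {}^t(AB).
$$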
Next we find the necessary and sufficient condition for $AB$ to be symmetric.
Since for $n$-square symmetric matrices $A$, $B$ we have ${}^t(AB) = {}^t\!B\,{}^t\!A = BA$, to make $AB$ symmetric it is enough to have $AB = BA$.
Suppose first that $AB$ is symmetric. Then $AB = {}^t(AB) = BA$ implies that $AB = BA$.
Conversely, suppose that $AB = BA$. Then, since ${}^t(AB) = BA$, we have ${}^t(AB) = AB$. Thus $AB$ is symmetric.
7. Let $B$ be the matrix, with entries to be determined, that commutes with the given matrix $A$. Then $AB = BA$ implies that
8. ${}^t(A - {}^t\!A) = {}^t\!A - A = -(A - {}^t\!A)$; thus $A - {}^t\!A$ is skew-symmetric. Also, ${}^t(A + {}^t\!A) = {}^t\!A + A = A + {}^t\!A$; thus $A + {}^t\!A$ is symmetric. Now let $S = \tfrac12(A + {}^t\!A)$ and $T = \tfrac12(A - {}^t\!A)$; then $A = S + T$ expresses $A$ as the sum of a symmetric matrix and a skew-symmetric matrix.
2.
(c) By Exercise 2-4-1.1, we have
3. An elementary matrix is obtained by applying a single elementary row operation to the identity matrix.
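For instance (examples chosen here), interchanging rows 1 and 2 of $E_3$, or adding $c$ times row 1 to row 2, gives the elementary matrices

$$
\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad
\begin{pmatrix} 1 & 0 & 0 \\ c & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.
$$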
4. The product of matrices satisfying the given relation can be found by the following steps:
5. By Theorem 2.2, the dimension of the row space is the same as the rank of the matrix, so it is enough to find the rank of the matrix whose row vectors are the given vectors.
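As a sketch with rows chosen here: the rank is the number of nonzero rows after row reduction, e.g.

$$
\begin{pmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 0 & 1 & 1 \end{pmatrix}
\longrightarrow
\begin{pmatrix} 1 & 2 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 0 \end{pmatrix},
$$

so this row space would have dimension 2.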
1.
2. The system has a solution if and only if the rank of the coefficient matrix equals the rank of the augmented matrix.
3. Let $A$ be an $n$-square normal matrix; then the stated identity follows from ${}^t\!A\,A = A\,{}^t\!A$.
4. $A$ is an $n$-square regular matrix. Thus, we construct a matrix $B$ so that $AB = BA = E$.
5. To express the matrix as a product of elementary matrices, we start with the identity matrix and apply elementary operations; we then multiply together the elementary matrices coming from those elementary operations.
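A small sketch of the idea with a matrix chosen here:

$$
\begin{pmatrix} 1 & 2 \\ 0 & 3 \end{pmatrix}
= \begin{pmatrix} 1 & 0 \\ 0 & 3 \end{pmatrix}
\begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix},
$$

where the first factor scales row 2 by 3 and the second adds 2 times row 2 to row 1, both elementary.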
6. Suppose the given relation holds. Then, for any matrix,
Alternate solution
Let $A$ be an $n$-square matrix. If every element of some row of $A$ is 0, then for any matrix $B$, every element of the corresponding row of $AB$ is 0.
Then $AB \neq E$, and by Theorem 2.3, $A$ is not regular.
(a) Using Gaussian elimination, we have
(b) Using Gaussian elimination, we have
1.
(a) We use the cofactor expansion along the 2nd row.
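For a determinant chosen here, expansion along the 2nd row reads:

$$
\det\begin{pmatrix} 1 & 2 & 3 \\ 0 & 4 & 0 \\ 5 & 6 & 7 \end{pmatrix}
= -0 \begin{vmatrix} 2 & 3 \\ 6 & 7 \end{vmatrix}
+ 4 \begin{vmatrix} 1 & 3 \\ 5 & 7 \end{vmatrix}
- 0 \begin{vmatrix} 1 & 2 \\ 5 & 6 \end{vmatrix}
= 4(7 - 15) = -32.
$$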
Use the cofactor expansion along the column.
4. The two vectors are parallel; thus their cross product is 0. Thus,
5. The normal vector and the vector lying in the plane are orthogonal, so their inner product is 0. Thus the scalar triple product is
6. Suppose that the determinant is nonzero. Then the inverse matrix exists, and this implies that
7. Let the matrix be as given, so that we can write it in the stated form. Then
Next we check the converse. Suppose that the condition holds. Then
2. Let the set be as given. Then it is a basis of the space, and we can express each element uniquely as
Next we show that the mapping is linear. Suppose two elements $u$ and $v$ of the domain are given.
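What must be checked is (notation ours, writing the map as $T$): for all $u$, $v$ in the domain and all scalars $c$,

$$
T(u + v) = T(u) + T(v), \qquad T(cu) = c\,T(u).
$$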
3.
If the spaces are isomorphic, then by Theorem 3.1 there exists an isomorphic mapping between them satisfying the stated condition.
Suppose the stated condition holds. Then equal images force equal arguments, so the mapping is injective. Next we show that the mapping is surjective: by the stated condition, every element of the codomain is the image of some element of the domain. Therefore the mapping is surjective, and hence an isomorphism.
4. The hypothesis implies that the given elements belong to the set. Thus, for any real numbers, we need to show that the corresponding linear combination also belongs to the set; in other words, we have to verify closure. Note that
Next, consider the second set. For some element the defining condition holds. Then, for any real numbers, we need to show closure; in other words, we have to show the existence of some element satisfying the condition. Note that the domain is a vector space, so the required combination lies in it. Also,
5. Let $A$ be a matrix representation of the mapping. Then
2. Let the first matrix be as given. Then it is a transition matrix from the first basis to the second.
Also, let the second matrix be as given. Then it is a transition matrix from the second basis to the first.
3.
The eigenvector $x$ corresponding to an eigenvalue $\lambda$ satisfies $(A - \lambda E)x = 0$ with $x \neq 0$. Solving this system of linear equations, we have the eigenvectors.
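As a worked illustration with a matrix chosen here (not the exercise's):

$$
A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}:\qquad
\det(A - \lambda E) = (2 - \lambda)^2 - 1 = 0 \implies \lambda = 1,\ 3,
$$

and for $\lambda = 3$, solving $(A - 3E)x = 0$ gives $x = t\,(1, 1)$ with $t \neq 0$.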
We find the eigenvector corresponding to the first eigenvalue.
We find the eigenvector corresponding to the second eigenvalue. Then
Finally, we find the eigenvector corresponding to the third eigenvalue. Then
4. Let $\lambda$ be an eigenvalue of $A$. Then
5. Let $\lambda$ be an eigenvalue of $A$. Then $Ax = \lambda x$ with $x \neq 0$ implies that
7. Note that the hypothesis implies the stated equation, and one of the two factors satisfies it. Next, let the matrix be given. Then, by the Cayley–Hamilton theorem, the matrix satisfies its own characteristic equation. Thus, we find the matrix so that the characteristic equation holds.
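For reference, the Cayley–Hamilton theorem on a matrix chosen here:

$$
A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \quad
\varphi(\lambda) = \lambda^2 - 5\lambda - 2, \qquad
\varphi(A) = A^2 - 5A - 2E = O.
$$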
1.
For each eigenvalue $\lambda$, we have to solve the equation $(A - \lambda E)x = 0$ for $x$.
We next find the eigenvector corresponding to the first eigenvalue.
We find the eigenvector corresponding to the second eigenvalue. Then
We find the eigenvector corresponding to the remaining eigenvalue.
2. Note that if the sum $W_1 + W_2$ is a direct sum, then we show $W_1 \cap W_2 = \{0\}$. Let $x \in W_1 \cap W_2$. Then $x \in W_1$ and $x \in W_2$. Thus, $x = x + 0 = 0 + x$ gives two expressions of $x$ as a sum. But the sum is a direct sum, so the expression is unique, which implies that $x = 0$. Thus, $W_1 \cap W_2 = \{0\}$. Conversely, if $W_1 \cap W_2 = \{0\}$ and an element is expressed as
3. By Theorem 1.4, $\dim(W_1 + W_2) = \dim W_1 + \dim W_2 - \dim(W_1 \cap W_2)$. Also, if the sum is a direct sum, then $W_1 \cap W_2 = \{0\}$ and $\dim(W_1 \cap W_2) = 0$. Thus, $\dim(W_1 \oplus W_2) = \dim W_1 + \dim W_2$.
4. We first show that the sum is a direct sum. By Exercise 4.1, it is enough to show that the intersection is $\{0\}$. Let $x$ be an element of the intersection. Then
We next show that the whole space equals the sum of the two subspaces. Since the two subspaces are contained in it, one inclusion holds. Also, the hypothesis implies that
5. Let $\lambda$ be an eigenvalue of the orthogonal matrix $A$. Then, since ${}^t\!A\,A = E$, we have $|Ax| = |x|$ for an eigenvector $x$, and hence $|\lambda| = 1$.
1. Let $A$ be the given matrix. Then a direct computation shows that $A$ commutes with its adjoint. Thus $A$ is a normal matrix. Therefore, $A$ is diagonalizable by a unitary matrix.
We find the eigenvector corresponding to the first eigenvalue.
2. The hypothesis implies that the matrix is a real symmetric matrix. Thus, by Theorem 4.2, it is diagonalizable by a unitary matrix.
We find the eigenvector corresponding to the first eigenvalue.
We next find the eigenvector corresponding to the other eigenvalue.
3.
$A$ is diagonalizable by a unitary matrix if and only if $A$ is a normal matrix, according to Theorem 4.2. In other words, $A A^* = A^* A$.
4. Express the given form using a matrix. We have
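If the expression is a quadratic form, as is typical here, it is written with a symmetric matrix (example chosen for illustration):

$$
x^2 + 4xy + 3y^2 = \begin{pmatrix} x & y \end{pmatrix}
\begin{pmatrix} 1 & 2 \\ 2 & 3 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}.
$$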
5.
Express the given form using a matrix.