Solution of linear differential equations

The differential equation of the form

$\displaystyle y^{(n)} + a_{n-1}(x)y^{(n-1)} + \cdots + a_{1}(x)y^{\prime} + a_{0}(x)y = f(x) $

is called an $n$th-order linear differential equation. The functions $a_{i}(x)$ are called the coefficient functions, and $f(x)$ is called the input function.

If $f(x) \equiv 0$, then the differential equation is called a homogeneous equation.

We denote the left-hand side as $L(y)$. Then the differential equation is expressed as

$\displaystyle L(y) = f(x) $

$L$ is called a differential operator, and it is linear. That is, for any functions $y_{1},y_{2}$ and constants $c_{1},c_{2}$,

$\displaystyle L(c_{1}y_{1} + c_{2}y_{2}) = c_{1}L(y_{1}) + c_{2}L(y_{2}) $
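As an illustration (our own sketch, not part of the text), the following sympy check verifies this linearity property for the concrete operator $L(y) = y^{\prime\prime} + 3y^{\prime} + 2y$ that appears again in Example 2.3; the symbol and function names are only illustrative.

\begin{verbatim}
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
y1 = sp.Function('y1')(x)
y2 = sp.Function('y2')(x)

# Illustrative operator L(y) = y'' + 3y' + 2y (used again in Example 2.3)
def L(y):
    return sp.diff(y, x, 2) + 3*sp.diff(y, x) + 2*y

# L(c1*y1 + c2*y2) - (c1*L(y1) + c2*L(y2)) should simplify to 0
residual = L(c1*y1 + c2*y2) - (c1*L(y1) + c2*L(y2))
print(sp.simplify(sp.expand(residual)))   # prints 0
\end{verbatim}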

Theorem 2..1   The solutions of homogeneous differential equations form a vector space.

We briefly review vector spaces.

A sum of two vectors A and B is expressed as A $+$ B and is equal to the diagonal of the parallelogram formed by A and B.

Figure 2.1: vector addition and scalar multiplication

1. A sum of two vectors is a vector (closure)
2. For any vectors A and B, A+B = B+A (commutative law)
3. For any vectors A,B,C, (A+B)+C = A+(B+C) (associative law)
4. Given any vector A, there exists a vector 0 satisfying A+0 = A (existence of zero)
5. Given any vector A, there exists a vector B satisfying A+B = 0 (existence of inverse)
6. A scalar multiplication of a vector is a vector
7. For any real numbers $\alpha$ and $\beta$, $\alpha$($\beta$A) = ($\alpha\beta$)A (associative law)
8. For any real numbers $\alpha$ and $\beta$, ($\alpha + \beta$)A = $\alpha$A + $\beta$A, and for any vectors A and B, $\alpha$(A+B) = $\alpha$A + $\alpha$B (distributive law)
9. 1A = A; 0A = 0; $\alpha$0 = 0 (1 is multiplicative identity)

Let $C(a,b)$ be the set of continuous functions on the interval $(a,b)$, and let $PC(a,b)$ be the set of piecewise continuous functions on $(a,b)$:

$\displaystyle C(a,b) = \{f(x) : f(x) \mbox{ is continuous on } (a,b)\}, \quad PC(a,b) = \{f(x) : f(x) \mbox{ is piecewise continuous on } (a,b)\} . $

For $f(x)$ and $g(x)$ in $C(a,b)$ or $PC(a,b)$, we define addition and scalar multiplication as follows:
1. $f+g$ is a function of $x$ whose value is equal to $f(x)+g(x)$.
2. $\alpha f$ is a function of $x$ whose value is equal to $\alpha f(x)$.

Example 2..1   Given $f(x) = x , g(x) = x^2$, find $f+g, \frac{1}{2}f, 2g$.

SOLUTION $(f+g)(x) = f(x) + g(x) = x + x^2$
$(\frac{1}{2}f)(x) = \frac{1}{2}f(x) = \frac{x}{2}$
$(2g)(x) = 2g(x) = 2x^2$ $\ \blacksquare$
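As a small illustration (a sketch of ours, not from the text), the pointwise operations of Example 2.1 can be written directly in Python; the helper names add and scale are hypothetical.

\begin{verbatim}
# Functions treated as vectors: pointwise addition and scalar multiplication
def f(x):
    return x          # f(x) = x

def g(x):
    return x**2       # g(x) = x^2

def add(f, g):
    return lambda x: f(x) + g(x)    # (f+g)(x) = f(x) + g(x)

def scale(a, f):
    return lambda x: a*f(x)         # (a f)(x) = a f(x)

print(add(f, g)(3))        # 3 + 9 = 12, i.e. (f+g)(3)
print(scale(0.5, f)(3))    # 1.5,        i.e. ((1/2)f)(3)
print(scale(2, g)(3))      # 18,         i.e. (2g)(3)
\end{verbatim}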

Theorem 2..2   $C(a,b)$ with the operations defined above is a vector space. A function $f(x)$ belonging to $C(a,b)$ is then called a vector.

Proof The solutions of a homogeneous differential equation are differentiable and hence continuous. Thus, the set of solutions is a subset of $C(a,b)$. To show that this set of solutions is itself a vector space, it is enough to show that a linear combination of solutions $y_1$ and $y_2$ is again a solution. Let $L(y_{1}) \equiv 0, \ L(y_{2}) \equiv 0$, and $y_{3} = c_{1}y_{1} + c_{2}y_{2}$. Then

$\displaystyle L(y_{3})$ $\displaystyle =$ $\displaystyle L(c_{1}y_{1} + c_{2}y_{2})$  
  $\displaystyle =$ $\displaystyle L(c_{1}y_{1}) + L(c_{2}y_{2})$  
  $\displaystyle =$ $\displaystyle c_{1}L(y_{1}) + c_{2}L(y_{2})$  
  $\displaystyle =$ 0  

Thus $y_3$ is again a solution. $\ \blacksquare$

The set of solutions of a homogeneous equation thus forms a vector space, which we call the solution space.

Theorem 2..3   A basis of the solution space of an $n$th-order homogeneous differential equation is a set of $n$ linearly independent solutions.

Let $y_{1},y_{2},\ldots,y_{n}$ be solutions of the differential equation. The determinant of the following matrix is called the Wronskian (determinant):

$\displaystyle W(y_{1},y_{2},\ldots,y_{n}) = \left\vert\begin{array}{llll}
y_{1}&y_{2}&\cdots&y_{n}\\
y_{1}^{\prime}&y_{2}^{\prime}&\cdots&y_{n}^{\prime}\\
\vdots&\vdots& &\vdots\\
y_{1}^{(n-1)}&y_{2}^{(n-1)}&\cdots&y_{n}^{(n-1)}
\end{array}\right\vert . $
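The Wronskian can be computed mechanically from this definition. Below is a small sympy sketch (ours, not from the text) that builds the matrix of derivatives and takes its determinant; the helper name wronskian is ours, and it is tried on the solutions $1, x, x^{2}$ of $y^{\prime\prime\prime} = 0$.

\begin{verbatim}
import sympy as sp

x = sp.symbols('x')

def wronskian(funcs, x):
    """Determinant of the matrix whose i-th row holds the i-th derivatives."""
    n = len(funcs)
    M = sp.Matrix(n, n, lambda i, j: sp.diff(funcs[j], x, i))
    return sp.simplify(M.det())

print(wronskian([sp.Integer(1), x, x**2], x))   # prints 2 (never zero)
\end{verbatim}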

Theorem 2..4   If $y_{1},y_{2},\ldots,y_{n}$ are solutions of the homogeneous equation on the interval $[a,b]$, then $W(y_{1},y_{2},\ldots,y_{n})$ is either identically 0 or never 0 on the interval $[a,b]$.

Proof For $n=2$: since $y_{1}$ and $y_{2}$ are solutions of $L(y) = y^{\prime\prime} + a_{1}y^{\prime} + a_{0}y = 0$, we have

$\displaystyle \frac{d}{dx}W(y_{1},y_{2})$ $\displaystyle =$ $\displaystyle \left\vert\begin{array}{rr}
y_{1}^{\prime}&y_{2}^{\prime}\\
y_{1}^{\prime}&y_{2}^{\prime}
\end{array}\right\vert
+ \left\vert\begin{array}{rr}
y_{1}&y_{2}\\
y_{1}^{\prime\prime}&y_{2}^{\prime\prime}
\end{array}\right\vert
= \left\vert\begin{array}{rr}
y_{1}&y_{2}\\
y_{1}^{\prime\prime}&y_{2}^{\prime\prime}
\end{array}\right\vert$  
  $\displaystyle =$ $\displaystyle \left\vert\begin{array}{rr}
y_{1}&y_{2}\\
-a_{1}y_{1}^{\prime}-a_{0}y_{1}&-a_{1}y_{2}^{\prime}-a_{0}y_{2}\end{array}\right\vert$  
  $\displaystyle =$ $\displaystyle -a_{1}W(y_{1},y_{2}) .$  

Thus

$\displaystyle \frac{d}{dx}W(y_{1},y_{2}) + a_{1}(x)W(y_{1},y_{2}) = 0. $

This is a first-order linear differential equation with integrating factor $\mu(x) = \exp(\int_{x_{0}}^{x}a_{1}(t)\,dt)$. Multiplying by $\mu$, we get

$\displaystyle \frac{d}{dx}(\mu(x)W(y_{1},y_{2})) = 0$

Integrating

$\displaystyle W(y_{1},y_{2})(x) = c\exp\left(-\int_{x_{0}}^{x}a_{1}(t)\,dt\right) \quad (c: \mbox{constant}) . $

Now let $x = x_{0}$. Then $c = W(y_{1},y_{2})(x_{0})$ and

$\displaystyle W(y_{1},y_{2})(x) = W(y_{1},y_{2})(x_{0})\exp(-\int_{x_{0}}^{x}a_{1}(t)dt) $

Since the exponential factor is never 0, $W(y_{1},y_{2})$ is identically 0 on $[a,b]$ if $W(y_{1},y_{2})(x_{0}) = 0$ and never 0 otherwise. $\ \blacksquare$
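The formula just derived (Abel's formula) can be checked symbolically for a concrete equation. The sketch below is our own verification, using the equation $y^{\prime\prime} + 3y^{\prime} + 2y = 0$ from Example 2.3, so that $a_{1} = 3$ and $x_{0} = 0$.

\begin{verbatim}
import sympy as sp

x, t = sp.symbols('x t')
y1, y2 = sp.exp(-x), sp.exp(-2*x)   # solutions of y'' + 3y' + 2y = 0, a1 = 3

# Wronskian computed directly from the definition
W = sp.simplify(sp.Matrix([[y1, y2],
                           [sp.diff(y1, x), sp.diff(y2, x)]]).det())

# Abel's formula: W(x) = W(x0) * exp(-integral of a1 from x0 to x)
x0 = 0
abel = W.subs(x, x0) * sp.exp(-sp.integrate(3, (t, x0, x)))

print(sp.simplify(W - abel))   # prints 0, so the two expressions agree
\end{verbatim}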

Theorem 2..5   If $y_{1},y_{2},\ldots,y_{n}$ are solutions of the homogeneous differential equation on $[a,b]$, then the following conditions are equivalent.
(1) $\{y_{1},y_{2},\ldots,y_{n}\}$ are linearly independent on the interval $[a,b]$.
(2) $W(y_{1},\ldots,y_{n})(x) \neq 0$ for all $x \in [a,b]$.

Proof For $n=2$, suppose a linear combination of $y_{1}$ and $y_{2}$ is 0:

$\displaystyle c_{1}y_{1} + c_{2}y_{2} = 0 . $

Differentiate with respect to $x$. Then

$\displaystyle c_{1}y^{\prime}_{1} + c_{2}y^{\prime}_{2} = 0 $

Now use Cramer's rule to solve this system for $c_1$ and $c_2$:

$\displaystyle c_{1} = \frac{0}{W(y_{1},y_{2})}, \ c_{2} = \frac{0}{W(y_{1},y_{2})} . $

If $W(y_{1},y_{2}) \neq 0$, then $c_{1} = c_{2} = 0$, and thus $\{y_{1},y_{2}\}$ are linearly independent. Conversely, if $\{y_{1},y_{2}\}$ are linearly independent, then $c_{1} = c_{2} = 0$ must be the only solution of this system, which requires $W(y_{1},y_{2}) \neq 0$ $\ \blacksquare$

Example 2..2   Find the general solution of $y^{(4)} = 0$.

SOLUTION The dimension of the solution space is 4. $y_{1} = 1, y_{2} = x, y_{3} = x^{2}, y_4 = x^3$ are solutions of the differential equation. So, we need to show they are linearly independent.

$\displaystyle W(1,x,x^{2},x^3) = \left\vert\begin{array}{rrrr}
1&x&x^{2}&x^3\\
0&1&2x&3x^2\\
0&0&2&6x\\
0&0&0&6
\end{array}\right\vert = 12. $

Thus, the Wronskian is not 0, and therefore the solutions are linearly independent. The general solution is

$\displaystyle y = c_{1} + c_{2}x + c_{3}x^{2} + c_4 x^3\ensuremath{\ \blacksquare}$
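The Wronskian and the general solution in this example can be double-checked with sympy; the sketch below is only our own verification aid, not part of the solution.

\begin{verbatim}
import sympy as sp

x = sp.symbols('x')
funcs = [sp.Integer(1), x, x**2, x**3]       # candidate basis for y'''' = 0

# Wronskian: 4 x 4 matrix of derivatives, as in the text
M = sp.Matrix(4, 4, lambda i, j: sp.diff(funcs[j], x, i))
print(M.det())                               # prints 12, nonzero

# Cross-check with dsolve (constants may be named or ordered differently)
y = sp.Function('y')
print(sp.dsolve(sp.diff(y(x), x, 4), y(x)))
# Eq(y(x), C1 + C2*x + C3*x**2 + C4*x**3)
\end{verbatim}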

Example 2..3   Suppose that $y = e^{mx}$ is a solution of $L(y) = y^{\prime\prime} + 3y^{\prime} + 2y = 0$. Then find a fundamental set of solutions.

SOLUTION Since

$\displaystyle L(e^{mx})$ $\displaystyle =$ $\displaystyle m^{2}e^{mx} + 3me^{mx} + 2e^{mx}$  
  $\displaystyle =$ $\displaystyle e^{mx}(m^{2} + 3m + 2)$  
  $\displaystyle =$ $\displaystyle e^{mx}(m+1)(m+2),$  

$m = -1, -2$ implies that $L(e^{mx}) \equiv 0$.

$\displaystyle W(e^{-x},e^{-2x}) = \left\vert\begin{array}{rr}
e^{-x}&e^{-2x}\\
-e^{-x}&-2e^{-2x}
\end{array}\right\vert = -e^{-3x} \neq 0 $

Thus, $\{e^{-x},e^{-2x}\}$ is a fundamental set of solutions. $\ \blacksquare$
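A short sympy check of this example (our own sketch): factoring the characteristic polynomial and confirming that both exponentials satisfy the equation.

\begin{verbatim}
import sympy as sp

x, m = sp.symbols('x m')

# Characteristic polynomial of y'' + 3y' + 2y = 0
print(sp.factor(m**2 + 3*m + 2))   # (m + 1)*(m + 2), roots m = -1, -2

# Both exponentials satisfy the homogeneous equation
for y in (sp.exp(-x), sp.exp(-2*x)):
    print(sp.simplify(sp.diff(y, x, 2) + 3*sp.diff(y, x) + 2*y))   # 0
\end{verbatim}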

Theorem 2..6   Suppose $y_{p}$ is a particular solution of the $n$th-order linear differential equation $L(y) = f(x)$ and $y_{c}$ is the general solution of $L(y) = 0$. Then $y(x) = y_{c}(x) + y_{p}(x)$ is the general solution of $L(y) = f(x)$.

Proof By the assumption, $L(y_{p}) = f(x), L(y_{c}) = 0$. Now by the linearity of $L$, we have

$\displaystyle L(y_{c} + y_{p}) = L(y_{c}) + L(y_{p}) = f(x). $

Thus $y_{c} + y_{p}$ is a solution of $L(y) = f(x)$. Since $y_{c}(x)$ contains $n$ constants, $y_{c}(x) + y_{p}(x)$ is the general solution. $\ \blacksquare$

$y_{c}(x)$ is called the complementary solution. By this theorem, if a particular solution of $L(y) = f(x)$ is found, then to find the general solution it is enough to find the complementary solution of $L(y) = 0$.

Example 2..4   Show that $x - 1$ is a particular solution of $y^{\prime\prime} + 3y^{\prime} + 2y = 2x + 1$. Then find the general solution.

SOLUTION Since $y = x-1$ gives $y^{\prime} = 1$ and $y^{\prime\prime} = 0$, we have $y^{\prime\prime} + 3y^{\prime} + 2y = 0 + 3 + 2(x-1) = 2x + 1$. Also, the complementary solution of $L(y) = 0$ was found in Example 2.3. Thus

$\displaystyle y_{c}(x) = c_{1}e^{-x} + c_{2}e^{-2x} . $

Therefore, the general solution is

$\displaystyle y(x) = \underbrace{c_{1}e^{-x} + c_{2}e^{-2x}}_{y_{c}} + \underbrace{x-1}_{y_{p}}\ensuremath{\ \blacksquare}$
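This example can also be verified with sympy's dsolve; the following sketch (ours, not part of the solution) confirms both the particular solution and the general solution.

\begin{verbatim}
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Check that x - 1 is a particular solution of y'' + 3y' + 2y = 2x + 1
yp = x - 1
print(sp.simplify(sp.diff(yp, x, 2) + 3*sp.diff(yp, x) + 2*yp - (2*x + 1)))  # 0

# General solution = complementary solution + particular solution
ode = sp.Eq(sp.diff(y(x), x, 2) + 3*sp.diff(y(x), x) + 2*y(x), 2*x + 1)
print(sp.dsolve(ode, y(x)))
# Eq(y(x), C1*exp(-2*x) + C2*exp(-x) + x - 1)  (up to naming of constants)
\end{verbatim}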

If $f(x)$ is given as a sum of $f_{1}(x)$ and $f_{2}(x)$, then it is often easier to consider $L(y) = f_{1}(x)$ and $L(y) = f_{2}(x)$ separately.

Theorem 2..7   Suppose that $y_{p_{1}}$ is a solution of $L(y) = f_{1}(x)$, and $y_{p_{2}}$ is a solution of $\ L(y) = f_{2}(x)$. Then $y_{p_{1}} + y_{p_{2}}$ is a solution of $L(y) = f_{1}(x) + f_{2}(x)$.

Proof.

$\displaystyle L(y_{p_1} + y_{p_2}) = L(y_{p_1}) + L(y_{p_2}) = f_{1}(x) + f_{2}(x)\ensuremath{\ \blacksquare}$

For example, to find the general solution of the differential equation

$\displaystyle y^{\prime\prime} - 5y^{\prime} + 2y = \sin{x} + x^{2}, $

find (1) $y_{c}$ of $y^{\prime\prime} - 5y^{\prime} + 2y = 0 $
(2) $y_{p_{1}}$ of $y^{\prime\prime} - 5y^{\prime} + 2y = \sin{x}$
(3) $y_{p_{2}}$ of $y^{\prime\prime} - 5y^{\prime} + 2y = x^{2}$
and then take the sum

$\displaystyle y = y_{c} + y_{p_{1}} + y_{p_{2}} $
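As a rough illustration of this procedure (a sketch of ours, not a worked solution from the text), sympy can produce $y_{c}$, $y_{p_{1}}$, and $y_{p_{2}}$ and confirm that their sum solves $L(y) = \sin{x} + x^{2}$; here the particular solutions are obtained by setting the arbitrary constants in dsolve's answer to zero.

\begin{verbatim}
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
C1, C2 = sp.symbols('C1 C2')

def L(u):
    return sp.diff(u, x, 2) - 5*sp.diff(u, x) + 2*u

# (1) complementary solution of L(y) = 0
yc = sp.dsolve(L(y(x)), y(x)).rhs

# (2), (3) particular solutions for each piece of the forcing
yp1 = sp.dsolve(sp.Eq(L(y(x)), sp.sin(x)), y(x)).rhs.subs({C1: 0, C2: 0})
yp2 = sp.dsolve(sp.Eq(L(y(x)), x**2), y(x)).rhs.subs({C1: 0, C2: 0})

# By Theorem 2.7, y = yc + yp1 + yp2 solves L(y) = sin(x) + x^2
total = yc + yp1 + yp2
print(sp.simplify(L(total) - (sp.sin(x) + x**2)))   # prints 0
\end{verbatim}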


