ORDINARY DIFFERENTIAL
EQUATIONS
FOURTH EDITION
Garrett Birkhoff
Harvard University
Gian-Carlo Rota
Massachusetts Institute of Technology
WILEY
JOHN WILEY & SONS  New York · Chichester · Brisbane · Toronto · Singapore
Copyright © 1959, 1960, 1962, 1969, 1978, and 1989 by John Wiley & Sons, Inc.
All rights reserved. Published simultaneously in Canada.
Reproduction or translation of any part of this work beyond that permitted by Sections 107 and 108 of the 1976 United States Copyright Act without the permission of the copyright owner is unlawful. Requests for permission or further information should be addressed to the Permissions Department, John Wiley & Sons.
Library of Congress Cataloging in Publication Data:
Birkhoff, Garrett, 1911-
Ordinary differential equations.
Bibliography: p. 392. Includes index.
1. Differential equations. I. Rota, Gian-Carlo, 1932- . II. Title.
QA372.B58 1989    515.3'52    88-14231
ISBN 0-471-86003-4
Printed in the United States of America 10 9 8 7 6 5 4 3 2 1
PREFACE
The theory of differential equations is distinguished for the wealth of its ideas and methods. Although this richness makes the subject attractive as a field of research, the inevitably hasty presentation of its many methods in elementary courses leaves many students confused. One of the chief aims of the present text is to provide a smooth transition from memorized formulas to the critical understanding of basic theorems and their proofs.
We have tried to present a balanced account of the most important key ideas of the subject in their simplest context, often that of second-order equations. We have deliberately avoided the systematic elaboration of these key ideas, feeling that this is often best done by the students themselves. After they have grasped the underlying methods, they can often best develop mastery by generalizing them (say, to higher-order equations or to systems) by their own efforts.
Our exposition presupposes primarily the calculus and some experience with the formal manipulation of elementary differential equations. Beyond this requirement, only an acquaintance with vectors, matrices, and elementary complex functions is assumed throughout most of the book.
In this fourth edition, the first eight chapters have again been carefully revised. Thus simple numerical methods, which provide convincing empirical evidence for the well-posedness of initial value problems, are already introduced in the first chapter. Without compromising our emphasis on advanced ideas and proofs, we have supplied detailed reviews of elementary facts for convenient reference. Valuable criticisms and suggestions by Calvin Wilcox have helped to eliminate many obscurities and troublesome errors.
The book falls broadly into three parts. Chapters 1 through 4 constitute a review of material to which, presumably, the student has already been exposed in elementary courses. The review serves two purposes: first, to fill the inevitable gaps in the student's mastery of the elements of the subject, and, second, to give a rigorous presentation of the material, which is motivated by simple examples. This part covers elementary methods of integration of first-order, second-order linear, and nth-order linear constant-coefficient, differential equations. Besides reviewing elementary methods, Chapter 3 introduces the concepts of transfer function and the Nyquist diagram with their relation to Green's functions. Although widely used in communications engineering for many years, these concepts are ignored in most textbooks on differential equations. Finally, Chapter
4 provides rigorous discussions of solution by power series and the method of majorants.
Chapters 5 through 8 deal with systems of nonlinear differential equations. Chapter 5 discusses plane autonomous systems, including the classification of nondegenerate critical points, and introduces the important notion of stability and Liapunov's method, which is then applied to some of the simpler types of nonlinear oscillations. Chapter 6 includes theorems of existence, uniqueness, and continuity, both in the small and in the large, and introduces the perturbation equations.
Chapter 7 gives rigorous error bounds for the methods introduced in Chapter 1, analyzing their rates of convergence. Chapter 8 then motivates and analyzes more sophisticated methods having higher orders of accuracy.
Finally, Chapters 9 through 11 are devoted to the study of second-order linear differential equations. Chapter 9 develops the theory of regular singular points in the complex domain, with applications to some important special functions. In this discussion, we assume familiarity with the concepts of pole and branch point. Chapter 10 is devoted to Sturm-Liouville theory and related asymptotic formulas, for both finite and infinite intervals. Chapter 11 establishes the completeness of the eigenfunctions of regular Sturm-Liouville systems, assuming knowledge of only the basic properties of Euclidean vector spaces (inner product spaces).
Throughout our book, the properties of various important special functions-notably Bessel functions, hypergeometric functions, and the more common orthogonal polynomials-are derived from their defining differential equations and boundary conditions. In this way we illustrate the theory of ordinary differential equations and show its power.
This textbook also contains several hundred exercises of varying difficulty, which form an important part of the course. The most difficult exercises are starred.
It is a pleasure to thank John Barrett, Fred Brauer, Thomas Brown, Nathaniel Chafee, Lamberto Cesari, Abol Ghaffari, Andrew Gleason, Erwin Kreyszig, Carl Langenhop, Norman Levinson, Robert Lynch, Lawrence Markus, Frank Stewart, Feodor Theilheimer, J. L. Walsh, and Henry Wente for their comments, criticisms, and help in eliminating errors.
Garrett Birkhoff Gian-Carlo Rota
Cambridge, Massachusetts
CONTENTS
1 FIRST-ORDER DIFFERENTIAL EQUATIONS 1 1. Introduction 1 2. Fundamental Theorem of the Calculus 2 3. First-order Linear Equations 7 4. Separable Equations 9 5. Quasilinear Equations; Implicit Solutions 11 6. Exact Differentials; Integrating Factors 15 7. Linear Fractional Equations 17 8. Graphical and Numerical Integration 20 9. The Initial Value Problem 24
*10. Uniqueness and Continuity 26 *11. A Comparison Theorem 29 *12. Regular and Normal Curve Families 31
2 SECOND-ORDER LINEAR EQUATIONS 34 1. Bases of Solutions 34 2. Initial Value Problems 37 3. Qualitative Behavior; Stability 39 4. Uniqueness Theorem 40 5. The Wronskian 43 6. Separation and Comparison Theorems 47 7. The Phase Plane 49 8. Adjoint Operators; Lagrange Identity 54 9. Green's Functions 58
*10. Two-endpoint Problems 63 *11. Green's Functions, II 65
3 LINEAR EQUATIONS WITH CONSTANT COEFFICIENTS 71 1. The Characteristic Polynomial 71 2. Complex Exponential Functions 72 3. The Operational Calculus 76 4. Solution Bases 78 5. Inhomogeneous Equations 83
6. Stability 85 7. The Transfer Function 86 *8. The Nyquist Diagram 90 *9. The Green's Function 93
4 POWER SERIES SOLUTIONS 99
1. Introduction 99 2. Method of Undetermined Coefficients 101 3. More Examples 105 4. Three First-order DEs 107 5. Analytic Functions 110 6. Method of Majorants 113 *7. Sine and Cosine Functions 116 *8. Bessel Functions 117 9. First-order Nonlinear DEs 121 10. Radius of Convergence 124 *11. Method of Majorants, II 126 *12. Complex Solutions 128
5 PLANE AUTONOMOUS SYSTEMS 131
1. Autonomous Systems 131 2. Plane Autonomous Systems 134 3. The Phase Plane, II 136 4. Linear Autonomous Systems 141 5. Linear Equivalence 144 6. Equivalence Under Diffeomorphisms 151 7. Stability 153 8. Method of Liapunov 157 9. Undamped Nonlinear Oscillations 158 10. Soft and Hard Springs 159 11. Damped Nonlinear Oscillations 163 *12. Limit Cycles 164
6 EXISTENCE AND UNIQUENESS THEOREMS 170
1. Introduction 170 2. Lipschitz conditions 172 3. Well-posed Problems 174 4. Continuity 177 *5. Normal Systems 180 6. Equivalent Integral Equation 183 7. Successive Approximation 185 8. Linear Systems 188 9. Local Existence Theorem 190
*10. The Peano Existence Theorem 191 *11. Analytic Equations 193 *12. Continuation of Solutions 197 *13. The Perturbation Equation 198
7 APPROXIMATE SOLUTIONS 204
1. Introduction 204 2. Error Bounds 205 *3. Deviation and Error 207 4. Mesh-halving; Richardson Extrapolation 210 5. Midpoint Quadrature 212 6. Trapezoidal Quadrature 215 *7. Trapezoidal Integration 218 8. The Improved Euler Method 222 *9. The Modified Euler Method 224 *10. Cumulative Error Bound 226
8 EFFICIENT NUMERICAL INTEGRATION 230
1. Difference Operators 230 2. Polynomial Interpolation 232 *3. Interpolation Errors 235 4. Stability 237 *5. Numerical Differentiation; Roundoff 240 *6. Higher Order Quadrature 244 *7. Gaussian Quadrature 248 8. Fourth-order Runge-Kutta 250 *9. Milne's Method 256 *10. Multistep Methods 258
9 REGULAR SINGULAR POINTS 261
1. Introduction 261 *2. Movable Singular Points 263 3. First-order Linear Equations 264 4. Continuation Principle; Circuit Matrix 268 5. Canonical Bases 270 6. Regular Singular Points 274 7. Bessel Equation 276 8. The Fundamental Theorem 281 *9. Alternative Proof of the Fundamental Theorem 285 *10. Hypergeometric Functions 287 *11. The Jacobi Polynomials 289 *12. Singular Points at Infinity 292 *13. Fuchsian Equations 294
10 STURM-LIOUVILLE SYSTEMS 300
1. Sturm-Liouville Systems 300 2. Sturm-Liouville Series 302 *3. Physical Interpretations 305 4. Singular Systems 308 5. Prüfer Substitution 312 6. Sturm Comparison Theorem 313 7. Sturm Oscillation Theorem 314 8. The Sequence of Eigenfunctions 318 9. The Liouville Normal Form 320 10. Modified Prüfer Substitution 323 *11. The Asymptotic Behavior of Bessel Functions 326 12. Distribution of Eigenvalues 328 13. Normalized Eigenfunctions 329 14. Inhomogeneous Equations 333 15. Green's Functions 334 *16. The Schroedinger Equation 336 *17. The Square-well Potential 338 *18. Mixed Spectrum 339
11 EXPANSIONS IN EIGENFUNCTIONS 344
1. Fourier Series 344 2. Orthogonal Expansions 346 3. Mean-square Approximation 347 4. Completeness 350 5. Orthogonal Polynomials 352 *6. Properties of Orthogonal Polynomials 354 *7. Chebyshev Polynomials 358 8. Euclidean Vector Spaces 360 9. Completeness of Eigenfunctions 363 *10. Hilbert Space 365 *11. Proof of Completeness 367
APPENDIX A: LINEAR SYSTEMS 371
1. Matrix Norm 371 2. Constant-coefficient Systems 372 3. The Matrizant 375 4. Floquet Theorem; Canonical Bases 377
APPENDIX B: BIFURCATION THEORY 380
1. What Is Bifurcation? 380 *2. Poincaré Index Theorem 381
3. Hamiltonian Systems 383 4. Hamiltonian Bifurcations 386 5. Poincaré Maps 387 6. Periodically Forced Systems 389
BIBLIOGRAPHY 392
INDEX 395
CHAPTER 1
FIRST-ORDER DIFFERENTIAL
EQUATIONS
1 INTRODUCTION
A differential equation is an equation between specified derivatives of an unknown function, its values, and known quantities and functions. Many physical laws are most simply and naturally formulated as differential equations (or DEs, as we will write for short). For this reason, DEs have been studied by the greatest mathematicians and mathematical physicists since the time of Newton.
Ordinary differential equations are DEs whose unknowns are functions of a single variable; they arise most commonly in the study of dynamical systems and electrical networks. They are much easier to treat than partial differential equations, whose unknown functions depend on two or more independent variables.
Ordinary DEs are classified according to their order. The order of a DE is defined as the largest positive integer, n, for which an nth derivative occurs in the equation. Thus, an equation of the form
Φ(x, y, y') = 0
is said to be of the first order. This chapter will deal with first-order DEs of the special form
(1)
M(x,y) + N(x,y)y' = 0
A DE of the form (1) is often said to be of the first degree. This is because, considered as a polynomial in the derivative of highest order, y', it is of the first
degree. One might think that it would therefore be called "linear," but this name is
reserved (within the class of first-order DEs) for DEs of the much more special
form a(x)y' + b(x)y + c(x) = 0, which are linear in y and its derivatives. Such
"linear" DEs will be taken up in §3, and we shall call first-order DEs of the more
general form (1) quasilinear.
A primary aim of the study of differential equations is to find their solutions-
that is, functions y = f(x) which satisfy them. In this chapter, we will deal with
the following special case of the problem of "solving" given DEs.
DEFINITION. A solution of (1) is a function f(x) such that M(x, f(x)) + N(x, f(x))f'(x) = 0 for all x in the interval where f(x) is defined.
The problem of solving (1) for given functions M(x,y) and N(x,y) is thus to
determine all real functions y = f(x) which satisfy (1), that is, all its solutions.
Example 1. Consider the first-order quasilinear DE
(2)
X + yy' = 0
The solutions of (2) can be found by considering the formula d(x² + y²)/dx = 2(x + yy'). Clearly, y = f(x) is a solution of (2) if and only if x² + y² is a constant, say x² + y² = C.
The equation x² + y² = C defines y implicitly as a two-valued function of x, for any positive constant C. Solving for y, we get for each positive constant C two solutions, the (single-valued)† functions y = ±√(C - x²). The graphs of these solutions, the so-called solution curves, form two families of semicircles. These fill the upper half-plane y > 0 and the lower half-plane y < 0, respectively, in that there is one and only one such semicircle through each point in each half-plane.
Caution. Note that the functions y = ±√(C - x²) are defined only in the interval -√C < x < √C, and that since y' does not exist (is "infinite") when x = ±√C, these functions are solutions of (2) only on -√C < x < √C. Therefore, although the pairs of semicircles in Figure 1.1 appear to join together to form the full circle x² + y² = C, the latter is not a "solution curve" of (2). In fact, no solution curve of (2) can cross the x-axis (except possibly at the origin), because on the x-axis (y = 0) the DE (2) implies x = 0 for any finite y'.
The preceding difficulty also arises if one tries to solve the DE (2) for y'. Dividing through by y, one gets y' = -x/y, an equation which cannot be satisfied if y = 0. The preceding difficulty is thus avoided if one restricts attention to regions where the DE (1) is normal, in the following sense.
DEFINITION. A normal first-order DE is one of the form
(3)
y' = F(x,y)
In the normal form y' = -x/y of the DE (2), the function F(x,y) is continuous in the upper half-plane y > 0 and in the lower half-plane where y < 0; it is undefined on the x-axis.
2 FUNDAMENTAL THEOREM OF THE CALCULUS
Although the importance of the theory of (ordinary) DEs stems primarily from its many applications to geometry, science, and engineering, a clear under-
† In this book, the word "function" will always mean single-valued function, unless the contrary is
expressly specified.
Figure 1.1 Integral curves of x + yy' = 0.
standing of its capabilities can only be achieved if its definitions and results are formulated precisely. Some of its most difficult results concern the existence and uniqueness of solutions. The nature of such existence and uniqueness theorems is well illustrated by the most familiar (and simplest!) class of ordinary DEs. These are the first-order DEs of the very special form
(4)
y' = g(x)
Such DEs are normal; their solutions are described by the fundamental theorem of the calculus, which reads as follows.
FUNDAMENTAL THEOREM OF THE CALCULUS. Let the function g(x) in the
DE (4) be continuous in the interval a < x < b. Given a number c, there is one and
only one solution f(x) of the DE (4) in the interval such that f(a) = c. This solution is
given by the definite integral
(5)
f(x) = c + ∫_a^x g(t) dt,    c = f(a)
This basic result serves as a model of rigorous formulation in several respects.
First, it specifies the region under consideration, as a vertical strip a < x < b
in the xy-plane. Second, it describes in precise terms the class of functions g(x)
considered. And third, it asserts the existence and uniqueness of a solution, given
the "initial condition" f(a) = c.
We recall that the definite integral
(5')
∫_a^x g(t) dt
is defined for each fixed x as a limit of Riemann sums; it is not necessary to find
a formal expression for the indefinite integral ∫ g(x) dx to give meaning to the
definite integral ∫_a^x g(t) dt, provided only that g(t) is continuous. Such functions
as the error function erf x = (2/√π) ∫_0^x e^{-t²} dt and the sine integral function
Si (x) = ∫_0^x [(sin t)/t] dt are indeed commonly defined as definite integrals; cf.
Ch. 4, §1.
Quadrature. The preceding considerations enable one to solve DEs of the special form y' = g(x) by inspection: for any a, one solution is the function ∫_a^x g(t) dt; the others are obtained by adding an arbitrary constant C to this "particular" solution. Thus, the solutions of y' = e^{-x²} are the functions y = ∫ e^{-x²} dx = (√π/2) erf x + C; those of xy' = sin x are the functions y = Si (x) + C; and so on. Note that from any one solution curve of y' = g(x), the others are obtained by the vertical translations (x, y) ↦ (x, y + C).† Thus, they form a one-parameter family of curves, one for each value of the parameter C. This important geometrical fact is illustrated in Figure 1.2.
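The definition of the definite integral as a limit of Riemann sums can be exercised directly. The sketch below is an illustration added in editing; the endpoint x = 1 and the number of subintervals are arbitrary choices. It compares a Riemann sum for ∫_0^x e^{-t²} dt with (√π/2) erf x, the latter computed from the standard library's erf.

```python
import math

def riemann(g, a, x, n=20000):
    """Left-endpoint Riemann sum approximating the integral of g from a to x."""
    h = (x - a) / n
    return sum(g(a + k * h) for k in range(n)) * h

g = lambda t: math.exp(-t * t)
x = 1.0
# The particular solution of y' = exp(-x^2) with y(0) = 0:
approx = riemann(g, 0.0, x)
exact = (math.sqrt(math.pi) / 2) * math.erf(x)
assert abs(approx - exact) < 1e-4
print(approx, exact)
```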
After y' = g(x), the simplest type of DE is y' = g(y). Any such DE is invariant under horizontal translation (x, y) ↦ (x + c, y). Hence, any horizontal line is cut by all solution curves at the same angle (such lines are called "isoclines"), and any horizontal translate y = φ(x + c) of any solution curve y = φ(x) is again a solution curve.
The DE y' = y is the most familiar DE of this form. It can be solved by rewriting it as dy/y = dx; integrating, we get x = ln |y| + c, or y = ±e^{x-c}, where c is an arbitrary constant. Setting k = ±e^{-c}, we get the general solution y = ke^x; but the solution y = 0 is "lost" until the last step.
Example 2. A similar procedure can be applied to any DE of the form y' = g(y). Thus consider
(6)
y' = y² - 1
Since y² - 1 = (y + 1)(y - 1), the constant functions y = -1 and y = 1 are
particular solutions of (6). Since y² > 1 if |y| > 1 whereas y² < 1 if -1 < y
Figure 1.2 Solution curves of y' = e^{-x²}.
† The symbol ↦ is to be read as "goes into".
< 1, all solutions are decreasing functions in the strip |y| < 1 and increasing functions outside it; see Figure 1.3.
Using the partial fraction decomposition 2/(y² - 1) = 1/(y - 1) - 1/(y + 1), one can rewrite (6) as 2 dx = dy/(y - 1) - dy/(y + 1), from which we obtain, by integrating, 2(x - c) = ln |(y - 1)/(y + 1)|. Exponentiating both sides, we get ±e^{2(x-c)} = (y - 1)/(y + 1), which reduces after some manipulation to
(6')
y = (1 ∓ e^{2(x-c)})/(1 ± e^{2(x-c)}) = tanh (c - x) or coth (c - x)
This procedure "loses" the special solutions y = 1 and y = -1, but gives all others. Note that if y = f(x) is a solution of (6), then so is 1/y = 1/f(x), as can be directly verified from (6) (provided y ≠ 0).
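A numerical spot-check of the two families of solutions, added in editing for illustration (the constant c and the sample points are arbitrary): both tanh (c - x) and coth (c - x) satisfy y' = y² - 1.

```python
import math

def deriv(f, x, h=1e-6):
    """Central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

c = 0.7  # arbitrary sample constant
tanh_sol = lambda x: math.tanh(c - x)
coth_sol = lambda x: math.cosh(c - x) / math.sinh(c - x)  # coth(c - x)

# coth(c - x) is singular at x = c, so sample points avoid it.
for f, pts in ((tanh_sol, (-1.0, 0.0, 2.0)), (coth_sol, (-1.0, 2.0))):
    for x in pts:
        assert abs(deriv(f, x) - (f(x) ** 2 - 1)) < 1e-5
print("tanh(c - x) and coth(c - x) both satisfy y' = y^2 - 1")
```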
Example 3. A more complicated DE tractable by the same methods is y' = y³ - y. Since y³ - y = y(y + 1)(y - 1), the constant functions y = -1, y = 0, and y = 1 are particular solutions. Since y³ > y if -1 < y < 0 or 1 < y, whereas y³ < y if y < -1 or 0 < y < 1, all solutions are increasing functions in the strips -1 < y < 0 and y > 1, and decreasing in the complementary strips.
To find the other solutions, we replace the DE y' = dy/dx = y³ - y by its reciprocal, dx/dy = 1/(y³ - y). We then use partial fractions to obtain the DE
(6")
dx/dy = 1/(y³ - y) = ½ {1/(y + 1) + 1/(y - 1) - 2/y}
The DE (6") can be integrated termwise to give, after some manipulation,
x = ½ ln |1 - y⁻²| + c, or y = ±[1 + exp (2x - k)]^{-1/2}, k = 2c.
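The closed form just derived can be spot-checked the same way. The sketch below (an illustration added in editing; the constant k and the sample points are arbitrary) verifies that the branch y = [1 + e^{2x-k}]^{-1/2} satisfies y' = y³ - y.

```python
import math

def deriv(f, x, h=1e-6):
    """Central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

k = 0.4  # sample constant, k = 2c
y = lambda x: (1 + math.exp(2 * x - k)) ** -0.5  # one branch, 0 < y < 1

for x in (-2.0, 0.0, 1.5):
    assert abs(deriv(y, x) - (y(x) ** 3 - y(x))) < 1e-6
print("y = [1 + exp(2x - k)]**(-1/2) satisfies y' = y^3 - y")
```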
Symmetry. The labor of drawing solution curves of the preceding DEs is reduced not only by their invariance under horizontal translation, but by the use of other symmetries as well. Thus, the DEs y' = y and y' = y³ - y are invariant under reflection in the x-axis [i.e., under (x, y) ↦ (x, -y)]; hence, so are their solution curves. Likewise, the DEs y' = 1 + y² and y' = y² - 1 (and their solution curves) are invariant under (x, y) ↦ (-x, -y), i.e., under rotation through 180° about the origin. These symmetries are visible in Figures 1.3 and 1.4.
EXERCISES A
1. (a) Show that if f(x) satisfies (6), then so do 1/f(x) and -f(-x). (b) Explain how these facts relate to Figure 1.3.
2. Show that every solution curve (6') of (6) is equivalent under horizontal translation and/or reflection in the x-axis to y = (1 + e^{2x})/(1 - e^{2x}) or to y = (1 - e^{2x})/(1 + e^{2x}).
3. (a) Show that if y' = y² + 1, then y is an increasing function and x = arctan y + c. (b) Infer that no solution of y' = y² + 1 can be defined on an interval of length exceeding π.
Figure 1.3 Solution curves of y' = y² - 1.
(c) Show that a nonhorizontal solution curve of y' = y² ± 1 has a point of inflection on the x-axis and nowhere else.
4. Show that the solution curves of y' = y² are the x-axis and rectangular hyperbolas having this for one asymptote. [HINT: Rewrite y' = y² as dy/y² = dx.]
5. Sketch sample solution curves to indicate the qualitative behavior of the solutions of the following DEs: (a) y' = 1 - y³, (b) y' = sin πy, (c) y' = sin² y.
6. Show that the solutions of y' = g(y), for any continuous function g, are either all increasing functions or all decreasing functions in any strip y_{i-1} < y < y_i between successive zeros of g(y) [i.e., values y_i such that g(y_i) = 0].
7. Show that the solutions of y' = g(y) are convex up or convex down for given y according as |g| is an increasing or decreasing function of y there.
Figure 1.4 Solution curves of y' = y³ - y.
*8. (a) Prove in detail that any nonconstant solution of (6) must satisfy
x = c + ½ ln |(y - 1)/(y + 1)|
(b) Solve (6") in detail, discussing the case k = 0 and the limiting case k = ∞ (y = 0).
*9. (a) Show that the choice k < 0 in (6') gives solutions in the strip -1 < y < 1.
(b) Show that the choice k = 1 gives two solutions having the positive and negative y-axes for asymptotes, respectively.
3 FIRST-ORDER LINEAR EQUATIONS
In the next five sections, we will recall some very elementary, but extremely useful methods for solving important special families of first-order DEs. We begin with the first-order linear DE
(7)
a(x)y' + b(x)y + c(x) = 0
It is called homogeneous if c(x) ≡ 0, and inhomogeneous otherwise.
Let the coefficient functions a, b, c be continuous. In any interval I where a(x)
does not vanish, the linear DE (7) can be reduced to the normal form
(8)
y' = -p(x)y - q(x)
with continuous coefficient functions p = b/a and q = c/a.
The homogeneous linear case y' = -p(x)y of (8) is solved easily, if not rigorously, as follows. We separate variables, dy/y = -p(x) dx; then we integrate (by quadratures), ln |y| = -∫ p(x) dx + C. Exponentiating both sides, we obtain |y| = Ke^{-∫p(x)dx}, where K = e^C and any indefinite integral P(x) = ∫ p(x) dx may be used.
This heuristic reasoning suggests that, if P'(x) = p(x), then ye^{P(x)} is a constant.
Though this result was derived heuristically, it is easily verified rigorously:
d/dx [ye^{P(x)}] = [y' + p(x)y] e^{P(x)} = 0
if and (since e^{P(x)} ≠ 0) only if y satisfies y' + p(x)y = 0. This proves the following result.
THEOREM 1. If P(x) = ∫ p(x) dx is an indefinite integral of the continuous function p, then the function ce^{-P(x)} = ce^{-∫p(x)dx} is a solution of the DE y' + p(x)y = 0 for any constant c, and all solutions of the DE are of this form.
* The more difficult exercises in this book are starred.
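Theorem 1 is easy to test numerically for a sample coefficient function, say p(x) = cos x with P(x) = sin x; this choice of example and the sample points are ours, added in editing for illustration.

```python
import math

def deriv(f, x, h=1e-6):
    """Central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

p = math.cos                       # sample coefficient p(x)
P = math.sin                       # an indefinite integral of p
c = 3.0                            # arbitrary constant
y = lambda x: c * math.exp(-P(x))  # claimed solution of y' + p(x)y = 0

for x in (-1.0, 0.0, 2.0):
    assert abs(deriv(y, x) + p(x) * y(x)) < 1e-5
print("c*exp(-P(x)) solves y' + p(x)y = 0")
```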
We can treat the general case of (8) similarly. Differentiating the function e^{P(x)}y, where P(x) is as before, we get
d/dx [e^{P(x)}y] = e^{P(x)}[y' + p(x)y] = -e^{P(x)}q(x)
It follows that, for some constant y₀, we must have e^{P(x)}y = y₀ - ∫_a^x e^{P(t)}q(t) dt, whence
(8')
y = e^{-P(x)} [y₀ - ∫_a^x e^{P(t)}q(t) dt]
Conversely, formula (8') defines a solution of (8) with y(a) = y₀ for every y₀, by the Fundamental Theorem of the Calculus. This proves
THEOREM 2. If P(x) is as in Theorem 1, then the general solution of the DE (8) is given by (8'). Moreover, y₀ = y(a) if and only if P(x) = ∫_a^x p(t) dt.
Quadrature. In the Fundamental Theorem of the Calculus, if the function
g is nonnegative, the definite integral in (5) is the area under the curve y = g(x)
in the vertical strip between a and x. For this reason, the integration of (4) is called a quadrature. Formula (8') reduces the solution of any first-order linear DE to the performance of a sequence of quadratures. Using Tables of Indefinite Integrals,† the solutions can therefore often be expressed explicitly, in terms of "elementary" functions whose numerical values have been tabulated ("tabulated functions").
Initial Value Problem. In general, the "initial value problem" for a first-order DE y' = F(x,y) consists in finding a solution y = g(x) that satisfies an initial condition y(a) = y₀, where a and y₀ are given constants. Theorem 2 states that the initial value problem always has one and only one solution for a linear DE (8), on any interval a ≤ x ≤ b where p(x) and q(x) are defined and continuous.
Remark. There are often easier ways to solve linear DEs than substitution in (8'). This fact is illustrated by the following example.
Example 4. Consider the inhomogeneous linear DE
(9)
y' + y = x + 3
Trying y = ax + b, one easily verifies that x + 2 is one solution of (9). On the other hand, if y = f(x) is any other solution, then z = y - (x + 2) must satisfy z' + z = (y' + y) - (x + 3) = 0, whence z = ce^{-x} by Theorem 1. It follows that the general solution of (9) is the sum ce^{-x} + x + 2.
† See the book by Dwight listed in the Bibliography. Kamke's book listed there contains an extremely useful catalog of solutions of DEs not of the form y' = g(x). For a bibliography of function tables,
see Fletcher, Miller, and Rosenhead.
4 SEPARABLE EQUATIONS
A differential equation that can be written in the form
(10)
y' = g(x)h(y)
is said to be separable. Thus, the DEs y' = y² - 1 and y' = y³ - y of Examples 2 and 3 are obviously separable, with g(x) = 1. The DE x + yy' = 0 of Example 1, rewritten as y' = (-x)(1/y), is separable except on the x-axis, where 1/y becomes infinite. As we have seen, the solutions y = ±√(C - x²) of this DE cannot be expressed as single-valued functions of x on the x-axis, essentially for this reason. A similar difficulty arises in general for DEs of the form
(11)
M(x) + N(y)y' = 0
These can also be rewritten as
(11')
M(x) dx + N(y) dy = 0
or as y' = - M(x)/N(y) and are therefore also said to be "separable." Whenever
N(y) vanishes, it is difficult or impossible to express y as a function of x.
It is easy to solve separable DEs formally. If φ(x) = ∫ M(x) dx and ψ(y) = ∫ N(y) dy are any antiderivatives ("indefinite integrals") of M(x) and N(y), respectively, then the level curves
φ(x) + ψ(y) = C
of the function U(x,y) = φ(x) + ψ(y) are solution curves of the DEs (11) and (11'). Moreover, the Fundamental Theorem of the Calculus assures us of the existence of such antiderivatives. Likewise, for any indefinite integrals G(x) = ∫ g(x) dx and H(y) = ∫ dy/h(y), the level curves of
G(x) - H(y) = C
may be expected to define solutions of (10), of the form
(11")
y = H⁻¹(G(x) - C)
However, the solutions defined in this way are only local. They are defined by the Inverse Function Theorem,† but only in intervals of monotonicity of H(y), where h(y) and hence H'(y) = 1/h(y) has constant sign. Moreover, the range of H(y) may be bounded, as in the case of the DE y' = 1 + y². In this case,
† This theorem states that if H(y) is a strictly monotonic map of [c,d] onto [a,b], then H⁻¹(y) is single-valued and monotonic from [a,b] to [c,d].
∫_{-∞}^{∞} dy/(1 + y²) = π. Therefore, no solution of the DE y' = 1 + y² can be continuously defined over an interval (a,b) of length exceeding π.
Example 5. Consider the DE y' = (1 + y²)e^{-x²}. Separating variables, we get ∫ dy/(1 + y²) = ∫ e^{-x²} dx, whose general solution is arctan y = (√π/2) erf x + C, or y = tan {(√π/2) erf x + C}.
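Example 5 can be spot-checked numerically. The sketch below is an illustration added in editing; the constant C and the sample points are arbitrary, chosen to keep the argument of tan away from its poles.

```python
import math

def deriv(f, x, h=1e-6):
    """Central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

C = 0.3  # arbitrary constant
y = lambda x: math.tan((math.sqrt(math.pi) / 2) * math.erf(x) + C)

for x in (-0.5, 0.0, 0.5):
    assert abs(deriv(y, x) - (1 + y(x) ** 2) * math.exp(-x * x)) < 1e-5
print("y = tan((sqrt(pi)/2) erf x + C) solves y' = (1 + y^2) exp(-x^2)")
```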
The formal transformations (10') and (10") can be rigorously justified whenever g(x) and h(y) are continuous functions, in any interval in which h(y) does not vanish. This is because the Fundamental Theorem of the Calculus again assures us that φ(x) = ∫ g(x) dx exists and is differentiable on any interval where g(x) is defined and continuous, while ψ(y) = ∫ dy/h(y) exists and is strictly monotonic in any interval (y₁,y₂) between successive zeros y₁ and y₂ of h(y), which we also assume to be continuous. Hence, as in Example 2, the equation
ψ(y) - φ(x) = ∫ dy/h(y) - ∫ g(x) dx = c
gives for each c a solution of y' = g(x)h(y) in the strip y₁ < y < y₂. Near any x with ψ(y₁) - c < φ(x) < ψ(y₂) - c, this solution is defined by the inverse function theorem, by the formula y = ψ⁻¹(φ(x) + c).
Orthogonal Trajectories. An orthogonal trajectory to a family of curves is a curve that cuts all the given curves at right angles. For example, consider the family of geometrically similar, coaxial ellipses x² + my² = C. These are integral curves of the DE x + myy' = 0, whose normal form y' = -x/my has separable variables. The orthogonal trajectories of these ellipses have at each point the slope y' = my/x, which is the negative reciprocal of -x/my. Separating variables, we get dy/y = m dx/x, or ln |y| = m ln |x| + c, whence the orthogonal trajectories are given by y = ±e^c |x|^m.
More generally, the solution curves of any separable DE y' = g(x)h(y) have as orthogonal trajectories the solution curves of the separable DE y' = -1/g(x)h(y).
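The orthogonality can be spot-checked numerically: at any common point, the product of the ellipse slope -x/my and the trajectory slope my/x should be -1. The sketch below is an illustration added in editing; the trajectory is taken in the form y = k|x|^m with arbitrary sample constants.

```python
import math

def deriv(f, x, h=1e-6):
    """Central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

m, k = 2.0, 1.3                   # sample parameters
traj = lambda x: k * abs(x) ** m  # an orthogonal trajectory y = k|x|^m

for x in (0.5, 1.0, 2.0):
    y = traj(x)
    ellipse_slope = -x / (m * y)  # slope of x^2 + m y^2 = C through (x, y)
    traj_slope = deriv(traj, x)
    # perpendicular curves: product of slopes is -1
    assert abs(ellipse_slope * traj_slope + 1.0) < 1e-5
print("y = k|x|^m cuts the ellipses x^2 + m y^2 = C at right angles")
```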
Critical Points. Points where ∂u/∂x = ∂u/∂y = 0 are called critical points of the function u(x,y). Note that the directions of level lines and gradient lines may be very irregular near critical points; consider those of the functions x² ± y² near their critical point (0,0).
As will be explained in §5, the level curves of any function u ∈ C¹(D) satisfy the DE ∂u/∂x + y' ∂u/∂y = 0 in D, except at critical points of u. Clearly, their orthogonal trajectories are the solution curves of ∂u/∂y = y' ∂u/∂x, and so are everywhere tangent to the direction of ∇u = grad u = (∂u/∂x, ∂u/∂y). Curves having this property are called gradient curves of u. Hence the gradient curves of u are orthogonal trajectories of its level curves, except perhaps at critical points.
EXERCISES B
1. Find the solution of the DE xy' + 3y = 0 that satisfies the initial condition f(1) = 1.
2. Find equations describing all solutions of y' = (x + y)². [HINT: Set u = x + y.]
3. (a) Find all solutions of the DE xy' + (1 - x)y = 0.
(b) Same question for xy' + (1 - x)y = 1.
4. (a) Solve the DEs of Exercise 3 for the initial conditions y(1) = 1, y(1) = 2.
(b) Do the same for y(0) = 0 and y(0) = 1, or prove that no solution exists.
5. (a) Find the general solution of the DE y' + y = sin 2t.
(b) For arbitrary (real) constants a, b, and k ≠ 0, find a particular solution of
(*)
y' = ay + b sin kt
(c) What is the general solution of(*)? 6. (a) Find a polynomial solution of the DE
(**)
y' + 2y = x² + 4x + 7
(b) Find a solution of the DE (*) that satisfies the initial condition y(0) = 0.
7. Show that if k is a nonzero constant and q(x) a polynomial of degree n, then the DE
xy' + y = q(x) has exactly one polynomial solution of degree n.
In Exs. 8 and 9, solve the DE shown and discuss its solutions qualitatively.
8. dr/dθ = r² sin (1/r) (polar coordinates). 9. dr/dθ = 2/log r. 10. (a) Show that the ellipses 5x² + 6xy + 5y² = C are integral curves of the DE
(5x + 3y) + (3x + 5y)y' = 0
(b) What are its solution curves?
5 QUASILINEAR EQUATIONS; IMPLICIT SOLUTIONS
In this section and the next, we consider the general problem of solving quasilinear DEs (1), which we rewrite as
(12)
M(x,y) dx + N(x,y) dy = 0
to bring out the latent symmetry between the roles of x and y. Such DEs arise naturally ifwe consider the level curves of functions. If G(x,y) is any continuously differentiable function, then the DE
(12')    (∂G/∂x)(x,y) + (∂G/∂y)(x,y) y' = 0
CHAPTER I First-Order Differential Equations
is satisfied on any level curve G(x,y) = C, at all points where ∂G/∂y ≠ 0. This DE is of the form (1), with M(x,y) = ∂G/∂x and N(x,y) = ∂G/∂y.
For this reason, any function G which is related in the foregoing way to a quasilinear DE (1) or (12), or to a nonzero multiple of (12) of the form

(12")    µ(x,y)[M(x,y) dx + N(x,y) dy] = 0,

is called an implicit solution of (12). Slightly more generally, an integral of (1) or (12) is defined as a function G(x,y) of two variables that is constant on every solution curve of (1).
For example, the equation x⁴ - 6x²y² + y⁴ = C is an implicit solution of the quasilinear DE

(x³ - 3xy²) + (y³ - 3x²y)y' = 0,    or    y' = (x³ - 3xy²)/(3x²y - y³)

The level curves of x⁴ - 6x²y² + y⁴ have vertical tangents on the x-axis and the lines y = ±√3 x. Elsewhere, the DE displayed above is of the normal form y' = F(x,y).
Critical Points. At points where ∂φ/∂x = ∂φ/∂y = 0, the directions of the gradient and level curves are undefined; such points are called "critical points" of φ. Thus, the function x² + y² has the origin for its only critical point, and the same is true of the function x⁴ - 6x²y² + y⁴. (Can you prove it?) On the other hand, the function sin(x² + y²) also has circles of critical points, occurring wherever r² is an odd integral multiple of π/2. Most functions have only isolated critical points, however, and in general we shall confine our attention to such functions.
We will now examine more carefully the connection between quasilinear DEs and level curves of functions, illustrated by the two preceding examples. To describe it accurately, we will need two more definitions. We first define a domain† as a nonempty open connected set. We call a function φ = φ(x₁, ..., xᵣ) of class 𝒞ⁿ in a domain D when all its derivatives ∂φ/∂xᵢ, ∂²φ/∂xᵢ∂xⱼ, ... of orders 1, ..., n exist and are continuous in D. We will write this condition in symbols as φ ∈ 𝒞ⁿ or φ ∈ 𝒞ⁿ(D). When φ is merely assumed to be continuous, we will write φ ∈ 𝒞 or φ ∈ 𝒞(D).
To make the connection between level curves and quasilinear DEs rigorous,
we will also need to assume the following basic theorem.
† See Apostol, Vol. 1, p. 252. Here and later, page references to authors refer to the books listed in the selected bibliography.
IMPLICIT FUNCTION THEOREM.† Let u(x,y) be a function of class 𝒞ⁿ (n ≥ 1) in a domain containing (x₀,y₀); let u₀ denote u(x₀,y₀), and let u_y(x₀,y₀) ≠ 0. Then there exist positive numbers ε and η such that for each x ∈ (x₀ - ε, x₀ + ε) and C ∈ (u₀ - ε, u₀ + ε), the equation u(x,y) = C has a unique solution y = f(x,C) in the interval (y₀ - η, y₀ + η). Moreover, the function f so defined is also of class 𝒞ⁿ.
It follows that if u ∈ 𝒞ⁿ(D), n ≥ 1, the level curves of u are graphs of functions y = f(x,C), also of class 𝒞ⁿ, except where ∂u/∂y = 0. In Example 1, u = x² + y² and there is one such curve, the x-axis y = 0; this divides the plane into two subdomains, the half-planes y > 0 and y < 0. Moreover, the locus (set) where ∂u/∂y = 0 consists of the points where the circles u = const have vertical tangents and the "critical point" (0,0) where ∂u/∂x = ∂u/∂y = 0, that is, where the surface z = u(x,y) has a horizontal tangent plane.
This situation is typical: for most functions u(x,y), the partial derivative ∂u/∂y vanishes on isolated curves that divide the (x,y)-plane into a number of regions in which ∂u/∂y ≠ 0 has constant sign, and hence in which the Implicit Function Theorem applies.
THEOREM 3. In any domain where ∂u/∂y ≠ 0, the level curves of any function u ∈ 𝒞¹ are solution curves of the quasilinear DE

(13)    φ(x,y,y') = M(x,y) + N(x,y)y' = 0

where M(x,y) = ∂u/∂x and N(x,y) = ∂u/∂y.
Proof. By the Chain Rule, du/dx = ∂u/∂x + (∂u/∂y)y' along any curve y = f(x). Hence, such a curve is a level curve of u if and only if

du/dx = ∂u/∂x + (∂u/∂y)y' = 0

By the Implicit Function Theorem, the level curves of u, being graphs of functions y = f(x) in domains where ∂u/∂y ≠ 0, are therefore solution curves of the quasilinear DE (13). In the normal form y' = F(x,y) of this DE, therefore, F(x,y) = -(∂u/∂x)/(∂u/∂y) becomes infinite precisely when ∂u/∂y = 0.
To describe the relationship between the DE (13) and the function u, we need a new notion.
DEFINITION. An integral of a first-order quasilinear DE (1) is a function
of two variables, u(x,y), which is constant on every solution curve of (1).
Thus, the function u(x,y) = x² + y² is an integral of the DE x + yy' = 0

† Courant and John, Vol. 2, p. 218. We will reconsider the Implicit Function Theorem in greater depth in §12.
because, upon replacing the variable y by any function ±√(C - x²), we obtain u(x,y) = C. This integral is most easily found by rewriting x + y dy/dx = 0 in differential form, as x dx + y dy = 0, and recognizing that x dx + y dy = ½ d(x² + y²) is an "exact" differential (see §6).
Level curves of an integral of a quasilinear DE are called integral curves of the
DE; thus, the circles x2 + y2 = C are integral curves of the DE x + yy' = 0,
although not solution curves.
Example 6. From the DE yy' = x, rewritten as y dy/dx = x, we get the equation y dy - x dx = 0. Since y dy - x dx = ½ d(y² - x²), we see that the integral curves of the DE are the branches of the hyperbolas y² = x² + C and the asymptotes y = ±x, as shown in Figure 1.5. The branches y = ±√(x² + k²) are solution curves, but each level curve y = ±√(x² - k²) has four branches separated by the x-axis (the line where the integral curves have vertical tangents).
Note that, where the level curves y = x and y = -x of y² - x² cross, the gradient (∂F/∂x, ∂F/∂y) of the integral F(x,y) = y² - x² vanishes: (∂F/∂x, ∂F/∂y) = (0,0).
Figure 1.5 Level curves C = 0, ±1, ±2, ±3, ±4, ±6, ±9, ±12 of y² - x².
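The invariance of y² - x² along solutions of Example 6 can be confirmed numerically. The sketch below (plain Python; the stepping routine and all names are ours, not the text's) integrates the normal form y' = x/y from the initial point (0, 2) with a standard fourth-order Runge-Kutta step, a method treated later in Chapter 8, and checks that y² - x² keeps its initial value 4.

```python
def rk4_step(F, x, y, h):
    """One classical fourth-order Runge-Kutta step for y' = F(x, y)."""
    k1 = F(x, y)
    k2 = F(x + h / 2, y + h / 2 * k1)
    k3 = F(x + h / 2, y + h / 2 * k2)
    k4 = F(x + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

F = lambda x, y: x / y    # normal form of y y' = x (valid while y != 0)
x, y = 0.0, 2.0           # start on the level curve y^2 - x^2 = 4
for _ in range(1000):
    y = rk4_step(F, x, y, 0.002)
    x += 0.002

print(x, y, y * y - x * x)   # y^2 - x^2 remains ~ 4
```

The quantity y² - x² is conserved to within the (tiny) truncation error of the method, while y itself changes from 2 to about √8.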
6 EXACT DIFFERENTIALS; INTEGRATING FACTORS
A considerably larger class of "implicit solutions" of quasinormal DEs can be
found by examining more closely the condition that M(x,y) dx + N(x,y) dy be an
"exact differential" dU, and by looking for an "integrating factor" µ(x,y) that
will convert the equation
(14)
M(x,y) dx + N(x,y) dy = 0
into one involving a "total" or "exact" differential
µdU = µ(x,y)[M(x,y) dx + N(x,y) dy] = 0
whose (implicit) solutions are the level curves of U. In general, the quasinormal DE (1) or
(14')
M(x,y) + N(x,y)y' = 0
is said to be exact when there exists a function U(x,y) of which it is the "total differential," so that ∂U/∂x = M(x,y) and ∂U/∂y = N(x,y), or equivalently

(14")    dU = (∂U/∂x) dx + (∂U/∂y) dy = M(x,y) dx + N(x,y) dy
Since dU = 0 on any solution curve of the DE (14), we see that solution curves
of (14) must lie on level curves of U, just as in the "separable variable" case.
Since ∂²U/∂x∂y = ∂²U/∂y∂x, clearly a necessary condition for (14') to be an exact differential is that ∂N/∂x = ∂M/∂y. It is shown in the calculus that the converse is also true locally. More precisely, the following result is true.
THEOREM 4. If M(x,y) and N(x,y) are continuously differentiable functions in a simply connected domain, then (14') is an exact differential if and only if ∂N/∂x = ∂M/∂y.
The function U = U(P) for (14) is constructed as the line integral ∫ [M(x,y) dx + N(x,y) dy] from a fixed point O in the domain [perhaps O = (0,0)] to a variable point P = (x,y). Thus, for the DE x + yy' = 0 of Example 1, this procedure gives ∫ (x dx + y dy) = (x² + y²)/2, showing again that the solution curves of x + yy' = 0 lie on the circles x² + y² = C with center (0,0). More generally, in the separable equation case of g(x) dx + dy/h(y) = 0, we have ∂[g(x)]/∂y = 0 = ∂[1/h(y)]/∂x, giving G(x) + H(y) = C as in §5.
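Theorem 4 and the line-integral construction can both be checked numerically. In the sketch below (our own illustration, not the book's), `is_closed` tests ∂N/∂x = ∂M/∂y by central differences, and `potential` evaluates the line integral of M dx + N dy along the straight segment from (0,0) to P by the midpoint rule; for M = x, N = y it recovers U = (x² + y²)/2.

```python
def is_closed(M, N, x, y, h=1e-5):
    """Central-difference test of the exactness condition dN/dx = dM/dy."""
    dN_dx = (N(x + h, y) - N(x - h, y)) / (2 * h)
    dM_dy = (M(x, y + h) - M(x, y - h)) / (2 * h)
    return abs(dN_dx - dM_dy) < 1e-6

def potential(M, N, x, y, steps=1000):
    """Midpoint-rule value of the line integral of M dx + N dy
    along the straight segment from (0, 0) to (x, y)."""
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) / steps
        total += (M(t * x, t * y) * x + N(t * x, t * y) * y) / steps
    return total

M = lambda x, y: x      # the DE x + y y' = 0 of Example 1
N = lambda x, y: y

print(is_closed(M, N, 1.3, -0.7))    # True: the differential is exact
print(potential(M, N, 3.0, 4.0))     # (3^2 + 4^2)/2 = 12.5
```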
Even when the differential M dx + N dy is not exact, one can often find a
function µ (x,y) such that the product
(µM) dx + (µN) dy = du
is an exact differential. The contour lines u(x,y) = C will then again be integral curves of the DE M(x,y) + N(x,y)y' = 0 because du/dx = µ(M + Ny') = 0; and segments of these contour lines between points of vertical tangency will be solution curves. Such a function µ is called an integrating factor.
DEFINITION. An integrating factor for a differential M(x,y) dx + N(x,y) dy is a nonvanishing function µ(x,y) such that the product (µM) dx + (µN) dy is an exact differential.
Thus, as we saw in §3, for any indefinite integral P(x) = ∫p(x) dx of p(x), the function exp{P(x)} is an integrating factor for the linear DE (8). Likewise, the function 1/h(y) is an integrating factor for the separable DE (11).
The differential x dy - y dx furnishes another interesting example. It has an integrating factor in the right half-plane x > 0 of the form µ(x) = 1/x², since dy/x - y dx/x² = d(y/x); cf. Ex. C11. A more interesting integrating factor is 1/(x² + y²). Indeed, the function

θ(x,y) = ∫ from (1,0) to (x,y) of (x dy - y dx)/(x² + y²)

is the angle made with the positive x-axis by the vector (x,y). That is, it is just the polar angle θ when the point (x,y) is expressed in polar coordinates. Therefore, the integral curves of xy' = y in the domain x > 0 are the radii θ = C, where -π/2 < θ < π/2; the solution curves are the same.
Note that the differential (x dy - y dx)/(x² + y²) is not exact in the punctured plane, consisting of the x,y-plane with the origin deleted. For θ changes by 2π in going around the origin. This is possible, even though ∂[x/(x² + y²)]/∂x = ∂[-y/(x² + y²)]/∂y, because the punctured plane is not a simply connected domain.
Still another integrating factor of x dy - y dx is 1/xy, which replaces x dy - y dx = 0 by dy/y = dx/x, or ln|y| = ln|x| + C in the interior of each of the four quadrants into which the coordinate axes divide the (x,y)-plane. Exponentiating both sides, we get y = kx.
A less simple example concerns the DE x(x³ - 2y³)y' = (2x³ - y³)y. Here an integrating factor is 1/x²y². If we divide the given DE by x²y², we get

d/dx (x²/y + y²/x) = (2x³y - x⁴y' - y⁴ + 2xy³y')/(x²y²)

Hence the solution curves of the DE are (x²/y) + (y²/x) = C, or x³ + y³ = Cxy.
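The claim that 1/x²y² is an integrating factor here can be verified numerically. In this sketch (our own check; the helper names are not from the text), the DE is written as M dx + N dy = 0 with M = -(2x³ - y³)y and N = x(x³ - 2y³); the exactness gap ∂N/∂x - ∂M/∂y is large for the raw differential but vanishes after multiplication by µ.

```python
M = lambda x, y: -(2 * x**3 - y**3) * y   # M dx + N dy = 0 form of the DE
N = lambda x, y: x * (x**3 - 2 * y**3)
mu = lambda x, y: 1.0 / (x**2 * y**2)     # the proposed integrating factor

def exactness_gap(M, N, x, y, h=1e-5):
    """|dN/dx - dM/dy| by central differences; zero for an exact differential."""
    dN_dx = (N(x + h, y) - N(x - h, y)) / (2 * h)
    dM_dy = (M(x, y + h) - M(x, y - h)) / (2 * h)
    return abs(dN_dx - dM_dy)

x0, y0 = 1.7, 0.9
print(exactness_gap(M, N, x0, y0))    # large: M dx + N dy is not exact
print(exactness_gap(lambda x, y: mu(x, y) * M(x, y),
                    lambda x, y: mu(x, y) * N(x, y), x0, y0))   # ~ 0
```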
Parametric Solutions. Besides "explicit" solutions y = f(x) and "implicit" solutions U(x,y) = C, quasinormal DEs (14) can have "parametric" solutions. Here by a parametric solution is meant a parametric curve x = g(t), y = h(t) along which the line integral ∫ M(x,y) dx + N(x,y) dy, defined as

(15)    ∫ [M(g(t),h(t))g'(t) + N(g(t),h(t))h'(t)] dt
vanishes. Thus, the curves x = A cos t, y = A sin t are parametric solutions of x + yy' = 0. They are also solutions of the system of two first-order DEs dx/dt = -y, dy/dt = x, and will be studied from this standpoint in Chapter 5.
EXERCISES C
1. Find an integral of the DE y' = y2/x2, and plot its integral curves. Locate its critical
points, if any.
2. Sketch the level curves and gradient lines of the function x3 + 3x2y + y3. What are
its critical points?
3. Same question as Exercise 2 for x³ - 3x²y + y³.
4. Find equations describing all solutions of y' = 2x - 1 + 2/y.
5. For what pairs of positive integers n, r is the function |x|ⁿ of class 𝒞ʳ?
6. Solve the DE xy' + y = 0 by the method of separation of variables. Discuss its
solution curves, integral curves, and critical points.
7. (a) Reduce the Bernoulli DE y' + p(x)y = q(x)yⁿ, n ≠ 1, to a linear first-order DE by the substitution u = y^(1-n).
(b) Express its general solution in terms of indefinite integrals.
In Exs. 8 and 9, solve the DE exhibited, sketch its solution curves, and describe them qualitatively:
8. y' = y/x - x².
9. y' = y/x - ln|1/x - 1|.
10. Find all solutions of the DE |x| + |y|y' = 0. In which regions of the plane is the differential on the left side exact?
*11. Show that the reciprocal of any homogeneous quadratic function Q(x,y) = Ax² + 2Bxy + Cy² is an integrating factor of x dy - y dx.
*12. Show that if u and v are both integrals of the DE M(x,y) + N(x,y)y' = 0, then so are u + v, u/v except where v = 0, λu + µv for any constants λ and µ, and g(u) for any single-valued function g.
*13. (a) What are the level lines and critical points of sin(x + y)?
(b) Show that for u = sin(x + y), (x₀,y₀) = (0,0), and ε = ¼, the function f(x,C) in the Implicit Function Theorem need not exist if η < ¼, while it may not be unique if η > 4.
7 LINEAR FRACTIONAL EQUATIONS
An important first-order DE is the linear fractional equation

(16)    dy/dx = (cx + dy)/(ax + by),    ad ≠ bc
which is the normal form of

(16')    (ax + by)y' - (cx + dy) = 0
It is understood that the coefficients a, b, c, d are constants.
The integration of the DE (16) can be reduced to a quadrature by the substitution y = vx. This substitution replaces (16) by the DE

xv' + v = (c + dv)/(a + bv)

in which the variables x and v can be separated. Transposing v, we are led to the separation of variables

(a + bv) dv/[bv² + (a - d)v - c] + dx/x = 0
Since the integrands are rational functions, this can be integrated in terms of elementary functions. Thus, x can be expressed as a function of v = y/x: we have x = kG(y/x), where G(v) = exp{-∫ (a + bv) dv/[bv² + (a - d)v - c]}.
More generally, any DE of the form y' = F(y/x) can be treated similarly. Setting v = y/x and differentiating y = xv, we get xv' + v = F(v). This is clearly equivalent to the separable DE

dv/[F(v) - v] = dx/x = d(ln x)

whence x = K exp{∫ dv/[F(v) - v]}.
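As a concrete check of this substitution (the example DE is ours, not the text's): for F(v) = v + 1, that is y' = y/x + 1, the recipe gives v = ln x + C, so the solution through (1, 0) is y = x ln x. The sketch below integrates the DE by many small Euler steps, a method described in §8, and compares with this closed form.

```python
import math

def integrate(F, x0, y0, x1, n=200000):
    """Integrate y' = F(x, y) from (x0, y0) to x = x1 by n small Euler steps."""
    h = (x1 - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y += h * F(x, y)
        x += h
    return y

F = lambda x, y: y / x + 1          # y' = F(y/x) with F(v) = v + 1
y_num = integrate(F, 1.0, 0.0, 2.0)
print(y_num, 2.0 * math.log(2.0))   # both ~ 2 ln 2 = 1.3862...
```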
Alternatively, we can introduce polar coordinates, setting x = r cos θ and y = r sin θ. If ψ = γ - θ is the angle between the tangent direction γ and the radial direction θ, then

(1/r) dr/dθ = cot ψ = (cot γ cot θ + 1)/(cot θ - cot γ)

Since tan γ = y' = F(y/x) = F(tan θ), we have

(17)    (1/r) dr/dθ = [1 + tan γ tan θ]/[tan γ - tan θ] = [1 + (tan θ)F(tan θ)]/[F(tan θ) - tan θ] = Q(θ)
This can evidently be integrated by a quadrature:

(17')    r(θ) = r(0) exp ∫ from 0 to θ of Q(φ) dφ

The function on the right is well-defined, by the Fundamental Theorem of the Calculus, as long as tan γ ≠ tan θ, that is, as long as y' ≠ y/x.
Invariant Radii. The radii along which the denominator of Q(θ) vanishes are those where (16) is equivalent to dθ/dr = 0. Hence, these radii are particular solution curves of (16); they are called invariant radii. They are the solutions y = τx, for constant τ = tan θ. Therefore, they are the radii y = τx for which y' = τ = (c + dτ)/(a + bτ), by (16), and so their slopes τ are the roots of the quadratic equation

(18)    bτ² + (a - d)τ = c

If b ≠ 0, Eq. (18) has zero, one, or two real roots according as its discriminant is negative, zero, or positive. This discriminant is

(18')    Δ = (a - d)² + 4bc = (a + d)² - 4(ad - bc)

In the sectors between adjacent invariant radii, dθ/dr has constant sign; this fact facilitates the sketching of solution curves. Together with the invariant radii, the solution curves (17') form a regular curve family in the punctured plane, consisting of the xy-plane with the origin deleted.
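The slopes of the invariant radii are computed from (18) by the quadratic formula. A short sketch (the function name and the demonstration DE are our choices): for y' = (x + 3y)/(3x + y), that is a = 3, b = 1, c = 1, d = 3, the roots are τ = ±1, and along each radius y = τx the right side of (16) indeed reduces to τ.

```python
import math

def invariant_radii(a, b, c, d):
    """Real roots tau of b*tau^2 + (a - d)*tau - c = 0 (Eq. 18)."""
    if b == 0:
        return [] if a == d else [c / (a - d)]
    disc = (a - d) ** 2 + 4 * b * c        # the discriminant (18')
    if disc < 0:
        return []
    r = math.sqrt(disc)
    return sorted([(-(a - d) - r) / (2 * b), (-(a - d) + r) / (2 * b)])

a, b, c, d = 3, 1, 1, 3
slopes = invariant_radii(a, b, c, d)
print(slopes)                              # [-1.0, 1.0]
for tau in slopes:
    x = 2.0                                # check y' = tau along y = tau*x
    print((c * x + d * tau * x) / (a * x + b * tau * x))
```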
Similarity Property. Each solution of the linear fractional DE (16) is transformed into another solution when x and y are both multiplied by the same nonzero constant k. The reason is that both y' = dy/dx and y/x are unchanged by the transformation (x,y) → (kx,ky). In polar coordinates, if r = f(θ) is a solution of (17), then so is r = kf(θ). Since the transformation (x,y) → (kx,ky) is a similarity transformation of the xy-plane for any fixed k, it follows that the solution curves in the sector between any two adjacent invariant radii are all geometrically similar (and similarly placed). This fact is apparent in the drawings of Figure 1.6.
Note also that the hyperbolas in Figure 1.6a are the orthogonal trajectories of those of Figure 1.5. This is because they are integral curves of yy' = x and xy' = -y, respectively, and x/y is the negative reciprocal of -y/x.
EXERCISES D
1. Sketch the integral curves of the DEs in Exs. C8 and C9 in the neighborhood of the origin of coordinates.
2. Express in closed form all solutions of the following DEs:
(a) y' = (x2 - y2)/(x2 + y2)
(b) y' = sin (y/x)
Figure 1.6 Integral curves: (a) xy' + y = 0; (b) xy' = 2y.
(c) y' = (3x + y)/(x - 3y)
3. (a) Show that the inhomogeneous linear fractional DE
(cx + dy + e) dx - (ax + by + f) dy = 0
can be reduced to the form (16) by a translation of coordinates.
(b) Using this idea, integrate (x + y + 1) dx + (2x - y - 1) dy = 0.
(c) For what sets of constants a, b, c, d, e, f is the displayed DE exact?
4. Find all integral curves of (xⁿ + yⁿ)y' - xⁿ⁻¹y = 0. [HINT: Set u = y/x.]
5. Prove in detail that the solutions of any homogeneous DE y' = g(y/x) have the Similarity Property described in §7.
6. Show that the solution curves of y' = G(x,y) cut those of y' = F(x,y) at a constant angle β if and only if G = (τ + F)/(1 - τF), where τ = tan β.
7. Let A, B, C be constants, and K a parameter. Show that the coaxial conics Ax² + 2Bxy + Cy² = K satisfy the DE y' = -(Ax + By)/(Bx + Cy).
8. (a) Show that the differential (ax + by) dy - (cx + dy) dx is exact if and only if a + d = 0, and that in this case the integral curves form a family of coaxial conics.
(b) Using Exs. 6 and 7, show that if tan β = (a + d)/(c - b), the curves cutting the solution curves of the linear fractional DE y' = (cx + dy)/(ax + by) at an angle β form a family of coaxial conics.
9. For the linear fractional DE (16) show that
y" = (ad - bc)[cx² - (a - d)xy - by²]/(ax + by)³
Discuss the domains of convexity and concavity of solutions.
10. Find an integrating factor for y' + (2y/x) = a, and integrate the DE by quadratures.
8 GRAPHICAL AND NUMERICAL INTEGRATION
The simplest way to sketch approximate solution curves of a given first-order normal DE y' = F(x,y) proceeds as follows. Draw a short segment with slope λᵢ = F(xᵢ,yᵢ) = tan θᵢ through each point (xᵢ,yᵢ) of a set of sample points sprinkled fairly densely over the domain of interest. Then draw smooth curves so as to have at every point a slope y' approximately equal to the average of the F(xᵢ,yᵢ)
at nearby points, weighting the nearest points most heavily (i.e., using graphical interpolation). Methods of doing this systematically are called schemes of graphical integration.
The preceding construction also gives a graphical representation of the direction field associated with a given normal first-order DE. This is defined as follows.
DEFINITION. A direction field in a region D of the plane is a function that assigns to every point (x,y) in D a direction. Two directions are considered the same if they differ by an integral multiple of 180°, or 1r radians.
With every quasinormal DE M(x,y) + N(x,y)y' = 0, there is associated a direction field. This associates with each point (x_k,y_k), not a critical point where M = N = 0, a short segment parallel to the vector (N(x_k,y_k), -M(x_k,y_k)). Such segments can be vertical, whereas this is impossible for normal DEs.
It is very easy to integrate graphically the linear fractional equation (16) because solution curves have the same slope along each radius y = vx, v = constant: each radius y = kx is an isocline. We need only draw segments having the right direction fairly densely on radii spaced at intervals of, say, 30°. After tracing one approximate integral curve through the direction field by the graphical method described above, we can construct others by taking advantage of the Similarity Property stated in §7.
Numerical Integration. With modern computers, it is easy to construct accurate numerical tables of the solutions of initial value problems, where they exist, for most reasonably well-behaved functions F(x,y). Solutions may exist only locally. Thus, to solve the initial value problem for y' = 1 + y² with the initial value y(0) = 0 on [0,1.6] is impossible, since the solution tan x becomes infinite when x = π/2 = 1.5708.... We will now describe three very simple methods (or "algorithms") for computing such tables; the numerical solution of ordinary DEs will be taken up systematically in Chapters 7 and 8.
Simplest is the so-called Euler method, whose convergence to the exact solution (for F ∈ 𝒞¹) was first proved by Cauchy around 1840 (see Chapter 7, §2). One starts with the given initial value y(a) = y₀ = c, setting X₀ = a and Y₀ = y₀, and then for a suitable step-size h computes recursively

(19)    Y_{n+1} = Y_n + hF(X_n, Y_n),    X_{n+1} = X_n + h

A reasonably accurate table can usually be obtained in this way, by letting h = .001 (say), and printing out every tenth value of Y_n. If greater accuracy is desired, one can reduce h to .0001, printing out Y₀, Y₁₀₀, Y₂₀₀, ..., and "formatting" the results so that values are easy to look up.
Improved Euler Method. The preceding algorithm, however, is very wasteful, as Euler realized. As he observed, one can obtain much more accurate
results with roughly the same computational effort by replacing (19) with the following "improved" Euler algorithm
(20)    Y_{n+1} = Y_n + (h/2)[F(X_n, Y_n) + F(X_n + h, Y_n + hF(X_n, Y_n))]
With h = .001, this "improved" Euler method gives 5-digit accuracy in most
cases, while requiring only about twice as much arithmetic per time step. Whereas with Euler's method, to use 10 times as many mesh points ordinarily
gives only one more digit of accuracy, the same mesh refinement typically gives two more digits of accuracy with the improved Euler method.
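For comparison, the improved algorithm (20) in the same style (again our sketch): on y' = y over [0,1] with h = .001 the error at x = 1 drops from about 1.4 × 10⁻³ for the simple Euler method to under 10⁻⁶.

```python
import math

def improved_euler(F, a, c, b, h):
    """The improved Euler recursion (20)."""
    X, Y = a, c
    n = round((b - a) / h)
    for _ in range(n):
        k1 = F(X, Y)
        k2 = F(X + h, Y + h * k1)
        Y += (h / 2) * (k1 + k2)
        X += h
    return Y

Y = improved_euler(lambda x, y: y, 0.0, 1.0, 1.0, 0.001)
print(Y, abs(Y - math.e))   # error ~ 4.5e-7
```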
As will be explained in Chapter 8, when truly accurate results are wanted, it is better to use other, more sophisticated methods that give four additional digits of accuracy each time h is divided by 10. In the special case of quadrature, that is, of DEs of the form y' = g(x) (see §2), to do this is simple. It suffices to replace (19) by Simpson's Rule:

(21)    Y_{n+1} = Y_n + (h/6)[g(x_n) + 4g(x_n + h/2) + g(x_n + h)]
For example, one can compute the natural logarithm of 2,

y(2) = ln 2 = ∫ from 1 to 2 of dx/x = .69314718...

with 8-digit accuracy by choosing n = 25 and using the formula

ln 2 = (1/150) Σ from k = 1 to 25 of [ 50/(48 + 2k) + 4·50/(49 + 2k) + 50/(50 + 2k) ]
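The displayed sum is easy to check in double precision (our transcription of the formula, with the factor 4 on the midpoint terms written out):

```python
import math

ln2 = sum(50.0 / (48 + 2 * k) + 4 * 50.0 / (49 + 2 * k) + 50.0 / (50 + 2 * k)
          for k in range(1, 26)) / 150.0
print(ln2)                       # 0.69314718..., 8 correct digits
print(abs(ln2 - math.log(2.0)))
```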
Caution. To achieve 8-digit accuracy in summing 25 terms, one must use a computer arithmetic having at least 9-digit accuracy. Many computers have only 7-digit accuracy!
Taylor Series Method. A third scheme of numerical integration is obtained by truncating the Taylor series formula after the term in Y″_n, and writing

Y_{n+1} = Y_n + hY'_n + (h²/2)Y″_n

For the DE y' = y, since Y″_n = Y'_n = Y_n, this method gives Y_{n+1} = (1 + h + h²/2)Y_n, and so it is equivalent to the improved Euler method.
For the DE y' = 1 + y², since y" = 2yy' = 2y(1 + y²), the method gives

Y_{n+1} = Y_n + h(1 + Y_n²) + h²Y_n(1 + Y_n²)

This differs from the result given by Euler's improved method. In general, since d[F(x,y)]/dx = ∂F/∂x + (∂F/∂y) dy/dx, we have Y″_n = (F_x + FF_y)_n. This makes the method easy to apply.
The error per step, like that of the improved Euler method, is roughly proportional to the cube of h. Since the number of steps is proportional to h⁻¹, the cumulative error of both methods is roughly proportional to h². Thus, one can obtain two more digits of accuracy with it by using 10 times as many mesh points.
As will be explained in Chapter 8, when truly accurate results are wanted, one should use other, more sophisticated methods that give four additional digits of accuracy when 10 times as many mesh points are used.
Constructing Function Tables. Many functions are most simply defined as
solutions of initial value problems. Thus eˣ is the solution of y' = y that satisfies the initial condition e⁰ = 1, and tan x is the solution of y' = 1 + y² that satisfies tan 0 = 0. Reciprocally, ln x is the solution of y' = 1/x that satisfies ln 1 = 0, while arctan x is the solution of y' = 1/(1 + x²) that satisfies arctan 0 = 0.
It is instructive and enjoyable (using modern computers) to try to construct tables of numerical values of such functions, using the methods described in this section, and other methods to be discussed in Chapters 7 and 8. The accuracy of the computer output, for different methods and choices of the mesh length h, can be determined by comparison with standard tables.† One can often use simple recursion formulas instead, like e^(x+h) = eˣe^h and

tan(x + h) = (tan x + tan h)/(1 - tan x tan h),

after evaluating e^(.01) = 1.01005017 and, by its Taylor series tan x = x + x³/3 + 2x⁵/15 + ···, tan(.01) = 0.0100003335.... Such comparisons will often reveal the limited accuracy of machine computations (perhaps six digits).
EXERCISES E
1. For each of the following initial value problems, make a table of the approximate numerical solution computed by the Euler method, over the interval and for the mesh lengths specified:
(a) y' = y with y(0) = 1, on [0,1], for h = 0.1 and 0.02.
(b) y' = 1 + y² with y(0) = 0, on [0,1.6], for h = 0.1, 0.05, and 0.02.
2. Knowing that the exact solutions of the preceding initial value problems are eˣ and tan x:
(a) Evaluate the errors Eₙ = Yₙ - y(Xₙ) for the examples of Exercise 1.
(b) Tabulate the ratios Eₙ/hx, verifying when it is true that they are roughly independent of h and x.
† See for example Abramowitz and Stegun, which contains also a wealth of relevant material.
3. Compute approximate solutions of the initial value problems of Exercise 1 by the improved Euler method.
4. Find the errors of the approximate values computed in Exercise 3, and analyze the ratios Eₙ/h²x (cf. Ex. 2).
5. Use Simpson's Rule to compute a table of approximate values of the natural logarithm function ln x = ∫ from 1 to x of dt/t, on the interval [1,2].
6. Construct a table of the function arctan x = ∫ from 0 to x of dt/(1 + t²) on the interval [0,1] by Simpson's Rule, and compare the computed value of arctan 1 with π/4.
*7. In selected cases, test how well your tables agree with the identities arctan(tan x) = x and ln(eˣ) = x.
*8. Let eₙ be the approximate value of e obtained using Euler's method to solve y' = y for the initial condition y(0) = 1 on [0,1], on a uniform mesh with mesh length h = 1/n.
(a) Show that ln eₙ = n ln(1 + h).
(b) Infer that ln eₙ = 1 - h/2 + h²/3 - ···.
(c) From this, derive the formula
(*)
(d) From formula (*) show that, as h ↓ 0, e - eₙ = (he/2)[1 - (h/6) + O(h³)].
9 THE INITIAL VALUE PROBLEM
For any normal first-order differential equation y' = F(x,y) and any "initial" x₀ (think of x as time), the initial value problem consists in finding the solution or solutions of the DE, for x > x₀, which also satisfy f(x₀) = c. In geometric language, this amounts to finding the solution curve or curves that issue from the point (x₀,c) to the right in the (x,y)-plane. As we have just seen, most initial value problems are easy to solve on modern computers, if one is satisfied with approximate solutions accurate to (say) 3-5 decimal digits.
However, there is also a basic theoretical problem of proving the uniqueness of this solution.
When F(x,y) = g(x) depends on x alone, this theoretical problem is solved by the Fundamental Theorem of the Calculus (§2). Given x₀ = a and y₀ = c, the initial value problem for the DE y' = g(x) has one and only one solution, given by the definite integral (5').
The initial value problem is said to be well-posed in a domain D when there is one and only one solution y = f(x,c) in D of the given DE for each given (x₀,c) ∈ D, and when this solution varies continuously with c. To show that the initial value problem is well-posed, therefore, requires proving theorems of existence (there is a solution), uniqueness (there is only one solution), and continuity (the solution depends continuously on the initial value). The concept of a well-posed initial value problem gives a precise mathematical interpretation of the physical
concept of determinism (cf. Ch. 6, §5). As was pointed out by Hadamard, solutions which do not have the properties specified are useless physically, because no physical measurement is exact.
It is fairly easy to show that the initial value problems discussed so far are
well-posed. Thus, using formula (8'), one can show that the initial value problem
is well-posed for the linear DE y' + p(x)y = q(x) in any vertical strip a < x < b
where p and q are continuous. The initial value problem is also well-posed for
the linear fractional DE (16) in each of the half-planes ax + by > 0 and
ax+ by< 0.
Actually, for the initial value problem for y' = F(x,y) to be well-posed in a domain D, it is sufficient that F ∈ 𝒞¹ in D. But it is not sufficient that F ∈ 𝒞: though the continuity of F implies the existence of at least one solution through every point (cf. Ch. 6, §13), it does not necessarily imply uniqueness, as the following example shows.
Example 7. Consider the curve family y = (x - C)³, sketched in Figure 1.7. For fixed C, we have

(22)    y' = dy/dx = 3(x - C)² = 3y^(2/3)

a DE whose right side is a continuous function of position (x,y). Through every point (x₀,c) of the plane passes just one curve y = (x - C)³ of the family, for which C = x₀ - c^(1/3) depends continuously on (x₀,c). Hence, the initial value problem for the DE (22) always has one and only one solution of the form y = (x - C)³. But there are also other solutions.
Thus, the function y = 0 also satisfies (22). Its graph is the envelope of the curves y = (x - C)³. In addition, for any α < β, the function defined by the three equations

(22')    y = (x - α)³ for x < α;    y = 0 for α ≤ x ≤ β;    y = (x - β)³ for x > β

Figure 1.7 Solution curves of y' = 3y^(2/3).
is a solution of (22). Hence, the first-order DE y' = 3y^(2/3) has a two-parameter family of solutions, depending on the parameters α and β.
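The failure of uniqueness is easy to see numerically as well (our sketch, with the particular choice α = -1, β = 1): the piecewise function (22') passes through (0, 0), just like the solution y = 0, and at sample points its difference quotient matches the right side 3y^(2/3).

```python
def f(x, alpha=-1.0, beta=1.0):
    """The piecewise solution (22') of y' = 3 y^(2/3)."""
    if x < alpha:
        return (x - alpha) ** 3
    if x <= beta:
        return 0.0
    return (x - beta) ** 3

def rhs(y):
    """3 y^(2/3), written so that it is real for negative y as well."""
    return 3.0 * (y * y) ** (1.0 / 3.0)

print(f(0.0))   # 0.0: the curve passes through the origin, like y = 0
h = 1e-6
for x in (-2.0, -1.5, 0.5, 2.0, 3.0):
    deriv = (f(x + h) - f(x - h)) / (2 * h)
    print(x, abs(deriv - rhs(f(x))))   # ~ 0: the DE is satisfied
```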
*10 UNIQUENESS AND CONTINUITY
The rest of this chapter will discuss existence, uniqueness, and continuity theorems for initial value problems concerning normal first-order DEs
y' = F(x,y). Readers who are primarily interested in applications are advised to skip to Chapter 2.
Example 7 shows that the mere continuity of F(x,y) does not suffice to ensure the uniqueness of solutions y = f(x) of y' = F(x,y) with given f(a) = c. However, it is sufficient that F ∈ 𝒞¹(D). We shall prove this and continuity at the same time, using for much of the proof the following generalization of the standard Lipschitz condition.
DEFINITION. A function F(x,y) satisfies a one-sided Lipschitz condition in a domain D when, for some finite constant L,

(23)    y > z implies F(x,y) - F(x,z) ≤ L(y - z)

identically in D. It satisfies a Lipschitz condition† in D when, for some nonnegative constant L (Lipschitz constant), it satisfies the inequality

(23')    |F(x,y) - F(x,z)| ≤ L|y - z|

for all point pairs (x,y) and (x,z) in D having the same x-coordinate.
The same function F may satisfy Lipschitz conditions with different Lipschitz constants, or no Lipschitz condition at all, as the domain D under consideration varies. For example, the function F(x,y) = 3y^(2/3) of the DE in Example 7 satisfies a Lipschitz condition in any half-plane y ≥ ε, ε > 0, with L = 2ε^(-1/3), but no Lipschitz condition in the half-plane y > 0. More generally, one can prove the following.
LEMMA 1. Let F be continuously differentiable in a bounded closed convex‡ domain D. Then it satisfies a Lipschitz condition there, with L = sup_D |∂F/∂y|.

* In this book, starred sections may be omitted without loss of continuity.
† R. Lipschitz, Bull. Sci. Math. 10 (1876), p. 149; the idea of the proof is due to Cauchy (1839). See Ince, p. 76, for a historical discussion.
‡ A set of points is called convex when it contains, with any two points, the line segment joining them.
Proof. The domain being convex, it contains the entire vertical segment joining (x,y) with (x,z). Applying the Law of the Mean to F(x,η) on this segment, considered as a function of η, we have

|F(x,y) - F(x,z)| = |y - z| |∂F(x,η)/∂y|

for some η between y and z. The inequality (23'), with L = sup_D |∂F/∂y|, follows. A similar argument shows that (23) holds with L = max_D ∂F/∂y.
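Lemma 1 can be illustrated numerically. The sketch below (plain Python; the function, grid, and ε are our own illustrative choices) compares the difference quotients of F(x,y) = 3y^{2/3} from Example 9 on the half-plane y ≥ ε with the bound L = sup |∂F/∂y| = 2ε^{-1/3}:

```python
import math

def F(x, y):
    # Right side of Example 9's DE: F(x,y) = 3*y**(2/3), for y > 0
    return 3.0 * y ** (2.0 / 3.0)

eps = 0.5
L = 2.0 * eps ** (-1.0 / 3.0)   # sup of |dF/dy| = 2*y**(-1/3) over the half-plane y >= eps

# Largest difference quotient |F(x,y) - F(x,z)| / |y - z| over a grid with y, z >= eps
worst_ratio = 0.0
for i in range(200):
    for j in range(200):
        y = eps + 0.05 * i
        z = eps + 0.05 * j
        if y != z:
            worst_ratio = max(worst_ratio, abs(F(0.0, y) - F(0.0, z)) / abs(y - z))

print(worst_ratio <= L)
```

Every difference quotient stays below sup |∂F/∂y|, as Lemma 1 predicts; near y = 0 the quotients blow up, which is why no Lipschitz condition holds in the whole half-plane y > 0.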
The case F(x,y) ≡ g(x) of ordinary integration, or "quadrature," is easily
identified as the case when L = 0 in (23'). A Lipschitz condition is satisfied even
if g(x) is discontinuous.
LEMMA 2. Let u be a differentiable function satisfying the differential inequality

(24)  u'(x) ≤ Ku(x),  a ≤ x ≤ b

where K is a constant. Then

(24')  u(x) ≤ u(a)e^{K(x-a)}  for a ≤ x ≤ b
Proof. Multiply both sides of (24) by e^{-Kx} and transpose, getting

[u(x)e^{-Kx}]' = [u'(x) - Ku(x)]e^{-Kx} ≤ 0

The function u(x)e^{-Kx} thus has a negative or zero derivative and so is nonincreasing for a ≤ x ≤ b. Therefore, u(x)e^{-Kx} ≤ u(a)e^{-Ka}, q.e.d.
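Lemma 2 is easy to check numerically: integrate, by forward Euler, a DE whose right side is at most Ku (the particular choice u' = Ku sin²x below is ours, purely for illustration), and compare with the bound (24'):

```python
import math

K = 1.5
a, b, n = 0.0, 2.0, 2000
h = (b - a) / n
u0 = 1.0

# Integrate u' = K*u*sin(x)**2 by forward Euler; since 0 <= sin(x)**2 <= 1 and u > 0,
# this u satisfies the differential inequality (24): u' <= K*u
u = u0
x = a
ok = True
for _ in range(n):
    u += h * K * u * math.sin(x) ** 2
    x += h
    if u > u0 * math.exp(K * (x - a)) + 1e-9:   # Lemma 2's bound (24')
        ok = False
print(ok)
```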
LEMMA 3. The one-sided Lipschitz condition (23) implies that

[g(x) - f(x)][g'(x) - f'(x)] ≤ L[g(x) - f(x)]²

for any two solutions f(x) and g(x) of y' = F(x,y).

Proof. Setting f(x) = y₁, g(x) = y₂, we have

(y₂ - y₁)(y₂' - y₁') = (y₂ - y₁)[F(x,y₂) - F(x,y₁)]

from the DE. If y₂ > y₁, then, by (23), the right side of this equation has the upper bound L(y₂ - y₁)². Since all expressions are unaltered when y₁ and y₂ are interchanged, we see that the inequality of Lemma 3 is true in any case.
We now prove that solutions of y' = F(x,y) depend continuously (and hence
uniquely) on their initial values, provided that a one-sided Lipschitz condition holds.
THEOREM 5. Let f(x) and g(x) be any two solutions of the first-order normal DE y' = F(x,y) in a domain D where F satisfies the one-sided Lipschitz condition (23). Then

(25)  |f(x) - g(x)| ≤ e^{L(x-a)} |f(a) - g(a)|  if x ≥ a

Proof. Consider the function

u(x) = [g(x) - f(x)]²
Computing the derivative by elementary formulas, we have

u'(x) = 2[g(x) - f(x)] · [g'(x) - f'(x)]
By Lemma 3, this implies that u'(x) ≤ 2Lu(x); and by Lemma 2, this implies u(x) ≤ e^{2L(x-a)}u(a). Taking the square root of both sides of this inequality (which are nonnegative), we get (25), completing the proof.
As the special case f(a) = g(a) of Theorem 5, we get uniqueness for the initial value problem: in any domain where F satisfies the one-sided Lipschitz condition (23), at most one solution of y' = F(x,y) for x ≥ a satisfies f(a) = c. However,
we do not get uniqueness or continuity for decreasing x. We now prove that we have uniqueness and continuity in both directions when the Lipschitz condition (23') holds.
THEOREM 6. If (23') holds in Theorem 5, then

(26)  |f(x) - g(x)| ≤ e^{L|x-a|} |f(a) - g(a)|

In particular, the DE y' = F(x,y) has at most one solution curve passing through any point (a,c) ∈ D.

Proof. Since (23') implies (23), we know that the inequality (23) holds; from Theorem 5, this gives (26) for x ≥ a. Since (23') also implies (23) when x goes to -x, we also have by Theorem 5

|f(x) - g(x)| ≤ e^{L(a-x)} |f(a) - g(a)| = e^{L|x-a|} |f(a) - g(a)|

giving (26) also for x ≤ a, and completing the proof.
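Theorem 6's bound (26) can be illustrated with a crude forward Euler integration (the example F(x,y) = x - y, which satisfies (23') with L = 1, and the step sizes are our own choices):

```python
import math

def F(x, y):
    # F(x,y) = x - y satisfies the Lipschitz condition (23') with L = 1
    return x - y

L = 1.0
a, b, n = 0.0, 3.0, 3000
h = (b - a) / n

f, g = 1.0, 1.01          # nearby initial values f(a) and g(a)
gap0 = abs(f - g)
x = a
ok = True
for _ in range(n):
    f += h * F(x, f)
    g += h * F(x, g)
    x += h
    # Theorem 6's bound (26): |f(x) - g(x)| <= e^(L|x-a|) |f(a) - g(a)|
    if abs(f - g) > math.exp(L * abs(x - a)) * gap0 + 1e-12:
        ok = False
print(ok)
```

Here the gap between the two numerical solutions actually shrinks, comfortably inside the exponential envelope.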
EXERCISES F
1. In which domains do the following functions satisfy a Lipschitz condition?
(a) F(x,y) = 1 + x²  (b) F(x,y) = 1 + y²
(c) F(x,y) = y/(1 + x²)  (d) F(x,y) = x/(1 + y²)
2. Find all solutions of y' = |xy|.
3. Show that the DE xu' - 2u + x = 0 has a two-parameter family of solutions.
[HINT: Join together solutions satisfying u(0) = 0 in each half-plane separately.]
11 A Comparison Theorem
29
4. Let f and g be solutions of y' = F(x,y), where F is a continuous function. Show that the functions m and M, defined as m(x) = min (f(x), g(x)) and M(x) = max (f(x), g(x)), satisfy the same DE. [HINT: Discuss separately the cases f(x) = g(x), f(x) < g(x), and f(x) > g(x).]
5. Let u(t), positive and of class C¹ for a ≤ t ≤ a + ε, satisfy the differential inequality u'(t) ≤ Ku(t) log u(t). Show that u(t) ≤ u(a)^{exp [K(t-a)]}.
6. Let F(x,y) = y log (1/y) for 0 < y ≤ 1, F(x,y) = 0 for y = 0. Show that y' = F(x,y) has at most one solution satisfying f(0) = c, even though F does not satisfy a Lipschitz condition.
7. (Peano uniqueness theorem.) For each fixed x, let F(x,y) be a nonincreasing function of y. Show that, if f(x) and g(x) are two solutions of y' = F(x,y), and b > a, then |f(b) - g(b)| ≤ |f(a) - g(a)|. Infer a uniqueness theorem.
8. Discuss uniqueness and nonuniqueness for solutions of the DE y' = -y^{1/3}. [HINT: Use Ex. 7.]
9. (a) Prove a uniqueness theorem for y' = xy on -∞ < x,y < +∞.
*(b) Prove the same result for y' = y^{2/3} + 1.
10. (Generalized Lipschitz condition.) Let F be continuous and satisfy
|F(x,y) - F(x,z)| ≤ k(x)|y - z|
identically on the strip 0 < x ≤ a. Show that, if the improper integral ∫₀ᵃ k(x) dx is finite, then y' = F(x,y) has at most one solution satisfying y(0) = 0.
*11. Let F be continuous and satisfy
|F(x,y) - F(x,z)| ≤ K|y - z| log (|y - z|⁻¹)  for |y - z| < 1
Show that the solutions of y' = F(x,y) are unique.
*11 A COMPARISON THEOREM
Since most DEs cannot be solved in terms of elementary functions, it is important to be able to compare the unknown solutions of one DE with the known solutions of another. It is also often useful to compare functions satisfying the differential inequality
(27)  f'(x) ≤ F(x, f(x))
with exact solutions of the DE (3). The following theorem gives such a comparison.
THEOREM 7. Let F satisfy a Lipschitz condition for x ≥ a. If the function f satisfies the differential inequality (27) for x ≥ a, and if g is a solution of y' = F(x,y) satisfying the initial condition g(a) = f(a), then f(x) ≤ g(x) for all x ≥ a.
Proof. Suppose that f(x₁) > g(x₁) for some x₁ in the given interval, and define x₀ to be the largest x in the interval a ≤ x ≤ x₁ such that f(x) ≤ g(x). Then
f(x₀) = g(x₀). Letting u(x) = f(x) - g(x), we have u(x) ≥ 0 for x₀ ≤ x ≤ x₁; and, also for x₀ ≤ x ≤ x₁,

u'(x) = f'(x) - g'(x) ≤ F(x,f(x)) - F(x,g(x)) ≤ L(f(x) - g(x)) = Lu(x)

where L is the Lipschitz constant for the function F. That is, the function u satisfies the hypothesis of Lemma 2 of §10 on x₀ ≤ x ≤ x₁, with K = L. Hence u(x) ≤ u(x₀)e^{L(x-x₀)} = 0, and so u, being nonnegative, vanishes identically. But this contradicts the hypothesis f(x₁) > g(x₁). We conclude that f(x) ≤ g(x) for all x in the given interval, q.e.d.
THEOREM 8 (Comparison Theorem). Let f and g be solutions of the DEs

(28)  y' = F(x,y),  z' = G(x,z)

respectively, where F(x,y) ≤ G(x,y) in the strip a ≤ x ≤ b, and F or G satisfies a Lipschitz condition. Let also f(a) = g(a). Then f(x) ≤ g(x) for all x ∈ [a,b].
Proof. Let G satisfy a Lipschitz condition. Since f'(x) = F(x,f(x)) ≤ G(x,f(x)), the functions f and g satisfy the conditions of Theorem 7 with G in place of F. Therefore, the inequality f(x) ≤ g(x) for x ≥ a follows immediately.

If F satisfies a Lipschitz condition, the functions u = -f(x) and v = -g(x) satisfy the DE u' = -F(x,-u) and the differential inequality

v' = -G(x,-v) ≤ -F(x,-v)

Theorem 7, applied to the functions v, u and H(x,v) = -F(x,-v), now yields the inequality v(x) ≤ u(x) for x ≥ a, or g(x) ≥ f(x), as asserted.
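A numerical sketch of the Comparison Theorem, with the illustrative choices F(x,y) = y and G(x,z) = z + 1 (so F < G everywhere, and both are Lipschitz with L = 1):

```python
a, b, n = 0.0, 2.0, 2000
h = (b - a) / n

f = g = 1.0               # equal initial values f(a) = g(a), as in Theorem 8
ok = True
for _ in range(n):
    f += h * f            # Euler step for y' = F(x,y) = y
    g += h * (g + 1.0)    # Euler step for z' = G(x,z) = z + 1
    if f > g:             # Theorem 8 predicts f(x) <= g(x) on [a,b]
        ok = False

print(ok, f < g)
```

Both printed values are True: the ordering f ≤ g is never violated, and the inequality is in fact strict past the initial point.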
The inequality f(x) ≤ g(x) in this Comparison Theorem can often be replaced by a strict inequality. Either f and g are identically equal for a ≤ x ≤ x₁, or else f(x₀) < g(x₀) for some x₀ in the interval (a, x₁). By the Comparison Theorem, the function u₁(x) = g(x) - f(x) is nonnegative for a ≤ x ≤ x₁, and moreover u₁(x₀) > 0. Much as in the preceding proof,

u₁'(x) = G(x,g(x)) - F(x,f(x)) ≥ G(x,g(x)) - G(x,f(x)) ≥ -Lu₁

Hence [e^{Lx}u₁(x)]' = e^{Lx}[u₁' + Lu₁] ≥ 0; from this expression e^{Lx}u₁(x) is a nondecreasing function on a ≤ x ≤ x₁. Consequently, we have

e^{Lx₁}u₁(x₁) ≥ e^{Lx₀}u₁(x₀) > 0

which gives the strict inequality f(x₁) < g(x₁). This proves

COROLLARY 1. In Theorem 8, for any x₁ > a, either f(x₁) < g(x₁), or f(x) ≡ g(x) for all x ∈ [a,x₁].
Theorem 8 can also be sharpened in another way, as follows.
COROLLARY 2. In Theorem 8, assume that F, as well as G, satisfies a Lipschitz condition and, instead of f(a) = g(a), that f(a) < g(a). Then f(x) < g(x) for x ≥ a.

Proof. The proof will be by contradiction. If we had f(x) ≥ g(x) for some x > a, there would be a first x = x₁ > a where f(x₁) = g(x₁). The two functions y = φ(x) = f(-x) and z = ψ(x) = g(-x) satisfy the DEs y' = -F(-x,y) and z' = -G(-x,z), as well as the respective initial conditions φ(-x₁) = ψ(-x₁). Since -F(-x,y) ≥ -G(-x,y), we can apply Theorem 7 in the interval [-x₁, -a], knowing that the function -F(-x,y) satisfies a Lipschitz condition. We conclude that φ(-a) ≥ ψ(-a), that is, that f(a) ≥ g(a), a contradiction.
*12 REGULAR AND NORMAL CURVE FAMILIES
In this chapter, we have analyzed many methods for solving first-order DEs
of the related forms y' = F(x,y), M(x,y) + N(x,y)y' = 0, and M(x,y) dx + N(x,y) dy = 0, describing conditions under which their "solution curves" and/or "integral curves" constitute "one-parameter families" filling up appropriate domains of the (x,y)-plane. In this concluding section, we will try to clarify further the relationship between such first-order DEs and one-parameter curve families.
A key role is played by the Implicit Function Theorem, which shows† that the level curves u = C of any function u ∈ C¹(D) have the following properties in
any domain D not containing any critical point: (i) one and only one curve of the family passes through each point of D, (ii) each curve of the family has a tangent at every point, and (iii) the tangent direction is a continuous function of position. Thus, they constitute a regular curve family in the sense of the following definition.
DEFINITION. A regular curve family is a curve family that satisfies conditions (i) through (iii).
Thus, the circles x² + y² = C (C > 0) form a regular curve family; they are the integral curves of x + yy' = 0, the DE of Example 1. Concerning the DE y' = y³ - y of Example 2, even though it is harder to integrate, we can say
more: its solution curves form a normal curve family in the following sense.
DEFINITION. A regular curve family is normal when no curve of the family has a vertical tangent anywhere.
Almost by definition, the curves of any normal curve family are solution
curves of the normal DE y' = F(x,y), where F(x,y) is the slope at (x,y) of the
curve passing through it. Moreover, by Theorem 5', if F ∈ C¹, there are no other solution curves.
The question naturally arises: do the solution curves of y ' = F(x,y) always
form a normal curve family in any domain where F ∈ C¹? They always do locally, but the precise formulation and proof of a theorem to this effect are very dif-
† Where ∂u/∂y = 0 but ∂u/∂x ≠ 0, we can set x = g(y) locally on the curve; see below.
ficult, and will be deferred to Chapter 6. There we will establish the simpler result that the initial value problem is locally well-posed for such DEs, after treating (in Chapter 4) the case that Fis analytic (i.e., the sum of a convergent power series).
In the remaining paragraphs of this chapter, we will simply try to clarify further what the Implicit Function theorem does and does not assert about "level curves."
Parametrizing Curve Families. Although the name "level curve" suggests
that for each C the set of points where F(x,y) = C is always a single curve, this is not so. Thus, consider the level curves of the function F(x,y) = (x² + y²)² - 2x² + 2y². The level curve F = 0 is the lemniscate r² = 2 cos 2θ, and is divided by the critical point at the origin into two pieces. Inside each lobe of this lemniscate is one piece of the level curve F = C for -1 < C < 0, while the "level curve" F = -1 consists of the other two critical points (±1, 0).
Similarly, in the infinite horizontal strip -1 < y < 1, every solution curve y = sin x + C of the DE y' = cos x consists of an infinite number of pieces. The same is true of the integral curves of the DE cos x dx = sin x dy, which are the level curves of e^{-y} sin x. (These can also be viewed as the graphs of the functions y = ln |sin x| + C and the vertical lines x = ±nπ.) In general, one cannot parametrize the level curves of F(x,y) globally by the parameter C.
However, one can parametrize the level curves of any function u ∈ C¹ locally, in some neighborhood of any point (x₀,y₀) where ∂u/∂y ≠ 0. For, by the Implicit Function Theorem, there exist positive ε and η such that for all x ∈ (x₀ - ε, x₀ + ε) and c ∈ (u₀ - ε, u₀ + ε), there is exactly one y ∈ (y₀ - η, y₀ + η) such that u(x,y) = c. This defines a function y(x,c) locally, in a rectangle of the (x,u)-plane. The parameter c parametrizes the level curves of u(x,y) in the corresponding neighborhood of (x₀,y₀) in the (x,y)-plane; cf. Figure 1.8.
[Figure 1.8: the level curves u(x,y) = c between the vertical lines x = x₀ - ε and x = x₀ + ε.]
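The local solving of u(x,y) = c for y, which underlies this parametrization, can be sketched numerically, say by bisection (the function u = x² + y², the base point (0, 1), and the tolerances below are our own illustrative choices; the level curves of this u are the circles of Example 1):

```python
def u(x, y):
    # u(x,y) = x**2 + y**2; du/dy = 2y != 0 near the base point (x0, y0) = (0, 1)
    return x * x + y * y

def y_of(x, c, lo=0.5, hi=1.5, tol=1e-12):
    # u(x, .) is increasing in y on [lo, hi], so bisection finds the unique root there
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if u(x, mid) < c:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

y = y_of(0.1, 1.0)                    # the level curve u = 1 near (0, 1), at x = 0.1
print(abs(u(0.1, y) - 1.0) < 1e-9)    # (x, y(x,c)) lies on the level curve
```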
EXERCISES G
1. Let f(u) be continuous and a + bf(u) ≠ 0 for p ≤ u ≤ q. Show that the DE y' = f(ax + by + c) (a, b, c are constants) has a solution passing through every point of the strip p < ax + by + c < q.
2. Find all solutions of the DE y' = |x³y³|.
3. Show that, if M and N are homogeneous functions of the same degree, then (1') has the integrating factor (xM + yN)⁻¹ in any simply connected domain where xM + yN does not vanish.
4. Show that if g(y) satisfies a Lipschitz condition, the solutions of y' = g(y) form a normal curve family in the (x,y)-plane. [HINT: Apply the Inverse Function Theorem to x = ∫ dy/g(y) + C.]
5. Let g(x) be continuous for 0 ≤ x < ∞, lim_{x→∞} g(x) = b, and a > 0. Show that, for every solution y = f(x) of y' + ay = g(x), we have lim_{x→∞} f(x) = b/a.
6. Show that if a < 0 in Ex. 5, then there exists one and only one solution of the DE such that lim_{x→∞} f(x) = b/a.
*7. (Osgood's Uniqueness Theorem.) Suppose that φ(u) is a continuous increasing function defined and positive for u > 0, such that ∫ₜ¹ du/φ(u) → ∞ as t → 0. If |F(x,y) - F(x,z)| ≤ φ(|y - z|), then the solutions of the DE (3) are unique. [HINT: Use Ex. E4.]
8. Let F, G, f, g be as in Theorem 8, with F(x,y) < G(x,y). Show that f(x) ≤ g(x) for x ≥ a, without assuming that F or G satisfies a Lipschitz condition.
9. Show that the conditions dx/dt = |x|^{1/2} and x(0) = -1 define a well-posed initial value problem on [0,a] if a ≤ 2, but not if a > 2.
10. (a) Find the critical points of the DE x dy = y dx.
(b) Show that in the punctured plane (the x,y-plane with the origin deleted), the integral curves of xy' = y are the lines θ = c, where θ is a periodic angular variable only determined up to integral multiples of 2π.
(c) What are its solution curves?
(d) Show that the real variables x/r = cos θ and y/r = sin θ are integrals of xy' = y, and describe carefully their level curves.
*11. (a) Prove that there is no real-valued function u ∈ C¹ in the punctured plane of Ex. 10 whose level curves are the integral curves of xy' = y.
(b) Show that the integral curves of y' = (x + y)/(x - y) are the equiangular spirals r = ke^{θ-c}, k ≠ 0.
(c) Prove that there is no real-valued function u ∈ C¹ whose level curves are these spirals.
CHAPTER 2
SECOND-ORDER LINEAR
EQUATIONS
I BASES OF SOLUTIONS
The most intensively studied class of ordinary differential equations is that of second-order linear DEs of the form
(1)  p₀(x) d²u/dx² + p₁(x) du/dx + p₂(x)u = p₃(x)

The coefficient-functions pᵢ(x) (i = 0, 1, 2, 3) are assumed continuous and real-valued on an interval I of the real axis, which may be finite or infinite. The interval I may include one or both of its endpoints, or neither of them. The central problem is to find and describe the unknown functions u = f(x) on I satisfying this equation, the solutions of the DE. The present chapter will be devoted to second-order linear DEs and the behavior of their solutions.
Dividing (1) through by the leading coefficient p0(x), one obtains the normal form
(1')  d²u/dx² + p(x) du/dx + q(x)u = r(x),  where p = p₁/p₀, q = p₂/p₀, r = p₃/p₀
This DE is equivalent to (1) so long as p₀(x) ≠ 0; if p₀(x₀) = 0 at some point x = x₀, then the functions p and q are not defined at the point x₀. One therefore says that the DE (1) has a singular point, or singularity, at the point x₀, when p₀(x₀) = 0.
For example, the Legendre DE

(*)  d/dx [(1 - x²) du/dx] + λu = 0

has singular points at x = ±1. This is evident since, when rewritten in the form (1), it becomes (1 - x²)u″ - 2xu' + λu = 0. Although it has polynomial solutions when λ = n(n + 1), as we shall see in Ch. 4, §1, all its other nontrivial solutions have a singularity at either x = 1 or x = -1.
Likewise, the Bessel DE

(**)  x²u″ + xu' + (x² - n²)u = 0

has a singular point at x = 0, and nowhere else. More commonly written in the normal form

u″ + (1/x)u' + (1 - n²/x²)u = 0

its important Bessel function solution J₀(x) will be discussed in Ch. 4, §8.
Linear DEs of the form (1) or (1') are called homogeneous when their right-hand sides are zero, so that p₃(x) ≡ 0 in (1) or, equivalently, r(x) ≡ 0 in (1'). The homogeneous linear DE

(2)  p₀(x) d²u/dx² + p₁(x) du/dx + p₂(x)u = 0

obtained by dropping the forcing term p₃(x) from a given inhomogeneous linear DE (1) is called the reduced equation of (1). Evidently, the normal form of the reduced equation (2) of (1) is the reduced equation
(2')  d²u/dx² + p(x) du/dx + q(x)u = 0
of the normal form (1') of (1). A fundamental property of linear homogeneous DEs is the following Superposition Principle. Given any two solutions f₁(x) and f₂(x) of the linear homogeneous DE (2), and any two constants c₁ and c₂, the function

(3)  f(x) = c₁f₁(x) + c₂f₂(x)

is also a solution of (2). This property is characteristic of homogeneous linear equations; the function f is called a linear combination of the functions f₁ and f₂.
Bases of Solutions. It is a fundamental theorem, to be proved in §5, that if f₁(x) and f₂(x) are two solutions of (2'), and if neither is a multiple of the other, then every solution of (2') can be expressed in the form (3). A pair of functions with this property is called a basis of solutions.

Example 1. The trigonometric DE is u″ + k²u = 0; its solutions include cos kx and sin kx. Hence, all linear combinations a cos kx + b sin kx of these basic solutions are likewise solutions.
Evidently, the zero function u(x) ≡ 0 is a trivial solution of any homogeneous linear DE. Letting A = √(a² + b²), and expressing (a,b) = (A cos γ, A sin γ) in polar coordinates, we can also write

(4)  a cos kx + b sin kx = A cos(kx - γ)

for any nontrivial solution of u″ + k²u = 0. The constant A in (4) is called the amplitude of the solution; γ its initial phase, and k its wave number; k/2π is called its frequency, and 2π/k its period.
Constant-coefficient DEs. We next show how to construct a basis of solutions of any second-order constant-coefficient homogeneous linear DE

(5)  u″ + pu' + qu = 0  (p, q constants)

The trick is to set u = e^{-px/2}v(x), so that u' = e^{-px/2}[v' - pv/2] and u″ = e^{-px/2}[v″ - pv' + p²v/4], whence (5) is equivalent to

(5')  v″ + (q - p²/4)v = 0

There are three cases, depending on whether the discriminant d = p² - 4q is positive, negative, or zero.
Case 1. If d > 0, then (5') reduces to v″ = k²v, where k = √d/2. This DE has the functions v = e^{kx}, e^{-kx} as a basis of solutions, whence

(6a)  u = e^{(-p+√d)x/2},  u = e^{(-p-√d)x/2}

are a basis of solutions of (5). Actually, it is even simpler to make the "exponential substitution" u = e^{λx} in this case. Then (5) is equivalent to (λ² + pλ + q)e^{λx} = 0; the roots of the quadratic equation λ² + pλ + q = 0 are the coefficients of x in the exponents in (6a).
Case 2. If d < 0, then (5') reduces to v″ + k²v = 0, where k = √(-d)/2. This DE has cos kx, sin kx as a basis of solutions, whence

(6b)  u = e^{-px/2} cos(√(-d) x/2),  u = e^{-px/2} sin(√(-d) x/2)

form a basis of solutions of (5) when d < 0.
Case 3. When d = 0, (5') reduces to v″ = 0, which has 1 and x as a basis of solutions. Hence the pair

(6c)  u = e^{-px/2},  u = xe^{-px/2}

is a basis of solutions of (5) when p² = 4q.
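The three cases can be collected into one routine. The sketch below (our own illustrative packaging of (6a)-(6c), not notation from the text) builds a basis from given p and q, then confirms by central differences that each basis function satisfies u″ + pu' + qu = 0:

```python
import math

def basis(p, q):
    """A basis of solutions of u'' + p u' + q u = 0, by the discriminant cases above."""
    d = p * p - 4 * q
    if d > 0:                                    # Case 1: two real exponentials (6a)
        r1 = (-p + math.sqrt(d)) / 2
        r2 = (-p - math.sqrt(d)) / 2
        return (lambda x: math.exp(r1 * x), lambda x: math.exp(r2 * x))
    if d < 0:                                    # Case 2: damped oscillations (6b)
        k = math.sqrt(-d) / 2
        return (lambda x: math.exp(-p * x / 2) * math.cos(k * x),
                lambda x: math.exp(-p * x / 2) * math.sin(k * x))
    return (lambda x: math.exp(-p * x / 2),      # Case 3: double root (6c)
            lambda x: x * math.exp(-p * x / 2))

def residual(u, p, q, x, h=1e-4):
    # Central differences approximate u'' + p u' + q u; should be ~0 for a true solution
    upp = (u(x + h) - 2 * u(x) + u(x - h)) / h ** 2
    up = (u(x + h) - u(x - h)) / (2 * h)
    return upp + p * up + q * u(x)

all_ok = True
for p, q in [(3.0, 2.0), (2.0, 2.0), (2.0, 1.0)]:    # one sample (p,q) per case
    for u in basis(p, q):
        for x in (0.0, 0.7, 1.5):
            all_ok = all_ok and abs(residual(u, p, q, x)) < 1e-5
print(all_ok)
```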
2 INITIAL VALUE PROBLEMS
With differential equations arising from physical problems, one is often interested in particular solutions satisfying additional initial or boundary conditions. Thus, in Example 1, one may wish to find a solution satisfying u(0) = u₀ and u'(0) = u₀'. An easy way to find a solution satisfying these initial conditions is to use Eq. (4) with a = u₀ and b = u₀'/k. In general, given a second-order linear DE such as (1) or (1'), the problem of finding a solution u(x) that satisfies given initial conditions u(a) = u₀ and u'(a) = u₀' is called the initial value problem.
Example 2. Suppose we supplement the normal DE of Example 1 with the "forcing function" r(x) ≡ 3 sin 2x, and wish to find the solution of the resulting DE u″ + u = 3 sin 2x satisfying the initial conditions u(0) = u'(0) = 0.

To solve this initial value problem, we first construct a particular solution of this DE, trying u = A sin 2x, where A is an unknown coefficient to be determined. Substituting into the DE, we get (-4A + A) sin 2x = 3 sin 2x, or A = -1. Since a cos x + b sin x satisfies u″ + u = 0 for any constants a and b, it follows that any function of the form

u = a cos x + b sin x - sin 2x

satisfies the original DE u″ + u = 3 sin 2x. Such a function will satisfy u(0) = 0 if and only if a = 0, so that

u'(x) = b cos x - 2 cos 2x

In particular, therefore, u'(0) = b - 2 = 0. Hence the function u = 2 sin x - sin 2x solves the stated initial value problem.
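The answer to Example 2 is easy to confirm by direct substitution; a short check (the sample points are arbitrary):

```python
import math

def u(x):
    # Candidate solution of Example 2: u = 2 sin x - sin 2x
    return 2 * math.sin(x) - math.sin(2 * x)

def upp(x):
    # Exact second derivative: u'' = -2 sin x + 4 sin 2x
    return -2 * math.sin(x) + 4 * math.sin(2 * x)

# The DE u'' + u = 3 sin 2x holds identically, and u(0) = u'(0) = 0
print(all(abs(upp(x) + u(x) - 3 * math.sin(2 * x)) < 1e-12 for x in (0.0, 0.3, 1.0, 2.5)))
up0 = 2 * math.cos(0.0) - 2 * math.cos(0.0)    # u'(x) = 2 cos x - 2 cos 2x at x = 0
print(u(0.0) == 0.0 and up0 == 0.0)
```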
Particular solutions of constant-coefficient DEs with polynomial forcing terms can be treated similarly. Thus, to solve

u″ + pu' + qu = cx + d  (p, q, c, d constants)

it is simplest to look first for a particular solution of the form ax + b. Substituting into the DE, we obtain the equations qa = c and pa + qb = d. Unless q = 0, these give the particular solution

u = (c/q)x + (qd - pc)/q²

When q = 0 but p ≠ 0, we look for a quadratic solution; thus u″ + u' = x has the solution

u = x²/2 - x
Finally, u″ = cx + d has the cubic solution u = cx³/6 + dx²/2.
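These undetermined-coefficient formulas can be spot-checked by substitution (the constants below are chosen arbitrarily):

```python
# For u'' + p u' + q u = c x + d with q != 0, the particular solution found above is
# u = (c/q) x + (q d - p c)/q**2; verify it (u'' = 0 and u' = c/q for this u)
p, q, c, d = 3.0, 2.0, 4.0, 5.0
a_coef = c / q
b_coef = (q * d - p * c) / q ** 2

def lhs(x):
    u = a_coef * x + b_coef
    return p * a_coef + q * u      # u'' + p u' + q u, with u'' = 0 and u' = a_coef

print(all(abs(lhs(x) - (c * x + d)) < 1e-12 for x in (-1.0, 0.0, 2.0)))
```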
The procedure just followed can be used to solve initial value problems for many other second-order linear DEs of the forms (1) and (1'). It requires four steps.

STEP 1. Find a particular solution u_p(x) of the DE.

STEP 2. Find the general solution of the reduced equation obtained by setting p₃(x) ≡ 0 in (1), or r(x) ≡ 0 in (1'). It suffices to find two solutions φ(x) and ψ(x) of the reduced DE, neither of which is a multiple of the other.

STEP 3. Recognize u = aφ(x) + bψ(x) + u_p(x), where a and b are constants to be determined from the initial conditions, as the general solution of the inhomogeneous DE.

STEP 4. Solve for a and b the equations

φ(0)a + ψ(0)b = u₀ - u_p(0)
φ'(0)a + ψ'(0)b = u₀' - u_p'(0)

For these equations to be uniquely solvable, the condition

(4')  φ(0)ψ'(0) - ψ(0)φ'(0) ≠ 0

is clearly necessary and sufficient; the expression (4') is called the Wronskian of φ and ψ; we will discuss it in §5.
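Step 4 is just a 2 × 2 linear solve. Using the data of Example 2 (φ = cos x, ψ = sin x, u_p = -sin 2x, and u₀ = u₀' = 0), Cramer's rule recovers the coefficients a = 0, b = 2 found there; a sketch:

```python
u0, up0 = 0.0, 0.0             # prescribed u(0) and u'(0)
phi0, psi0 = 1.0, 0.0          # phi(0) = cos 0, psi(0) = sin 0
phip0, psip0 = 0.0, 1.0        # phi'(0) = -sin 0, psi'(0) = cos 0
upart0, upartp0 = 0.0, -2.0    # u_p(0) = -sin 0, u_p'(0) = -2 cos 0

# The determinant (4'); it must be nonzero for a unique pair (a, b)
wronskian = phi0 * psip0 - psi0 * phip0
a = ((u0 - upart0) * psip0 - psi0 * (up0 - upartp0)) / wronskian
b = (phi0 * (up0 - upartp0) - (u0 - upart0) * phip0) / wronskian
print(wronskian, a, b)   # → 1.0 0.0 2.0
```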
EXERCISES A
1. (a) Find the general solution of u" + 3u' + 2u = K, where K is an arbitrary constant. (b) Same question for u" + 3u' = K.
2. Solve the initial value problem for u" + 3u' + 2u = 0, and the following initial
conditions:
(a) u(0) = 1, u'(0) = 0 (b) u(0) = 0, u'(0) = 1. 3. Answer the same questions for u" + 2u' + 2u = 0.
4. Find a particular solution of each of the following DEs:
(a) u″ + 3u' + 2u = e^x  (b) u″ + 3u' + 2u = sin x
(c) u″ + 3u' + 2u = e^{-x}  *(d) u″ + 2u' + u = e^{-x}
5. Find the general solution of each of the DEs of Ex. 4.
6. Solve the initial value problem for each of the DEs of Exercise 4, with the initial con-
ditions u(0) = u'(0) = 4.
7. Find a particular solution of: (a) u″ + 2u' + 2u = e^{-x}, (b) u″ + 2u' + 2u = sin x, *(c) u″ + 2u' + 2u = e^{-x} sin x.
8. Solve the initial value problem for each of the DEs of Exercise 7, and the initial conditions u(0) = u'(0) = 0.
9. Show that any second-order linear homogeneous DE satisfied by x sin x must have a
singular point at x = 0.
3 QUALITATIVE BEHAVIOR; STABILITY
Note that when d < 0 (i.e., in Case 2), all nontrivial solutions of (5) reverse sign each time that x increases by π/k. Qualitatively speaking, they are oscillatory in the sense of changing sign infinitely often. These facts become evident if we rewrite (6b) in the form (4), as

(7)  u(x) = Ae^{-px/2} cos[k(x - φ)]

Contrastingly, when d > 0, a nontrivial solution u = ae^{αx} + be^{βx} of (6a) can vanish only when ae^{αx} = -be^{βx}. This implies that e^{(α-β)x} = -b/a, so that (i) a and b must have opposite signs, and (ii) x = ln|b/a|/(α - β). Hence, a nontrivial solution can change sign at most once: it is nonoscillatory. Likewise, in Case 3, a nontrivial solution can vanish only where a + bx = 0, or x = -a/b, giving the same result. We conclude:

THEOREM 1. If d ≥ 0, then a nontrivial solution of (5) can vanish at most once. If d < 0, however, it vanishes periodically with period π/k, k = √(-d)/2.
Stability. Even more important than being oscillatory or nonoscillatory is the property of being stable or unstable, in the sense of the following definitions.
DEFINITION. The homogeneous linear DE (2) is strictly stable when every solution tends to zero as x → ∞; it is stable when every solution remains bounded as x → ∞. When not stable, it is called unstable.
THEOREM 2. The constant-coefficient DE (5) is strictly stable when p > 0 and q > 0; it is stable when p = 0 but q > 0. It is unstable in all other cases.
Proof This result can be proved very simply if complex exponents are used freely (see Chapter 3, §3). In the real domain, however, one must distinguish several possibilities, viz.:
(A) If q < 0, then d > 0 and λ² + pλ + q = 0 must have two real roots of opposite sign. Instability is therefore obvious.
(B) If p < 0, instability is obvious from (6a)-(6c), if one keeps in mind the sign of p in each case.
(C) If p = 0 and q > 0, then we have Example 1: the DE (5) is stable but not strictly stable.
(D) If p > 0 and q > 0, there are two possibilities: (i) d ≤ 0, in which case we have strict stability by (6b) and (6c); (ii) d > 0, in which case √d < p since d = p² - 4q < p², and strict stability follows from (6a).
Second-order linear DEs with constant coefficients have so many applications that it is convenient to summarize their qualitative properties in a diagram; we have done this in Figure 2.1. (The words "focal," "nodal," and "saddle" point will be explained in §7; to have a focal point is equivalent to having oscillatory solutions.)
[Figure 2.1: the (p,q)-plane, divided by the parabola q = p²/4 and the coordinate axes into regions marked "unstable focal point," "stable focal point," "unstable nodal point," "stable nodal point," and "saddle point (unstable)."]

Figure 2.1 Stability Diagram for u″ + pu' + qu = 0.
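Theorem 2's trichotomy translates directly into a small classifier (a sketch; the sample points are ours, one from each region of the stability diagram):

```python
def stability(p, q):
    """Classify u'' + p u' + q u = 0 according to Theorem 2 (constant p, q)."""
    if p > 0 and q > 0:
        return "strictly stable"
    if p == 0 and q > 0:
        return "stable"
    return "unstable"

print(stability(2, 1))    # → strictly stable
print(stability(0, 4))    # → stable
print(stability(-1, 1))   # → unstable
print(stability(1, -1))   # → unstable  (q < 0: a saddle point)
```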
4 UNIQUENESS THEOREM
We are now ready to treat rigorously the initial value problem stated in §1. The first basic concept involved is very general and applies to any normal second-order DE u″ = F(x,u,u'), whether linear or not.
Think of x as time, and of the possible pairs (u,u') as states of a physical system,
which is governed (or modeled mathematically) by the given DE. Since u'
expresses the rate of change of u at any "time" x, while u" = du'/dx gives the
rate of change of u', it is natural to surmise that the present state of any such
system uniquely determines its state at all future times. Indeed, the theoretical
initial value problem is to prove this result as generally as possible.
In this section, we will prove it for second-order linear DEs of the form (1) having continuous coefficient-functions pᵢ(x) and no singular points. Since p₀(x) ≠ 0, it suffices to consider the normal form (1').

One would like to prove also that there always exists a solution for any initial (u₀, u₀'); this will be proved for second-order linear DEs having analytic coefficient-functions in Chapter 4, and (locally) for linear DEs having continuously differentiable coefficient-functions in Chapter 6. For the present, we will have to construct "particular" solutions and bases of solutions for homogeneous DEs u″ + p(x)u' + q(x)u = 0 by other methods.
Linear Operators. We begin by discussing carefully the general concept of a "linear operator." Clearly, the operation of transforming a given function f into a new function g by the rule

g(x) = p₀(x)f″(x) + p₁(x)f'(x) + p₂(x)f(x)
(for continuous pᵢ) is a transformation from one family of functions [in our case, the family C²(I) of twice continuously differentiable functions on a given interval I] to another family of functions [in our case, C(I)]. Such a functional transformation is called an operator, and is written in operator notation

L[f] = p₀f″ + p₁f' + p₂f

In our case, the operator L is linear; that is, it satisfies

L[cf + dg] = cL[f] + dL[g]

for any constants c and d.
As a special case (setting c = 1, d = -1), if u and v are any two solutions of the inhomogeneous linear DE (1), then their difference u - v satisfies

L[u - v] = L[u] - L[v] = p₃ - p₃ = 0

That is, their difference is a solution of the homogeneous second-order linear DE (2).
The preceding simple observations, whose proofs are immediate, have the following result as a direct consequence.
LEMMA 1. If the function v(x) is any particular solution† of the inhomogeneous DE (1), then the general solution of (1) is obtained by adding to v(x) the general solution of the corresponding homogeneous linear DE (2).

For, if u(x) is any other solution of (1), then u(x) = v(x) + [u(x) - v(x)], where L[u(x) - v(x)] = 0 as before. More generally, the following lemma holds.
LEMMA 2. If u(x) is a solution of L[u] = r(x), if v(x) is a solution of L[u] = s(x), and if c,d are constants, then w = cu(x) + dv(x) is a solution of the DE L[u] = cr(x) + ds(x).
The proof is trivial, but the result describes the fundamental property of linear operators. Its use greatly simplifies the solution of inhomogeneous linear DEs.
Main Theorem. Having established these preliminary results, it is easy to prove a strong uniqueness theory for second-order linear DEs.
THEOREM 3 (Uniqueness Theorem). If p and q are continuous, then at most one solution of (1') can satisfy given initial conditions f(a) = c₀ and f'(a) = c₁.

Proof. Let v and w be any two solutions of (1') that satisfy these initial conditions; we shall show that their difference u = v - w vanishes identically.
† The phrase "particular solution" is used to emphasize that only one solution of (1) need be found, thus reducing the problem of solving it to the case p₃(x) ≡ 0.
Indeed, u satisfies (8) by Lemma 1. It also satisfies the initial conditions u = u' = 0 when x = a. Now consider the nonnegative function σ(x) = u² + u'². By definition, σ(a) = 0. Differentiating, we have, since r(x) ≡ 0,

σ'(x) = 2u'(u + u″) = 2u'[u - p(x)u' - q(x)u] = -2p(x)u'² + 2[1 - q(x)]uu'

Since (u ± u')² ≥ 0, it follows that |2uu'| ≤ u² + u'². Hence

2[1 - q(x)]uu' ≤ (1 + |q(x)|)(u² + u'²)

and

σ'(x) ≤ [1 + |q(x)|]u² + [1 + |q(x)| + |2p(x)|]u'²

Therefore, if K = 1 + max [|q(x)| + 2|p(x)|], the maximum being taken over any finite closed interval [a, b], we obtain

σ'(x) ≤ Kσ(x),  K < +∞

By Lemma 2 of Ch. 1, §10, it follows that σ(x) ≡ 0 for all x ∈ [a, b]. Hence u(x) ≡ 0 and v(x) ≡ w(x) on the interval, as claimed.
The Uniqueness Theorem just proved implies an important extension of the Superposition Principle stated in §1.
THEOREM 4. Let f and g be two solutions of the homogeneous second-order linear DE

(8)  u″ + p(x)u' + q(x)u = 0  (p, q continuous)

For some x = x₀, let (f(x₀), f'(x₀)) and (g(x₀), g'(x₀)) be linearly independent vectors. Then every solution of this DE is equal to some linear combination

h(x) = cf(x) + dg(x)

of f and g with constant coefficients c, d.

In other words, the general solution of the given homogeneous DE (8) is cf(x) + dg(x), where c and d are arbitrary constants.
Proof. By the Superposition Principle, any such h(x) satisfies (8). Conversely, suppose the function h(x) satisfies the given DE (8). Then, at the given point x₀, constants c and d can be found such that

cf(x₀) + dg(x₀) = h(x₀),  cf'(x₀) + dg'(x₀) = h'(x₀)
In fact, the constants c and d are given by Cramer's Rule, as
C = (hogo - goh6)/(fogo - gJo) d = (Joh' - hJo)/(fogo - gJo)
where we have used the abbreviations/0 = .f(x0),/ 0 = f'(x 0), and so on. For this
choice of c and d, the function
u(x) = h(x) - cf(x) - dg(x)

satisfies the given homogeneous DE by the Superposition Principle and the initial conditions u(x₀) = u'(x₀) = 0. Hence, by the Uniqueness Theorem, u(x) is the trivial solution u(x) ≡ 0 of the given homogeneous DE; therefore h = cf + dg.

Two solutions, f and g, of a homogeneous linear second-order DE (8) with the property that every other solution can be expressed as a linear combination of them are said to be a basis of solutions of the DE.
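As an illustrative aside (not part of the original text), the Cramer's Rule formulas for c and d in the proof above can be checked numerically. Here f = cos x and g = sin x form a basis of solutions of u" + u = 0, and `basis_coefficients` is our own hypothetical helper name:

```python
import math

# Basis solutions of u'' + u = 0 and their derivatives
f, fp = math.cos, lambda x: -math.sin(x)
g, gp = math.sin, math.cos

def basis_coefficients(h, hp, x0):
    """Solve c*f + d*g = h at x0 via Cramer's Rule (Theorem 4)."""
    W = f(x0) * gp(x0) - g(x0) * fp(x0)        # Wronskian at x0
    c = (h(x0) * gp(x0) - g(x0) * hp(x0)) / W
    d = (f(x0) * hp(x0) - h(x0) * fp(x0)) / W
    return c, d

# The solution h = 3 cos x + 4 sin x should give back c = 3, d = 4
h  = lambda x: 3 * math.cos(x) + 4 * math.sin(x)
hp = lambda x: -3 * math.sin(x) + 4 * math.cos(x)
c, d = basis_coefficients(h, hp, x0=0.7)
print(round(c, 10), round(d, 10))              # -> 3.0 4.0
```

Since the Wronskian of cos x and sin x never vanishes, any choice of x₀ serves equally well.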
5 THE WRONSKIAN
The question of whether two solutions of a homogeneous linear DE form a basis of solutions is easily settled by examining their Wronskian, a concept that we now define.
DEFINITION. The Wronskian of any two differentiable functions f(x) and g(x) is

(9)

W(f, g; x) = f(x)g'(x) - g(x)f'(x) = | f(x)   g(x)  |
                                     | f'(x)  g'(x) |
THEOREM 5. The Wronskian (9) of any two solutions of (8) satisfies the identity

(10)

W(f, g; x) = W(f, g; a) exp ( -∫ₐˣ p(t) dt )

Proof. If we differentiate (9) and write W(f, g; x) = W(x) for short, a direct computation gives W' = fg" - gf". Substituting for g" and f" from (8) and cancelling, we have the linear homogeneous first-order DE

(11)

W'(x) + p(x)W(x) = 0

Equation (10) follows from the first-order homogeneous linear DE (11) by Theorem 4 of Ch. 1, §6.
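A numerical sanity check of identity (10) (our own sketch, not from the text): f(x) = x + 1 and g(x) = e^x both solve xu" - (1 + x)u' + u = 0, whose normal form has p(x) = -(1 + x)/x, and Abel's formula should reproduce the Wronskian W(x) = xe^x from its value at a = 1:

```python
import math

# Normal-form coefficient p(x) for x u'' - (1+x) u' + u = 0
p = lambda t: -(1 + t) / t

def wronskian(x):
    f, fp = x + 1, 1.0
    g, gp = math.exp(x), math.exp(x)
    return f * gp - g * fp                 # here W(x) = x e^x

def simpson(fun, a, b, n=1000):            # composite Simpson quadrature
    h = (b - a) / n
    s = fun(a) + fun(b) + sum((4 if k % 2 else 2) * fun(a + k * h) for k in range(1, n))
    return s * h / 3

a, x = 1.0, 3.0
lhs = wronskian(x)
rhs = wronskian(a) * math.exp(-simpson(p, a, x))   # identity (10)
print(abs(lhs - rhs) < 1e-6)               # -> True
```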
COROLLARY. The Wronskian of any two solutions of the homogeneous linear DE (8) is identically positive, identically negative, or identically zero.
We now relate the Wronskian of two functions to the concept of linear independence. In general, a collection of functions f₁, f₂, ..., fₙ is called linearly independent on the interval a < x < b when no linear combination c₁f₁(x) + c₂f₂(x) + ... + cₙfₙ(x) of these functions gives the identically zero function for a < x < b, except the trivial linear combination where all coefficients vanish. Functions that are not linearly independent are called linearly dependent. If f and g are any two linearly dependent functions, then cf + dg ≡ 0 for suitable constants c and d, not both zero. Hence g = -(c/d)f or f = -(d/c)g; the functions f and g are proportional.
LEMMA. If f and g are linearly dependent differentiable functions, then their Wronskian vanishes identically.
Proof. Suppose that f and g are linearly dependent. Then there are two constants c and d, not both zero, which satisfy the two linear equations

cf(x) + dg(x) = 0,  cf'(x) + dg'(x) = 0

identically on the interval of interest. Therefore, the determinant of the two equations, which is the Wronskian W(f, g; x), vanishes identically.
The interesting fact is that when f and g are both solutions of a second-order linear DE, a strong converse of this lemma is also true.
THEOREM 6. If f and g are two linearly independent solutions of the nonsingular second-order linear DE (8), then their Wronskian never vanishes.

Proof. Suppose that the Wronskian W(f, g; x) vanished at some point x₁. Then the vectors [f(x₁), f'(x₁)] and [g(x₁), g'(x₁)] would be linearly dependent and, therefore, proportional: g(x₁) = kf(x₁) and g'(x₁) = kf'(x₁) for some constant k. Consider now the function h(x) = g(x) - kf(x). This function is a solution of the DE (8), since it is a linear combination of solutions. It also satisfies the initial conditions h(x₁) = h'(x₁) = 0. By the Uniqueness Theorem, this function must vanish identically. Therefore, g(x) = kf(x) for all x, contradicting the hypothesis of linear independence of f and g.
Remark 1. The fact that the DE (8) is nonsingular is essential in Theorem 6. For example, the Wronskian x⁴ of the two linearly independent solutions x² and x³ of the DE x²u" - 4xu' + 6u = 0 vanishes at x = 0. This is possible because the leading coefficient p₀(x) of the DE vanishes there.
Remark 2. There is an obvious connection between the formula for the Wronskian of two functions and the formula for the derivative of their quotient:

(g/f)' = (fg' - gf')/f² = W(f, g)/f²
This suggests that the ratio of two functions is a constant if and only if their Wronskian vanishes identically. However, this need not be true if f vanishes: the ratio of the two functions x³ and |x|³ is not a constant, yet their Wronskian W(x³, |x|³) ≡ 0. (Note also that both functions satisfy the DEs xu' = 3u and xu" = 2u'.)
Nevertheless, the connection between W(f, g) and g/f is a useful one. Thus, it allows one to construct a second solution g(x) of (8) if one nontrivial solution f(x) is known. Namely, if P(x) = ∫ p(x) dx is any indefinite integral of p(x), then the function

(12)

g(x) = f(x) ∫ [e^{-P(x)}/f²(x)] dx

is a second, linearly independent solution of (8) in any interval where f(x) is nonvanishing. This is evident since (g/f)' = W(f, g)/f², while by (10) the Wronskian of any pair of solutions is a constant multiple of e^{-P(x)}.
For example, knowing that e^{3x} is one nontrivial solution of u" - 6u' + 9u = 0, we have P(x) = ∫ p dx = -6x, so e^{-P(x)} = e^{6x}, and we obtain the second solution

g(x) = e^{3x} ∫ [e^{6x}/(e^{3x})²] dx = e^{3x} ∫ dx = xe^{3x}
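The computation above can be imitated numerically (an illustrative sketch of ours): evaluating the integral in formula (12) by a quadrature rule should reproduce g(x) = xe^{3x}:

```python
import math

# Formula (12) for u'' - 6u' + 9u = 0: f(x) = e^{3x}, P(x) = -6x,
# so the integrand e^{-P}/f^2 = e^{6x}/e^{6x} is identically 1,
# and g(x) = f(x) * Int_0^x dt should equal x e^{3x}.
f = lambda x: math.exp(3 * x)
integrand = lambda x: math.exp(6 * x) / f(x) ** 2

def trapezoid(fun, a, b, n=2000):
    h = (b - a) / n
    return h * (fun(a) / 2 + sum(fun(a + k * h) for k in range(1, n)) + fun(b) / 2)

x = 0.8
g = f(x) * trapezoid(integrand, 0.0, x)
print(abs(g - x * math.exp(3 * x)) < 1e-9)     # -> True
```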
Riccati Equation. Finally, consider the formula for the derivative of the ratio v = u'/u,† where u is any nontrivial solution of (8):

(13)

v' = (u'/u)' = u"/u - u'²/u² = -p(x)v - q(x) - v²
The quadratic first-order DE (13) is called the Riccati equation associated with (8); its solutions form a one-parameter family. Conversely, if v(x) is any solution of the Riccati equation (13) and if u' = v(x)u, then u satisfies (8). Hence every solution u(x) of (8) can be written, in any interval where u does not vanish, in the form
(14)
u(x) = C exp ∫ v(x) dx
where v(x) is some solution of the associated Riccati equation (13).
The Riccati substitution v = u'/u thus reduces the problem of solving (8) to
the integration of a first-order quadratic DE and a quadrature. For instance,
† Since v = u'/u = d(ln u)/dx, this is called the logarithmic derivative of u.
the Riccati equation associated with the trigonometric equation u" + k²u = 0 is v' + v² + k² = 0, whose general solution is v = k tan k(x₁ - x).
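As a quick check (ours, not the text's), this general solution can be substituted back into the Riccati equation numerically, approximating v' by a centered difference:

```python
import math

# Check that v(x) = k tan k(x1 - x) solves the Riccati equation
# v' + v^2 + k^2 = 0 associated with u'' + k^2 u = 0.
k, x1 = 2.0, 1.0
v = lambda x: k * math.tan(k * (x1 - x))

x, h = 0.3, 1e-5
v_prime = (v(x + h) - v(x - h)) / (2 * h)   # centered-difference derivative
residual = v_prime + v(x) ** 2 + k ** 2
print(abs(residual) < 1e-5)                  # -> True
```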
EXERCISES B
1. Show that all solutions of (8) have continuous second derivatives. Show also that this is not true for (1).
2. Find a formula expressing the fourth derivative u^(iv) of any solution u of (8) in terms of u, u', and the derivatives of p and q. What differentiability conditions must be assumed on the coefficients of (8) to justify this formula?

For the solution pairs of the DEs specified in Exs. 3-5 to follow, (a) calculate the Wronskian, and (b) solve the initial-value problem for the DE specified with each of the initial conditions u(0) = 2, u'(0) = 1, and u(0) = 1, u'(0) = -1 (or explain why there is no solution).
3. f(x) = cos x, g(x) = sin x (solutions of u" + u = 0).
4. f(x) = e^{-x}, g(x) = e^{-3x} (solutions of u" + 4u' + 3u = 0).
5. f(x) = x + 1, g(x) = e^x (solutions of xu" - (1 + x)u' + u = 0).
6. Let f(x), g(x), and h(x) be any three solutions of (8). Show that

| f   f'   f"  |
| g   g'   g"  | ≡ 0
| h   h'   h"  |
7. (a) Prove the Corollary of Theorem 5.
(b) Prove that if f(x) and g(x) satisfy the hypotheses of Theorem 6, then
p(x) = (gf" - fg")/W and q(x) = (f'g" - g'f")/W.
8. What is wrong with the following "proof" of Theorem 5: "Let w(x) = log W(x); then w'(x) = -p(x). Hence, w(x) = w(a) - ∫ₐˣ p(t) dt, from which (10) follows."
9. Construct second-order linear homogeneous DEs having the following bases of solutions; you may assume the result of Ex. 7: (a) x, sin x, (b) x^m, x^n, (c) sinh x, sin x, (d) tan x, cot x.
For each of the examples of Ex. 9, determine the singular points of the resulting DE.
10. (a) Show that if p, q ∈ 𝒞ⁿ, then every solution of (8) is of class 𝒞ⁿ⁺².
(b) Show that if every solution of (8) is of class 𝒞ⁿ⁺², then p ∈ 𝒞ⁿ and q ∈ 𝒞ⁿ.
11. Let f(x), g(x), h(x) be three solutions of the linear third-order DE

u''' + p₁(x)u" + p₂(x)u' + p₃(x)u = 0

Derive a first-order DE satisfied by the determinant

        | f   f'   f"  |
w(x) =  | g   g'   g"  |
        | h   h'   h"  |
*12. Let y" + q(x)y = 0, where q(x) is "piecewise continuous" (i.e., continuous except for a finite number of finite jumps). Define a "solution" of such a DE as a function y = f(x) ∈ 𝒞¹ that satisfies the DE except at these jumps.
(a) Show that any such solution has left-hand and right-hand second derivatives at every point of discontinuity.
(b) Describe explicitly a basis of solutions for the DE y" + q(x)y = 0, if

q(x) = { +1  when x > 0
       { -1  when x < 0

[N.B. The preceding function q(x) is commonly denoted sgn x.]
6 SEPARATION AND COMPARISON THEOREMS
The Wronskian can also be used to derive properties of the graphs of solutions of the DE (8). The following result, the celebrated Sturm Separation Theorem, states that all nontrivial solutions of (8) have essentially the same number of oscillations, or zeros. (A "zero" of a function is a point where its value is zero; functions have two zeros in each complete oscillation.)
THEOREM 7. If f(x) and g(x) are linearly independent solutions of the DE (8), then f(x) must vanish at one point between any two successive zeros of g(x). In other words, the zeros of f(x) and g(x) occur alternately.

Proof. If g(x) vanishes at x = x₁, then the Wronskian

W(f, g; x₁) = f(x₁)g'(x₁) ≠ 0

since f and g are linearly independent; hence f(x₁) ≠ 0 and g'(x₁) ≠ 0 if g(x₁) = 0. If x₁ and x₂ are two successive zeros of g(x), then g'(x₁), g'(x₂), f(x₁), and f(x₂) are all nonzero. Moreover, the nonzero numbers g'(x₁) and g'(x₂) cannot have the same sign, because if g is increasing at x = x₁, then it must be decreasing at x = x₂, and vice-versa. Since W(f, g; x) has constant sign by the Corollary of Theorem 5, it follows that f(x₁) and f(x₂) must also have opposite signs. Therefore f(x) must vanish somewhere between x₁ and x₂.
For instance, applied to the trigonometric DE u" + k²u = 0, the Sturm Separation Theorem yields the well-known fact that the zeros of sin kx and cos kx must alternate, simply because these functions are two linearly independent solutions of the same linear homogeneous DE.
A slight refinement of the same reasoning can be used to prove an even more useful Comparison Theorem, also due to Sturm.
THEOREM 8. Let f(x) and g(x) be nontrivial solutions of the DEs u" + p(x)u = 0 and v" + q(x)v = 0, respectively, where p(x) ≥ q(x). Then f(x) vanishes at least once between any two zeros of g(x), unless p(x) ≡ q(x) and f is a constant multiple of g.
Proof. Let x₁ and x₂ be two successive zeros of g(x), so that g(x₁) = g(x₂) = 0. Suppose that f(x) failed to vanish in x₁ < x < x₂. Replacing f and/or g by its negative, if necessary, we could find solutions f and g positive on x₁ < x < x₂. This would make

W(f, g; x₁) = f(x₁)g'(x₁) ≥ 0

and

W(f, g; x₂) = f(x₂)g'(x₂) ≤ 0

On the other hand, since f > 0, g > 0, and p ≥ q on x₁ < x < x₂, we have

(d/dx)[W(f, g; x)] = fg" - gf" = (p - q)fg ≥ 0  on  x₁ < x < x₂

Hence W is nondecreasing, giving a contradiction unless

p - q ≡ W(f, g; x) ≡ 0

In this event, f = kg for some constant k by Theorem 4, completing the proof.
COROLLARY 1. If q(x) ≤ 0, then no nontrivial solution of u" + q(x)u = 0 can have more than one zero.

The proof is by contradiction. By the Sturm Comparison Theorem, the solution v ≡ 1 of the DE v" = 0 would have to vanish at least once between any two zeros of any nontrivial solution of the DE u" + q(x)u = 0.

The preceding results show that the oscillations of the solutions of u" + q(x)u = 0 are largely determined by the sign and magnitude of q(x). When q(x) ≤ 0, oscillations are impossible: no solution can change sign more than once. On the other hand, if q(x) ≥ k² > 0, then any solution of u" + q(x)u = 0 must vanish between any two successive zeros of any given solution A cos k(x - x₁) of the trigonometric DE u" + k²u = 0, hence in any interval of length π/k.
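The zero-spacing bound just derived can be tested numerically (an illustrative sketch; the solver below is our own). For u" + xu = 0 we have q(x) = x, so on the interval between consecutive zeros z₁ < z₂ the coefficient satisfies q(x) ≥ z₁, and by the Comparison Theorem the gap can be at most π/√z₁:

```python
import math

# Integrate u'' + x u = 0 by classical RK4, locate zeros by sign changes,
# and verify the Sturm bound z2 - z1 <= pi / sqrt(z1) for consecutive zeros.
def airy_zeros(a=1.0, b=100.0, h=1e-3):
    def deriv(x, u, v):
        return v, -x * u                       # u' = v, v' = -q(x) u
    u, v, x, zeros = 1.0, 0.0, a, []
    for _ in range(int((b - a) / h)):
        k1u, k1v = deriv(x, u, v)
        k2u, k2v = deriv(x + h/2, u + h/2*k1u, v + h/2*k1v)
        k3u, k3v = deriv(x + h/2, u + h/2*k2u, v + h/2*k2v)
        k4u, k4v = deriv(x + h, u + h*k3u, v + h*k3v)
        u_new = u + h/6 * (k1u + 2*k2u + 2*k3u + k4u)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        if u * u_new < 0:                      # sign change brackets a zero
            zeros.append(x + h * u / (u - u_new))
        u, x = u_new, x + h
    return zeros

zeros = airy_zeros()
gaps_ok = all(z2 - z1 <= math.pi / math.sqrt(z1) + 1e-6
              for z1, z2 in zip(zeros, zeros[1:]))
print(len(zeros) > 100 and gaps_ok)            # -> True
```

This also anticipates Ex. 6 below, which asks roughly how many times such a solution vanishes on (0, 100).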
This result can be applied to solutions of the Bessel DE (**) of §1 (i.e., to the Bessel functions of order n; see Ch. 4, §4). Substituting u = v/√x into (**), we obtain the equivalent DE

(15)

v" + [1 - (n² - ¼)/x²]v = 0

whose solutions vanish when u does (for x ≠ 0). Applying the Comparison Theorem to (15) and u" + u = 0, we obtain the following.

COROLLARY 2. Each interval of length π of the positive x-axis contains at least one zero of any solution of the Bessel DE of order zero, and at most one zero of any nontrivial solution of the Bessel DE of order n if n > ½.
The fact that the oscillations of the solutions of u" + q(x)u = 0 depend on the sign of q(x) is illustrated by Figures 2.2 and 2.3, which depict sample solution curves for the cases q(x) = 1 and q(x) = -1, respectively.
Figure 2.2 Solution curves of u" + u = 0.
7 THE PHASE PLANE
In the theory of normal second-order DEs u" = F(x,u,u'), linear or nonlin-
ear, the two-dimensional space of all vectors (u,u') is called the phase plane. As was noted in §5, the points of this phase plane correspond to the states of any physical system whose behavior is modeled by such a DE.
Clearly, any solution u(x) of the given DE determines a parametric curve or trajectory in this phase plane, which consists of all points [u(x), u'(x)] associated with this solution. [A trivial exception arises at equilibrium states, at which F(x, c, 0) ≡ 0, so that u'(x) ≡ 0 and u(x) ≡ c. Clearly, any such equilibrium point is necessarily on the u-axis, where u' = 0.]
Figure 2.3 Solution curves of u" - u = 0.
The trajectories just defined have some important general geometrical prop-
erties. For example, since u is increasing when u' > 0 and decreasing when u' < 0, the paths of solutions must go to the right in the upper half-plane and to
the left in the lower half-plane. Furthermore, paths of solutions ("trajectories")
must cut the u-axis u' = 0 orthogonally, except where F = 0.
We will treat in this chapter only homogeneous second-order linear DEs (8), deferring discussion of the nonlinear case to Chapter 5. Using the letter v to signify u', this DE is obviously equivalent to the system

(16)

du/dx = v,  dv/dx = -p(x)v - q(x)u

which can also be written in vector form as d/dx [u, v]ᵀ = [v, -p(x)v - q(x)u]ᵀ.
Note that if q(x) < 0, then du'/dx = -q(x)u has the same sign as u on the u-
axis. It follows that if q(x) is negative, then any trajectory once trapped in the first quadrant can never leave it, because it can neither cross the u-axis into the fourth quadrant nor recross the u'-axis into the second quadrant. The same is true, for similar reasons, of trajectories trapped in the third quadrant.
Even more important, two nontrivial solutions of (8) are linearly dependent if and only if they lie on the same straight line through the origin in the (u, v)-plane. It follows that each straight line through the origin moves as a unit. The preceding facts also become evident analytically if we introduce clockwise polar coordinates in the phase plane, by the formulas

(17)

u'(x) = r cos θ(x),  u(x) = r sin θ(x)

(We adopt this clockwise orientation so that θ will be an increasing function on the u'-axis.) Differentiating the relation cot θ = u'/u, we then have the formulas

-(csc²θ)θ' = (u"/u) - (u'/u)² = -p(u'/u) - q - (u'/u)²
           = -p cot θ - q - cot²θ

If we multiply through by -sin²θ, this equation gives

(18)

dθ/dx = cos²θ + p(x) cos θ sin θ + q(x) sin²θ
This first-order DE gives much information about the oscillations of u. Differentiating r²(x) = u²(x) + u'²(x) as in the proof of Theorem 1, where σ(x) = r²(x), we get

rr' = uu' + u'u" = u'(u - pu' - qu)
    = r² cos θ[(1 - q(x)) sin θ - p(x) cos θ]

Dividing through by r² and simplifying, we obtain
(19)

(1/r)(dr/dx) = -p(x) cos²θ + (1 - q(x)) cos θ sin θ

As in Theorem 1, it follows that the magnitude |d(ln r)/dx| of the logarithmic derivative of r(x) is bounded by |p|_max + (1 + |q|_max)/2.
Now consider the graph of the multiple-valued function θ(x) in the (x, θ)-plane. Since cot θ is periodic with period π, the graph of θ = arc cot (u'/u) for any solution of (8) consists of an infinite family of congruent curves, all obtained from any one by vertical translation through integral multiples of π. The curves that form the graphs of θ₁(x) and θ₂(x), for any two linearly independent solutions u₁, u₂ of (8), occur alternately. Moreover, by the uniqueness theorem of Ch. 1, they can never cross.

In (17), u = 0 precisely when sin θ = 0, that is, when θ ≡ 0 (mod π). Inspecting (18), we also see that

(20a)  When θ ≡ 0 (mod π), that is, u = 0, then dθ/dx > 0
(20b)  When θ ≡ π/2 (mod π), dθ/dx has the sign of q

From (20a) it follows that, after the graph of any θ(x) has crossed the line θ = nπ, it can never recross it backwards. Where u(x) next vanishes (if it does), we must have θ = (n + 1)π; in other words, successive zeros of u(x) occur precisely where θ increases from one integral multiple of π to the next!
After verifying that the right side of (18) satisfies a Lipschitz condition, we see that the inequality θ₁(x) < θ₂(x) can never cease to hold; hence, in any interval where θ₁(x) increases from nπ to nπ + π, θ₂(x) must cross the line θ = nπ + π, and so u₂ must vanish there. Sturm's Comparison Theorem follows similarly: if q(x) is increased and p(x) is left constant, the Comparison Theorem of Ch. 1, applied to (18), yields it as a corollary.
Oscillatory Solutions. The preceding considerations also enable one to extend some of the results stated in §3 for constant-coefficient DEs to second-order linear DEs with variable coefficients. When q(x) > p²(x)/4, the quadratic form on the right side of (18) is positive definite; hence dθ/dx is identically positive. Unless q(x) gets very near to p²(x)/4, the zero-crossings of solutions occur with roughly uniform frequency, and so the DE (8) may be said to be of oscillatory type.

When q(x) < 0, the DE (8) is said to be of positive type. One can also say, by (20a) and (20b), that once the point (u(x), u'(x)) has entered the first or third quadrant, it can never escape from this quadrant; it is trapped in it. Therefore a given solution u(x) of (8) can have at most one zero if q(x) < 0; solutions are nonoscillatory. Moreover, since uu' > 0 in the first and third quadrants, u²(x) and hence |u(x)| are perpetually increasing after a solution has been trapped in one of these quadrants.
Using more care, one can show that when q(x) < 0 the limit as a ↓ -∞ of the solutions u_a(x) satisfying u_a(0) = 1 and u_a(a) = 0 is an everywhere increasing positive solution. Moreover, replacing x by -x, which reverses the sign of u'(x), one can construct similarly an everywhere decreasing positive solution. These two monotonic solutions (e^x and e^{-x} for u" - u = 0) are usually unique, up to constant positive factors, and provide a natural basis of solutions.
Focal, Nodal, and Saddle Points. Even more interesting than Sturm's theorems are the qualitative differences between the behavior of solutions of different second-order DEs that become apparent when we look at the corresponding trajectories in the phase plane (their so-called phase portraits). We shall discuss these for nonlinear DEs in Chapter 5; here we shall discuss only the linear, constant-coefficient case. We have already discussed this case briefly in §§2-3, primarily from an algebraic standpoint.
In the linear constant-coefficient case, using the letter v to signify u', we obviously have

(21)

du/dx = v,  dv/dx = -pv - qu
Deferring to Chapter 5, §5, the discussion of the possibilities q = 0 and Δ = p² - 4q = 0, the original DE u" + pu' + qu = 0 has a basis of solutions of one of the following three main kinds:

A) if p² < 4q, e^{αx} cos kx and e^{αx} sin kx;
B) if p² > 4q > 0, functions e^{αx} and e^{βx}, where α and β have the same sign;
C) if p² > 0 > 4q, functions e^{αx} and e^{βx}, where α and β have opposite signs.
These three cases give very different-looking configurations of trajectories
in the phase plane.
Note that Cases B and C are subcases of the "Case 1" discussed in §1, while Case A coincides with "Case 2" discussed there. As will be explained in Chapter 5, §5, most of the qualitative differences to be pointed out below have analogues for nonlinear DEs of the general form dv/du = F(u, v), of which the form

(21')

dv/du = (-pv - qu)/v

of (21) is a special case.
Case A. By (18), writing γ = cot θ, we have

θ' = dθ/dx = (sin²θ)(γ² + pγ + q) > 0  for all θ

Hence θ increases monotonically. In each half-turn around the origin, r is amplified or damped by a factor e^{|α|π/k}, according as α > 0 or α < 0. In either case there are no invariant lines; the critical point at (0, 0) is said to be a focal point. Figure 2.4a shows the resulting phase portrait for u" + 0.2u' + 4.01u = 0.
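The half-turn scaling of the spiral can be confirmed exactly (a sketch of ours, using the sample DE of Figure 2.4a): the roots of its characteristic equation are α ± ik with α = -0.1 and k = 2, and u = e^{αx} cos kx and its derivative are both multiplied by -e^{απ/k} when x advances by π/k:

```python
import math

# For u'' + 0.2u' + 4.01u = 0, u = e^{ax} cos kx with a = -0.1, k = 2 is a
# solution.  After each half-turn (x advances by pi/k) the phase point
# (u, u') lands on the opposite ray, scaled by the damping factor e^{a*pi/k}.
a, k = -0.1, 2.0
u = lambda x: math.exp(a * x) * math.cos(k * x)
v = lambda x: math.exp(a * x) * (a * math.cos(k * x) - k * math.sin(k * x))  # u'

x = 0.4
scale = -math.exp(a * math.pi / k)     # reflected through 0 and damped (a < 0)
ok_u = abs(u(x + math.pi / k) - scale * u(x)) < 1e-12
ok_v = abs(v(x + math.pi / k) - scale * v(x)) < 1e-12
print(ok_u and ok_v)                   # -> True
```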
Figure 2.4 Two phase portraits: (a) u" + 0.2u' + 4.01u = 0; (b) 2u" - 5u' + 2u = 0.
When p² > 4q (i.e., in Cases B and C), the two lines u' = αu and u' = βu in the phase plane are invariant lines (Ch. 1, §7). These lines, which correspond to the solutions e^{αx} and e^{βx}, divide the uu'-plane up into four sectors, in each of which θ' is of constant sign and so θ is monotonic. If q = αβ > 0, the two invariant lines lie in the same quadrant; if q < 0, they lie in adjacent quadrants.
Case B. In this case, the trajectories in each sector are all tangent at the origin to the same invariant line, and have an asymptotic direction parallel to the other invariant line at ∞. Fig. 2.4b depicts the phase portrait for 2u" - 5u' + 2u = 0. The lines v = 2u and u = 2v are the invariant lines of the corresponding linear fractional DE, dv/du = (5v - 2u)/2v. In Case B, the origin is said to be a nodal point.
Case C. In the saddle point case that p² > 0 > 4q, the two invariant lines lie in different quadrants, and all trajectories are asymptotic to one of them as they come in from infinity, and to the other as they recede to it. Figure 1.5 depicts the phase portrait for the case u" = u, with hyperbolic trajectories u² - v² = 4AB in the phase plane, given parametrically by u = Ae^x + Be^{-x}, v = u' = Ae^x - Be^{-x}. The invariant lines are the asymptotes v = ±u.
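The invariance of the hyperbolas u² - v² = 4AB along trajectories of u" = u is easy to confirm numerically (an illustrative sketch of ours, with arbitrarily chosen constants A and B):

```python
import math

# For u'' = u (Case C): with u = A e^x + B e^{-x} and v = u' = A e^x - B e^{-x},
# the quantity u^2 - v^2 = (u+v)(u-v) = 4AB is constant along each trajectory.
A, B = 1.5, -0.7                      # arbitrary sample constants
u = lambda x: A * math.exp(x) + B * math.exp(-x)
v = lambda x: A * math.exp(x) - B * math.exp(-x)

values = [u(x) ** 2 - v(x) ** 2 for x in (-1.0, 0.0, 0.5, 2.0)]
print(all(abs(w - 4 * A * B) < 1e-9 for w in values))   # -> True
```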
EXERCISES C
1. (a) Show that if g(x) = f'(x), then g(x) vanishes at least once between any two zeros of f(x).
(b) Show how to construct, for any n, a function f(x) satisfying f(0) = f(1) = 0, f(x) ≠ 0 on (0,1), yet for which f'(x) vanishes n times on (0,1).
2. Show that there is a zero of J₁(x) between any two successive zeros of J₀(x).
3. Show that every solution of u" + (1 + e^x)u = 0 vanishes infinitely often on (-∞, 0), and also infinitely often on (0, ∞).
4. Show that no nontrivial solution of u" + (1 - x²)u = 0 vanishes infinitely often.
*5. The Legendre polynomial Pₙ(x) satisfies the DE (1 - x²)u" - 2xu' + n(n + 1)u = 0. Show that Pₙ(x) must vanish O(n) times on [-1, 1].
6. Apply numerical methods (Ch. 1, §8) to (18) to determine about how many times any solution of u" + xu = 0 must vanish on (0, 100).
7. Same question for the Mathieu DE u" + [π² + 4 cos 2x]u = 0.
*8. (a) Show that no normal second-order linear homogeneous DE can be satisfied by both some cos kx with k ≠ 0, and some e^{ax}. [HINT: Consider the Wronskian.]
(b) Find a normal third-order homogeneous linear DE that has as solutions both the oscillatory functions sin x, cos x, and the nonoscillatory function e^x.
8 ADJOINT OPERATORS; LAGRANGE IDENTITY
Early studies of differential equations concentrated on formal manipulations yielding solutions expressible in terms of familiar functions. Out of these studies emerged many useful concepts, including those of integrating factor and exact differential discussed in Ch. 1, §6. We will now extend these concepts to second-order linear DEs, and derive from them the extremely important notions of adjoint and self-adjoint equations.
DEFINITION. The second-order homogeneous linear DE

(22)

L[u] = p₀(x)u"(x) + p₁(x)u'(x) + p₂(x)u(x) = 0

is said to be exact if and only if, for some A(x), B(x) ∈ 𝒞¹,

(22')

p₀(x)u" + p₁(x)u' + p₂(x)u = d/dx [A(x)u' + B(x)u]

for all functions u ∈ 𝒞². An integrating factor for the DE (22) is a function v(x) such that vL[u] is exact. [Here and later, it will be assumed that p₀ ∈ 𝒞² and that p₁ ∈ 𝒞¹ in discussing the DEs (22) and (22').]
If an integrating factor v for (22) can be found, then clearly

v(x)[p₀(x)u" + p₁(x)u' + p₂(x)u] = d/dx [A(x)u' + B(x)u]
Hence, the solutions of the homogeneous DE (22) are those of the first-order inhomogeneous linear DE

(23)

A(x)u' + B(x)u = C
where C is an arbitrary constant. Also, the solutions of the inhomogeneous DE L[u] = r(x) are those of the first-order DE

(23')

A(x)u' + B(x)u = ∫ v(x)r(x) dx + C

The DEs (23) and (23') can be solved by a quadrature (Ch. 1, §6). Hence, if an integrating factor of (22) can be found, we can reduce the solution of L[u] = r(x) to a sequence of quadratures.

Evidently, L[u] = 0 is exact in (22) if and only if p₀ = A, p₁ = A' + B, and p₂ = B'. Hence (22) is exact if and only if

p₂ = B' = (p₁ - p₀')' = p₁' - p₀"
This simple calculation proves the following important result.

LEMMA. The DE (22) is exact if and only if its coefficient functions satisfy

p₀"(x) - p₁'(x) + p₂(x) ≡ 0

COROLLARY. A function v ∈ 𝒞² is an integrating factor for the DE (22) if and only if it is a solution of the second-order homogeneous linear DE
(24)

M[v] = [p₀(x)v]" - [p₁(x)v]' + p₂(x)v = 0

DEFINITION. The operator M in (24) is called the adjoint of the linear operator L. The DE (24), expanded to the DE

(24')

p₀(x)v" + [2p₀'(x) - p₁(x)]v' + [p₀"(x) - p₁'(x) + p₂(x)]v = 0

is called the adjoint of the DE (22).
Clearly, whenever a nontrivial solution of the adjoint DE (24) or (24') of a given second-order linear DE (22) can be found, every solution of any DE
L[u] = r(x) can be obtained by quadratures, using (23').
Lagrange Identity. The concept of the adjoint of a linear operator, which originated historically in the search for integrating factors, is of major importance because of the role which it plays in the theory of orthogonal and biorthogonal expansions. We now lay the foundations for this theory.

Substituting into (24), we find that the adjoint of the adjoint of a given second-order linear DE (22) is again the original DE (22). Another consequence of (24) is the identity, valid whenever p₀ ∈ 𝒞², p₁ ∈ 𝒞¹,

vL[u] - uM[v] = {p₀vu" - u[p₀v]"} + {p₁vu' + u[p₁v]'}
Since wu" - uw" = (wu' - uw')' and (uw)' = uw' + wu', this can be simplified to give the Lagrange identity

(25)

vL[u] - uM[v] = d/dx [p₀(u'v - uv') - (p₀' - p₁)uv]
The left side of (25) is thus always an exact differential of a homogeneous bilinear expression in u,v, and their derivatives.
Self-Adjoint Equations. Homogeneous linear DEs that coincide with their adjoint are of great importance; they are called self-adjoint. For instance, the Legendre DE of Example 2, §1, is self-adjoint. The condition for (22) to be self-adjoint is easily derived. It is necessary by (24') that 2p₀' - p₁ = p₁, that is, p₀' = p₁. Since this relation implies p₁' - p₀" = 0, it is also sufficient. Moreover, in this self-adjoint case, the last term in (25) vanishes. This proves the first statement of the following theorem.
THEOREM 9. The second-order linear DE (22) is self-adjoint if and only if it has the form

(26)

d/dx [p(x) du/dx] + q(x)u = 0

The DE (22) can be made self-adjoint by multiplying through by

(26')

h(x) = [exp ∫ (p₁/p₀) dx] / p₀
To prove the second statement, first reduce (22) to normal form by dividing through by p₀, and then observe that the DE

hu" + (ph)u' + (qh)u = 0

is self-adjoint if and only if h' = ph, or h = exp (∫ p dx).
For example, the self-adjoint form of the Bessel DE of Example 1 is

(xu')' + [x - (n²/x)]u = 0

For self-adjoint DEs (26), the Lagrange identity simplifies to

(26")

vL[u] - uL[v] = d/dx [p(x)(u'v - uv')]
EXERCISES D

1. Show that if u(x) and v(x) are solutions of the self-adjoint DE

(p(x)u')' + q(x)u = 0

then p(x)[uv' - vu'] is a constant (Abel's identity).

2. Reduce the following DEs to self-adjoint form:
(a) (1 - x²)u" - xu' + λu = 0 (Chebyshev DE)
(b) x²u" + xu' + u = 0
(c) u" + u' tan x = 0
3. For each of the following DEs, y = x³ is one solution; use (12) to find a second, linearly independent solution by quadratures.
(a) x²y" - 4xy' + 6y = 0
(b) xy" + (x - 2)y' - 3y = 0

4. Show that the substitution y = e^{(1/2)∫p(x)dx} u replaces (8) by

y" + I(x)y = 0,  I(x) = q - p²/4 - p'/2
**5. Show that two DEs of the form (8) can be transformed into each other by a change of dependent variable of the form y = v(x)u, v ≠ 0, if and only if the function I(x) = q(x) - p²(x)/4 - p'(x)/2 is the same for both DEs [I(x) is called the invariant of the DE].

6. Reduce the self-adjoint DE (pu')' + qu = 0 to normal form, and show that, in the notation of Ex. 5, I(x) = (p'² - 2pp" + 4pq)/4p².
7. (a) Show that, for the normal form of the Legendre DE [(1 - x²)u']' + λu = 0,

I(x) = [1 + λ(1 - x²)]/(1 - x²)²

(Use Ex. 6.)
(b) Show that, if λ = n(n + 1), then every solution of the Legendre equation has at least (2n + 1)/π zeros on (-1, 1).
8. Let u(x) be a solution of u" = q(x)u, q(x) > 0, such that u(0) and u'(0) are positive. Show that uu' and u(x) are increasing for x > 0.
9. Let h(x) be a nonnegative function of class 𝒞¹. Show that the change of independent variable t = ∫₀ˣ h(s) ds, u(x) = v(t), changes (8) into v" + p₁(t)v' + q₁(t)v = 0, where p₁(t) = [p(x)h(x) + h'(x)]/h(x)² and q₁(t) = q(x)/h(x)².
*10. (a) Show that a change of independent variable t = ± ∫ |q(x)|^{1/2} dx, q ≠ 0, q ∈ 𝒞¹, changes the DE (8) into one whose normal form is

(*)

d²u/dt² + [(q' + 2pq)/(2|q|^{3/2})] du/dt ± u = 0

(b) Show that no other change of independent variable makes |q| ≡ 1.
*11. Using Ex. 10, show that Eq. (8) is equivalent to a DE with constant coefficients under a change of independent variable if and only if (q' + 2pq)/q^{3/2} is a constant.

12. Making appropriate definitions, show that p₀u''' + p₁u" + p₂u' + p₃u = 0 is an exact DE if and only if p₀''' - p₁" + p₂' - p₃ ≡ 0.
9 GREEN'S FUNCTIONS
The inhomogeneous linear second-order DE in normal form,

(27)

L[u] = d²u/dx² + p(x) du/dx + q(x)u = r(x)

differs from the homogeneous linear DE

(27')

L[u] = d²u/dx² + p(x) du/dx + q(x)u = 0
by the nonzero function r(x) on the right side. In applications to electrical and dynamical systems, the function r(x) is called the forcing term or input function. By the Uniqueness Theorem of §4 and Lemma 2 of §4, it is clear that the solution u(x) of L[u] = r(x) for given homogeneous initial conditions such as u(0) = u'(0) = 0 depends linearly on the forcing term. We will now determine the exact nature of this linear dependence.
Given the inhomogeneous linear DE (27), we will show that there exists an integral operator G,

(28)

G[r](x) = ∫ G(x, ξ)r(ξ) dξ

such that G[r] = u. In fact, one can always find a function G that makes G[r] satisfy given homogeneous boundary conditions, provided that the latter define a well-set problem.

The kernel G(x, ξ) of Eq. (28) is then called the Green's function† associated with the given boundary value problem. In operator notation, it is defined by the identity L[G[r]] = r (G is a "right-inverse" of the linear operator L) and the given boundary conditions.

Green's functions can be defined for linear differential operators of any order, as we will show in Ch. 3, §9. To provide an intuitive basis for this very general concept, we begin with the simplest, first-order case. In this example, the independent variable will be denoted by t and should be thought of as representing time.
Example 3. Suppose that money is deposited continuously in a bank account, at a continuously varying rate r(t), and that interest is compounded continuously at a constant rate p (= 100p% per annum). As a function of time, the amount u(t) in the account satisfies the DE

du/dt = pu + r(t)

† To honor the British mathematician George Green (1793-1841), who was the first to use formulas like (28) to solve boundary value problems. Cauchy and Fourier used similar formulas earlier to solve DEs in infinite domains.
If the account is opened when t = 0 and initially has no money: u(0) = 0, then one can calculate u(T) at any later time T > 0 as follows. Each infinitesimal deposit r(t) dt, made in the time interval (t, t + dt), increases through compound interest accrued during the time interval from t to T by a factor e^{p(T-t)} to the amount e^{p(T-t)}r(t) dt. Hence the account should amount, at time T, to the integral (limit of sums)

(29)

u(T) = ∫_0^T e^{p(T-t)}r(t) dt = e^{pT} ∫_0^T e^{-pt}r(t) dt

This plausible argument is easily made rigorous. It is obvious that u(0) = 0 in (29). Differentiating the product in the final expression of (29), we obtain

u'(T) = pe^{pT} ∫_0^T e^{-pt}r(t) dt + e^{pT} · e^{-pT}r(T) = pu(T) + r(T)
where the derivative of the integral is evaluated by ·the Fundamental Theorem of the Calculus.
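Formula (29) is easy to check numerically. The following Python sketch (the function name and the trapezoid-rule discretization are ours, not the text's) approximates the integral for a constant deposit rate and compares it with the closed form ∫_0^T e^{p(T-t)}·r dt = (r/p)(e^{pT} − 1).

```python
import math

def u(T, p, r, n=2000):
    """Trapezoid-rule approximation of (29): u(T) = integral_0^T e^{p(T-t)} r(t) dt."""
    h = T / n
    total = 0.5 * (math.exp(p * T) * r(0.0) + r(T))
    for k in range(1, n):
        t = k * h
        total += math.exp(p * (T - t)) * r(t)
    return total * h

# Deposits at a constant 100 dollars/year, continuous interest p = 5%/year:
p, r = 0.05, lambda t: 100.0
balance = u(10.0, p, r)                            # account balance after 10 years
exact = (100.0 / p) * (math.exp(p * 10.0) - 1.0)   # closed form for constant r
```

One can also verify the DE itself: a centered difference of u at T = 10 agrees with pu(10) + r(10) to within the discretization error.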
Example 4. Consider next the motion of a mass m on an elastic spring, which
we model by the DE u" + pu' + qu = r(t). Here p signifies the damping coefficient and q the restoring force; r(t) is the forcing function; we will assume that q > p^2/4. Finally, suppose that the mass is at rest up to time t0, and is then given an impulsive (that is, instantaneous) velocity v0 at time t0.
The function f describing the position of the mass m as a function of time under such conditions is continuous, but its derivative f' is not defined at t0, because of the sudden jump in the velocity. However, the left-hand derivative of f at the point t0 exists and is equal to zero, and the right-hand derivative also exists and is equal to v0, the impulsive velocity. For t > t0, the function f is
obtained by solving the constant-coefficient DE u" + pu' + qu = 0. Since
q > p^2/4, the roots of the characteristic equation are complex conjugate, and
we obtain an oscillatory solution
u(t) = { 0                                    t ≤ t0
       { (v0/v)e^{-µ(t-t0)} sin v(t - t0)     t > t0

where µ = p/2 and v = √(q - p^2/4).
60
CHAPTER 2 Second-Order Linear Equations
Now suppose the mass is given a sequence of small velocity impulses Δvk = r(tk) Δt, at successive instants t0, t1 = t0 + Δt, ..., tk = t0 + k Δt, .... Summing the effects of these over the time interval t0 < t < T, and passing to the limit as Δt → 0, we are led to conjecture the formula
(30)

u(T) = (1/v) ∫_{t0}^T e^{-µ(T-t)} sin v(T - t) r(t) dt

This represents the forced oscillation associated with the DE

(31)

u" + pu' + qu = r(t),    q > p^2/4
having the forcing term r(t).
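As a numerical sanity check of (30), the following Python sketch (parameters µ = 1, v = 5, hence p = 2 and q = 26, are our illustrative choices) compares the convolution integral against the known solution for constant forcing r ≡ 1 with u(0) = u'(0) = 0.

```python
import math

mu, nu = 1.0, 5.0                  # so p = 2*mu = 2 and q = mu**2 + nu**2 = 26
p, q = 2 * mu, mu * mu + nu * nu

def forced(T, r, n=4000):
    """Trapezoid-rule evaluation of (30) with t0 = 0:
       u(T) = (1/nu) * integral_0^T e^{-mu(T-t)} sin nu(T-t) r(t) dt."""
    h = T / n
    s = 0.0
    for k in range(n + 1):
        t = k * h
        w = 0.5 if k in (0, n) else 1.0
        s += w * math.exp(-mu * (T - t)) * math.sin(nu * (T - t)) * r(t)
    return s * h / nu

# For r = 1 the solution with u(0) = u'(0) = 0 is
#   u(t) = (1/q)(1 - e^{-mu t}(cos nu t + (mu/nu) sin nu t)).
T = 2.0
u_conv = forced(T, lambda t: 1.0)
u_exact = (1 - math.exp(-mu * T) * (math.cos(nu * T) + (mu / nu) * math.sin(nu * T))) / q
```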
Variation of Parameters. The conjecture just stated can be verified as a special case of the following general result, valid for all linear second-order DEs with continuous coefficients.
THEOREM 10. Let the function G(t, τ) be defined as follows:
(i) G(t, τ) = 0, for a ≤ t ≤ τ;
(ii) for each fixed τ ≥ a and all t > τ, G(t, τ) is that particular solution of the DE L[G] = G_tt + p(t)G_t + q(t)G = 0 which satisfies the initial conditions G = 0 and G_t = 1 at t = τ.
Then G is the Green's function of the operator L for the initial value problem on t ≥ a.
Proof. We must prove that, for any continuous function r, the definite integral
(32)

u(t) = ∫_a^t G(t, τ)r(τ) dτ = ∫_a^∞ G(t, τ)r(τ) dτ    [by (i)]
is a solution of the second-order inhomogeneous linear DE (27), which satisfies
the initial conditions u(a) = u'(a) = 0.
The proof is based on Leibniz' Rule for differentiating definite integrals.†
This rule is: For any continuous function g(t, τ) whose derivative ∂g/∂t is piecewise continuous, we have

d/dt ∫_a^t g(t, τ) dτ = g(t, t) + ∫_a^t (∂g/∂t)(t, τ) dτ

† Kaplan, Advanced Calculus, p. 219. In our applications, ∂g/∂t has, at worst, a simple jump across t = τ.
Applying this rule twice to the right side of formula (32), we obtain successively, since G(t, t) = 0,

u'(t) = G(t, t)r(t) + ∫_a^t G_t(t, τ)r(τ) dτ = ∫_a^t G_t(t, τ)r(τ) dτ

u"(t) = G_t(t, t)r(t) + ∫_a^t G_tt(t, τ)r(τ) dτ

By assumption (ii), G_t(t, t) = 1, so the last equation simplifies to

u"(t) = r(t) + ∫_a^t G_tt(t, τ)r(τ) dτ

Here the subscripts indicate partial differentiation with respect to t. Thus

L[u] = u"(t) + p(t)u'(t) + q(t)u(t)
     = r(t) + ∫_a^t [G_tt(t, τ) + p(t)G_t(t, τ) + q(t)G(t, τ)]r(τ) dτ = r(t)
completing the proof.
The reader can easily verify that the function v^{-1}e^{-µ(t-τ)} sin v(t - τ) in (30) satisfies the conditions of Theorem 10, in the special case of Example 4.
To construct the Green's function G(t, τ) of Theorem 10 explicitly, it suffices to know two linearly independent solutions f(t) and g(t) of the reduced equation L[u] = 0. Namely, to compute G(t, τ) for t > τ, write G(t, τ) = c(τ)f(t) + d(τ)g(t), by Theorem 3. Solving the simultaneous linear equations G = 0, G_t = 1 at t = τ specified in condition (ii) of Theorem 10,
we get the formulas
c(τ) = -g(τ)/W(f, g; τ),    d(τ) = f(τ)/W(f, g; τ)

where W(f, g; τ) = f(τ)g'(τ) - g(τ)f'(τ) is the Wronskian. This gives for the Green's function the formula

G(t, τ) = [f(τ)g(t) - g(τ)f(t)]/[f(τ)g'(τ) - g(τ)f'(τ)]
Substituting into (32), we obtain our final result.
COROLLARY. Let f(t) and g(t) be any two linearly independent solutions of the linear homogeneous DE (27'). Then the solution of L[u] = r(t) for the initial conditions
u(a) = u'(a) = 0 is the function

(33)

u(t) = ∫_a^t {[f(τ)g(t) - g(τ)f(t)]/W[f(τ), g(τ)]} r(τ) dτ

Consequently, if we define the functions φ(t) and ψ(t) as the following definite integrals:

φ(t) = ∫_a^t [f(τ)/W(f, g)]r(τ) dτ,    ψ(t) = -∫_a^t [g(τ)/W(f, g)]r(τ) dτ

we can write the solution of L[u] = r(t) in the form

(33')

u(t) = φ(t)g(t) + ψ(t)f(t)
In textbooks on the elementary theory of DEs, formula (33) is often derived formally by posing the question: What must c(τ) and d(τ) be in order that the function

G(t, τ) = c(τ)f(t) + d(τ)g(t)

when substituted into (28), will give a solution of the inhomogeneous DE L[u] = r(t)? Since c(τ) and d(τ) may be regarded as "variable parameters," which vary with τ, formula (33) is said to be obtained by the method of variation of parameters.
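For a concrete instance of formula (33) (a Python sketch; the DE and forcing are our choices), take L[u] = u" + u with basis f = cos t, g = sin t, so W ≡ 1; for r(t) = t the integral reduces to ∫_0^t sin(t − τ) τ dτ = t − sin t.

```python
import math

def var_params(t, r, n=2000):
    """Formula (33) for u'' + u = r with f = cos, g = sin (Wronskian W = 1):
       u(t) = integral_0^t [f(tau)g(t) - g(tau)f(t)] r(tau) d tau."""
    h = t / n
    s = 0.0
    for k in range(n + 1):
        tau = k * h
        w = 0.5 if k in (0, n) else 1.0
        s += w * (math.cos(tau) * math.sin(t) - math.sin(tau) * math.cos(t)) * r(tau)
    return s * h

t = 1.7
u = var_params(t, lambda tau: tau)      # forcing r(t) = t
# The exact solution with u(0) = u'(0) = 0 is u(t) = t - sin t.
```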
EXERCISES E
1. Integrate the following DEs by using formula (33):
(a) y" - y = x^n
(b) y" + y = e^x
(c) y" - 2y' + y = 2xe^x
(d) y" + 10y' + 25y = sin x
2. Show that the general solution of the inhomogeneous DE y" + k^2y = R(x) is given by y = (1/k)[∫_0^x sin k(x - t)R(t) dt] + c1 sin kx + c2 cos kx.
3. Solve y" + 3y' + 2y = x^3 for the initial conditions y(0) = y'(0) = 0.
4. Show that any second-order inhomogeneous linear DE which is satisfied by both x^2 and sin^2 x must have a singular point at the origin.
5. Construct Green's functions for the initial-value problem, for the following DEs:
(a) u" = 0  (b) u" = u  (c) u" + u = 0  (d) x^2u" + (x^2 + 2x)u' + (x + 2)u = 0 [HINT: x is a solution.]
6. Find the general solutions of the following inhomogeneous Euler DEs:
(a) x^2y" - 2xy' + 2y = x^2 + px + q  (b) x^2y" + 3xy' + y = R(x)
[HINT: Any homogeneous Euler DE has a solution of the form x^r.]
7. (a) Construct a Green's function for the inhomogeneous first-order DE
du/dt = p(t)u + r(t)
(b) Interpret in terms of compound interest (cf. Example 3).
(c) Relate to formula (8') of Ch. 1.
8. Show that, if q(t) < 0, then the Green's function G(t, τ) of u" + q(t)u = 0 for the initial-value problem is positive and convex upward for t > τ.
*10 TWO-ENDPOINT PROBLEMS
So far, we have considered only "initial conditions." That is, in considering
solutions of second-order DEs such as y" = -p(x)y' - q(x)y, we have supposed
y and y' both given at the same point a. That is natural in many dynamical problems. One is given the initial position and velocity, and a general law relating the acceleration to the instantaneous position and velocity, and then one wishes to determine the subsequent motion from this data, as in Example 4.
In other problems, two-endpoint conditions, at points x = a and x = b, are
more natural. For instance, the DE y" = 0 characterizes straight lines in
the plane, and one may be interested in determining the straight line joining two given points (a, c) and (b, d). That is, the problem is to find the solution
y = f(x) of the DE y" = 0 which satisfies the two endpoint conditions f(a) = c and f(b) = d.
Many two-endpoint problems for second-order DEs arise in the calculus of variations. Here a standard problem is to find, for a given function F(x,y,y'), the
curve y = f(x) which minimizes the integral

(34)

I(f) = ∫_a^b F(x, y, y') dx
By a classical result of Euler,† the line integral (34) is an extremum (maximum, minimum, or minimax), relative to all curves y = f(x) of class C^2 satisfying f(a) = c and f(b) = d, if and only if f(x) satisfies the Euler-Lagrange variational equation

(34')

(d/dx)(∂F/∂y') = ∂F/∂y

For example, if F(x, y, y') = √(1 + y'^2), so that I(f) is the length of the curve, Eq. (34') gives zero curvature:

(d/dx)[y'/√(1 + y'^2)] = 0
† See, for example, Courant and John, Vol. 2, p. 743.
as the condition for the length to be an extremum. This is equivalent to y" = 0, whose solutions are the straight lines y = cx + d.
It is natural to ask: Under what circumstances does a second-order DE have
a unique solution, assuming given values f(a) = c and f(b) = d at two given endpoints a and b > a? When this is so, the resulting two-endpoint problem is called well-set. Clearly, the two-endpoint problem is always well-set for y" = 0.
Example 5. Now consider, for given p, q, r ∈ C^1, the curves that minimize the integral (34) for F = ½[p(x)y'^2 + 2q(x)yy' + r(x)y^2]. For this F, the Euler-Lagrange DE is the second-order linear self-adjoint DE (py')' + (q' - r)y = 0.
The question of when the two-endpoint problem is well-set in this example is partially answered by the following result.
THEOREM 11. Let the second-order linear homogeneous DE

(35)

p0(x)u" + p1(x)u' + p2(x)u = 0,    p0(x) ≠ 0
with continuous coefficient functions† have two linearly independent solutions. Then the
two-endpoint problem defined by (35) and the endpoint conditions u(a) = c, u(b) = d
is well-set if and only if no nontrivial solution satisfies the endpoint conditions
(36)
u(a) = u(b) = 0
Proof. By Theorem 2, the general solution of the DE (35) is the function u = αf(x) + βg(x), where f and g are a basis of solutions of the DE (35), and α, β are arbitrary constants. By the elementary theory of linear equations, the equations

αf(a) + βg(a) = c,    αf(b) + βg(b) = d

have one and only one solution vector (α, β) if and only if the determinant f(a)g(b) - g(a)f(b) ≠ 0. The alternative f(a)g(b) = f(b)g(a) is, however, the condition that the homogeneous simultaneous linear equations

(37)

αf(a) + βg(a) = αf(b) + βg(b) = 0

have a nontrivial solution (α, β) ≠ (0, 0). This proves Theorem 11.
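The 2 × 2 elimination in the proof is easily made concrete. A small Python sketch (the helper name and tolerance are our choices) solves for (α, β) by Cramer's rule and rejects conjugate endpoints:

```python
import math

def two_point(f, g, a, b, c, d):
    """Fit u = alpha*f + beta*g to u(a) = c, u(b) = d; the determinant
       f(a)g(b) - g(a)f(b) must be nonzero (no conjugate points)."""
    det = f(a) * g(b) - g(a) * f(b)
    if abs(det) < 1e-12:
        raise ValueError("a and b are conjugate points: problem not well-set")
    alpha = (c * g(b) - d * g(a)) / det
    beta = (f(a) * d - f(b) * c) / det
    return lambda x: alpha * f(x) + beta * g(x)

# y'' = 0 with basis f = 1, g = x: the line through (1, 2) and (3, 8).
line = two_point(lambda x: 1.0, lambda x: x, 1.0, 3.0, 2.0, 8.0)

# u'' + u = 0 with basis cos, sin: x = 0 and x = pi are conjugate points.
try:
    two_point(math.cos, math.sin, 0.0, math.pi, 0.0, 1.0)
    conjugate_detected = False
except ValueError:
    conjugate_detected = True
```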
When the DE (35) has a nontrivial solution satisfying the homogeneous endpoint conditions u(a) = u(b) = 0, the point (b, 0) on the x-axis is called a conjugate point of the point (a, 0) for the given homogeneous linear DE (35) or for a variational problem leading to this DE. In general, such conjugate points exist
† In Ch. 6, it will be shown that this hypothesis is unnecessary; a basis of solutions always exists for continuous p_i(x).
for DEs whose solutions oscillate but not for those of nonoscillatory type, such
as u" = q(x)u, q(x) > 0.
Thus, in Example 5, let p = 1, q = 0, and r = -k^2 < 0. Then the general solution of (35) for the initial condition u(a) = 0 is u = A sin [k(x - a)]. For u(b) = 0 to be compatible with A ≠ 0, it is necessary and sufficient that b = a + (nπ/k). The conjugate points of a are spaced periodically. On the other hand, the DE y" - λy = 0, corresponding to the choice p = 1, q = 0, r = λ in Example 5, admits no conjugate points if λ = r is positive.
*11 GREEN'S FUNCTIONS, II
We now show that, except in the case that a and b are conjugate points for the reduced equation L[u] = 0, the inhomogeneous linear DE (27) can be solved for the boundary conditions u(a) = u(b) = 0 by constructing an appropriate Green's function G(x, ξ) on the square a ≤ x, ξ ≤ b and setting

(38)

u(x) = ∫_a^b G(x, ξ)r(ξ) dξ = G[r]

Note that G is an integral operator whose kernel is the Green's function G(x, ξ). The existence of a Green's function for a typical two-endpoint problem is
suggested by simple physical considerations, as follows.
Example 6. Consider a nearly horizontal taut string under constant tension T, supporting a continuously distributed load w(x) per unit length. If y(x) denotes the vertical displacement of the string, then the load w(x) Δx supported by the string in the interval (x0, x0 + Δx) is in equilibrium with the net vertical component of tension forces, which is

T[y'(x0 + Δx) - y'(x0)]

in the nearly horizontal ("small amplitude") approximation.† Dividing through by Δx and letting Δx ↓ 0, we get Ty"(x) = w(x).
The displacement y(x) depends linearly on the load, by Lemma 2 of §4. This suggests that we consider the load as the sum of a large number of point-concentrated loads w_i = w(ξ_i) Δξ at isolated points ξ_i. For each individual such load, the taut string consists of two straight segments, the slope jumping by w_i/T at ξ_i. Thus, if the string extends from 0 to 1, the vertical displacement is

y_i(x) = { E_i(ξ_i - 1)x     0 ≤ x ≤ ξ_i
         { E_iξ_i(x - 1)     ξ_i ≤ x ≤ 1
† For a more thorough discussion, see J. L. Synge and B. A. Griffith, Principles of Mechanics, McGraw-Hill, 1949, p. 99.
where E_i is set equal to w_i/T in order to give a jump in slope of w_i/T at the point x = ξ_i. Passing to the limit as Δξ → 0, we are led to guess that

y(x) = ∫_0^1 G(x, ξ)w(ξ) dξ

where

G(x, ξ) = { (ξ - 1)x/T     0 ≤ x ≤ ξ
          { ξ(x - 1)/T     ξ ≤ x ≤ 1
These heuristic considerations suggest that, in general, the Green's function G(x, ξ) for the two-endpoint problem is determined for each fixed ξ by the following four conditions:
(i) L[G] = 0 in each of the intervals a ≤ x < ξ and ξ < x ≤ b.
(ii) G(a, ξ) = G(b, ξ) = 0.
(iii) G(x, ξ) is continuous across the diagonal x = ξ of the square a ≤ x, ξ ≤ b over which G(x, ξ) is defined.
(iv) The derivative ∂G/∂x jumps by 1/p0(x) across this diagonal.
To fulfill these conditions for any given ξ, let f(x) and g(x) be any nontrivial solutions of L[u] = 0 that satisfy f(a) = 0 and g(b) = 0, respectively. Then for any factor E(ξ), the function

G(x, ξ) = { E(ξ)f(x)g(ξ)     a ≤ x ≤ ξ
          { E(ξ)f(ξ)g(x)     ξ ≤ x ≤ b
will satisfy L[G] = 0 in the required intervals because L[f] = L[g] = 0; it will satisfy (ii) because f(a) = g(b) = 0; and it approaches the same limit E(ξ)f(ξ)g(ξ) from both sides of the diagonal x = ξ; hence it is continuous there. For the factor E(ξ) to give ∂G/∂x a jump of 1/p0(x) across x = ξ, a direct computation gives the condition

(39)

E(ξ)[f(ξ)g'(ξ) - g(ξ)f'(ξ)] = 1/p0(ξ)
We are therefore led to try the kernel
(39')

G(x, ξ) = { f(x)g(ξ)/p0(ξ)W(ξ)     a ≤ x ≤ ξ
          { f(ξ)g(x)/p0(ξ)W(ξ)     ξ ≤ x ≤ b

where W = fg' - gf' is the Wronskian of f and g. Observe again that since f(a) = g(b) = 0, G(a, ξ) = G(b, ξ) = 0 for all ξ ∈ [a, b].
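For the simplest case L[u] = u" on [0, 1] (so p0 = 1, and we may take f(x) = x, g(x) = x − 1, with W = 1), formula (39') can be checked numerically. This Python sketch compares ∫_0^1 G(x, ξ)r(ξ) dξ for r ≡ 1 with the exact solution u(x) = x(x − 1)/2 of u" = 1, u(0) = u(1) = 0:

```python
def G(x, xi):
    """Green's function (39') for u'' = r(x), u(0) = u(1) = 0,
       built from f(x) = x, g(x) = x - 1, p0 = 1, W = f g' - g f' = 1."""
    return x * (xi - 1.0) if x <= xi else xi * (x - 1.0)

def solve(r, x, n=2000):
    """u(x) = integral_0^1 G(x, xi) r(xi) d xi, by the trapezoid rule."""
    h = 1.0 / n
    s = 0.0
    for k in range(n + 1):
        xi = k * h
        w = 0.5 if k in (0, n) else 1.0
        s += w * G(x, xi) * r(xi)
    return s * h

u_mid = solve(lambda xi: 1.0, 0.5)   # exact value: 0.5 * (0.5 - 1) / 2 = -0.125
```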
THEOREM 11'. For any continuous function r(x) on [a, b], the function u(x) ∈ C^2 defined by (38) and (39') is the solution of p0u" + p1u' + p2u = r that satisfies the boundary conditions u(a) = u(b) = 0, provided that W(f, g) ≠ 0, i.e., that there is no nontrivial solution of (35) satisfying the same boundary conditions.
The proof is similar to that of Theorem 10; the existence of two linearly independent solutions of (35) is again assumed. Rewriting (38) in the form

u(x) = ∫_a^x [f(ξ)g(x)/p0(ξ)W(ξ)]r(ξ) dξ + ∫_x^b [f(x)g(ξ)/p0(ξ)W(ξ)]r(ξ) dξ

and differentiating by Leibniz' Rule, we have

u'(x) = ∫_a^x G_x(x, ξ)r(ξ) dξ + ∫_x^b G_x(x, ξ)r(ξ) dξ

The endpoint contributions cancel, since G(x, ξ) is continuous at x = ξ. Differentiating again, we obtain
u"(x) = ∫_a^x G_xx(x, ξ)r(ξ) dξ + G_x(x, x-)r(x-) + ∫_x^b G_xx(x, ξ)r(ξ) dξ - G_x(x, x+)r(x+)
where f(x+) signifies the limit of f(ξ) as ξ approaches x from above, and f(x-) the limit as ξ approaches x from below. The two terms corresponding to the contributions from the endpoints come from the sides ξ < x and ξ > x of the diagonal; since r is continuous, r(x-) = r(x+). Hence their difference is [G_x(x, x-) - G_x(x, x+)]r(x), which equals r(x)/p0(x) by (39). Simplifying, we obtain
u"(x) = ∫_a^b G_xx(x, ξ)r(ξ) dξ + r(x)/p0(x)
From this identity, we can calculate L[u]. It is

L[u] = ∫_a^b L[G(x, ξ)]r(ξ) dξ + r(x) = r(x)

since L[G(x, ξ)] = 0 except at x = ξ. Here the operator L acts on the variable x in G(x, ξ); though G is not in C^2, the expression L[G] is meaningful for one-sided derivatives and the above can be justified. This gives the identity (38).
Since G(x, ξ), considered as a function of x for fixed ξ, satisfies the boundary
conditions G(a, ξ) = G(b, ξ) = 0, it follows from (38) that u(a) = u(b) = 0, completing the proof of the theorem.
Delta-Function Interpretation. The ideas underlying the intuitive discussion for Examples 4 and 6 can be given the following heuristic interpretation. Let the symbolic function δ(x) stand for the limit of nonnegative "density" functions ρ(x) concentrated in a narrow interval (-ε, ε) near x = 0, with total mass ∫_{-ε}^{ε} ρ(x) dx = 1, as ε ↓ 0. Likewise, δ(x - ξ) stands for the density of a unit mass
(or charge) concentrated at x = ξ.
For any f ∈ C[a, b], a < 0 < b, and any such ρ with support (-ε, ε) ⊂ [a, b], we will have, by the Second Law of the Mean for integrals,

∫_a^b f(x)ρ(x) dx = ∫_{-ε}^{ε} f(x)ρ(x) dx = f(x1) ∫_{-ε}^{ε} ρ(x) dx = f(x1)

where -ε < x1 < ε. Letting ε approach zero, we get in the limit

(40)

∫_a^b f(x)δ(x) dx = f(0)
Translating through ξ, we have similarly

(40')

∫_a^b f(x)δ(x - ξ) dx = f(ξ),    ξ ∈ (a, b)

In particular, setting f(x) equal to one, we get

(41)

∫_a^b δ(x - ξ) dx = 1 if ξ ∈ (a, b),    = 0 if ξ ∉ [a, b]
Finally, the Green's function of a differential operator L and given homogeneous linear initial or boundary conditions satisfies the symbolic equation

(42)

L_x[G(x, ξ)] = δ(x - ξ)

and the same initial or boundary conditions (in x). Now consider the function

(43)

u(x) = ∫ G(x, ξ)r(ξ) dξ
Extending heuristically the Superposition Principle to integrals (considered as limits of sums), we are led to the good guess that u(x) satisfies the same initial (resp. boundary) conditions and also
(44)

L[u] = L_x[∫ G(x, ξ)r(ξ) dξ] = ∫ δ(x - ξ)r(ξ) dξ = r(x)
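The limit (40) can be illustrated numerically with the uniform densities ρ_ε = 1/(2ε) on (-ε, ε) (a Python sketch; the particular density is our choice). For f = cos, the smeared integral equals sin(ε)/ε, which tends to 1 = f(0):

```python
import math

def smeared(f, eps, n=1000):
    """Trapezoid-rule value of integral f(x) rho(x) dx, rho = 1/(2 eps) on (-eps, eps)."""
    h = 2 * eps / n
    s = 0.0
    for k in range(n + 1):
        x = -eps + k * h
        w = 0.5 if k in (0, n) else 1.0
        s += w * f(x) / (2 * eps)
    return s * h

vals = [smeared(math.cos, eps) for eps in (0.1, 0.01, 0.001)]
# vals approaches f(0) = 1 as eps decreases
```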
EXERCISES F
In Exs. 1-3, (a) construct Green's functions for the two-endpoint problem defined by the DE specified and the boundary conditions u(0) = u(1) = 0, and (b) solve for r(x) = x^2:
1. u" - u = r(x)
2. u" + 4u = r(x)
3. u" - [4x/(2x - 1)]u' + [4/(2x - 1)]u = r(x)
In Exs. 4-5, find the conjugate points nearest to x = 0 for the DE specified.
4. u" + 2u' + 10u = 0
5. (x2 - x + l)u" + (1 - 2x)u' + 2u = 0 [HINT: Look for polynomial solutions.]
*6. u" - u' + e^{2x}u = 0
7. (a) Show that, for two-endpoint problems containing no pairs of conjugate points, Green's function is always negative.
(b) Show that, if q(x) < 0, then the Green's function for u" + q(x)u = 0 in the two-endpoint problem is always negative and convex (concave downward), with negative slope where x < ξ and positive slope where x > ξ.
*8. Set F(x, y, y') = y'^2(1 - y')^2 in (34), and find the curves joining (0, 0) and (1, ½) which minimize I(f).
9. Show that the Euler-Lagrange DE for F(x, y, y') = gρ(x)y + ½Ty'^2 (g, T constants) is Ty" = gρ(x). Relate to the sag of a loaded string under tension T.
ADDITIONAL EXERCISES
1. Show that the ratio v = f/g of any two linearly independent solutions of the DE u" + q(x)u = 0 is a solution of the third-order nonlinear DE

(*)

S[v] ≡ (v'''/v') - (3/2)(v"/v')^2 = 2q(x)
2. The Schwarzian S[v] of a function v(x) being defined by the middle term of (*), show that S[(av + b)/(cv + d)] = S[v] for any four constants a, b, c, d with ad ≠ bc.
*3. Prove that, if v0, v1, v2, v3 are any four distinct solutions of the Riccati DE, their cross-ratio is constant: (v0 - v1)(v3 - v2)/(v0 - v2)(v3 - v1) = c.
4. Find the general solutions of the following inhomogeneous Euler DEs:
(a) x^2y" - 2xy' + 2y = x^2 + px + q  (b) x^2y" + 3xy' + y = R(x)
5. (a) Show that, if f and g satisfy u" + q(x)u = 0, the product y = fg satisfies the DE 2yy" = (y')^2 - 4y^2q(x) + c for some constant c.
(b) As an application, solve 2yy" = (y')^2 - (x + 1)^{-2}y^2.
6. Show that, if u is the general solution of the DE (1) of the text, and W1 = p0p1' - p0'p1, then v = u' is the general solution of
7. (a) Show that the Riccati equation y' = 1 + x^2 + y^2 has no solution on the interval (0, π).
(b) Show that the Riccati equation y' = 1 + y^2 - x^2 has a solution on the interval (-∞, +∞).
8. Let θ_k(x) = (1/π)(d/dx)(arctan kx) = k/[π(1 + k^2x^2)]. Show that, if f(x) is any continuous function bounded on (-∞, ∞), then lim_{k→∞} ∫_{-∞}^{∞} θ_k(x - c)f(x) dx = f(c).
9. For the DE u" + (B/x^2)u = 0, show that every solution has infinitely many zeros on (1, +∞) if B > ¼ and a finite number if B ≤ ¼. [HINT: The DE is an Euler DE.]
10. For the DE u" + q(x)u = 0, show that every solution has a finite number of zeros on (1, +∞) if q(x) < 1/(4x^2), and infinitely many if q(x) > B/x^2, B > ¼.
*11. For the DE u" + q(x)u = 0, show that every solution has infinitely many zeros on (1, +∞) if

∫^∞ [xq(x) - 1/(4x)] dx = +∞
*12. Show that, if p, q ∈ C^2, we can transform the DE (8) to the form d^2z/dξ^2 = 0 in some neighborhood of the y-axis by transformations of the form z = f(x)y and dξ = h(x) dx. [HINT: Transform a basis of solutions to y1 = 1, y2 = ξ.]
CHAPTER 3
LINEAR EQUATIONS WITH CONSTANT COEFFICIENTS
1 THE CHARACTERISTIC POLYNOMIAL
So far, we have discussed only first- and second-order DEs, primarily because so few DEs of higher order can be solved explicitly in terms of familiar functions. However, general algebraic techniques make it possible to solve constantcoefficient linear DEs of arbitrary order and to predict many properties of their solutions, including especially their stability or instability.
This chapter will be devoted to explaining and exploiting these techniques. In particular, it will exploit complex algebra and the properties of the complex exponential function, which will be reviewed in this section. It will also apply polynomial algebra to linear differential operators with constant coefficients, using principles to be explained in §2.
The nth-order linear DE with constant coefficients is

(1)

L[u] = u^(n) + a1u^(n-1) + ··· + a_{n-1}u' + a_nu = r(x)

Here u^(k) stands for the kth derivative d^ku/dx^k of the unknown function u(x); a1, ..., an are arbitrary constants; and r(x) can be any continuous function. As in Ch. 2, §1, the letter L in (1) stands for a (homogeneous) linear operator. That is, L[αu + βv] = αL[u] + βL[v] for any functions u and v of class C^n and any constants α and β. As in the second-order case treated in Chapter 2, the solution of linear DEs
of the form (1) is best achieved by expressing its general solution as the sum u = u_p + u_h of some particular solution u_p(x) of (1), and the general solution u_h(x) of the "reduced" (homogeneous) equation

(2)

L[u] = u^(n) + a1u^(n-1) + ··· + a_nu = 0

obtained by setting the right-hand side of (1) equal to 0.
Solutions of (2) can be found by trying the exponential substitution u = e^{λx}, where λ is a real or complex number to be determined. Since d^n(e^{λx})/dx^n = λ^ne^{λx}, this substitution reduces (2) to the identity

(λ^n + a1λ^{n-1} + ··· + a_n)e^{λx} = 0
This is satisfied if and only if λ is a (real or complex) root of the characteristic polynomial of the DE (1), defined as

(3)

p_L(λ) = λ^n + a1λ^{n-1} + ··· + a_n
For the second-order DE u" + pu' + qu = 0, the roots of the characteristic polynomial are λ = ½(-p ± √(p^2 - 4q)). In Ch. 2, §2, it was shown by a special method that, when p^2 < 4q, so that the characteristic polynomial has complex roots λ = -p/2 ± iv (v = ½√(4q - p^2)), the real functions e^{-px/2} cos vx and e^{-px/2} sin vx form a basis of solutions. We will now show how to apply the exponential substitution u = e^{λx} to solve the general DE (2), beginning with the second-order case.
Loosely speaking, when the characteristic polynomial p(λ) has n distinct roots λ1, ..., λn, the functions φ_j(x) = e^{λ_jx} form a basis of complex solutions of the DE (2). By this we mean that for any "initial" x = x0 and specified (complex) numbers u0, u0', ..., u0^(n-1), there exist unique numbers c1, ..., cn such that the solution f(x) = u_h(x) = Σ_{j=1}^n c_jφ_j(x) satisfies f(x0) = u0, f'(x0) = u0', ..., f^(n-1)(x0) = u0^(n-1).
Moreover, the complex roots λ_j of p(λ) occur in pairs µ_j ± iv_j, just as in the second-order case treated in Ch. 2. Therefore, the real functions e^{µ_jx} cos v_jx and e^{µ_jx} sin v_jx, together with the e^{λ_jx} corresponding to real roots of p(λ) = 0, form a basis of real solutions of (2).
Initial Value Problem. By the "initial value problem" for the nth-order DE (1) is meant finding, for specified x0 and numbers u0, u0', ..., u0^(n-1), a solution u(x) of (1) that satisfies u(x0) = u0 and u^(j)(x0) = u0^(j) for j = 1, ..., n - 1.
If a basis of solutions φ_j(x) of the "reduced" DE (2) is known, together with one "particular" solution u_p(x) of the inhomogeneous DE (1), then the sum u(x) = u_p(x) + Σc_jφ_j(x), with the c_j chosen to make u_h(x) = Σc_jφ_j(x) satisfy u_h(x0) = u0 - u_p(x0), u_h'(x0) = u0' - u_p'(x0), ..., u_h^(n-1)(x0) = u0^(n-1) - u_p^(n-1)(x0), constitutes one solution of the stated initial value problem. In §4, we will prove that this is the only solution (a uniqueness theorem), so that the stated initial value problem is "well-posed."
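A minimal worked instance (the DE is our choice): for u" - u = -x, one particular solution is u_p(x) = x, and the basis of the reduced equation is e^x, e^{-x}. Matching u(0) = u'(0) = 0 gives c1 + c2 = -u_p(0) = 0 and c1 - c2 = -u_p'(0) = -1, so c1 = -½, c2 = ½, i.e. u = x - sinh x. The Python sketch below checks the initial conditions and the DE by finite differences:

```python
import math

# u'' - u = -x: particular solution u_p = x, reduced-equation basis e^x, e^{-x}.
# Initial conditions u(0) = u'(0) = 0 force c1 = -0.5, c2 = 0.5.
c1, c2 = -0.5, 0.5
u = lambda x: x + c1 * math.exp(x) + c2 * math.exp(-x)   # = x - sinh x

h = 1e-4
x0 = 0.8
upp = (u(x0 + h) - 2 * u(x0) + u(x0 - h)) / h**2         # numerical u''(x0)
```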
2 COMPLEX EXPONENTIAL FUNCTIONS
When the characteristic polynomial of u" + pu' + qu = 0 has complex roots λ = -p/2 ± iv, as before, the exponential substitution gives two complex exponential functions as formal solutions, namely

e^{-px/2 ± ivx} = e^{-px/2}(cos vx ± i sin vx)

From these complex solutions, the real solutions e^{-px/2} cos vx and e^{-px/2} sin vx obtained by a special method in Ch. 2 can easily be constructed as linear combinations. The
present section will be devoted to explaining how similar constructions can be applied to arbitrary homogeneous linear constant-coefficient DEs (2).
The first consideration that must be invoked is the so-called Fundamental Theorem of Algebra.† To apply this theorem effectively, one must also be familiar with the basic properties of the complex exponential function. We shall now take these up in turn.
The Fundamental Theorem of Algebra states that any real or complex polynomial p(λ) can be uniquely factored into a product of powers of distinct linear factors (λ - λ_j):

(4)

p(λ) = c(λ - λ1)^{k1}(λ - λ2)^{k2} ··· (λ - λm)^{km}

Clearly, the roots of the equation p(λ) = 0 are the λ_j. The exponent k_j in (4) is called the multiplicity of the root λ_j; evidently the sum of the k_j is the degree of p. When all λ_j are distinct (i.e., all k_j = 1, so that m = n), the DE has a basis of complex exponential solutions φ_j(x) = e^{λ_jx}, j = 1, 2, ..., n; see §4 for details.
Example 1. For the fourth-order DE u^iv = u, the characteristic equation is λ^4 = 1, with roots ±1, ±i. Therefore, a basis of complex solutions is provided by e^x, e^{-x}, e^{ix}, and e^{-ix}. From these we can construct a basis of four real solutions

e^x,    e^{-x},    cos x,    sin x
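In Python's complex arithmetic this is immediate to verify (a sketch, not part of the text): each root satisfies λ^4 = 1, and the conjugate pair ±i recombines into the real solutions cos x and sin x by Euler's formula:

```python
import cmath

# Roots of the characteristic equation lambda**4 = 1 for u'''' = u:
roots = [1, -1, 1j, -1j]
checks = [abs(lam**4 - 1) for lam in roots]

# The complex-conjugate pair e^{ix}, e^{-ix} recombines into cos x and sin x:
x = 0.7
cos_from_exp = (cmath.exp(1j * x) + cmath.exp(-1j * x)) / 2
sin_from_exp = (cmath.exp(1j * x) - cmath.exp(-1j * x)) / (2j)
```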
Complex Exponentials. In this chapter and in Ch. 9, properties of the complex exponential function e^z will be used freely, and so we recall some of them. The exponent z = x + iy is to be thought of as a point in the (x, y)-plane, which is also referred to as the complex z-plane. The complex "value" w = e^z of the exponential function is evidently a vector in the complex w-plane with magnitude |e^z| = e^x, which makes an angle y with the u-axis. (Here w = u + iv, so that u = e^x cos y and v = e^x sin y if w = e^z.)
Because e^{iθ} = cos θ + i sin θ, one also often writes z = x + iy as z = re^{iθ}, where r = √(x^2 + y^2) and θ = arctan(y/x) are polar coordinates in the (x, y)-plane. In this notation, the inverse of the complex exponential function e^z is the complex "natural logarithm" function

ln z = ln(x + iy) = ln r + iθ
Since θ is defined only modulo 2π, ln z is evidently a multiple-valued function.
In the problems treated in this chapter, the coefficients a_j of the polynomial (2) will usually all be real. Its roots λ_j will then all be either real or complex conjugate in pairs, λ = µ ± iv. Thus, for the second-order DE u" + pu' + qu = 0 discussed in Chapter 2, the roots are λ = ½(-p ± √(p^2 - 4q)). They are real when p^2 > 4q, and complex conjugate when p^2 < 4q.
† Birkhoff and MacLane, p. 113.
In this chapter, the independent variable x will also be considered as real. Now recall that if λ = µ + iv, where µ, v are real, then we have for real x

(5)

e^{λx} = e^{µx+ivx} = e^{µx}(cos vx + i sin vx)

Hence, if λ = µ + iv and λ* = µ - iv are both roots of p_L(λ) = 0 in (3), the functions e^{µx}(cos vx ± i sin vx) are both solutions of (2). Since |e^{ivx}| = 1 for all real v, x, it also follows that, where e^{λx} is considered as a complex-valued function of the real independent variable x,

(5')

|e^{λx}| = e^{µx}
Example 1'. For the DE u^iv + 4u = 0, the characteristic polynomial λ^4 + 4 has the roots ±1 ± i. Hence it has a basis of real solutions e^x cos x, e^x sin x, e^{-x} cos x, e^{-x} sin x. (An equivalent basis is provided by the functions cosh x cos x, cosh x sin x, sinh x cos x, sinh x sin x.)
Euler's Homogeneous DE. The homogeneous linear DE

(6)

x^nu^(n) + a1x^{n-1}u^(n-1) + ··· + a_{n-1}xu' + a_nu = 0

is called Euler's homogeneous differential equation. It can be reduced to the form (2) on the positive semi-axis x > 0, by making the substitutions

x = e^t,    t = ln x,    x(d/dx) = d/dt

Corresponding to the real solutions e^{λt}, te^{λt}, ... of (2), we have real solutions x^λ, x^λ ln x, ... of (6).
Moreover, these can easily be found by substituting x^λ for u in (6). This substitution yields an equation of the form I(λ)x^λ = 0, where I(λ) is a polynomial of degree n; the equation I(λ) = 0 is called the indicial equation. Any λ for which I(λ) = 0 gives a solution x^λ of (6); if λ is a double root, then x^λ and x^λ ln x are both solutions, and so on.
For example, when n = 2, Euler's homogeneous DE is
(7)

x^2u" + pxu' + qu = 0,    p, q real constants

Trying u = x^λ, we get the indicial equation of (7):

(7')

λ(λ - 1) + pλ + q = 0
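For instance (the coefficients are our choice), with p = -2 and q = 2 the indicial equation λ(λ - 1) - 2λ + 2 = λ^2 - 3λ + 2 = 0 has roots λ = 1, 2, and both x and x^2 satisfy x^2u" - 2xu' + 2u = 0; a small Python check:

```python
p, q = -2.0, 2.0
indicial = lambda lam: lam * (lam - 1) + p * lam + q    # indicial equation (7')

residuals = [indicial(lam) for lam in (1.0, 2.0)]       # both roots of (7')

def euler_residual(x):
    """Residual of x^2 u'' + p x u' + q u at u = x^2 (so u' = 2x, u'' = 2)."""
    return x * x * 2.0 + p * x * 2.0 * x + q * x * x

samples = [euler_residual(x) for x in (0.5, 1.0, 3.0)]
```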
Alternatively, making the change of variable x = e^t, we get

(8)

d^2u/dt^2 + (p - 1)(du/dt) + qu = 0,    t = ln x

since x(du/dx) = du/dt and x^2(d^2u/dx^2) = d^2u/dt^2 - du/dt.
If (p - 1)^2 > 4q, the indicial equation has two distinct real roots λ = α and λ = β, and so the DE (7) has the two linearly independent real solutions x^α and x^β, defined for positive x. For positive or negative x, we have the solutions |x|^α and |x|^β, since the substitution of -x for x does not affect (7). Note that the DE (7) has a singular point at x = 0, and that |x|^α has discontinuous slope there if α ≤ 1.
When the discriminant (p - 1)^2 - 4q is negative, the indicial equation has two conjugate complex roots λ = µ ± iv, where µ = (1 - p)/2 and v = ½[4q - (p - 1)^2]^{1/2}. A basis of real solutions of (8) is then e^{µt} cos vt and e^{µt} sin vt; the corresponding solutions of the second-order Euler homogeneous DE (7) are x^µ cos(v ln x) and x^µ sin(v ln x). These are, for x > 0, the real and imaginary parts of the complex power function

x^{µ+iv} = e^{(µ+iv) ln x} = x^µ[cos(v ln x) + i sin(v ln x)]

as in (5). For x < 0, we can get real solutions by using |x| in place of x. But for x < 0, the resulting real solutions of (7) are no longer the real and imaginary parts of x^λ, because ln(-x) = ln x ± iπ; cf. Ch. 9, §1.
General Case. The general nth-order case can be treated in the same way. We can again make the change of independent variable

x = e^t,    t = ln x,    x(d/dx) = d/dt

This reduces (6) to a DE of the form (2), whose solutions t^re^{λt} give a basis of solutions for (6) of the form (ln x)^r x^λ.
EXERCISES A
In Exs. 1-4, find a basis of real solutions of the DE specified.
1. u" + 5u' + 4u = 0
2. u'" = u
3. u^iv = u
*4. u'" + u = 0
In Exs. 5-6, find a basis of complex exponential solutions of the DE specified.
5. u" + 2iu' + 3u = 0
6. u" - 2u' + 2u = 0
In Exs. 7-10, find the solution of the initial value problem specified.
7. u" + 5u' + 4u = 0, u(0) = 1, u'(0) = 0
8. u'" = u, u(0) = u"(0) = 0, u'(0) = 1
*9. u^iv = u, u(0) = u"(0) = 0, u'(0) = u"'(0) = 1
*10. u" - 2u' + 2u = 0, u(0) = 1, u'(0) = 0
In Exs. 11 and 12, find a basis of solutions of the Euler DE.
11. x^2u" + 5xu' + 3u = 0
12. x^2u" + 2ixu' - 3u = 0
13. Describe the behavior of the function z^i of the complex variable z = x + iy as z traces the unit circle z = e^{iθ} around the origin.
14. Do the same as in Ex. 13 for the function z^ie^z.
3 THE OPERATIONAL CALCULUS
We have already explained the general notion of a linear operator in Ch. 2, §2. Obviously, any linear combination M = c1L1 + c2L2 of linear operators L1 and L2, defined by the formula M[u] = c1L1[u] + c2L2[u], is itself a linear operator, in the sense that M[au + bv] = aM[u] + bM[v] for all u, v to which L1 and L2 are applicable. Moreover, the same is true of the (left-) composite L2L1 of L1 and L2, defined by the formula L2L1[u] = L2[L1[u]].
For linear operators with constant coefficients, one can say much more. In the first place, they are permutable, in the sense of the following lemma.
LEMMA. Linear operators with constant coefficients are permutable: for any constants a_j, b_k, if p(D) = Σa_jD^j and q(D) = Σb_kD^k, then p(D)q(D) = q(D)p(D) = ΣΣa_jb_kD^{j+k}.
Proof Iterate the formula D[bkD[u]] = bkD2[u]. It follows that, for any two
constants a1 and bk and any two positive integers j and k, we have a1D1bkDk = = aJbk n 1+k bk DkaJ D1•
This is not true of linear differential operators with variable coefficients. Thus, since

D[xf] = (xf)' = xf' + f = (xD + 1)[f]   for any differentiable f,

we have Dx = xD + 1. This shows that the differentiation operator D is not permutable with the operator "multiply by x." Likewise (x²D)(xD) = x³D² + x²D, whereas (xD)(x²D) = x³D² + 2x²D.
Because constant-coefficient linear differential operators are permutable, we can fruitfully apply polynomial algebra to them. As an immediate application, we have
THEOREM 1. If λ is a root of multiplicity k of the characteristic polynomial (3), then the functions x^r e^{λx} (r = 0, ..., k − 1) are solutions of the linear DE (2).

Proof. An elementary calculation gives, after cancellation, (D − λ)[e^{λx}f(x)] = e^{λx}f'(x) for any differentiable function f(x). By induction, this implies (D − λ)^k[f(x)e^{λx}] = e^{λx}f^{(k)}(x) for any f ∈ C^k. Since the kth derivative of x^r is zero when k > r, it follows that

(D − λ)^k[x^r e^{λx}] = 0   if k > r

Moreover, the operators (D − λ_i)^{k_i} being permutable, we can write, for any i,

L[u] = q_i(D)(D − λ_i)^{k_i}[u],   where q_i(D) = Π_{j≠i} (D − λ_j)^{k_j}

Hence L[x^r e^{λ_i x}] = 0 for each λ_i and r = 0, 1, ..., k_i − 1, as stated.
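The key identity of this proof, D[P(x)e^{λx}] = (P' + λP)e^{λx}, can be checked mechanically. The Python sketch below (illustrative, not from the text) represents u = P(x)e^{λx} by the coefficient list of P and verifies that L = (D + 1)³, i.e. L[u] = u''' + 3u'' + 3u' + u, annihilates e^{−x}, xe^{−x}, and x²e^{−x}:

```python
# Represent u = P(x) e^{lam x} by the coefficient list of P; then
# D[P e^{lam x}] = (P' + lam P) e^{lam x}, so repeated differentiation
# is pure polynomial arithmetic and the check below is exact.

def d_poly(p, lam):
    """Coefficients of P' + lam*P, where p[i] is the coefficient of x^i."""
    dp = [(i + 1) * p[i + 1] for i in range(len(p) - 1)] + [0.0]
    return [dp[i] + lam * p[i] for i in range(len(p))]

def apply_L(p, lam, a):
    """Apply L = D^n + a[0] D^{n-1} + ... + a[n-1] to u = P(x)e^{lam x}."""
    n = len(a)
    derivs = [p]                             # derivs[m] = coeffs after D^m
    for _ in range(n):
        derivs.append(d_poly(derivs[-1], lam))
    out = derivs[n][:]                       # leading D^n term
    for j, coeff in enumerate(a):            # a[j] multiplies D^{n-1-j}
        q = derivs[n - 1 - j]
        out = [out[i] + coeff * q[i] for i in range(len(out))]
    return out

a = [3.0, 3.0, 1.0]                          # L = (D + 1)^3
for r in range(3):                           # P(x) = x^r, r = 0, 1, 2
    p = [0.0] * r + [1.0] + [0.0] * (3 - r)  # pad so all lists have length 4
    assert all(abs(c) < 1e-12 for c in apply_L(p, -1.0, a))
```

Since λ = −1 is a triple root of (λ + 1)³, Theorem 1 predicts exactly these three solutions, and the computed coefficient lists vanish identically.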
Real and Complex Solutions. Theorem 1 holds whether the coefficients a_k of the DE (2) are real or complex. Indeed, although the independent variable x
will be interpreted as real in this chapter (especially in discussing stability), the
operational calculus just discussed above, and the solutions constructed with it,
are equally applicable to functions of the complex variable z = x + iy.
However, when all the coefficients ak are real numbers, more detailed infor-
mation can be obtained about the solutions, as follows.
LEMMA. Let the complex-valued function w(x) = u(x) + iv(x) satisfy a homo-
geneous linear DE (1) with real coefficients. Then the functions u(x) and v(x) [the real and imaginary parts of w(x)] both satisfy the DE.
Proof. The complex conjugate† w*(x) = u(x) − iv(x) of w(x) satisfies the complex conjugate of the given DE (2), obtained by replacing every coefficient a_k by its complex conjugate a_k*, because L*[w*] = {L[w]}* = 0. If the a_k are real, then a_k* = a_k, and so w*(x) also satisfies (2). Hence, the linear combinations

u(x) = [w(x) + w*(x)]/2   and   v(x) = [w(x) − w*(x)]/2i

also satisfy (2), as stated. This result is also valid for DEs with variable coefficients a_k(x).
COROLLARY 1. If the DE (2) has real coefficients and e^{λx} satisfies (2), then so does e^{λ*x}. The nonreal roots of the characteristic polynomial (3) thus occur in conjugate pairs λ_j = μ_j ± iν_j, having the same multiplicity k_j.

Now, using formula (5), we obtain the following.

COROLLARY 2. Each pair of complex conjugate roots λ_j, λ_j* of (3) of multiplicity k_j gives real solutions of (2) of the form

(9)   x^r e^{μ_j x} cos ν_j x,   x^r e^{μ_j x} sin ν_j x,   r = 0, ..., k_j − 1

These solutions differ from the solutions e^{λx} with real λ in that they have infinitely many zeros in any infinite interval of the real axis: that is, they are oscillatory. This proves the following result.
† The complex conjugate w* of a complex number w = u + iv is u − iv. Some authors use w̄ instead of w* to denote the complex conjugate of w.
THEOREM 2. If the characteristic polynomial (3) with real coefficients has 2r nonreal roots, then the DE (2) has 2r distinct oscillatory real solutions of the form (9).
4 SOLUTION BASES
We now show that all solutions of the real homogeneous linear DE (2) are linear combinations of the special solutions described in Corollary 2 above. The proof will appeal to the concept of a basis of solutions of a general nth order linear homogeneous DE
(10)
The coefficient-functions pk(x) in (10) may be variable, but they must be real and continuous.
DEFINITION. A basis of solutions of the DE (10) is a set of solutions uk(x) of (10) such that every solution of (10) can be uniquely expressed as a linear
combination c1u 1(x) + · · · + cnun(x).
The aim of this section is to prove that the special solutions described in Corollary 2 form a basis of real solutions of the DE (2). The fact that every nth order homogeneous linear DE has a basis of n solutions is, of course, a theorem to be proved.
First, as in Ch. 2, §2, we define a set of n real or complex functions f₁, f₂, ..., f_n defined on an interval (a,b) to be linearly independent when no linear combination of the functions with constant coefficients not all zero can vanish identically: that is, when Σ_{i=1}^n c_i f_i(x) ≡ 0 implies c₁ = c₂ = ⋯ = c_n = 0. A set of functions that is not linearly independent is said to be linearly dependent.

There are two notions of linear independence, according as we allow the coefficients c_k to assume only real values, or also complex values. In the first case, one says that the functions are linearly independent over the real field; in the second case, that they are linearly independent over the complex field.
LEMMA 1. A set of real-valued functions on an interval (a,b) is linearly independent over the complex field if and only if it is linearly independent over the real field.

Proof. Linear dependence over the real field implies linear dependence over the complex field, a fortiori. Conversely, the f_j(x) being real, suppose that Σ c_j f_j(x) ≡ 0 for a < x < b. Then [Σ c_j f_j(x)]* ≡ 0, and hence Σ c_j* f_j(x) ≡ 0. Subtracting, we obtain Σ [(c_j − c_j*)/i] f_j(x) ≡ 0. If all c_j are real, there is nothing to prove. If some c_j is not real, some real number (c_j − c_j*)/i will not vanish, and we still have a vanishing linear combination with real coefficients.
A set of functions that is linearly dependent on a given domain may become linearly independent when the functions are extended to a larger domain. However, a linearly independent set of functions clearly remains linearly independent when the functions are so extended.
LEMMA 2. Any set of functions of the form

(11)   f_{rj}(x) = x^r e^{λ_j x},   j = 1, ..., n

where the r are nonnegative integers and the λ_j complex numbers, is linearly independent on any nonvoid open interval, unless two or more of the functions are identical.
Proof. Suppose that Σ c_{rj} f_{rj}(x) ≡ 0. For any given λ_j, choose R to be the largest r such that c_{rj} ≠ 0. Form the operator

q(D) = (D − λ_j)^R Π_{i≠j} (D − λ_i)^{k_i+1}

where, for each i ≠ j, k_i is the largest r such that x^r e^{λ_i x} is a member of the set of functions in (11). It follows that q(D)[f_{ri}] = 0 unless i = j, and that q(D)[f_{rj}] = 0 for r < R. Hence, we have

q(D)[Σ c_{rj} f_{rj}(x)] = c_{Rj} q(D)[x^R e^{λ_j x}] ≡ 0

On the other hand, as in the proof of Theorem 1, we see that

q(D)[x^R e^{λ_j x}] = R! Π_{i≠j} (λ_j − λ_i)^{k_i+1} e^{λ_j x} ≠ 0

Therefore, substituting back, we find that c_{Rj} = 0. Since we assumed that c_{Rj} ≠ 0, this gives a contradiction unless all c_{rj} = 0, proving linear independence.
From Theorem 1 we obtain the following corollary.
COROLLARY 1. The DE (2) has at least n linearly independent, real or complex solutions of the form x^r e^{λx}.
The analogous result for real solutions of DEs of the form (2) with real coefficients can be proved as follows. For any two conjugate complex roots λ = μ + iν and λ* = μ − iν of the characteristic equation of (2), the real solutions x^r e^{μx} cos νx and x^r e^{μx} sin νx are complex linear combinations of x^r e^{λx} and x^r e^{λ*x}, and conversely. Hence, they can be substituted for x^r e^{λx} and x^r e^{λ*x} in any set of solutions without affecting their linear independence. Since linear independence over the complex field implies linear independence over the real field, this proves the following.

COROLLARY 2. A linear DE (2) with constant real coefficients a_k has a set of n solutions of the form x^r e^{λx} or (9), which is linearly independent over the real field in any nonvoid interval.
We now show that all solutions of the real homogeneous linear DE (2) are linear combinations of the special solutions described in Corollary 2. (The proof
will be extended to the case of complex coefficient-functions in Ch. 6, §11.) To this end, we first prove a special uniqueness lemma for the more general homogeneous linear DE (10),
with real and continuous coefficient-functions pk(x).
LEMMA 3. Let f(x) be any real or complex solution of the nth order homogeneous linear DE (10) with continuous real coefficient-functions in the closed interval [a,b]. If f(a) = f'(a) = ⋯ = f^{(n−1)}(a) = 0, then f(x) ≡ 0 on [a,b].
Proof. We first suppose f(x) real. The function

u(x) = f(x)² + f'(x)² + ⋯ + [f^{(n−1)}(x)]²

satisfies the initial condition u(a) = 0. Differentiating u(x), we find, since f(x) is real, that

u'(x) = 2[f(x)f'(x) + f'(x)f''(x) + ⋯ + f^{(n−1)}(x)f^{(n)}(x)]

Using the inequality |2αβ| ≤ α² + β² repeatedly n − 1 times, we have

u'(x) ≤ f² + 2f'² + ⋯ + 2[f^{(n−2)}]² + [f^{(n−1)}]² + 2|f^{(n−1)}f^{(n)}|

Since L[f] = 0, it follows that f^{(n)} = −Σ_{k=1}^n p_k f^{(n−k)}. Hence, the last term can be rewritten in the form

f^{(n−1)}f^{(n)} = −Σ_{k=1}^n p_k f^{(n−1)} f^{(n−k)}

Applying the inequality |2αβ| ≤ α² + β² again, we obtain

2|f^{(n−1)}f^{(n)}| ≤ Σ_{k=1}^n |p_k| ([f^{(n−k)}]² + [f^{(n−1)}]²)

Substituting and rearranging terms, we obtain

u'(x) ≤ (1 + |p_n|)f² + (2 + |p_{n−1}|)f'² + (2 + |p_{n−2}|)f''² + ⋯ + (2 + |p_2|)[f^{(n−2)}]² + (1 + |p_1| + Σ_{k=1}^n |p_k|)[f^{(n−1)}]²

Now let K = 2 + max_{a≤x≤b} |p_1(x)| + max_{a≤x≤b} Σ_{k=1}^n |p_k(x)|. Then it follows from the last inequality that u'(x) ≤ Ku(x). From this inequality and the initial condition u(a) = 0, the identity u(x) ≡ 0 follows by Lemma 2 of Ch. 1, §11. Hence, we have f(x) ≡ 0.
If h(x) = f(x) + ig(x) (f, g real) is a complex solution of (10), then f(x) and g(x) satisfy (10) by Lemma 2 of §3. Moreover, h(a) = h'(a) = ⋯ = h^{(n−1)}(a) = 0 implies the corresponding equalities on f and g. Hence, by the preceding paragraph, we have h = f + ig ≡ 0, completing the proof.
We now show that any n linearly independent solutions of (10) form a basis of solutions.
THEOREM 3. Let u₁, ..., u_n be n linearly independent real solutions of the nth order linear homogeneous DE (10) with real coefficient-functions. Then, given arbitrary real numbers a, u₀, u₀', ..., u₀^{(n−1)}, there exist unique constants c₁, ..., c_n such that u(x) = Σ c_k u_k(x) is a solution of (10) satisfying

(12')   u(a) = u₀,   u'(a) = u₀',   ...,   u^{(n−1)}(a) = u₀^{(n−1)}

The functions u_k(x) are a basis of solutions of (10).
Theorem 3 follows readily from the lemma. Suppose that, for some a, u₀, u₀', ..., u₀^{(n−1)}, there were no linear combination Σ c_k u_k(x) satisfying the given initial conditions (12'). That is, suppose the n vectors

(u_k(a), u_k'(a), ..., u_k^{(n−1)}(a)),   k = 1, ..., n

were linearly dependent. Then there would exist constants γ₁, ..., γ_n, not all zero, such that

Σ_{k=1}^n γ_k u_k(a) = 0,   Σ_{k=1}^n γ_k u_k'(a) = 0,   ...,   Σ_{k=1}^n γ_k u_k^{(n−1)}(a) = 0

That is, the function φ(x) = γ₁u₁(x) + ⋯ + γ_n u_n(x) would satisfy

φ(a) = φ'(a) = ⋯ = φ^{(n−1)}(a) = 0

From this it would follow, from the lemma, that φ(x) ≡ 0.

Recapitulating, we can find either c₁, ..., c_n not all zero such that u(x) = Σ c_k u_k(x) satisfies (12'), or γ₁, ..., γ_n not all zero such that

γ₁u₁(x) + ⋯ + γ_n u_n(x) ≡ 0
The second alternative contradicts the hypothesis of linear independence in Theorem 3, which proves the first conclusion there.
To prove the second conclusion, let v(x) be any solution of (10). By the first conclusion, constants c1, .•• , en can be found such that
satisfies u(a) = v(a), u'(a) = v'(a), ... , u<n- 1>(a) = v<n- 1>(a). Hence the difference f(x) = u(x) - v(x) satisfies the hypotheses of Lemma 3. Using the lemma, we
= obtain u(x) v(x) and v(x) = I:.ckuk(x), proving the second conclusion of Theo-
rem 3.
COROLLARY 1. Let λ₁, ..., λ_m be the roots of the characteristic polynomial of the real† DE (2) with multiplicities k₁, ..., k_m. Then the functions x^r e^{λ_j x}, r = 0, ..., k_j − 1, are a basis of complex solutions of (2).

Referring back to Theorem 2, we have also the following.

COROLLARY 2. If the coefficients of the DE (2) are real, then it has a basis of real solutions of the form x^r e^{λx}, x^r e^{μx} cos νx, and x^r e^{μx} sin νx, where λ, μ, and ν are real constants.
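As a concrete instance of Corollary 2 (an illustrative example chosen here, not from the text), the conjugate roots −1 ± i of λ² + 2λ + 2 = 0 give the real basis e^{−x} cos x, e^{−x} sin x of u'' + 2u' + 2u = 0. The Python sketch below checks the residual at several points by central differences:

```python
import math

# Real basis from the conjugate roots -1 +/- i of lam^2 + 2 lam + 2 = 0:
#   u1 = e^{-x} cos x,   u2 = e^{-x} sin x
# should both satisfy u'' + 2u' + 2u = 0.

def residual(u, x, h=1e-5):
    u1 = (u(x + h) - u(x - h)) / (2 * h)             # u' (central difference)
    u2 = (u(x + h) - 2 * u(x) + u(x - h)) / (h * h)  # u''
    return u2 + 2 * u1 + 2 * u(x)

u1 = lambda x: math.exp(-x) * math.cos(x)
u2 = lambda x: math.exp(-x) * math.sin(x)
for u in (u1, u2):
    for x in (-1.0, 0.0, 2.0):
        assert abs(residual(u, x)) < 1e-4
```

Any real linear combination of u1 and u2 passes the same check, in line with the superposition principle for the homogeneous DE.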
EXERCISES B
1. Solve the following initial-value problems:
(a) u'''' - u = 0, u(0) = u'(0) = u'''(0) = 0, u''(0) = 1
(b) u'''' = 0, u'(0) = u'''(0) = 0, u(0) = 1, u''(0) = -2
(c) u'''' + u'' = 0, u''(0) = u'''(0) = 0, u(0) = u'(0) = 1
2. (a) Find a DE L[u] = 0 of the form (2) having e-1, te-1, and e1 as a basis of solutions.
(b) For this linear operator L, find a basis of solutions of the sixth-order DE L 2[u] = 0 and the ninth-order DE L3 [u] = 0.
3. Find bases of solutions for the following DEs:
(a) u''' = u
(b) u''' - 3u'' + 2u = 0
(c) u''' + 6u'' + 12u' + 8u = 0
(d) u''' + 6u'' + 12u' + (8 + z)u = 0
4. Knowing bases of solutions L 1[u] = 0 and L 2[u] = 0 of the form given by Theorem 1, find a basis of solutions of L 1[L2[u]] = 0.
5. Show that in every real DE of the form (2), L can be factored as L = L₁L₂⋯L_m, where L_j = D + b_j or L_j = D² + p_jD + q_j, with all b_j, p_j, q_j real.
6. Extend Lemma 2 of §4 to the case where the r are arbitrary complex numbers.
*7. State an analog of Corollary 2 of §4 for Euler's homogeneous DE, and prove your statement without assuming Corollary 2.
8. Prove that the DE of Ex. A5 has no nontrivial real solution.
t The preceding result can be proved more generally for linear DEs with constant complex coeffi-
cients, by similar methods; see Ch. 6, §11.
5 INHOMOGENEOUS EQUATIONS
We now return to the nth order inhomogeneous linear DE with constant coefficients,
(13)
already introduced in §1. As in the second-order case of Ch. 2, §8, the function r(t) in (13) may be thought of as an "input" or "source" term, and u(t) as the "output" due to r(t). We first describe a simple method for finding a particular solution of the DE (13) in closed form, in the special case that r(t) = Σ p_k(t)e^{λ_k t} is a linear combination of products of polynomials and exponentials.

We recall that, by Lemma 2 of §3,

(D − λ)[e^{λt}f(t)] = e^{λt}f'(t)

As a corollary, since every polynomial of degree s is the derivative r(t) = q'(t) of a suitable polynomial q(t) of degree s + 1, we obtain the following result.

LEMMA 1. If r(t) is a polynomial of degree s, then (D − λ)[u] = e^{λt}r(t) has a solution of the form u = e^{λt}q(t), where q(t) is a polynomial of degree s + 1.
More generally, one easily verifies the identity

(D − λ)[e^{λ₁t}f(t)] = e^{λ₁t}[f'(t) + (λ₁ − λ)f(t)]

If λ ≠ λ₁, and f(t) is a polynomial of degree s, then the right side of the preceding identity is a polynomial of degree s times e^{λ₁t}. This proves another useful algebraic fact:

LEMMA 2. If r(t) is a polynomial of degree s and λ ≠ λ₁, then

(D − λ)[u] = e^{λ₁t}r(t)

has a solution of the form u = e^{λ₁t}q(t), where q(t) is a polynomial of degree s.
Applying the two preceding lemmas repeatedly to the factors of the operator

L = p(D) = Π_j (D − λ_j)^{k_j}

we get the following result.

THEOREM 4. The DE L[u] = e^{λ₁t}r(t), where r(t) is a polynomial, has a particular solution of the form e^{λ₁t}q(t), where q(t) is also a polynomial. The degree of q(t) equals that of r(t) unless λ₁ is a root of the characteristic polynomial p_L(λ) = Π(λ − λ_j)^{k_j} of L. If λ₁ is a k-fold root of p_L(λ), then the degree of q(t) exceeds that of r(t) by k.
Knowing the form of the answer, we can solve for the coefficients b_k of the unknown polynomial q(t) = Σ b_k t^k by the method of undetermined coefficients. Namely, applying p(D) to u(t) = e^{λt}(Σ b_k t^k), one can compute the numbers p_{kl} in the formula

p(D)[e^{λt}t^l] = e^{λt} Σ_k p_{kl} t^k

using formulas for differentiating elementary functions. One does not need to factor p_L. The simultaneous linear equations Σ_l p_{kl} b_l = c_k can then be solved for the b_k, given r(t) = Σ c_k t^k, by elementary algebra. Theorem 4 simply states how many unknowns b_k must be used to get a compatible system of linear equations.
Example 2. Find the solution of the DE

(*)   L[u] = u''' + 3u'' + 2u' = 12te^t

that satisfies the initial conditions u(0) = −17/3, u'(0) = u''(0) = 1/3.

First, since the two-dimensional subspace of functions of the form (α + βt)e^t is mapped into itself by differentiation, the constant-coefficient DE (*) may be expected to have a particular solution of this form. And indeed, substituting u = (α + βt)e^t into (*) and evaluating, we get

L[u] = [(6α + 11β) + 6βt]e^t = 12te^t

Comparing coefficients, we find a particular solution u = (−11/3 + 2t)e^t of (*).

Second, the reduced DE u''' + 3u'' + 2u' = 0 of (*) has 1, e^{−t}, and e^{−2t} as a basis of solutions. The general solution of (*) is therefore

u = a + be^{−t} + ce^{−2t} + (−11/3 + 2t)e^t

The initial conditions yield three simultaneous linear equations in a, b, c whose solution is a = 1, b = −4, c = 1. Hence the solution of the specified initial value problem is

(**)   u = 1 − 4e^{−t} + e^{−2t} + (−11/3 + 2t)e^t
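The arithmetic of Example 2 can be verified directly. The Python sketch below codes the claimed solution (**) and its first three derivatives in closed form (computed by hand) and checks both the initial conditions and the DE itself:

```python
import math

# Checking Example 2: the claimed solution
#   u(t) = 1 - 4e^{-t} + e^{-2t} + (-11/3 + 2t)e^{t}
# should satisfy u''' + 3u'' + 2u' = 12 t e^t with
#   u(0) = -17/3,  u'(0) = u''(0) = 1/3.

def u(t):
    return 1 - 4 * math.exp(-t) + math.exp(-2 * t) + (-11 / 3 + 2 * t) * math.exp(t)

def du(t):   # u', differentiated by hand
    return 4 * math.exp(-t) - 2 * math.exp(-2 * t) + (-5 / 3 + 2 * t) * math.exp(t)

def d2u(t):  # u''
    return -4 * math.exp(-t) + 4 * math.exp(-2 * t) + (1 / 3 + 2 * t) * math.exp(t)

def d3u(t):  # u'''
    return 4 * math.exp(-t) - 8 * math.exp(-2 * t) + (7 / 3 + 2 * t) * math.exp(t)

# initial conditions
assert abs(u(0) + 17 / 3) < 1e-12
assert abs(du(0) - 1 / 3) < 1e-12 and abs(d2u(0) - 1 / 3) < 1e-12
# the DE itself, at a few sample points
for t in (0.0, 0.7, 2.0):
    assert abs(d3u(t) + 3 * d2u(t) + 2 * du(t) - 12 * t * math.exp(t)) < 1e-9
```

The homogeneous terms 1, e^{−t}, e^{−2t} drop out of L[u] identically, so only the particular solution (−11/3 + 2t)e^t contributes to the right side, as the residual check confirms.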
EXERCISES C
In Exs. 1-4, find a particular solution of the DE specified. In Exs. 1-3, find the solutions satisfying (a) u(0) = 0, u'(0) = 1 and (b) u(0) = 1, u'(0) = 0.
1. u'' = te^t
2. u'' + u = te^t
3. u'' - u = te^t
In each of Exs. 5-8, find a particular solution of the DE specified.
5. u''' + 4u = sin t
6. u''' + 2u'' + 3u' + 6u = cos t
7. u''' + 5u'' + 4u = e^t
8. u'' + iu = sin 2t, i = √−1
In each of Exs. 9-12, find (a) the general solution of the DE specified four exercises earlier, and (b) the particular solution satisfying the initial condition specified.
9. u^(ν)(0) = 0 for ν = 0, 1, 2, 3, 4; u''(0) = 1
10. u(0) = u'(0) = u''(0) = 1
11. u(0) = 10, u'(0) = u''(0) = u'''(0) = 0
12. u(0) = 0, u'(0) = i
6 STABILITY
An important physical concept is that of the stability of equilibrium. An equilibrium state of a physical system is said to be stable when small departures from equilibrium remain small with the lapse of time, and unstable when arbitrarily small initial deviations from equilibrium can ultimately become quite large.
In considering the stability of equilibrium, it is suggestive to think of the independent variable as standing for the time t. Accordingly, one rewrites the DE (2) as

(14)   u^{(n)} + a₁u^{(n−1)} + ⋯ + a_n u = 0

For such constant-coefficient homogeneous linear DEs, the trivial solution u ≡ 0 represents an equilibrium state, and the possibilities for stable and unstable behavior are relatively few. They are adequately described by the following definition (cf. Ch. 5, §7 for the nonlinear case).

DEFINITION. The homogeneous linear DE (14) is strictly stable when every solution tends to zero as t → ∞; it is stable when every solution remains bounded as t → ∞; when not stable, it is called unstable.

Evidently, a homogeneous linear DE is strictly stable if and only if it has a finite basis of solutions tending to zero, and stable if and only if it has a basis of bounded solutions. The reason for this is that every finite linear combination of bounded functions is bounded, as is easily shown. Hence Corollary 2 of Theorem 3 gives algebraic tests for stability and strict stability of the DE (14). Take a basis of solutions of the form t^r e^{λt}, t^r e^{μt} sin νt, t^r e^{μt} cos νt. Such a solution tends to zero if and only if μ < 0, and remains bounded as t → ∞ if and only if μ < 0 or μ = r = 0. This gives the following result.
THEOREM 5. A given DE (14) is strictly stable if and only if every root of its characteristic polynomial has a negative real part. It is stable if and only if every multiple root λ_i [with k_i > 1 in (4)] has a negative real part, and no simple root (with k_i = 1) has a positive real part.
Polynomials all of whose roots have negative real parts are said to be of stable type.† There are algebraic inequalities, called the Routh-Hurwitz conditions, on the coefficients of a real polynomial, which are necessary and sufficient for it to be of stable type. Thus, consider the quadratic characteristic polynomial of the second-order DE (5) of Ch. 2, §1. An examination of the three cases discussed in §2 above shows that the real DE

d²u/dt² + a₁ du/dt + a₂u = 0

is strictly stable if and only if a₁ and a₂ are both positive (positive damping and positive restoring force). That is, when n = 2, the Routh-Hurwitz conditions are a₁ > 0 and a₂ > 0.
To make it easier to correlate the preceding results with the more informal discussion of stability and oscillation found in Ch. 2, §2, we can rewrite the DE discussed there as u'' + pu' + qu = 0. We have just recalled that this DE is strictly stable if and only if p > 0 and q > 0. It is oscillatory if and only if q > p²/4, so that its characteristic polynomial λ² + pλ + q has complex roots (−p ± √(p² − 4q))/2.
In the case of a third-order DE (n = 3), the test for strict stability is provided by the inequalities a_j > 0 (j = 1, 2, 3) and a₁a₂ > a₃. When n = 4, the conditions for strict stability are a_j > 0 (j = 1, 2, 3, 4), a₁a₂ > a₃, and a₁a₂a₃ > a₁²a₄ + a₃².

When n > 2, there are no equally simple conditions for solutions to be oscillatory or nonoscillatory. Thus, the characteristic polynomial of the DE u''' + u'' + u' + u = 0 is (λ + 1)(λ² + 1); hence its general solution is

a cos t + b sin t + ce^{−t}

Unless a = b = 0, this solution will become oscillatory for large positive t, but will be nonoscillatory for large negative t. Other illustrative examples are given in Exercises C.
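For n = 2 the Routh-Hurwitz test is easy to confirm numerically. The Python sketch below (with illustrative coefficient values) computes the roots of λ² + a₁λ + a₂ using cmath and checks that both lie in the open left half-plane exactly when a₁ > 0 and a₂ > 0:

```python
import cmath

# Numerical check of the n = 2 Routh-Hurwitz conditions:
# lam^2 + a1*lam + a2 has both roots with negative real part
# exactly when a1 > 0 and a2 > 0.

def quad_roots(a1, a2):
    d = cmath.sqrt(a1 * a1 - 4 * a2)      # complex sqrt handles both signs
    return ((-a1 + d) / 2, (-a1 - d) / 2)

def strictly_stable(a1, a2):
    return all(r.real < 0 for r in quad_roots(a1, a2))

# positive damping and restoring force -> strictly stable
assert strictly_stable(1.0, 1.0) and strictly_stable(0.1, 25.0)
# a negative coefficient, or zero damping, destroys strict stability
assert not strictly_stable(-1.0, 1.0)
assert not strictly_stable(1.0, -1.0)
assert not strictly_stable(0.0, 1.0)      # pure oscillation: stable, not strictly
```

The undamped case a₁ = 0, a₂ > 0 is the boundary case of Theorem 5: simple roots ±i√a₂ on the imaginary axis give bounded, nondecaying oscillations.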
7 THE TRANSFER FUNCTION
Inhomogeneous linear DEs (13) are widely used to represent electric alternating current networks or filters. Such a filter may be thought of as a "black
t See Birkhoff and MacLane, p. 122. For polynomials of stable type of higher degree, see F. R.
Gantmacher, Applications ofMatnces, Wiley-Interscience, New York, 1959.
7 The Transfer Function
87
box" into which an electric current or a voltage is fed as an input r~t) and out of which comes a resulting output u(t).
Mathematically, this amounts to considering an operator transforming the function r into a function u, which is the solution of the inhomogeneous linear DE (13). Writing this operator as u = F[r], we easily see that L[F[r]] = r. Thus, such an input-output operator is a right-inverse of the operator L.
Since there are many solutions of the inhomogeneous DE (13) for a given input r(t), the preceding definition of F is incomplete: the preceding equations do not define F = L^{−1} unambiguously.

For input-output problems that are unbounded in time, this difficulty can often be resolved by insisting that F[r] be in the class B(−∞, +∞) of bounded functions; in §§7-8, we will make this restriction. For, in this case, for any two solutions u₁ and u₂ of the inhomogeneous DE L[u] = r, the difference v = u₁ − u₂ would have to satisfy L[v] = 0. Unless the characteristic equation p_L(λ) = 0 has pure imaginary roots, this implies v = 0. Hence, in particular, the DE L[u] = r has at most one bounded solution if the DE L[u] = 0 is strictly stable, an assumption which corresponds in electrical engineering to a passive electrical network with dissipation. Moreover, the effect of initial conditions is "transient": it dies out exponentially.
For initial value problems and their Green's functions, it is more appropriate to define F by restricting its values to functions that satisfy u(0) = u'(0) = ⋯ = u^{(n−1)}(0) = 0; this also defines F unambiguously, by Theorem 3.
We now consider bounded solutions of (13) for various input functions, without necessarily assuming that the homogeneous DE is strictly stable.
Sinusoidal input functions are of the greatest importance; they represent alternating currents and simple musical notes of constant pitch. These are functions of the form

A cos (kt + α) = Re {ce^{ikt}},   A = |c|,   α = arg c

A is called the amplitude, k/2π the frequency, and α the phase constant. The frequency k/2π is the reciprocal of the period 2π/k.
Except in the case p_L(ik) = 0 of perfect resonance, there always exists a unique periodic solution of the DE (13), having the same period as the given input function r(t) = ce^{ikt}. This output function u(t) can be found by making the substitution u = C(k)ce^{ikt}, where C(k) is to be determined. Substituting into the inhomogeneous DE (13), we see that L[C(k)ce^{ikt}] = ce^{ikt} if and only if

(15)   C(k) = 1/p_L(ik)

where p_L(λ) is the characteristic polynomial defined by (3).

DEFINITION. The complex-valued function C(k) of the real variable k defined by (15) is called the transfer function associated with the linear, time-independent operator L. If C(k) = ρ(k)e^{−iγ(k)}, then ρ = |C(k)| is the gain function, and γ(k) = −arg C(k) = arg p_L(ik) is the phase lag associated with k.
The reason for this terminology lies in the relationship between the real part of u(t) and that of the input r(t). Clearly,

Re {u(t)} = Re {C(k)ce^{ikt}} = |C(k)| · |c| cos (kt + α − γ)

This shows that the amplitude of the output is ρ(k) times the amplitude of the input, and the phase of the output lags γ = −arg C behind that of the input at all times.

In the strictly stable case, the particular solution of the inhomogeneous linear DE L[u] = ce^{ikt} found by the preceding method is the only bounded solution; hence F[ce^{ikt}] = C(k)ce^{ikt} describes the effect of the input-output operator F on sinusoidal inputs. Furthermore, since every solution of the homogeneous DE (14) tends to zero as t → +∞, every solution of L[u] = ce^{ikt} approaches C(k)ce^{ikt} exponentially.
Example 3. Consider the forced vibrations of a lightly damped harmonic oscillator:

(*)   L[u] = u'' + εu' + p²u = sin kt,   ε ≪ 1

The transfer function of (*) is easily found using the complex exponential trial function e^{ikt}. Since

L[e^{ikt}] = [(p² − k²) + iεk]e^{ikt}

we have C(k) = 1/[(p² − k²) + iεk]. De Moivre's formulas give from this the gain function ρ = 1/[(p² − k²)² + ε²k²]^{1/2} and the phase lag

γ = arctan [εk/(p² − k²)]

The solution of (*) is therefore ρ sin (kt − γ), where ρ and γ are as stated.

One can also solve (*) in real form. Since differentiation carries functions of the form u = a cos kt + b sin kt into functions of the same form, we look for a periodic solution of (*) of this form. An elementary computation gives for u as before:

L[u] = [(p² − k²)a + εkb] cos kt + [(p² − k²)b − εka] sin kt

To make the coefficient of cos kt in (*) vanish, it is necessary and sufficient that a/b = εk/(k² − p²), the tangent of the phase advance (negative phase lag). The gain can be computed similarly; we omit the tedious details.
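The formulas of Example 3 are easy to evaluate numerically. The Python sketch below (with the illustrative choice ε = 0.1, p = 2, not values from the text) computes C(k) = 1/p_L(ik), the gain |C(k)|, and the phase lag −arg C(k), and confirms that the gain peaks near the natural frequency k = p, where the phase lag is exactly π/2:

```python
import cmath
import math

# Transfer function of u'' + eps*u' + p^2 u with p_L(lam) = lam^2 + eps*lam + p^2,
# so that p_L(ik) = (p^2 - k^2) + i*eps*k.  Illustrative values:
eps, p = 0.1, 2.0

def C(k):
    return 1 / ((p * p - k * k) + 1j * eps * k)

def gain(k):
    return abs(C(k))

def phase_lag(k):
    return -cmath.phase(C(k))            # gamma = -arg C(k) = arg p_L(ik)

# gain agrees with the closed form 1/sqrt((p^2 - k^2)^2 + eps^2 k^2)
for k in (0.5, 1.9, 2.0, 3.0):
    assert abs(gain(k) - 1 / math.hypot(p * p - k * k, eps * k)) < 1e-12
# the gain peaks near the natural frequency k = p ...
assert gain(2.0) > gain(1.0) and gain(2.0) > gain(3.0)
# ... where the phase lag is exactly pi/2
assert abs(phase_lag(2.0) - math.pi / 2) < 1e-12
```

Plotting gain(k) and phase_lag(k) against k/2π gives precisely the curves discussed in §8 below for the Nyquist diagram.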
Finally, note that the characteristic polynomial of any real DE (2) can be factored into real linear and quadratic factors

p_L(λ) = Π_{i=1}^{r} (λ + b_i) Π_{j=1}^{s} (λ² + p_jλ + q_j),   r + 2s = n

and all the b_i, p_j, and q_j are positive in the strictly stable case that all roots of p_L(λ) = 0 have negative real parts. Therefore

γ(k) = Σ_{i=1}^{r} arg (b_i + ik) + Σ_{j=1}^{s} arg (q_j + ikp_j − k²)

increases monotonically from 0 to nπ/2 as k increases from 0 to ∞. This is evident since each arg (b_i + ik) increases from 0 to π/2, while arg (q_j + ikp_j − k²) increases from 0 to π, as one easily sees by visualizing the relevant parametric curves (straight line or parabola). Theorem 6 below will prove a corresponding result for complex constant-coefficient DEs.
Resonance. The preceding method fails when the characteristic polynomial p_L(λ) has one or more purely imaginary roots λ = ik₁ (in electrical engineering, this occurs in a "lossless passive network").

Thus, suppose that ik is a root of the equation p_L(λ) = 0 and that we wish to solve the inhomogeneous DE L[u] = e^{ikt}. From the identity (cf. §6)

L[te^{λt}] = p_L(λ)te^{λt} + p_L'(λ)e^{λt}

we obtain, setting λ = ik,

L[te^{ikt}] = p_L'(ik)e^{ikt}

If ik is a simple root of the characteristic equation, then p_L'(ik) ≠ 0. Hence a solution of L[u] = e^{ikt} is u(t) = [1/p_L'(ik)]te^{ikt}. The amplitude of this solution is [1/|p_L'(ik)|]t, and it increases to infinity as t → ∞. This is the phenomenon of resonance, which arises when a nondissipative physical system Σ is excited by a force whose period equals one of the periods of free vibration of Σ.

A similar computation can be made when ik is a root of multiplicity n of the characteristic polynomial, using the identity L[t^n e^{ikt}] = p_L^{(n)}(ik)e^{ikt}, which is proved in much the same way. In this case the amplitude of the solution again increases to infinity.
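The linear growth at resonance can be seen in the simplest case L = D² + 1 (an illustrative example): here p_L(λ) = λ² + 1, ik = i is a simple root, and p_L'(i) = 2i, so u(t) = te^{it}/2i should satisfy L[u] = e^{it} with amplitude t/2. The Python sketch below checks this with a finite-difference second derivative:

```python
import cmath

# Resonance for u'' + u = e^{it}: since p_L(lam) = lam^2 + 1 has the simple
# root lam = i with p_L'(i) = 2i, the solution u(t) = t e^{it} / (2i) has
# amplitude t/2, growing without bound.

def u(t):
    return t * cmath.exp(1j * t) / (2j)

def L_of_u(t, h=1e-4):
    u2 = (u(t + h) - 2 * u(t) + u(t - h)) / (h * h)  # u'' by central differences
    return u2 + u(t)

for t in (0.5, 2.0, 10.0):
    assert abs(L_of_u(t) - cmath.exp(1j * t)) < 1e-5  # L[u] = e^{it}
    assert abs(abs(u(t)) - t / 2) < 1e-12             # amplitude t/2
```

The same pattern, with t^n and p_L^{(n)}(ik) in place of t and p_L'(ik), covers roots of higher multiplicity.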
Periodic Inputs. The transfer function gives a simple way for determining periodic outputs from any periodic input function r(t) in (13). Changing the time unit, we can write r(t + 2π) = r(t). We can then expand r(t) in a Fourier series, getting

(16)   L[u] = a₀/2 + Σ_{k=1}^{∞} (a_k cos kt + b_k sin kt)

or, in complex form,

(16')   2L[u] = Σ_{k=−∞}^{∞} c_k e^{ikt};   c₀ = a₀,   c_k = a_k − ib_k,   c_{−k} = a_k + ib_k   (k > 0)

summed over all integers k. Applying the superposition principle to the Fourier components c_k e^{ikt} of r(t) in (16'), we obtain, as at least a formal solution,

(17)   2u(t) = Σ_{k=−∞}^{∞} [c_k/p_L(ik)] e^{ikt}

provided that no p_L(ik) vanishes. The series (17) is absolutely and uniformly convergent, since 1/p_L(ik) = O(k^{−n}) for an nth-order DE. We leave to the reader the proof and the determination of sufficient conditions for term-by-term differentiability.
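Formula (17) can be tried out numerically. The Python sketch below (an illustrative choice, not from the text: the strictly stable operator L = D² + D + 1 driven by a square wave, truncated at 25 harmonics) divides each complex Fourier component by p_L(ik) and checks, term by term, that L applied to the truncated output reproduces the truncated input:

```python
import cmath
import math

# Periodic response of L = D^2 + D + 1 (strictly stable, so no p_L(ik)
# vanishes) to a square-wave input, via a truncated Fourier series.
# Since L maps e^{ikt}/p_L(ik) to e^{ikt} term by term, the truncated
# output must reproduce the truncated input exactly.

def pL(lam):
    return lam * lam + lam + 1

N = 25  # truncation order; the square wave has only odd harmonics

def r_N(t):
    """Truncated square wave: (4/pi) * sum of sin(kt)/k over odd k <= N."""
    return (4 / math.pi) * sum(math.sin(k * t) / k for k in range(1, N + 1, 2))

def u_N(t):
    """Truncated response: each component c_k e^{ikt} divided by p_L(ik)."""
    total = 0j
    for k in range(1, N + 1, 2):
        c = (4 / math.pi) / (k * 2j)      # coefficient of e^{ikt} in sin(kt)/k
        total += c * cmath.exp(1j * k * t) / pL(1j * k)
        total += c.conjugate() * cmath.exp(-1j * k * t) / pL(-1j * k)
    return total.real                      # conjugate pairs make the sum real

def L_u(t, h=1e-4):
    d1 = (u_N(t + h) - u_N(t - h)) / (2 * h)
    d2 = (u_N(t + h) - 2 * u_N(t) + u_N(t - h)) / (h * h)
    return d2 + d1 + u_N(t)

for t in (0.3, 1.0, 2.5):
    assert abs(L_u(t) - r_N(t)) < 1e-3
```

Because |1/p_L(ik)| = O(k^{−2}) here, the output series converges much faster than the input series, illustrating the smoothing effect of a stable second-order filter.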
EXERCISES D
In Exs. 1-4, test the DE specified for stability and strict stability.
1. u'' + 5u' + 4u = 0
2. u''' + 6u'' + 12u' + 8u = 0
3. u''' + 6u'' + 11u' + 6u = 0
4. u'''' + 4u''' + 4u'' = 0
5. For which n is the DE u^{(n)} + u = 0 stable?
In Exs. 6-9, plot the gain and transfer functions of the operator specified (I denotes the identity operator):
6. D² + 4D + 4I
7. D³ + 6D² + 12D + 8I
8. D² + 2D + 101I
9. D⁴ − I
10. For a strictly stable L[u] = u'' + au' + bu = r(t), calculate the outputs (the responses) to the inputs r(t) = 1 and r(t) = t for a² > 4b and a² < 4b.
*8 THE NYQUIST DIAGRAM
The transfer function C(k) = 1/p_L(ik) of a linear differential equation with constant coefficients L[u] = 0 is of great help in the study of the inhomogeneous DE (13). To visualize the transfer function, one graphs the logarithmic gain ln ρ(k) and phase lag γ(k) as functions of the frequency k/2π. If λ₁, ...,