Differential Equations

It is not easy to solve differential equations, and often it is impossible, but the study is rewarding


A differential equation is an algebraic relation between variables that includes the rates of change of the variables as well as their instantaneous values. It is an excellent way to express many physical laws. Newton's Law, f = mx", is an example. Here the prime stands for differentiation with respect to the independent variable, in this case the time (d/dt), and the double prime means the second derivative of the position coordinate x, which is the acceleration. The constant m is the mass of the particle, and f is the force, which may depend on many things. If f depends on the position x and the velocity x', as well as on the time, we find the differential equation mx" = f(x,x',t). The voltage drop across an inductor of value L is given by v = Li', where i' is the rate of change of the current, and the voltage drop across a capacitance C is v = q/C, where q is the charge on the capacitor. This gives v' = i/C, another differential relation. A circuit of R, L and C in series gives the equation e = Ri + Li' + q/C, or, on differentiating, e' = Li" + Ri' + (1/C)i, a differential equation. The fundamental laws of electromagnetism, Maxwell's Equations, are expressed as differential equations, and differential equations also describe relations in quantum mechanics. In fact, there are few fields of physics and its applications that do not rely on differential equations to express their fundamental relations.

It is no surprise, then, that a study of differential equations has traditionally been part of the education of a physicist or engineer, usually following directly on the elementary calculus course. In calculus, one first finds that the derivatives of elementary functions are again elementary functions, so differentiation can always be carried out by routine methods. Then one encounters the unpleasant fact that the integrals of elementary functions are not necessarily elementary functions. Actually, this is not unpleasant but exciting, showing that integration can lead to new things in a way that differentiation cannot. Integration is the solution of a particularly simple kind of differential equation, one expressible as y' = f(x), to which the solution is y = ∫ f(x)dx + C, where C is an arbitrary constant. Such an integral can always be numerically evaluated, by plotting f(x) on squared paper and counting squares, for example. This gave the procedure the name of quadrature. Computers make this process quite practical. Some of the new things arising this way are Elliptic Integrals, Fresnel Integrals, and other functions useful in one problem or another.
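
To make the idea concrete, here is a minimal numerical quadrature sketch in Python. The trapezoidal rule, the integrand exp(-x^2), and the interval are illustrative choices of mine, not anything prescribed above.

    # Solve y' = f(x), y(a) = y0 by quadrature: accumulate the integral of f
    # with the trapezoidal rule (a refined version of counting squares).
    import math

    def solve_by_quadrature(f, a, b, y0, n=1000):
        h = (b - a) / n
        xs = [a + i * h for i in range(n + 1)]
        ys = [y0]
        for i in range(n):
            ys.append(ys[-1] + 0.5 * h * (f(xs[i]) + f(xs[i + 1])))
        return xs, ys

    # Illustration: y' = exp(-x^2), whose integral defines the error function.
    xs, ys = solve_by_quadrature(lambda x: math.exp(-x * x), 0.0, 2.0, 0.0)
    print(ys[-1])          # about 0.882, i.e. (sqrt(pi)/2)*erf(2)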

Following naturally on this is finding the function y(x) in more complicated cases, and this is the field of differential equations. One still speaks of integrating the differential equation, which means finding an equivalent expression that does not involve derivatives. The possibilities are even greater than for simple integration, so it is no wonder that there is no general method for integrating a differential equation. Indeed, most differential equations do not have solutions that can be expressed in terms of elementary functions (polynomials, exponentials, trigonometric functions and the like). The method of attack is to attempt to transform a given equation into one whose solution is known, or which yields to a standard procedure. This is usually done by a substitution for the dependent or independent variable, if elementary algebra does not suffice. In this paper, the methods for integrating differential equations will be reviewed. Although some explanation will be given, it will usually not be enough for the unprepared reader to be able to use the method. For further explanation, please refer to any good textbook on differential equations.

A class of equations that is often soluble is the first-order, first-degree differential equation. This means that only the derivative dy/dx occurs, and only to the first power. A quadrature is an example, so the first attempt is to see whether the equation can be put in a form in which each variable appears only with its own differential; such an equation is said to be separated, and it can be solved by quadrature. For example, f(x)g(y)dy = a(x)b(y)dx can be written [g(y)/b(y)]dy = [a(x)/f(x)]dx, so the solution is found by integrating both sides, which can always be done, at least numerically. Other equations of the form N(x,y)dy + M(x,y)dx = 0 are separable if they can be reduced to this form. If M and N are linear functions, substituting the new variables u = M and v = N reduces the equation to one that can be separated. Similarly, if M and N are homogeneous functions of the same degree in x and y, the new variable u = x/y will separate the equation. If the partial derivative of N with respect to x equals the partial derivative of M with respect to y, then the equation is equivalent to dΦ = 0, and the function Φ(x,y) can be found by quadrature. Such an equation is called exact. An equation can sometimes be made exact through multiplication by an integrating factor. The availability of all these methods makes many equations of this kind soluble, but not all.
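
Computer algebra can carry out such separations and integrations symbolically. The sketch below uses sympy on the separable equation y' = xy, an equation chosen only for illustration.

    # The separable equation y' = x*y: separating gives dy/y = x dx, and
    # integrating both sides gives log y = x^2/2 + constant.
    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')

    eq = sp.Eq(y(x).diff(x), x * y(x))
    print(sp.dsolve(eq, y(x)))         # Eq(y(x), C1*exp(x**2/2))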

Another class of soluble equations are the linear equations. Linearity means that the dependent variable and its derivatives appear only to the first power. A general first-order linear equation, y' + P(x)y = Q(x), has an integrating factor of exp(∫P(x)dx), so the solution of any equation of this type can be found by quadrature. Multiplying by the factor makes the left-hand side equal to d[y exp(∫P(x)dx)]/dx, as you can easily check. Linear equations have the important property that the solutions of the homogeneous equation (no Q(x) term) are superposable, meaning that any linear combination of solutions is also a solution. Linear equations with constant coefficients, such as y" + by' + cy = 0, have solutions that are elementary functions, either polynomials or exponentials. For example, y" - y = 0 has solutions exp(x) and exp(-x). Since these are linearly independent [i.e., a exp(x) + b exp(-x) = 0 only if a = b = 0], a general solution is y(x) = A exp(x) + B exp(-x). It is general because a solution of an nth-order equation containing n arbitrary constants can be made to agree with any solution of the equation whatsoever by choosing the constants appropriately.
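
The integrating-factor recipe is easy to check by machine. In this sympy sketch the choices P = 2x and Q = x are purely illustrative.

    # Solve y' + P(x)*y = Q(x) by the integrating factor exp(integral of P).
    import sympy as sp

    x, C = sp.symbols('x C')
    P, Q = 2 * x, x

    mu = sp.exp(sp.integrate(P, x))                 # integrating factor
    sol = (sp.integrate(mu * Q, x) + C) / mu        # y = [int(mu*Q) + C]/mu
    print(sp.simplify(sol))                         # C*exp(-x**2) + 1/2
    print(sp.simplify(sol.diff(x) + P * sol - Q))   # 0, so the equation is satisfied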

The linearity means that we can find the general solution to any equation with a Q(x) on the right-hand side, an inhomogeneous equation, provided we can find any solution whatsoever to the equation, called a particular integral. This particular integral is simply added to the general solution of the homogeneous equation. If Q(x) is made up of elementary functions, and has a limited number of different derivatives, there are easy ways to find a particular solution. For more difficult cases, Lagrange's method of variation of parameters is useful. It is fortunate that so many methods are available for linear differential equations, because they appear quite often in practice. For this reason, a differential equations text usually emphasizes first-order, first-degree equations, and linear equations with constant coefficients, for which many methods are available. This may give the false impression that differential equations surrender easily to general methods, or that these special methods are applicable to a wider variety of equations than they really are.
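
As a small illustration, sympy solves the inhomogeneous equation y" - y = x (my own example, not one from the text); the result displays the general solution of the homogeneous equation plus the particular integral -x.

    # General solution of an inhomogeneous linear equation = general solution
    # of the homogeneous equation + one particular integral.
    import sympy as sp

    x = sp.symbols('x')
    y = sp.Function('y')

    eq = sp.Eq(y(x).diff(x, 2) - y(x), x)
    print(sp.dsolve(eq, y(x)))     # Eq(y(x), C1*exp(-x) + C2*exp(x) - x)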

Numerical methods, which are very useful as a practical matter, are available for differential equations. They can be applied to equations expressed in analytical terms, but are essential when only numerical values are available, not analytic expressions. These methods give only numbers, of course, so they are not useful for drawing general conclusions about an equation. Only a very restricted type of equation is easily solved numerically, one of the form y' = f(x,y) that is solved for the derivative. Higher-order equations can be solved as systems of first-order equations, and special methods have been developed for equations of the form y" = f(x,y,y'). These kinds of equations are very often met with in practice, but by no means represent a general type of equation. Numerical methods generally proceed by finite steps of the independent variable. Selection of the step length, and control of computational error, are very important here, as in all numerical methods. Numerical methods were very tedious when they had to be carried out by hand, but the computer has rendered them easy and powerful. It has also made possible a return to simpler methods that are more general, stable and robust than the many methods developed later to reduce the computational overhead.
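
As a sketch of the reduction to a first-order system, the illustrative equation y" + y = 0 becomes u0' = u1, u1' = -u0, which a library solver handles directly. The solver (scipy's solve_ivp), the equation, and the interval are my own choices.

    # Solve y" + y = 0, y(0) = 0, y'(0) = 1 as a first-order system.
    import numpy as np
    from scipy.integrate import solve_ivp

    def system(x, u):
        return [u[1], -u[0]]                  # u[0] = y, u[1] = y'

    sol = solve_ivp(system, (0.0, np.pi), [0.0, 1.0], dense_output=True)
    print(sol.sol(np.pi / 2)[0])              # close to 1, since y = sin(x)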

One interesting method is Picard's iterative method for the equation y' = f(x,y). By assuming a trial function y0, one finds a first approximation y1 by integrating y1' = f(x,y0). Then, y0 is replaced by y1, and a second approximation is given by integrating y2' = f(x,y1). This procedure is repeated until sufficient accuracy is obtained (when there is little change in the new approximation). The iteration can be done by computer, and the solution can be observed as it converges to the answer. The method can be used with numerical values as well as with analytic expressions.
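
Here is a minimal symbolic sketch of Picard's iteration using sympy. The equation y' = y with y(0) = 1 is an illustrative choice; its iterates are the partial sums of the series for exp(x).

    # Picard iteration for y' = f(x, y), y(x0) = y0, carried out symbolically.
    import sympy as sp

    x, t = sp.symbols('x t')

    def picard(f, x0, y0, iterations):
        yk = sp.sympify(y0)                    # trial function: the constant y0
        for _ in range(iterations):
            yk = y0 + sp.integrate(f(t, yk.subs(x, t)), (t, x0, x))
        return sp.expand(yk)

    print(picard(lambda t, y: y, 0, 1, 4))     # 1 + x + x**2/2 + x**3/6 + x**4/24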

If f(x,y) is differentiable to all orders, one can find the derivatives at any point (x0, y0), and use the result to expand the function y(x) in a Taylor series about x0. This does not give a solution in closed terms (unless the higher derivatives vanish), and the result is valid only in the neighborhood of x0. However, this method can be quite useful for making short steps in the independent variable x, and it appears in the derivation of other numerical methods.
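
The following sympy sketch forms such a Taylor step by repeatedly taking the total derivative of f along the solution; the equation y' = x + y and the point (0, 1) are illustrative choices.

    # Taylor step for y' = f(x, y): compute y'', y''', ... at (x0, y0) by
    # differentiating f totally with respect to x, then sum the Taylor polynomial.
    import sympy as sp

    x, y, h = sp.symbols('x y h')
    f = x + y                                   # the right-hand side f(x, y)

    derivs = [f]                                # y', then y'', y''', y''''
    for _ in range(3):
        d = derivs[-1]
        derivs.append(sp.diff(d, x) + sp.diff(d, y) * f)

    x0, y0 = 0, 1
    step = y0 + sum(d.subs({x: x0, y: y0}) * h**(k + 1) / sp.factorial(k + 1)
                    for k, d in enumerate(derivs))
    print(sp.expand(step))      # 1 + h + h**2 + h**3/3 + h**4/12, approximating y(h)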

An excellent method based on Taylor series steps is the famous one of Runge-Kutta. This method allows steps to be made one after the other, does not involve the calculation of higher derivatives or quadratures, and is very suitable for computers. The simplest formulas, called second order, for solving y' = f(x,y) are as follows: y_{n+1} = y_n + (k_1 + k_2)/2, where k_1 = hf(x_n, y_n) and k_2 = hf(x_n + h, y_n + k_1). The step length is h, and the error is proportional to h^2. This formula gives much better results than the lowest-order approximation y_{n+1} = y_n + hf(x_n, y_n). Because of roundoff error, the accuracy cannot be indefinitely increased by decreasing h. If you are working on a computer, there is little reason not to use the fourth-order formulas that give greatly increased accuracy.
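
The second-order step quoted above translates directly into code. In this sketch the test problem y' = -y, y(0) = 1 (exact solution exp(-x)) and the step length are illustrative choices.

    # Second-order Runge-Kutta: y_{n+1} = y_n + (k_1 + k_2)/2.
    import math

    def rk2(f, x0, y0, h, steps):
        x, y = x0, y0
        for _ in range(steps):
            k1 = h * f(x, y)
            k2 = h * f(x + h, y + k1)
            y += (k1 + k2) / 2
            x += h
        return y

    approx = rk2(lambda x, y: -y, 0.0, 1.0, 0.1, 10)   # estimate of y(1)
    print(approx, math.exp(-1.0))                      # about 0.3685 vs 0.36788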

Several methods are available that improve the accuracy of estimates obtained with a large h while involving less work than using a smaller step. Adams' method is one example. It uses several known values, perhaps approximate, to predict the value one step further on. Milne's method indicates the accuracy of the prediction as well as refining it by a corrector formula. These methods are less necessary now that great computational power is available. Repeating a calculation with a smaller step length indicates the accuracy of the solution, and is a guide to the appropriate step length. Numerical methods should not be relied upon unless there is some estimate of the error.
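
A sketch of the step-halving check, using the lowest-order (Euler) formula on the illustrative problem y' = -y, y(0) = 1; the change on halving h gives a rough idea of the error remaining.

    # Repeat the calculation with half the step; the difference between the two
    # results is a rough estimate of the remaining error.
    import math

    def euler(f, x0, y0, h, steps):
        x, y = x0, y0
        for _ in range(steps):
            y += h * f(x, y)
            x += h
        return y

    f = lambda x, y: -y
    coarse = euler(f, 0.0, 1.0, 0.1, 10)       # y(1) with h = 0.1
    fine = euler(f, 0.0, 1.0, 0.05, 20)        # y(1) with h = 0.05
    print(abs(fine - coarse))                  # estimated error, about 0.0098
    print(abs(fine - math.exp(-1.0)))          # true error of the finer result, about 0.0094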

Another method of approach for equations that cannot be integrated analytically is to assume a power series solution and then to determine the coefficients in it. A very wide class of functions can be expressed in power series (analytic functions), so this method has some claim to generality. However, it is easy to use for only one type of equation, the linear equation. Since the solutions of a linear equation with constant coefficients, as well as those of the general linear equation of first order, can be obtained in closed form, this method is usually applied to linear equations with variable coefficients, many of which arise in physics. Frobenius elaborated this method, which works on many of the equations arising from physics and analysis.
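
A sketch of the procedure in sympy: substitute a series with undetermined coefficients and equate powers of x. Airy's equation y" - xy = 0 is an illustrative choice; a0 and a1 remain arbitrary and multiply the two independent series.

    # Power-series solution of y'' - x*y = 0 by undetermined coefficients.
    import sympy as sp

    x = sp.symbols('x')
    N = 8
    a = sp.symbols('a0:%d' % N)                # coefficients a0 .. a7
    y = sum(a[k] * x**k for k in range(N))     # truncated trial series

    residual = sp.expand(y.diff(x, 2) - x * y)
    eqs = [residual.coeff(x, k) for k in range(N - 2)]
    sol = sp.solve(eqs, a[2:])                 # a0 and a1 stay arbitrary
    print(sp.expand(y.subs(sol)))
    # a0 + a1*x + a0*x**3/6 + a1*x**4/12 + a0*x**6/180 + a1*x**7/504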

Unfortunately, all you get is a power series, usually an infinite one. It is remarkably difficult to find the properties of a function knowing only its power series, so solution in power series is less rewarding than it might seem. It can, however, show that a solution is the same as a function known in other ways that are more convenient for study. Actually, these other ways generally show the differential equation that is satisfied by the function, so one never really has to solve the differential equation at all. An example is the Bessel functions, J_n(x), which satisfy the equation y" + y'/x + (1 - n^2/x^2)y = 0. These functions can be defined by recursion relations, generating functions or definite integrals, from which their properties are more easily derived than from the power series. Of course, the power series is useful; it is just not a good way to find the general properties of the function, such as its roots or its integrals.

In the case of the Bessel functions, we note that for rather large x the equation becomes y" + y = 0, for which the solutions are sin x and cos x. This shows that for large x the functions will be very much like the trigonometric functions, but will do strange things as x approaches 0. Approximations such as this one are very important in studying equations with more or less complicated solutions, often allowing dominant behavior to be separated out as a factor. In many important cases, the solution is the product of such a dominant behavior and a polynomial. The polynomial is easily found by Frobenius' method from the simplified differential equation. As usual, the general properties of these polynomials are better found in other ways.
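
This trigonometric behavior can be seen numerically. The sketch below compares scipy's J0 with the standard large-x asymptotic form sqrt(2/(pi*x)) cos(x - pi/4); the sample points are arbitrary.

    # Compare J0(x) with its large-x asymptotic approximation.
    import numpy as np
    from scipy.special import j0

    xs = np.array([1.0, 5.0, 20.0, 50.0])
    asymptotic = np.sqrt(2.0 / (np.pi * xs)) * np.cos(xs - np.pi / 4.0)
    for xi, exact, approx in zip(xs, j0(xs), asymptotic):
        print(f"x = {xi:5.1f}   J0 = {exact: .6f}   asymptotic = {approx: .6f}")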

Certain equations of particular form have peculiar methods of solution. A Cauchy equation is a linear equation in which each coefficient contains the factor x^n, where n is the order of the derivative in the term. It is solved by the substitution x = e^v, where v is a new independent variable, which makes it an equation with constant coefficients. Clairaut's equation is y = y'x + f(y'). Its general solution y = Cx + f(C) is found by substituting C, an arbitrary constant, for y'. The family of curves with C as parameter has an envelope, found by eliminating C between x + f'(C) = 0 and the general solution. Many second-order equations can be reduced to first-order equations. The equation y" = f(y) can be reduced to a first-order equation in y' by multiplying each side by 2y'. Then (y')^2 = 2∫f(y)dy. Taking the square root gives a separated equation that can be solved by quadrature. An equation f(x,y',y") = 0 can be reduced to f(x,p,p') = 0 by the substitution p = y', and then solved. The equation f(y,y',y") = 0 can be expressed as f(y,p,p(dp/dy)) = 0. The equation for the catenary curve is of the form f(y',y") = 0, and can be solved by substituting p = y', solving first for p, and then finding y by quadrature. An equation of the first order, but not of the first degree, can be integrated if it can be solved for p, y or x.
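
The envelope construction for Clairaut's equation can be followed with sympy; the choice f(p) = p^2 below is purely illustrative.

    # Clairaut's equation y = y'x + f(y') with f(p) = p**2.
    import sympy as sp

    x, C = sp.symbols('x C')
    f = lambda p: p**2

    general = C * x + f(C)                              # general solution y = Cx + f(C)
    C_env = sp.solve(x + sp.diff(f(C), C), C)[0]        # eliminate C using x + f'(C) = 0
    envelope = sp.simplify(general.subs(C, C_env))
    print(envelope)                                     # -x**2/4, the singular solution

    yp = sp.diff(envelope, x)                           # check it satisfies the equation
    print(sp.simplify(envelope - (yp * x + f(yp))))     # 0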

All our discussions have concerned a function y(x) of a single independent variable x. If we have a function f(x,y) of two or more variables, a differential equation for it will involve rates of change in both x and y, which are partial derivatives. This makes the equations even more difficult to solve, but leads to powerful techniques such as Fourier series (to superimpose solutions of linear equations) and eigenfunction expansions. One important method, separation of variables, leads to ordinary differential equations of the kind treated here.

Considering all the methods we have discussed, it is evident that only a few kinds of equations can be solved by general methods (and these are predominantly linear equations). Differential equations are so hard to solve because their solutions give rise to a large number of special functions, and are not limited to the familiar elementary functions. Every equation is a new problem that must be attacked with shrewdness and craft. The principal tool is the substitution, which transforms the equation into another form that may be more tractable. A differential equation may be thought of as expressing some general law, and its solution as a concrete realization of that law. This is a very important connection that is at the heart of physics. Solving a differential equation is like knowing all the consequences of the law, so it is no wonder that it is so hard. For this reason, an understanding of differential equations is a valuable and satisfying mental acquisition.

References

A. L. Nelson, K. W. Folley and M. Coral, Differential Equations (Boston: D. C. Heath, 1952). An example of a good undergraduate text. There has been nothing new since then. There are many books on the partial differential equations of physics, which is an extensive field.

W. H. Press et al., Numerical Recipes in C, 2nd ed. (Cambridge: Cambridge University Press, 1992), Chapter 16. Good numerical methods require care. Excellent advice is found here.



Composed by J. B. Calvert
Created 2 March 2001