Complex Variables and Analytic Functions

A brief synopsis of the theory of analytic functions


Introduction

This page discusses a fascinating branch of mathematics called analysis, where limit processes of great power illuminate the mathematics which, among other things, is applied to investigations of the natural world. When we hear of mathematics these days, especially in connection with schools, what is really under discussion is logistic, the ancient branch of mathematics concerned with numerical calculations. It is mathematics, perhaps, but far removed from what we will be talking about here. It is as if working with machinery were called physics. Mathematics is not essentially about useful procedures, though mathematics can be very useful indeed (just like physics), but about understanding. Understanding is the most enjoyable and satisfying part of mathematics, and something hidden from all those who only know arithmetic and percentages.

A very similar situation occurs in geometry, where the essential thing is not knowing how to draw a circle, but how to prove theorems rigorously beginning from postulates. Algebra was added to geometry from, say, the 15th century, in the western mathematical tradition, and supplemented it very well. Infinitesimal processes were added in the 17th century through Newton and Leibniz, creating the new field of analysis, which proved unbelievably powerful in solving mathematical problems. Geometry, in many ways, still adds clarity and heuristic to the algebraic processes, so the two work hand-in-hand in our minds. Geometry and analysis have both vanished from our public schools, except in sterile and jejune forms.

Many of the important results and formulas of mathematics bear the names of famous mathematicians. These names honor the memory of these contributors, and usually express authorship and priority. However, in many cases prior discoveries were unappreciated or ignored until their importance became manifest. We should not forget these lonely precursors, but their supporters should not struggle for the recognition that also rightly belongs to those who brought the results to wide appreciation. Descartes' Law should be called Snel's Law in justice, because Descartes only named after himself the result of another that he knew quite well, but the complex plane should be the Gauss plane. J. R. Argand made a similar suggestion for representing what were then called "imaginary" numbers in 1806, not knowing that Gauss had already done this in 1797.

For information on the people that are mentioned here, the reader should consult a good history of mathematics. Here, I only want to put them in chronological perspective and hint at their contributions. Complex numbers were first conceived by Girolamo Cardano (Jérôme Cardan) (1501-1576) so that algebraic equations would always have solutions. Isaac Newton (1642-1727) and G. W. Leibniz (1646-1716) conceived the infinitesimal processes of the calculus in algebraic form, finally bringing to fruition the methods so painfully employed by Archimedes long before. Brook Taylor (1685-1731) and Colin Maclaurin (1698-1746) of the next generation developed the new mathematics, together with many others who are not mentioned here. The 18th century was a time of great advances, by mathematicians such as Leonhard Euler (1707-1783) and Jean le Rond d'Alembert (1717-1783), principally in Germany, France and Italy. The end of the 18th century and the first half of the 19th contained giants such as C. F. Gauss (1777-1855), Augustin Cauchy (1789-1857) and G. Dirichlet (1805-1859). Finally, the subject was topped off by the masterful work of Karl Weierstrass (1815-1897) and B. Riemann (1826-1866). As the classical music of this period has never been surpassed, neither has the mathematics.

At first, there were only rational numbers, of the form n/m, where n and m are integers. Most people today only work with such numbers, and find them completely satisfactory. However, the simple equation x^2 = 2 has no solution in rational numbers. This was corrected in ancient times by the introduction of the irrational or real numbers, which could be approximated as closely as desired by rational numbers, but never exactly. Indeed, expressions containing irrational roots were called surds, from the Latin surdus, "mute," the same root that gives us "absurd." However, it is possible to buy close to √2 gallons of beer. The equation x^2 = -2 likewise has no solution in rational or real numbers. The subject did not come up in ancient times, since without algebra no one would conceive of such a problem. However, with the rise of algebra, even this equation demanded a solution, since it could be formulated by applying the rules consistently. Cardan called the solution x = √2·√(-1) imaginary, since you could not buy √(-1) gallons of beer. However, it worked well enough using the rules of algebra. These solutions had no practical value, but they pleased mathematicians.

Just what was going on here was not clear for many years. It was Gauss, about 1831, who finally saw the key. He renamed imaginaries as complex numbers, pairs of real numbers (x,y) that had a graphic representation in analytic geometry. The real part was x, the imaginary part was y, and they obeyed the same rules of arithmetic as the real numbers, with suitable definitions for sum, difference, product and quotient. These processes had a geometric interpretation in the complex or Gaussian plane, which is now so familiar to us. Gauss first used the complex plane about 1797, the same year as a Danish surveyor called Caspar Wessel. There was no real reason for keeping √-1, but because of its utility in algebraic manipulations, it became represented by "i", the imaginary unit, and a complex number was x + iy, just as it had been as a surd. Complex numbers were just another extension of the number concept, as reals had been before them. Hamilton's quaternions (which led to vector analysis) are a further extension.
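
To make Gauss's point of view concrete, here is a minimal Python sketch (the function names and example are my own, chosen only for illustration) that treats a complex number as nothing more than a pair of reals and spells out the rules for the four operations:

    # A complex number as a pair (x, y) of reals, with the usual rules
    # for sum, difference, product and quotient.  Illustration only.
    def add(a, b):
        return (a[0] + b[0], a[1] + b[1])

    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1])

    def mul(a, b):
        # (x + iy)(u + iv) = (xu - yv) + i(xv + yu)
        return (a[0]*b[0] - a[1]*b[1], a[0]*b[1] + a[1]*b[0])

    def div(a, b):
        d = b[0]**2 + b[1]**2              # |b|^2, assumed nonzero
        return ((a[0]*b[0] + a[1]*b[1]) / d, (a[1]*b[0] - a[0]*b[1]) / d)

    i = (0.0, 1.0)                         # the "imaginary unit" as a pair
    print(mul(i, i))                       # (-1.0, 0.0): the pair whose square is -1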

An interesting result was discovered by de Moivre. Using the formulas for the trig functions of the sum and difference of two angles, it is easy to prove that (cos θ + i sin θ)(cos θ' + i sin θ') = cos(θ + θ') + i sin(θ + θ'). This shows that cos θ + i sin θ obeys the laws of exponents under multiplication, and suggests, in fact, that e^(iθ) = cos θ + i sin θ, which is easily verified by expansion in power series. Now it is possible to solve the equation x^n = 1. This result generalizes to de Moivre's formula (cos θ + i sin θ)^n = cos nθ + i sin nθ. If nθ = 2π, or θ = 2π/n, then [cos(2π/n) + i sin(2π/n)]^n = 1, so x = cos(2π/n) + i sin(2π/n) is a root of x^n = 1. When complex numbers are admitted, every algebraic equation possesses a root.
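
A short numerical check of this (a sketch only; the choice n = 7 is arbitrary) confirms that each of the numbers cos(2πk/n) + i sin(2πk/n) satisfies x^n = 1 to machine accuracy:

    import cmath

    n = 7
    for k in range(n):
        x = cmath.exp(2j * cmath.pi * k / n)   # cos(2πk/n) + i sin(2πk/n)
        print(k, x**n)                         # each is 1, up to rounding error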

Power Series

A polynomial of order n in the real variable x has the form a_0 + a_1x + a_2x^2 + ... + a_nx^n. It is a continuous function of x, finite for any finite value of x, and infinite only for x = ±∞. Its derivatives f^(k)(x) of all orders exist and are continuous, though all but the first n are identically zero. It is a perfect model of the functions that are called entire, a very restricted class indeed.

Maclaurin showed that the polynomial could be generalized to a function represented by an infinite series a_0 + a_1x + a_2x^2 + ..., where a_n = f^(n)(0)/n!. If the derivatives are all bounded by a common constant, then the series converges to f(x) everywhere, and such a function has the same properties as the polynomials, in that it is an entire function. Remarkably, the values of the function everywhere are determined by the values of the function and its derivatives at the origin. Expand any polynomial in the Maclaurin series, and you will find that the polynomial itself is obtained. By convergence of an infinite series we mean that the sum of enough terms is as close to a fixed value A as we may require; the partial sums approach A as a limit, and A is called the sum of the series.
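
The remark about polynomials is easy to check by machine. The little Python sketch below (the names and the sample polynomial are arbitrary choices) computes a_n = f^(n)(0)/n! from a coefficient list and simply recovers the original coefficients:

    from math import factorial

    coeffs = [2, 3, 0, -1]          # f(x) = 2 + 3x - x^3

    def deriv(c):
        # coefficient list of the derivative polynomial
        return [k * c[k] for k in range(1, len(c))]

    maclaurin, c = [], coeffs
    for n in range(len(coeffs)):
        maclaurin.append(c[0] / factorial(n))   # a_n = f^(n)(0)/n!
        c = deriv(c)

    print(maclaurin)                # [2.0, 3.0, 0.0, -1.0]: the polynomial itself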

An example of such a function is the one with f(0) = f^(n)(0) = 1 for all n. This is, of course, the function e^x that is equal to its own derivative. The exponential is an excellent example of this larger, more general class of functions represented by infinite power series. A power series is a very special series; an arbitrary infinite series of functions does not, in general, have all of its desirable properties. The fact that we are dealing with powers, whose derivatives and integrals are easy to express, makes all the difference. Such series can always be differentiated and integrated term by term, a property that certainly does not apply in general.
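
As a simple illustration (a sketch, not a careful numerical method; the argument and number of terms are arbitrary), the partial sums of the Maclaurin series for e^x already agree with the library exponential after a modest number of terms:

    from math import factorial, exp

    x = 2.0
    partial = sum(x**n / factorial(n) for n in range(20))   # Maclaurin series of e^x
    print(partial, exp(x))                                   # both about 7.389056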

Taylor showed that the expansion could be about any point x = a, not just the origin, generalizing the Maclaurin series. So far as theory goes, both series are on the same footing, and one might as well work with the Maclaurin series where possible. Nothing new will be discovered with the more general Taylor series.

Another series is f(x) = 1 + x + x^2 + x^3 + ..., which may be recognized as the expansion of f(x) = 1/(1-x) by Newton's binomial theorem. The series does not terminate in this case, so we obtain an infinite power series. The derivatives are not bounded, so this series does not converge everywhere and does not represent an entire function. It converges on the interval -1 < x < 1; at both endpoints the terms do not approach zero, so the series diverges there. This is the geometric series, a very useful series in the theory of infinite series. We can compare the absolute values of the terms of another series with its terms, and if they are smaller, then the other series surely converges.
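
The behavior near the ends of the interval is easy to see numerically; in this sketch (the sample values of x are arbitrary) the partial sums settle down for |x| < 1 but oscillate or grow without bound at x = -1 and x = +1:

    def partial_sums(x, N=12):
        s, out = 0.0, []
        for n in range(N):
            s += x**n                 # 1 + x + x^2 + ...
            out.append(round(s, 4))
        return out

    print(partial_sums(0.5))          # approaches 1/(1 - 0.5) = 2
    print(partial_sums(-1.0))         # 1, 0, 1, 0, ...: no limit
    print(partial_sums(1.0))          # 1, 2, 3, ...: grows without bound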

If a_k is the general coefficient in a power series, then Cauchy showed that if lim |a_k|^(1/k) = 1/R as k approaches ∞, then the series converges on the interval -R < x < R. If the limit is zero, R is infinite and we get an entire function like the exponential; use Stirling's approximation to the factorial, n! ≈ n^(n+1/2)e^(-n)√(2π), to prove this statement for the exponential series. For the geometric series, R = 1, as we have seen. Cauchy's test gives no information about the ends of the interval.
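
Cauchy's test can be tried numerically by taking k large, as in this rough sketch (k = 1000 is an arbitrary choice, and the limit is only approximated):

    from math import exp, lgamma

    k = 1000
    # Geometric series: a_k = 1, so |a_k|^(1/k) = 1 and R = 1.
    print(1.0 ** (1.0 / k))
    # Exponential series: a_k = 1/k!, so |a_k|^(1/k) = exp(-ln(k!)/k).
    print(exp(-lgamma(k + 1) / k))    # about 0.003, tending to 0, so R is infinite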

So far we have worked with the real variable x, but it is clear that everything follows the same way if x is replaced by the complex variable z. Now R is the radius of a circle within which the series converges. It can be shown that if S < R, then within and on a circle of radius S the series converges uniformly to f(z), f(z) and all of its derivatives exist and are continuous, and the series can be integrated and differentiated term by term. Such a function is called analytic. The existence of a derivative f'(z) of a function of a complex variable says far more about the function than the existence of the derivative of a real function. The limit [f(z+dz) - f(z)]/dz must exist and have the same value for any dz, that is, for dz approaching zero from any direction. Weierstrass showed that a function represented by a power series has precisely this property. He based his rigorous theory of complex functions on the properties of power series, an obvious generalization of the theory for real functions.
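
The directional requirement is easy to exhibit numerically. In the sketch below (the step size and sample point are arbitrary), the difference quotients of the analytic function z^2 agree from every direction, while those of the complex conjugate, which is not analytic, do not:

    z, h = 1.0 + 1.0j, 1e-6
    directions = (h, h * 1j, h * (1 + 1j) / abs(1 + 1j))

    for dz in directions:
        q_analytic = ((z + dz)**2 - z**2) / dz                      # tends to 2z from every direction
        q_conjugate = ((z + dz).conjugate() - z.conjugate()) / dz   # depends on the direction of dz
        print(complex(round(q_analytic.real, 4), round(q_analytic.imag, 4)),
              complex(round(q_conjugate.real, 4), round(q_conjugate.imag, 4)))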

We just mentioned the term uniform convergence. This means that if we require that |f(z) - S_n| < ε for all n > m, then one value of m will do for the whole region. This permits us to prove all the nice consequences of representation by a convergent power series that we have just mentioned. One way to show this is with the Weierstrass M-test, which is to find a convergent comparison series of positive constants M_k whose terms are greater than the absolute values of the corresponding terms of the given power series throughout the region. In the case above, we take M_k = S^k, so that the M-series is a convergent geometric series.

Line Integrals

Cauchy based his theory of analytic functions on representation by a line integral around a closed curve, instead of by power series. Here, there was no analogy with functions of a real variable to support the theory. The representation by line integrals leads directly and simply to many very useful applications, so Cauchy's theory is the more popular.

The elegant fundamental theorem of Cauchy's theory is ∫ f(z)dz = 0, where the integral is taken around a closed curve C surrounding a region in which f'(z) exists. The positive direction along the boundary keeps the region in question to the left. The function f(z) is said to be analytic within C, and exactly the same properties flow from this as in Weierstrass's theory, as we shall see. Originally, Cauchy had to assume that f'(z) not only existed, but was continuous, so that he could use Green's theorem to prove that the integral was zero. This flaw was removed later by Goursat. Then the continuity of f'(z) followed from the theorem, instead of being a precondition, which put the theory on as good a basis as Weierstrass's.
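
A crude numerical check of the theorem (a sketch only; the analytic function e^z and the unit circle are arbitrary choices) parametrizes the circle as z = e^(it) and sums f(z)dz around it:

    import cmath

    N = 2000
    dt = 2 * cmath.pi / N
    integral = 0j
    for k in range(N):
        z = cmath.exp(1j * k * dt)      # point on the unit circle
        dz = 1j * z * dt                # dz = i e^(it) dt
        integral += cmath.exp(z) * dz   # f(z) dz with f(z) = e^z, analytic inside
    print(abs(integral))                # essentially zero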

The next step is to obtain an integral formula for f(z) and its derivatives, where z is any point in a region where f(z) is analytic, and the integration is carried out on the boundary of the region. The integral (1/2πi)∫ f(z')dz'/(z' - z) = f(z) if we remove z' = z from the region of integration by drawing a small circle around it and joining the circle to the periphery by a line. We imagine the circle shrunk tightly around z' = z, and the joining lines all but coinciding. Now the boundary C includes the integral along the joining line, traversed back and forth so that it cancels, plus the integral around the small circle at z' = z, which gives exactly 2πif(z) (on the small circle, dz'/(z' - z) = idθ). We have a formula that we can differentiate with respect to z as many times as we like, and all these derivatives not only exist, but are continuous. This is what Weierstrass proved from the uniform convergence of power series.
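
The formula can be verified numerically in the same crude way (again a sketch; f = e^z, the unit circle, and the interior point are arbitrary choices):

    import cmath

    f = cmath.exp
    z = 0.3 + 0.2j                       # a point inside the unit circle
    N = 2000
    dt = 2 * cmath.pi / N
    integral = 0j
    for k in range(N):
        zp = cmath.exp(1j * k * dt)      # z' on the unit circle
        integral += f(zp) / (zp - z) * (1j * zp * dt)
    print(integral / (2j * cmath.pi))    # agrees with f(z) below
    print(f(z))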

Since we now have expressions for all the derivatives, we can carry out a Taylor expansion of f(z) about any point z = a, and finally show the equivalence of the Weierstrass and Cauchy theories of analytic functions. This is a very pretty and profound piece of reasoning, one of the pinnacles of analysis. To prove Taylor's theorem, we use the factoring x^n - y^n = (x - y)(x^(n-1) + x^(n-2)y + x^(n-3)y^2 + ... + xy^(n-2) + y^(n-1)). If you have forgotten this, it follows from synthetic division of x^n - y^n by x - y. We set x = z' - a and y = z - a, so x - y = z' - z, and solve for 1/(z' - z), which we put into Cauchy's Integral Formula f(z) = (1/2πi)∫ f(z')dz'/(z' - z), getting n terms and a remainder term, which can be shown to approach 0 as n approaches ∞. The result is just the Taylor series.
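
The same machinery recovers Taylor coefficients directly, since a_n = f^(n)(a)/n! = (1/2πi)∫ f(z')dz'/(z' - a)^(n+1). A quick sketch for e^z about a = 0 (the contour and function are arbitrary choices) reproduces the coefficients 1/n!:

    import cmath

    N = 4000
    dt = 2 * cmath.pi / N
    for n in range(5):
        integral = 0j
        for k in range(N):
            zp = cmath.exp(1j * k * dt)                       # unit circle about a = 0
            integral += cmath.exp(zp) / zp**(n + 1) * (1j * zp * dt)
        print(n, (integral / (2j * cmath.pi)).real)           # 1, 1, 0.5, 0.1667, 0.0417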

Singularities

A singularity of an analytic function is a point at which it is not analytic, which means that its derivative does not exist there. At such points, the modulus of the function is usually infinite, and the points are isolated singular points. Singular points that are not isolated are essential singularities (some essential singularities are isolated, however). Isolated singularities are excluded by small circles from the domain of analyticity so that Cauchy's theorems can be applied.

Laurent extended Taylor's theorem to the case where the domain of analyticity was the region between two concentric circles, a large one and a small one surrounding a point (the origin, for purposes of argument) which could be a singularity. Now f(z) can be expressed in terms of the line integrals around the two circles (the boundary of the region consists of these two circles, described in opposite directions), and the result is a power series of positive powers, just as in the Taylor series (from the outer circle) and a power series of negative powers (from the inner circle). If the series of negative powers terminates, the singularity is called a pole, and if it does not, it is called an essential singularity. The series of negative powers is called the principal part of the function at the singularity. A principal part consisting only of b/(z - a) is called a simple pole at z = a.

The coefficient b of 1/z (or 1/(z-a)) in the Laurent expansion is special because in integrating the series along a circle surrounding a pole, it is the only term that contributes to the integral, in an amount 2πib. The coefficient b is called the residue of f(z) at the pole. A function that is analytic in the finite plane except for a certain number of poles is called meromorphic. If we take the line integral of a meromorphic function around the boundary of a region, then the value of the integral is the sum of the values of 2πib at each of the poles within it. This is called the residue theorem, and is a valuable means of evaluating integrals.
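
Here is a small numerical illustration of the residue theorem (a sketch; the function, with simple poles of residues 1 and 2 inside the unit circle, is an arbitrary choice):

    import cmath

    def f(z):
        return 1 / (z - 0.3) + 2 / (z + 0.4j)   # simple poles at 0.3 and -0.4i

    N = 2000
    dt = 2 * cmath.pi / N
    integral = 0j
    for k in range(N):
        z = cmath.exp(1j * k * dt)               # the unit circle encloses both poles
        integral += f(z) * (1j * z * dt)
    print(integral / (2j * cmath.pi))            # about 3, the sum of the residues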

Singularities are not just warts on a function, but are essential to it. A function with no singularities (even at infinity) is bounded in value everywhere, and Liouville showed that this implied that the function was a constant, and very boring. The exponential function, which is entire (analytic in the finite complex plane), has a shocking singularity at infinity, as if all misbehavior has been swept there from the complex plane. To find out how a function behaves at infinity, substitute z' = 1/z and examine the region around z' = 0. The exponential has an isolated essential singularity at infinity. Polynomials have a pole there.

A power series representing a function converges within a circle that extends to the nearest singularity, and we know how to find this radius of convergence with Cauchy's test. For example, the series 1 + z + z^2 + z^3 + ... converges for |z| < 1, because it has a singularity at z = 1. The series for the exponential function has an infinite radius of convergence, since the exponential has no singularities in the finite plane. The series for 1/(1 + x^2) = 1 - x^2 + x^4 - ... converges only for -1 < x < 1, even though there are no singularities at x = 1 or x = -1. If we replace x with z, we can see the reason, for there are singularities at z = ±i.
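
A short numerical experiment (sample values arbitrary) shows the series for 1/(1 + x^2) behaving exactly this way: inside |x| < 1 the partial sums agree with the function, while just outside they run away, even though the function itself is perfectly well behaved there:

    def partial(x, N=60):
        # 1 - x^2 + x^4 - ... truncated after N terms
        return sum((-1)**k * x**(2 * k) for k in range(N))

    for x in (0.9, 1.1):
        print(x, partial(x), 1 / (1 + x**2))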

Two power series may represent the same function where their circles of convergence overlap--for example, two Taylor's series about different centers. They will then represent the same function wherever they converge. In this way we can extend the definition of a function from a limited region where one series may converge, to every part of the complex plane where the way is not barred by something extraordinary like a line of singularities. This is called analytic continuation.

These are the most important properties of analytic functions, but there are many more of perhaps less general application.

The Hypergeometric Function and Branch Points

The geometric series 1 + z + z^2 + z^3 + ... converges to the function 1/(1 - z) for |z| < 1, and can be continued analytically to the same function over the whole z-plane. Its only singularity is a pole of order 1 at z = 1. This series can be generalized to the hypergeometric series 1 + (ab/c)z + [a(a+1)b(b+1)/(1·2·c(c+1))]z^2 + ..., whose properties were studied extensively by Gauss, and later by Kummer (who gave it its name). This is a considerably more complicated function which usually cannot be expressed in terms of elementary functions. The ratio of the (n+1)-st coefficient to the nth is (a+n)(b+n)/[(n+1)(c+n)], which → 1 as n → ∞. Therefore, the series converges for |z| < 1 to the function called F(a,b;c;z), the hypergeometric function, which can be analytically continued over the whole Gauss plane. F(a,b;c;0) = 1, and if (c - a - b) > 0, the function takes the value F(a,b;c;1) = Γ(c)Γ(c-a-b)/[Γ(c-a)Γ(c-b)] at z = 1. Otherwise, z = 1 is a singular point. The gamma function Γ(z) is the continuous factorial function, Γ(z) = (z - 1)!. It has simple poles at z = 0, -1, -2, .... In particular, Γ(0) = ∞.
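
Both the series and Gauss's value at z = 1 are easy to check by machine; the sketch below (the parameter values are arbitrary, and the series is truncated, so the agreement is only to several decimals) sums the series directly and compares it with Γ(c)Γ(c-a-b)/[Γ(c-a)Γ(c-b)]:

    from math import gamma

    def F(a, b, c, z, N=4000):
        term, total = 1.0, 1.0
        for n in range(N):
            term *= (a + n) * (b + n) / ((n + 1) * (c + n)) * z   # ratio of successive terms
            total += term
        return total

    a, b, c = 0.5, 0.25, 2.0                    # chosen so that c - a - b > 0
    print(F(a, b, c, 1.0))
    print(gamma(c) * gamma(c - a - b) / (gamma(c - a) * gamma(c - b)))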

If a or b is a negative integer, the series terminates, so the result is simply a polynomial. The familiar Legendre polynomials can be expressed as terminating hypergeometric series; in fact, P_n(z) = F(n+1,-n;1;(1-z)/2). It is easier to study Legendre polynomials on their own, rather than as hypergeometric functions, a rather common occurrence. Nevertheless, such connections should make hypergeometric functions seem more familiar.
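
As a small check of this connection (a sketch; the degree n = 2 and the sample points are arbitrary), the terminating series F(3,-2;1;(1-x)/2) reproduces the familiar P_2(x) = (3x^2 - 1)/2:

    def F_poly(a, b, c, z, N=10):
        term, total = 1.0, 1.0
        for n in range(N):
            term *= (a + n) * (b + n) / ((n + 1) * (c + n)) * z   # terminates once a + n or b + n reaches 0
            total += term
        return total

    for x in (-0.7, 0.0, 0.4, 1.0):
        print(x, F_poly(3, -2, 1, (1 - x) / 2), (3 * x**2 - 1) / 2)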

Although the series may be useful for computing values of the function, it is not suited to studying the properties of the function. Two ways of proceeding are to find a differential equation of which it is a solution, or to find a useful integral representation. The integral representation, due to Barnes, is given in Whittaker and Watson. It provides the useful result that Γ(a)Γ(b)Γ(c-a)Γ(c-b)F(a,b;c;z) = Γ(a)Γ(b)Γ(c)Γ(c-a-b)F(a,b;a+b-c+1;1-z) + Γ(c)Γ(c-a)Γ(c-b)Γ(a+b-c)(1-z)^(c-a-b) F(c-a,c-b;c-a-b+1;1-z), expressing the function of z as a function of 1-z. We can find the value of F(a,b;c;1) from it easily by setting z = 1. Then, the second term on the r.h.s. vanishes if (c - a - b) > 0. Otherwise, F has a singularity at z = 1, and we see what it is like.

If you set b = c, then F(-n,b;b;-z) = (1 + z)^n, a polynomial. Also, F(n,b;b;z) = 1/(1-z)^n. Thus, the geometric series is F(1,b;b;z), which can be seen easily from the series. For (c-a-b) < 0, there is a singularity at z = 1, near which the function behaves as (1 - z)^(c-a-b). If (c-a-b) = -1, it is a simple pole. Let us consider the case c-a-b = 1/2. Since (c-a-b) is positive, we do not have an infinity at z = 1, but nevertheless the derivative does not exist there, so it is a singular point. Such a point is called a branch point, and we now explain why it has that name.

Consider a path that is a small circle around z = 1. On this path, z - 1 = re^(iθ), where r is a constant and θ varies. If we go once around the path, say from θ = 0 to 2π, the function (1 - z)^(1/2) does not return to its original value, but to the negative of that value, since its argument has increased by only π. We must go around twice before the original value is restored. If (c-a-b) has some other value, it may require many circuits before the original value recurs, and in some cases it may never recur at all. The different values of the function are called branches, and a point such as z = 1 that gives different branches when it is encircled is a branch point. The hypergeometric function has, in general, a branch point at z = 1. The usual example for introducing branch points is √z, but we have here a more exotic example that reduces to the same thing.
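
The behavior can be followed explicitly on the computer. In this sketch (the radius r is arbitrary), a continuously chosen branch of (1 - z)^(1/2) on the circle z - 1 = re^(iθ) changes sign after one circuit and only returns to its starting value after two:

    import cmath

    r = 0.1
    for theta in (0.0, 2 * cmath.pi, 4 * cmath.pi):
        # 1 - z = r e^(i(θ+π)); follow the square root continuously in θ
        w = cmath.sqrt(r) * cmath.exp(0.5j * (theta + cmath.pi))
        print(theta / cmath.pi, w)     # θ = 2π gives the opposite sign; θ = 4π restores it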

If we do not want multiple values for our function, we can make a branch cut from z = 1 to z = ∞ along, say, the real axis and agree to consider no path that crosses this cut. Then, the function will be single-valued for all practical purposes, though discontinuous on the branch cut.

Another way to study the hypergeometric function is to find the differential equation that it satisfies. To understand this, we need to review a bit of the theory of differential equations, in particular second-order linear differential equations, which are of the form u" + p(z)u' + q(z)u = 0. Here, u' = du/dz and u" = d^2u/dz^2, and u(z) is the function sought. The equation is linear because u", u' and u appear alone and to the first power in each term, and second order because the highest derivative of u present is the second. Any point of the region where p(z) and q(z) are analytic is called an ordinary point of the equation, and there is no trouble in expanding u(z) in power series about any such point. In fact, finding a solution u(z) as a power series about an ordinary point is always possible, though the power series is usually not very informative, but good for computing specific values.

A singularity of the equation is any point where p(z) or q(z) is not analytic, and it is usually an isolated point. Satisfactory solutions can usually not be expanded about such points. However, if z = c is a singular point, and (z-c)p(z) and (z-c)^2 q(z) are analytic there, solutions can still be found. Such points are called regular singular points of the differential equation. When we are looking for singular points, we must include the point at ∞ in our search. To do this, make the substitution w = 1/z, and investigate the behavior of the coefficients of the transformed equation as w → 0. Use d/dz = -w^2(d/dw) and d^2/dz^2 = w^4(d^2/dw^2) + 2w^3(d/dw) to transform the derivatives. At any ordinary point, a second-order differential equation has two linearly independent solutions that can be expressed in power series; at a regular singular point, solutions can still be found as power series multiplied by a power of (z - c), possibly with a logarithmic term. These solutions are analytic everywhere except at the singularities of the equation.

For example, consider the familiar equation u" - k^2u = 0, whose solutions are the entire functions e^(±kz). This equation has no singular points in the finite plane. The substitution w = 1/z yields u" + (2/w)u' - (k^2/w^4)u = 0, where the primes now represent derivatives with respect to w. While wp(w) is analytic (the constant 2), w^2q(w) is not, so z = ∞ is an irregular singular point. Oh well, expanding e^(kz) about z = ∞ does not seem very useful, anyway.

The equation satisfied by the hypergeometric function u = F(a,b;c;z) is z(1 - z)u" + [c - (a + b + 1)z]u' - abu = 0. Divide by z(1 - z) to put it in standard form, and it is seen that there are singular points at z = 0 and z = 1, and both are regular. The reader is encouraged to make the substitution w = 1/z and see that z = ∞ (w = 0) is also a regular singular point. The equation has three regular singular points: two at the poles of the Riemann sphere (z = 0 and z = ∞) and one on the equator where it crosses the real meridian (z = 1). Riemann studied equations with three regular singularities, and the hypergeometric equation is a special case of Riemann's equation in which one of the singularities has been sent to ∞. A lot has been found out about the hypergeometric equation in this way, with extremely tedious and involved algebra.
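
A direct check that the series actually satisfies this equation can be made numerically; the sketch below (the parameter values, sample point, and truncation length are arbitrary) sums the truncated series and its first two derivatives and finds a residual at the level of rounding error:

    a, b, c, z, N = 0.5, 1.5, 2.5, 0.3, 60

    coeffs, coef = [1.0], 1.0
    for n in range(1, N):
        coef *= (a + n - 1) * (b + n - 1) / (n * (c + n - 1))   # coefficient of z^n in F(a,b;c;z)
        coeffs.append(coef)

    u   = sum(coeffs[n] * z**n for n in range(N))
    up  = sum(n * coeffs[n] * z**(n - 1) for n in range(1, N))
    upp = sum(n * (n - 1) * coeffs[n] * z**(n - 2) for n in range(2, N))

    print(z * (1 - z) * upp + (c - (a + b + 1) * z) * up - a * b * u)   # essentially zero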

If we let another of the singular points go to infinity (making it an irregular singular point), we find a simpler equation but more complicated functions, the confluent hypergeometric functions. These are represented by W(a,b,z) and the series 1 + (a/b)z + [a(a+1)/(1·2·b(b+1))]z^2 + .... The exponential e^z = W(a,a,z), where a is any positive constant. Many of the functions used in theoretical physics are special cases of the confluent hypergeometric function M(k,m,z), which satisfies the equation u" + u' + [k/z + (1 - m^2)/4z^2]u = 0. There are several different definitions with the constants having different meanings, and only the general idea is presented here. It is clear that z = 0 is a regular singular point and z = ∞ is an irregular singular point. The error function, the incomplete gamma function, the logarithmic-integral function, parabolic cylinder functions, circular cylinder functions, Bessel functions and others are all special cases of the confluent hypergeometric function.
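
The remark about the exponential is easily verified from the series itself; in this sketch (the value of a and the argument are arbitrary choices) the confluent series with b = a reproduces e^z:

    from math import exp

    def confluent(a, b, z, N=60):
        # 1 + (a/b)z + [a(a+1)/(1·2·b(b+1))]z^2 + ...
        term, total = 1.0, 1.0
        for n in range(N):
            term *= (a + n) / ((n + 1) * (b + n)) * z
            total += term
        return total

    print(confluent(2.7, 2.7, 1.5), exp(1.5))   # both about 4.4817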

References

E. T. Whittaker and G. N. Watson, A Course of Modern Analysis, 4th ed. (Cambridge: Cambridge University Press, 1958). A classic and masterful account of analysis and its applications to transcendental functions.

K. Knopp, Elements of the Theory of Functions (New York: Dover, 1952). An excellent concise account based on power series. The same author treats the general theory of functions, based on integrals, in two further volumes.

There is a legion of elementary calculus texts, and most have merit, so that any one will be a good reference for review. Classic, however, and one of the best ever written, is R. Courant, Differential and Integral Calculus, 2nd ed. (London: Blackie and Son, 1937), which was popular into the 1950's, until it became too hard for the modern student.

D. V. Widder, Advanced Calculus, 2nd ed. (New York: Dover Publications, 1989). The traditional course in advanced calculus supplemented the introductory course with greater rigor and a more mature approach. This is the best Advanced Calculus book, well-regarded for many years.

E. G. Phillips, Functions of a Complex Variable With Applications, 8th ed. (Edinburgh: Oliver and Boyd, 1957). All the principal results, with proofs, in a compact volume.



Composed by J. B. Calvert
Created 7 September 2002
Last revised 3 January 2005