An introduction to these mysterious functions

I recall seeing things like sn(u), cn(u) and similar in reference books at the beginning of my university studies, with the information that they were "elliptic functions," and assumed that I would make their acquaintance somewhere down the line. I had no idea how they might be applied, but possibly they were used for solving elliptical problems the way trigonometric functions solved circular problems. Well, I was disappointed in this expectation, since they came up absolutely nowhere in my mathematical, physical or chemical studies, and remained a mystery. This page reports on my recent efforts to illuminate this area of darkness, and I hope to render these functions less strange to the reader, as they now are to me. As always, errors and misconceptions may remain, but I hope the major ideas are correct.

The References show that elliptic functions appear in familiar works, largely handbooks of mathematical functions, but also in Whittaker and Watson, where the theory is given in some detail, though the reader may find the presentation difficult. Elliptic functions do not, however, appear in texts on mathematics for engineers and physicists. They appear, unfortunately, to have no practical applications at all, though I understand they may be forced to materialize in certain problems in dynamics. The closely related elliptic integrals do have numerous practical applications, but only in the evaluation of certain integrals. Elliptic functions have provided a lot of entertainment for mathematicians, however, and are as fascinating as any useless knowledge can be.

Elliptic integrals came first, invented by the Bernoullis, and were studied by Maclaurin, Euler and Lagrange in the 18th century, and later by Legendre, when there was great interest in evaluating the integrals that appeared in scientific applications, after it was realized that most integrals could not be evaluated in terms of the elementary functions. Later, the brilliant and ingenious Gauss conceived of inverting the functions defined by incomplete elliptic integrals, unlocking a treasure chest of analytical investigations using the new methods of complex variables. Gauss had the admirable habit of not publishing his work, completely avoiding the conflicts and acrimony among the lesser investigators who were eager to have their work, however inconsequential, and their priority, however doubtful, recognized, a drive that has not disappeared and dominates modern science, creating a mountain of mediocrity. Gauss did not need to blow his own kazoo, however, and was not concerned about having things he discovered given other men's names. Scientific entrepreneurs kept posted on what Gauss was doing, so they could snatch bits and pieces for their own benefit. The invention of elliptic functions is shared with C. G. J. Jacobi and Abel, who published their investigations around 1827, though Gauss knew many of the results as early as 1809.

Carl Gustav Jacob Jacobi (1804-1851) was born in Potsdam, the son of a wealthy Jewish banker. He became a Christian, probably to avoid restrictions on holding certain university posts. His work on elliptic functions began when he was very young, and held his interest throughout his career. He went to Königsberg in 1826, where Bessel and Neumann worked, becoming associate professor in 1827 and remaining there for 18 years. He developed diabetes, but died young of smallpox following an attack of influenza. He should not be confused with his elder brother Moritz Hermann von Jacobi (1801-1874), who went to St. Petersburg in 1837 and apparently assumed a "von" for distinction. He was the Jacobi who worked on electric motors, ran an electric boat on the Neva, developed Baron Pavel Schilling's telegraph, invented electrotyping, and enunciated Jacobi's Law of energy transfer (a maximum when source and load impedances are the same).

Only the Jacobian elliptic functions will be discussed here, which are the ones most closely related to the familiar three types of elliptic integrals. There are many more elliptic functions, for example the Weierstrassian, as well as the related theta functions, all of which are important in the theory, and which are explained in Whittaker and Watson. My principal purpose here is only to make the Jacobian elliptic functions more familiar to the reader. For the detailed theory, refer to Whittaker and Watson. First, we must mention some preliminaries that will help the understanding.

The reader is no doubt familiar with the concept of the indefinite integral, which is the inverse of differentiation. If F(x) = x^{2}, then f(x) = dF/dx = 2x, or F(x) = ∫ 2xdx = ∫ f(x)dx. The definite integral is taken between *limits*, say a and b, and is defined so that ∫(a,b) f(x)dx = F(b) - F(a), where we have used a notation more convenient in HTML that keeps things on one line. Here, b is the *upper limit* and a is the *lower limit*. The definition of the integral is actually in terms of the definite integral, with its geometric interpretation as the area under the curve y = f(x), while the indefinite integral is a generalization. In the definite integral, the variable of integration is a *dummy* variable, in that which letter is used to denote it is immaterial. ∫(a,b) f(t)dt is exactly the same as ∫(a,b) f(u)du or ∫(a,b) f(x)dx. When we have such a dummy variable, it is best to make it different from any other variable appearing in the problem to avoid any confusion (though this is often not done).

An integral may define a function of the upper limit (the lower limit would work as well, but we will adopt the upper limit for simplicity) by simply making it a variable. ∫(a,x) f(t)dt = F(x) - F(a). Quite often a = 0, so we write ∫(0,x) f(t)dt = F(x) - F(0). The constant F(0) may actually be zero, but is inconsequential at any rate, so it is all right to write ∫(0,x) f(t)dt = F(x). F(x) is then a function *defined* by the integral, and does not have to be expressible in terms of elementary functions. From the definition of the derivative, we also have dF(x)/dx = f(x).
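This idea is easy to check numerically. The sketch below (assuming Python with SciPy is available; the names f and F are just the ones used in the text) defines F(x) by integrating f(t) = 2t from 0 to x, and confirms that dF/dx = f(x) by a central difference:

```python
import numpy as np
from scipy.integrate import quad

def f(t):
    return 2 * t  # the integrand f(t) = 2t, so F(x) should come out as x^2

def F(x):
    # F(x) = integral of f(t) dt from t = 0 to t = x; t is the dummy variable
    val, _ = quad(f, 0.0, x)
    return val

x = 1.5
h = 1e-6
dFdx = (F(x + h) - F(x - h)) / (2 * h)  # central-difference derivative
print(F(x))   # F(1.5) = 2.25
print(dFdx)   # dF/dx at 1.5, which should be f(1.5) = 3
```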

Now let's consider a concrete example, and take f(x) = 1/√(1 - x^{2}). We must give the resulting function a new name, so we choose F(x) = sin^{-1}(x) = ∫(0,x) dt/√(1 - t^{2}). It is best to keep the dummy variable t distinct from the running variable x as here. Dwight is a great offender in this regard, often using x both for the limit and the variable of integration, but it is obvious what he means. This new function, sin^{-1}(x), has real values for -1 ≤ x ≤ 1, and is multivalued: if φ = sin^{-1}(x), then φ + 2nπ is also a value for any integer n, since the sine has period T = 2π. The *principal value* is denoted by Sin^{-1}(x), and -π/2 ≤ Sin^{-1}(x) ≤ π/2. This merely consists of identifying one complete cycle of values, and has no other significance. If we write K(0) for ∫(0,1) dt/√(1 - t^{2}), then K(0) = π/2. The period of the sine, the inverse of sin^{-1}(x), is then 4K(0) = 2π.
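Both claims in this paragraph can be verified numerically (a sketch assuming SciPy; ellipk(m) is SciPy's complete elliptic integral of the first kind, with parameter m = k², so ellipk(0) is the K(0) of the text):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk

def arcsin_via_integral(x):
    # sin^{-1}(x) = integral from 0 to x of dt/sqrt(1 - t^2)
    val, _ = quad(lambda t: 1.0 / np.sqrt(1.0 - t * t), 0.0, x)
    return val

print(arcsin_via_integral(0.5))  # ~0.5236, i.e. pi/6
print(np.arcsin(0.5))            # the library arcsine, for comparison
# K(0), the complete integral at k = 0, should equal pi/2:
print(ellipk(0.0), np.pi / 2)
```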

This function, which is one of the *cyclometric* functions, is just the inverse of an elementary function, sin φ. This means that if x = sin φ, then φ = sin^{-1} x. The inverse function is really the sine, and is much more important than arcsine, sin^{-1}. We are very familiar with the sine because of its geometric interpretations, and in particular the solution of triangles. We find it useful to define cos x = √(1 - sin^{2} x), tan x = sin x/cos x, sec x = 1/cos x, csc x = 1/sin x, and cot x = 1/tan x, and all these functions have their geometrical interpretations and other uses. More than that, we can find all kinds of algebraic relations, and expressions for things like sin (x + y), and even extend the definition into the complex plane, so that sin ix = i sinh x, and e^{iz} = cos z + i sin z.

Now we are prepared to face the function F(φ,k) = u = ∫(0,φ)dθ/√(1 - k^{2}sin^{2}θ). If we let t = sin θ, then u = ∫(0,x) dt/√(1 - t^{2})√(1 - k^{2}t^{2}), where x = sin φ. These are two forms of Legendre's standard elliptic integral of the first kind. The *modulus* k is sometimes expressed as k = sin θ (not the variable of integration!). The use of trigonometric functions here is merely a convenience for expressing values between -1 and +1, and means nothing else. We can work just as well with the expression in terms of simply u, k and t. The complementary modulus k' is defined by k'^{2} = 1 - k^{2}. Sometimes the term modulus refers to the square, k^{2}, and is written m.
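SciPy evaluates this integral as ellipkinc(φ, m), using the parameter m = k² mentioned above (a convention worth checking against whatever library one uses). A minimal sketch, checking it against direct numerical integration of the defining integral:

```python
import numpy as np
from scipy.special import ellipkinc, ellipk
from scipy.integrate import quad

k = 0.8
m = k * k        # SciPy's parameter is m = k^2
phi = 1.0        # the amplitude angle phi

u = ellipkinc(phi, m)  # Legendre's F(phi, k)
# compare with direct quadrature of the first form of the integral
u_quad, _ = quad(lambda th: 1.0 / np.sqrt(1.0 - m * np.sin(th) ** 2), 0.0, phi)
print(u, u_quad)       # should agree

# the complete integral K(k) is F(pi/2, k)
print(ellipk(m), ellipkinc(np.pi / 2, m))
```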

Note that if k = 0, then u = ∫(0,x) dt/√(1 - t^{2}) is just sin^{-1} x, or x = sin u. Should k = 1, we find u = ∫(0,x) dt/(1 - t^{2}) = tanh^{-1} x, or x = tanh u. At these limits, we find elementary functions, one periodic with period 4K(0) = 2π, the other aperiodic, since 4K(1) = ∞. A plot of sin u and tanh u against u clearly shows the limits of sn(u,k) for k = 0 and k = 1.
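These two limiting cases can be checked directly (a sketch assuming SciPy; note x = sin φ, so the k = 0 case gives u = sin^{-1} x and the k = 1 case gives u = tanh^{-1} x):

```python
import numpy as np
from scipy.special import ellipkinc

x = 0.6
phi = np.arcsin(x)

u0 = ellipkinc(phi, 0.0)  # k = 0: u should be arcsin(x)
u1 = ellipkinc(phi, 1.0)  # k = 1: u should be artanh(x)
print(u0, np.arcsin(x))
print(u1, np.arctanh(x))
```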

Now, at last, we can define the Jacobian elliptic functions. For 0 < k < 1, we invert u(x,k) = ∫(0,x) dt/√(1 - t^{2})√(1 - k^{2}t^{2}) as x = sn(u). We note first that sn(u) is a function of k as well, really sn(u,k), but the k is usually suppressed. I have never heard the function pronounced, but from what has been written, I suspect that it is pronounced as "ess enn u." The term reflects the analogy with the sine, and was coined by Gudermann. Jacobi wrote x = sin am(u), where am(u) is just our angle φ. The angle φ can be regarded as the inverse of u just as well as sn(u) can. By the analogy to the sine, we may expect sn(u) to be periodic, and indeed it is, with period 4K(k) along the real axis, and period 2iK(k') along the imaginary axis. Note the limiting cases: for the sine (k = 0) the complementary modulus is 1, so K(k') = ∞ and the imaginary period becomes infinite, while at k = 1 it is the real period that becomes infinite, giving the aperiodic tanh through the complex connection. This *double periodicity* is one of the features of the elliptic functions, so much so that any doubly periodic function is called elliptic. sn(u) is a forest of poles in regular rows in the complex plane, as illustrated in Jahnke and Emde, and is a *meromorphic* function (that is, its only singularities in the finite complex plane are poles).
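The imaginary period cannot be exhibited with real-argument routines, but the real period 4K(k) can (a sketch assuming SciPy, whose ellipj(u, m) returns sn, cn, dn and the amplitude, with m = k²):

```python
import numpy as np
from scipy.special import ellipj, ellipk

k = 0.7
m = k * k
K = ellipk(m)  # the quarter period K(k)

u = 0.9
sn, cn, dn, ph = ellipj(u, m)
sn_shift = ellipj(u + 4 * K, m)[0]
print(sn, sn_shift)  # sn has period 4K along the real axis
```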

Near k = 0, sn(u) may be expected to look a lot like sin(u), and indeed it does, when the increase in the period is taken into account. It starts from zero, increases linearly, then bends over, and is finally horizontal at K(k) with value 1. If you have a program that calculates incomplete elliptic integrals of the first kind, enter a series of values for x = sn(u) and list the corresponding values for u. Then plot sn(u) vertically and u horizontally. This will demonstrate clearly what is going on and remove a lot of the mystery from sn(u). Try different values for k, which will clearly show that sn(u) is really sn(u,k). In practice, values of sn(u) and other Jacobian elliptic functions are found as ratios of theta functions. We won't get into that here, since we are not very interested in numerical evaluations, but the references will show how to do it.
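The tabulation procedure just described can be sketched in a few lines (assuming SciPy: ellipkinc supplies the incomplete integral u = F(φ,k) with φ = arcsin x, and ellipj inverts it):

```python
import numpy as np
from scipy.special import ellipkinc, ellipj

# Follow the text's recipe: pick values of x = sn(u), compute the
# corresponding u = F(arcsin(x), k), and check that ellipj inverts it.
k = 0.6
m = k * k
xs = [0.1, 0.4, 0.7, 0.95]
us = [ellipkinc(np.arcsin(x), m) for x in xs]   # u for each chosen x
sns = [ellipj(u, m)[0] for u in us]             # sn(u, k) should reproduce x
for x, u, s in zip(xs, us, sns):
    print(x, u, s)
```

Plotting sns against us (for several values of k) gives exactly the curves the paragraph describes.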

So far we have just the one function x = sn(u). Just as with trigonometric functions, we can do some algebra, and define other functions. First, cn(u) = √(1 - sn^{2}u) = √(1 - x^{2}), analogously to the cosine. Then sc(u) = x/√(1 - x^{2}). We might expect this to be called tn(u) (as it is in Dwight) by analogy, and in fact it once was. However, the convention is now common that the two letters sc = s/c denote the ratio, and ns = 1/sn, nc = 1/cn as well. There is a new combination dn(u) = √(1 - k^{2}x^{2}). This is all just algebra, and allows us to write formulas in a more concise manner. In the limit k → 0, all of these become the analogous trigonometric functions, except dn(u) → 1. Unfortunately, they do not have useful geometric interpretations, and so seem as useless as a pile of screws without threads. However, the amount of algebra created is glorious.
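The two fundamental identities implied by these definitions, sn² + cn² = 1 and dn² + k²sn² = 1, are easy to confirm (a sketch assuming SciPy, with m = k²):

```python
import numpy as np
from scipy.special import ellipj

k = 0.8
m = k * k
u = 1.1
sn, cn, dn, ph = ellipj(u, m)

print(sn**2 + cn**2)       # should be 1, from cn = sqrt(1 - sn^2)
print(dn**2 + m * sn**2)   # should be 1, from dn = sqrt(1 - k^2 sn^2)
sc = sn / cn               # the quotient function, once called tn(u)
print(sc)
```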

For small u, sn(u) and cn(u) can be expanded in power series about u = 0, with the results sn(u) = u - (1 + k^{2})u^{3}/3! + (1 + 14k^{2} + k^{4})u^{5}/5! - ... and cn(u) = 1 - u^{2}/2! + (1 + 4k^{2})u^{4}/4! - (1 + 44k^{2} + 16k^{4})u^{6}/6! + ..., which reduce to the familiar series when k = 0. If k = 1, the series for sn(u) becomes the series for tanh u, another illustration of this connection. The elliptic functions aren't so mysterious after all!
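The truncated series can be checked against a library evaluation for small u (a sketch assuming SciPy; recall m = k², so the coefficients in k² become coefficients in m):

```python
import numpy as np
from scipy.special import ellipj

k = 0.3
m = k * k
u = 0.2  # small enough that the truncated series is very accurate

sn_series = u - (1 + m) * u**3 / 6 + (1 + 14 * m + m * m) * u**5 / 120
cn_series = (1 - u**2 / 2 + (1 + 4 * m) * u**4 / 24
             - (1 + 44 * m + 16 * m * m) * u**6 / 720)
sn, cn, dn, ph = ellipj(u, m)
print(sn_series, sn)  # should agree to many digits
print(cn_series, cn)
```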

We can differentiate the integral u = ∫(0,x) dt/√(1 - t^{2})√(1 - k^{2}t^{2}) with respect to the upper limit to find du/dx = 1/[√(1 - x^{2})√(1 - k^{2}x^{2})] = 1/[cn(u)dn(u)], or d(sn u)/du = cn(u)dn(u). From this we can get d(cn u)/du = -sn(u)dn(u) and d(dn u)/du = -k^{2}sn(u)cn(u). All these reduce to the trigonometric expressions when k = 0.
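The derivative formula d(sn u)/du = cn(u)dn(u) can be confirmed by a central difference (a sketch assuming SciPy, with m = k²):

```python
import numpy as np
from scipy.special import ellipj

k = 0.5
m = k * k
u, h = 0.8, 1e-6

sn, cn, dn, _ = ellipj(u, m)
sn_plus = ellipj(u + h, m)[0]
sn_minus = ellipj(u - h, m)[0]
dsn = (sn_plus - sn_minus) / (2 * h)  # numerical d(sn)/du
print(dsn, cn * dn)                   # should agree: d(sn)/du = cn*dn
```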

We won't derive the addition formula here, but merely state it. sn(u ± v) = [sn(u)cn(v)dn(v) ± cn(u)sn(v)dn(u)]/[1 - k^{2}sn^{2}(u)sn^{2}(v)], which also reduces to the familiar formula when k = 0.
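The addition formula is also easy to test numerically (a sketch assuming SciPy, with m = k²):

```python
import numpy as np
from scipy.special import ellipj

k = 0.6
m = k * k
u, v = 0.7, 0.4

snu, cnu, dnu, _ = ellipj(u, m)
snv, cnv, dnv, _ = ellipj(v, m)
lhs = ellipj(u + v, m)[0]  # sn(u + v) directly
rhs = (snu * cnv * dnv + cnu * snv * dnu) / (1 - m * snu**2 * snv**2)
print(lhs, rhs)            # should agree
```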

The reader probably knows the relation sin(iu) = i sinh(u). A similar relation exists for elliptic functions, which is called *Jacobi's imaginary transformation*. This says that sn(iu,k) = i sc(u,k'). Note the change in modulus from k to k'. If k = 0, it becomes sin(iu) = i sinh(u), since sc(u,1) = sn(u,1)/cn(u,1) = tanh(u)/√[1 - tanh^{2}(u)] = tanh(u)/sech(u) = sinh(u). We can prove the relation by changing variables in the defining integral u = ∫(0,y) dt/√[(1 - t^{2})(1 - k^{2}t^{2})]. First, we write iu = ∫(0,iy) dt/√[(1 - t^{2})(1 - k^{2}t^{2})]. This simply redefines u and y, so that iy = sn(iu,k). If we now let y = w/√(1 - w^{2}), we see that t will run from 0 to iw/√(1 - w^{2}). The further substitution t = iv/√(1 - v^{2}) makes v go from 0 to w. To transform the integral, we need dt = i dv/(1 - v^{2})^{3/2}, 1 - t^{2} = 1/(1 - v^{2}), and 1 - k^{2}t^{2} = (1 - k'^{2}v^{2})/(1 - v^{2}). All this can be found by easy algebra. Then, u = ∫(0,w) dv/√[(1 - v^{2})(1 - k'^{2}v^{2})], which says that w = sn(u,k'). Now we go back and put this in the expression for y, which yields y = sn(u,k')/cn(u,k') = sc(u,k'). Since iy = sn(iu,k), we have sn(iu,k) = i sc(u,k'), which is the desired relation. As usual, proofs for elliptic functions are harder than for trigonometric functions. This was one case where we could proceed directly from the integral, but normally bigger guns are necessary, in particular the theory of theta functions.
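Real-argument routines cannot evaluate sn(iu,k) directly, but the limiting case used above is easy to check: at modulus 1 (that is, k' = 1 when k = 0), sc(u,1) = sn(u,1)/cn(u,1) should equal sinh(u). A sketch assuming SciPy, whose ellipj accepts m = 1:

```python
import numpy as np
from scipy.special import ellipj

u = 0.9
sn, cn, dn, _ = ellipj(u, 1.0)  # modulus k = 1, i.e. m = k^2 = 1
print(sn, np.tanh(u))           # sn(u,1) = tanh(u)
print(sn / cn, np.sinh(u))      # sc(u,1) = tanh(u)/sech(u) = sinh(u)
```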

I have now shown most of the elementary properties of the Jacobian elliptic functions and, I hope, made these functions a little less strange. I wish I could add a practical example of their use, but, as I have complained, there seems to be little in the cupboard along this line. If I happen to find some happy example, I will report on it here.

M. Abramowitz and I. A. Stegun, *Handbook of Mathematical Functions...* (Washington, DC: U.S. Department of Commerce, 1964). Chapter 16, pp 567-588.

H. B. Dwight, *Tables of Integrals and Other Mathematical Data* (New York: Macmillan, 1961). Chapter 9, pp 177-186.

E. Jahnke and F. Emde, *Funktionentafeln mit Formeln und Kurven* (New York: Dover, 1945). Chapter VI, pp 90-106.

W. H. Press, S. A. Teukolsky, W. T. Vetterling and B. P. Flannery, *Numerical Recipes in C*, 2nd ed. (Cambridge: Cambridge University Press, 1992). Chapter 6, pp 261-271.

E. T. Whittaker and G. N. Watson, *A Course of Modern Analysis*, 4th ed. (Cambridge: Cambridge University Press, 1958). Chapter XXII, pp 491-535.


Composed by J. B. Calvert

Created 13 September 2002

Last revised 13 October 2002