The Heine-Borel theorem is used in the theory of uniform continuity and uniform convergence. Most texts, proceeding in logical order, present it first and then go on to uniform continuity, which makes it difficult for the student to grasp the purpose of the theorem. Since its conclusion is easy to state, one usually acknowledges the theorem, passes on, and forgets one's uncertainties about it. This article is intended to overcome this difficulty and make the meaning of the theorem clear.

Widder, for example, states it thus: (If) to each c, a ≤ c ≤ b, (there) corresponds an interval I(c): c - δ(c) < x < c + δ(c), then there exist points c_{1}, c_{2}, ..., c_{n} of a ≤ x ≤ b such that every point of the interval a ≤ x ≤ b is in at least one of the intervals I(c_{i}). What the statement does not emphasize is that δ(c) is not your choice, but is determined for you by the problem, with δ(c) > 0; the theorem is true for any such function δ(c). The important thing is that the number n of intervals is *finite*. We also note that the interval involved is *closed*; that is, it includes its end-points.

If one could choose δ(c), it would be enough to let δ(c) be any positive constant δ. Then any number N of intervals I greater than (b - a)/(2δ), suitably spaced, would cover the interval, since each interval has length 2δ. This shows that the mere fact of covering the interval for some δ(c) is not the gist of the theorem, and this erroneous idea is probably why many students find the theorem obvious.
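To see the counting concretely, here is a small Python sketch (the function names are my own). With a constant half-width δ, centers spaced a little less than 2δ apart cover [a, b], and roughly (b - a)/(2δ) of them suffice.

```python
# Cover [a, b] by intervals (c - delta, c + delta) of constant half-width delta.
def constant_cover(a, b, delta):
    centers = []
    c = a
    while c < b:
        centers.append(c)
        c += 1.98 * delta  # a little less than 2*delta, so neighbors overlap
    centers.append(b)      # make sure the right end-point is covered
    return centers

def covers(a, b, centers, delta, samples=10001):
    # Spot-check: every sampled point of [a, b] lies inside some interval.
    return all(any(abs(a + (b - a) * i / (samples - 1) - c) < delta
                   for c in centers)
               for i in range(samples))

centers = constant_cover(0.0, 1.0, 0.1)
print(len(centers), covers(0.0, 1.0, centers, 0.1))
```

With δ fixed in advance the count is immediate; the force of the theorem is that a finite count still exists when δ(c) varies from point to point and is handed to us by the problem.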

A function f(x) is continuous at x = c if and only if for any ε > 0 there exists a δ > 0 such that |f(x) - f(c)| < ε whenever |x - c| < δ. This gives us a δ(c) to which our theorem applies: once we choose ε, δ(c) is determined at each point by the function. Now the theorem says that in any such case the closed interval a ≤ x ≤ b can be covered by a finite number of finite intervals I(c_{i}). If we choose δ_{0} to be the smallest of the finite number of values δ(c_{i}) at the points c_{i}, then we have |f(x) - f(c)| < ε whenever |x - c| < δ_{0}, *independently* of c. (Strictly, one applies the theorem to the intervals of half-length δ(c)/2 and obtains |f(x) - f(c)| < 2ε; since ε is arbitrary, this makes no difference.) This is called *uniform continuity*, and we have shown that a function continuous in a closed interval is uniformly continuous in that interval.
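As a numerical sketch of this argument (the function, the modulus, and all names below are my own choices, not Widder's): take f(x) = x² on [0, 1] with δ(c) = min(1, ε/(1 + 2c)), which guarantees |f(x) - f(c)| < ε when |x - c| < δ(c). Stepping along the interval produces a finite cover, and the smallest δ(c_i) then works uniformly.

```python
EPS = 0.01

def f(x):
    return x * x

def delta(c):
    # |x^2 - c^2| = |x - c| * |x + c| < delta * (1 + 2c) when delta <= 1,
    # so this choice guarantees |f(x) - f(c)| < EPS for |x - c| < delta(c).
    return min(1.0, EPS / (1.0 + 2.0 * c))

# Build a finite cover of [0, 1] by stepping from each center to the edge
# of its interval I(c); the Heine-Borel theorem says this must terminate.
centers = []
c = 0.0
while c < 1.0:
    centers.append(c)
    c += delta(c)
centers.append(1.0)

delta0 = min(delta(c) for c in centers)  # one delta for the whole interval

# Spot-check uniformity: the single delta0 works at every sampled point.
ok = all(abs(f(x) - f(min(1.0, x + 0.9 * delta0))) < EPS
         for x in (i / 1000 for i in range(1001)))
print(len(centers), delta0, ok)
```

The number of centers is finite but depends on ε; the single δ_0, by contrast, no longer depends on where in the interval we stand.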

Now we can give a proof of the Heine-Borel theorem. Whittaker and Watson give the clearest proof. They prove a theorem called the modified Heine-Borel theorem, modified because of the special way the interval is divided, but it is completely equivalent to the theorem stated by Widder. We shall only sketch the proof here; for a full account, refer to Whittaker and Watson. We divide the interval arbitrarily into subintervals. If, in any one of these subintervals, we can find a point x = c such that the corresponding I(c): c - δ(c) < x < c + δ(c) covers the subinterval, then that subinterval is said to be *suitable*; that is, it satisfies the conditions of the theorem. If all the subintervals are suitable, then the interval has been covered by a finite number of the intervals I(c), and the theorem follows.

If a subinterval is not suitable, divide it in half and consider the two shorter subintervals. If this process terminates, then again we have a finite number of suitable subintervals. If it does not terminate, we obtain shorter and shorter subintervals, each contained in the preceding, none of which is suitable. These nested subintervals close down on some point Q, which lies in every one of them. As soon as the length of a subinterval becomes less than δ(Q), every point of it lies within δ(Q) of Q, so I(Q) covers it and the subinterval is suitable after all. This contradicts the assumption that the halving never terminates, so the process must terminate. Therefore, the theorem is proved.
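The halving argument is itself an algorithm, and running it shows where the finiteness comes from. Here is a sketch (my own framing, not Whittaker and Watson's text): a subinterval is certainly suitable when the interval I(m) about its midpoint m covers it, which for the midpoint just means half its length is less than δ(m).

```python
def finite_cover(p, q, delta, depth=0):
    """Bisect [p, q] until every piece is covered by I(midpoint)."""
    if depth > 60:  # guard; cannot trigger if delta is continuous and positive
        raise RuntimeError("no suitable subdivision found")
    m = 0.5 * (p + q)
    if (q - p) / 2 < delta(m):
        return [(p, q)]  # suitable: (m - delta(m), m + delta(m)) covers [p, q]
    return (finite_cover(p, m, delta, depth + 1) +
            finite_cover(m, q, delta, depth + 1))

# Example: a valid modulus of f(x) = 1/x for eps = 0.1 on [0.1, 1],
# namely delta(c) = eps*c^2/(1 + eps*c).
pieces = finite_cover(0.1, 1.0, lambda c: 0.1 * c * c / (1 + 0.1 * c))
print(len(pieces))
```

If δ were allowed to reach zero at some point Q of the interval, the recursion would never bottom out near Q; the positivity of δ(c) on a closed interval is exactly what the depth guard reflects.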

It is always good to consider concrete examples in these investigations. For example, consider f(x) = 1/x. This function is continuous in any interval that does not contain the origin x = 0; indeed, it is not even defined at x = 0 by this expression. As x → 0 from positive values, f(x) → ∞; as x → 0 from negative values, f(x) → -∞. No definition of f(0) can correct this behavior and make f(x) continuous there. We should remember that writing f(x) → ∞ does not mean that f(x) takes some value called ∞; it is only shorthand for the statement that f(x) is unbounded. In the half-open interval 0 < x ≤ b, 1/x is continuous. It is not uniformly continuous there, because δ(c) → 0 as we approach the origin: for a given ε, no single positive δ serves over the whole interval.
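This shrinking of δ can be exhibited directly. One valid modulus for 1/x (my own algebra) is δ(c) = εc²/(1 + εc): if |x - c| < δ(c), then x > c/(1 + εc), so |1/x - 1/c| = |x - c|/(xc) < ε. The sketch below checks the guarantee near the edges of each interval and shows δ(c) collapsing as c → 0.

```python
eps = 0.1

def delta(c):
    # delta(c) -> 0 as c -> 0: no single delta serves all of 0 < x <= 1
    return eps * c * c / (1 + eps * c)

for c in (1.0, 0.1, 0.01, 0.001):
    d = delta(c)
    # worst deviation of 1/x over (c - d, c + d), probed near the edges
    worst = max(abs(1 / x - 1 / c) for x in (c - 0.99 * d, c + 0.99 * d))
    print(c, d, worst < eps)
```

Each δ(c) does its local job, but no positive lower bound for δ(c) survives as c → 0, which is precisely the failure of uniform continuity on 0 < x ≤ b.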

On the other hand, if a ≤ x ≤ b, with b > a > 0, it is clear that 1/x is uniformly continuous in any such interval. None of this says that a function cannot be uniformly continuous in an open interval. Indeed, 1/x is uniformly continuous in a < x < b, with b > a > 0. Uniform continuity does not necessarily follow from continuity in an open interval, however, as we have seen. If we consider f(x) = 1/x^{2}, the same conclusions can be drawn. For this function, f(x) approaches +∞ as x approaches 0 from either direction, and we might be tempted to write f(0) = +∞. This certainly does not make 1/x^{2} continuous at x = 0, because ∞ is not an actual value. The function f(x) = x/x is equal to 1 when x ≠ 0, but is undefined at x = 0, where it becomes 0/0. If we define f(0) = 1, then f(x) is continuous at x = 0; indeed, then we might as well say f(x) = 1. A constant is uniformly continuous over any interval.

Suppose f(x) is continuous in a ≤ x ≤ b. By the Heine-Borel theorem f(x) is then bounded, for the interval can be covered by a finite number of intervals in each of which f(x) varies by less than ε. Let α be the least upper bound of f(x) in the interval: α ≥ f(x) for all x, and for any δ > 0, however small, there are values of f(x) > α - δ. Suppose f(x) never actually equals α. Then α - f(x) is continuous and positive in the interval, so its reciprocal 1/(α - f(x)) is also continuous there, and hence bounded, by what was just said. But for any δ we can find x such that 1/(α - f(x)) > 1/δ, so this reciprocal is unbounded, which is a contradiction. This conclusion requires that the interval be closed. Therefore, at some point x we must have f(x) = α, and a function continuous in a closed interval attains its maximum value (least upper bound) in that interval. By considering -f(x), the same holds for the lower bound or minimum value.
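To see why closedness matters, consider this illustrative sketch (the example is mine): on the half-open interval 0 < x ≤ 1, f(x) = 1 - x has least upper bound α = 1, which is never attained, and the reciprocal 1/(α - f(x)) = 1/x is continuous there yet unbounded. On a closed interval that escape is impossible, which is exactly what forces f to attain α.

```python
alpha = 1.0  # least upper bound of f on 0 < x <= 1, never attained

def f(x):
    return 1.0 - x

xs = [10.0 ** (-k) for k in range(1, 7)]     # points creeping toward 0
recips = [1.0 / (alpha - f(x)) for x in xs]  # equals 1/x: grows without bound
print(max(f(x) for x in xs), recips[-1])
```

The values of f creep up toward α = 1 without reaching it, while the reciprocal blows up; on a closed interval the boundedness of a continuous function rules this out.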

Let M and m be the maximum and minimum values of f(x) in the interval. From what has just been said, there are points x' and x" in the interval such that f(x') = M and f(x") = m. By the Heine-Borel theorem, for any ε we can cover the interval a ≤ x ≤ b with a finite number of intervals in each of which f(x) varies by less than ε. If μ is any value between m and M, then in passing from x" to x' through these intervals, f(x) changes from m to M in steps smaller than ε, so some interval contains a point with |f(x) - μ| < ε. Therefore, the lower bound of |f(x) - μ| is zero, and since |f(x) - μ| is continuous in the closed interval it must attain this bound, so that f(x) = μ at some point. Therefore, a function continuous in a closed interval assumes all values between its maximum and minimum values in the interval.
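The intermediate-value conclusion can be made constructive by repeated halving: the standard bisection method, sketched below (a cousin of the covering argument, not the argument itself). Keep whichever half of the interval still has f on opposite sides of μ.

```python
def bisect_for_value(f, a, b, mu, tol=1e-12):
    """Find x in [a, b] with f(x) = mu, given mu between f(a) and f(b)."""
    while b - a > tol:
        m = 0.5 * (a + b)
        # keep the half on which f still straddles mu
        if (f(a) - mu) * (f(m) - mu) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

# f(t) = t^2 is continuous on [0, 2] with f(0) = 0 <= 2 <= 4 = f(2),
# so it must take the value 2 somewhere, namely at t = sqrt(2).
x = bisect_for_value(lambda t: t * t, 0.0, 2.0, 2.0)
print(x)
```

Continuity is what guarantees that the straddling half always exists; for a discontinuous f the loop can close down on a jump and never find the value μ.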

These properties of a continuous function are intuitively obvious for curves that one can draw with a pencil. It is important for more advanced deductions that they can be proved from the definition of continuity. We have seen that the Heine-Borel theorem is a useful tool in this undertaking.

E. T. Whittaker and G. N. Watson, *A Course of Modern Analysis*, 4th ed. (Cambridge: Cambridge University Press, 1958). p. 53.

D. V. Widder, *Advanced Calculus*, 2nd ed. (New York: Dover, 1989). p. 170f.

R. Courant, *Differential and Integral Calculus*, Vol. II (London: Blackie and Son, 1936). p. 99.


Composed by J. B. Calvert

Created 25 October 2004

Last revised