Boltzmann's Factor


Contents

  1. Introduction
  2. Entropy
  3. Partition Function
  4. The Ideal Gas
  5. Appendix: Some Results Used in the Text
  6. References

Introduction

Boltzmann's factor is e^(-E/kT), which expresses the "probability" of a state of energy E relative to the probability of a state of zero energy. This factor can be used to introduce temperature into a wide variety of physical problems, and is often taken as a starting point. In this article, the origin of this factor, which is surprisingly simple, is discussed. To do this, it is necessary to understand entropy on a fundamental basis, and this too is fairly easy to do. Then the calculation of thermodynamic functions through the use of the partition function Z is explained, and finally this is applied to the "ideal gas" of wide applicability. The reader will follow the argument better if prepared with a basic knowledge of classical thermodynamics, which the statistical thermodynamics presented here greatly extends, especially in understanding.

Classical thermodynamics is a deductive theory of remarkable power and accuracy, which deals with the thermal behavior of macroscopic bodies. It is based on two "laws" or postulates, the first of which is that heat is a form of energy and energy is conserved, and the second is that heat cannot be abstracted from a body and converted into mechanical energy with no other effect. There are many equivalent ways to state these postulates, and it is entertaining to show that they are all equivalent. Temperature is considered a fundamental notion, abstracted from sensation, that indicates the direction of heat flow and the state of thermal equilibrium. It is often included as a "zeroth" postulate.

In working out the consequences of these postulates, it is very useful to define a function of the state of a system called the entropy, with dimensions of energy divided by temperature. This temperature is a very special one, the absolute temperature, for which classical thermodynamics can give a definition, which is a very great accomplishment. Entropy is of great use, and in fact is essential, in engineering and chemical problems. However, what entropy "is," and what are the limits of its application, are not explained at all by the classical theory. English-speaking physicists, in particular, have had a very difficult time understanding it, though they have been able to apply it with great power and ingenuity. Engineers have been very foggy on entropy, and to all appearances remain so, while engineering students have learned to just follow the rules and think of more pleasant things.

The application of classical mechanics to the explanation of thermodynamics resulted in a complex and disgusting mess that explains nothing, though great efforts were expended along this line. This is not surprising, since thermodynamics depends sensitively on mechanics at an atomic scale, for which classical mechanics fails spectacularly. Nevertheless, Ludwig Boltzmann (1844-1906) and Josiah Willard Gibbs (1839-1903) came very close to a reasonable theory, which was completely clarified with the appearance of quantum mechanics. Gibbs seems to be little remembered these days, but he was the most important physicist that the United States has ever produced, and one of world quality. Since the introduction of quantum mechanics, it has been possible to understand entropy clearly and completely on a very simple basis, so that all the clouds of mystery dissolve into nothingness. This article will try to give a short, concise presentation that can be comprehended from first principles with very little algebra. The works by Charles Kittel and Peter Atkins (see References) should be consulted by anyone interested in these matters. These two investigators understand the whole subject with utter and thorough clarity, and can explain it well.

There is a large body of "scholastic" argument on the foundations of classical thermodynamics and the meaning of entropy that contains nothing of value. Entropy and the Second Law have been applied outside of thermodynamics to such things as economics and sociology, but these applications reflect only the ignorance of those who propose them, and have no basis. Thermodynamics is very simple, resting only on the existence of quantum-mechanical stationary states that can be counted.

Entropy

Thermodynamics applies to systems with a very large number of possible states available to them, and in which transitions between the available states are continual and rapid. Not simply a large number of states is required, but a very large number, as argued by Kittel. Very large numbers have properties different from those of merely large numbers, and even their logarithms are large numbers. Many such systems, and particularly those used as examples, are composed of numerous subsystems, which may be called particles, though we do not mean the structureless dots usually called by this name. Our particles here may be molecules, for example, with internal degrees of freedom in addition to states of motion. Even simple mass points may have a large number of states available in a macroscopic volume.

Let's take a hydrogen molecule as such a particle, confined in a cube a millimetre on a side. Suppose it has available all the states of kinetic energy less than, say, 4kT, a reasonable thermal excitation energy, where k = 1.38 x 10^-23 J/K is Boltzmann's constant. At room temperature, this is about 1.62 x 10^-20 J. The quantum number n for a state with energy E is given by n² = 8mL²E/h². With a mass of 3.32 x 10^-27 kg, we find n = 3.13 x 10^7. The total number of states with smaller energy is 1/8 the volume of a sphere of radius n, or about 1.6 x 10^22 states. Our particle will have at least this many available to it as it jostles with other hydrogen molecules in the cubic millimetre, since there will be about 10^16 of these. We need have no concern about the rapidity at which transitions between states are made in this system.
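As an arithmetic check on these figures, here is a minimal Python sketch of the state count for one hydrogen molecule in the cubic millimetre, using exactly the formulas and values just quoted.

    import math

    k = 1.380658e-23      # Boltzmann's constant, J/K
    h = 6.626e-34         # Planck's constant, J s
    m = 3.32e-27          # mass of an H2 molecule, kg
    L = 1.0e-3            # side of the cube, m
    T = 293.0             # room temperature, K

    E = 4 * k * T                          # thermal excitation energy, about 1.62e-20 J
    n = math.sqrt(8 * m * L**2 * E) / h    # radius in n-space: n^2 = 8mL^2E/h^2
    w = (math.pi / 6) * n**3               # 1/8 of a sphere of radius n = states below E

    print(f"E = {E:.3g} J,  n = {n:.3g},  states w = {w:.3g}")
    # E = 1.62e-20 J,  n = 3.13e+07,  states w = 1.6e+22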

Our single particle has more than 10^22 states available to it. This is already a pretty large number. There are only about 10^7 seconds in a year, and the universe itself may be only about 10^18 seconds old. Now we can make another leap and consider the number of states available to the system of hydrogen gas in a cubic millimetre at STP. This will be the number of states available to one particle raised to the power of the total number of particles, or 10 to the power 22 x 10^16. This is, indeed, a very large number. Stop and consider its breathtaking immensity for a minute. Write it out: 10 to the power of 22 followed by sixteen zeros. It is so large that multiplying it by, say, 10,000 hardly changes it. Let us denote the number of states available to a system as W, and to a particle as w, so that W = w^N, and is a number like the one above. This counting of states is by no means arbitrary or indefinite, but very clearly defined. This is the thing that classical mechanics could not do, and it is fundamental. Matter has a very definite structure, determined by the sizes and properties of atoms, that is beyond sensory perception and the contemplation of fools.

It must be mentioned here that in simply multiplying the number of states of different, but identical, particles we are making an error. In fact, we are counting many states more than once, and have to divide by N!, where N is the number of particles. We shall take this up later, but at the moment only notice that it will make our estimate of the entropy somewhat too large. Also, hydrogen has rotational energy in addition to translational, and these degrees of freedom contribute to the entropy as well. In this article, we are concerned mainly with the translational degrees of freedom.

The entropy of a system is defined, following Boltzmann, as S = k ln W. This equation is carved on his monument in the Zentralfriedhof in Vienna, and is as remarkable as E = mc². The "k" is simply a constant that gives S its dimensions; for temperature in K it is more precisely 1.380658 x 10^-23 J/K. Kittel represents the entropy as σ = ln W to show that it is essentially a dimensionless counting. This means that W is an exponential function of S, W = e^(S/k). For our cubic millimetre of hydrogen, S = k (2.303)(22 x 10^16) = 7.0 x 10^-6 J/K. Since a mole of hydrogen occupies about 22.4 litre at STP, this gives S = 157 J/mol-K, a not unreasonable result. Of course, this is not an accurate evaluation of the entropy of hydrogen at STP, but shows the principles, and the sizes of the numbers, involved. Note that if we had made an error in estimating W by a factor of 10,000, the answer for the cubic millimetre would only be changed by k ln 10,000 = 1.27 x 10^-22 J/K, an unnoticeable difference. Entropy is just a logarithmic measure of the number of states available to a system, nothing more mysterious than that.
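Continuing the sketch, here is the entropy estimate S = k ln W with W = w^N, using w ≈ 1.6 x 10^22 states per molecule and N ≈ 10^16 molecules as above; the per-mole figure assumes 22.4 litres per mole at STP.

    import math

    k = 1.380658e-23        # J/K
    w = 1.6e22              # states available to one molecule (from the estimate above)
    N = 1e16                # molecules in the cubic millimetre (rough STP figure)

    # ln W = N ln w, so S = k N ln w -- W itself is far too large to form directly.
    S_mm3 = k * N * math.log(w)
    S_mole = S_mm3 * 22.4e-3 / 1.0e-9      # scale from 1 mm^3 to 22.4 litres

    print(f"S(1 mm^3) = {S_mm3:.2g} J/K,  S(1 mole) = {S_mole:.0f} J/mol-K")
    # S(1 mm^3) = 7.1e-06 J/K,  S(1 mole) = 158 J/mol-K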

Suppose now that we have two systems, 1 and 2, with available states W1 and W2. The number of states available to the system formed by putting 1 and 2 into thermal contact will then be W = W1W2. Therefore, the entropy of 1 + 2 will be S = k ln W = k ln W1 + k ln W2 = S1 + S2. The entropy, then, is an additive property, like mass. This certainly does not mean that entropy is any kind of substance, as we see from the definition. It cannot be thought of as a fluid (neither can energy, but people persist in thinking this way).

When we put 1 and 2 into contact, they may not be in equilibrium, and energy may pass from one to the other, which we call a flow of heat. Let's suppose that system 1 gives up energy dU to system 2. The loss of dU causes a restriction in the number of available states (some become energetically unreachable), while the gain of dU causes a gain in the number of available states. Since W is a function of U, so is S, and we can express the net change in entropy of 1 + 2 in the form dS = (∂S1/∂U)(-dU) + (∂S2/∂U)(dU). A more convenient way to write the partial derivatives is to set ∂S/∂U = 1/T. T will have the dimensions of a temperature because of the definition of k, and in fact is just the absolute temperature of the system. Now we have dS = (1/T2 - 1/T1)dU. Energy will pass back and forth between the two systems until they have thoroughly explored the available states, and they will come into equilibrium when the two systems jointly have the maximum number of states available to them. Then dS = 0 (since any energy interchange away from the optimum value will lead to fewer available states), and this can be guaranteed for any dU only when 1/T2 = 1/T1, or T1 = T2. More than this, if T2 > T1 initially, dS will be positive for a positive dU, or energy flow from 1 to 2. We have explained not only thermal equilibrium, but the direction of approach to equilibrium by considering numbers of available states. If we know the entropy of a system, we know the temperature of the system. Temperature exists whenever the derivative is well-defined (even for a system of two states only), not just for large systems that can be used as thermal reservoirs, as is important in this example.

We can now briefly indicate the role of time in entropy. A system can be prepared in a non-equilibrium state in which only a limited number of states are available to it. As time passes, more and more states become available, until finally all the states possible given the restrictions on the system (total volume, etc.) are available to all the particles. In terms of entropy, this means that the entropy will increase steadily to a maximum value that is the equilibrium entropy of the system. Think of a system that is created with temperature differences between its parts that only slowly become equalized. Every step towards equalization of temperature gives the particles more and more freedom to occupy different states. The initial temperature differences could be used to drive a heat engine to extract useful work while this takes place. Alternatively, we can just let heat conduction take its course, and lose the possibility of extracting useful work. The effect on the system is the same (provided we replace the energy extracted) in either case, and the entropy increases to its final maximum value.

Suppose now that the system consists of a large part at a temperature T, and a small part, perhaps one particle, on which we focus our attention. We are interested in the relative probabilities of the small part's being in some state 1 and in another single state 2. To form these states, we have to extract energies E1 and E2, respectively, from the large part, which decreases the number of states available to the large part. Let W1 and W2 be the numbers of states available to the large part in the two cases. Then it is clear that the probability ratio we want is just W1/W2. Now W1 = e^(S1/k), and S1 = S' - E1/T, keeping only the first term of the expansion of S in powers of the energy removed (using ∂S/∂U = 1/T with dU = -E1), where S' is the entropy of the large part before any energy is taken away. Therefore, W1 = e^(S'/k)exp(-E1/kT), and we have a similar expression for W2. Therefore, our desired probability is p1/p2 = exp[-(E1 - E2)/kT], which is the familiar Boltzmann factor.

The reason for the exponential in the Boltzmann factor should be clear. The ratio of probabilities that we want is the ratio of numbers of states, just as if we were working with dice in elementary probability, and the numbers of states are exponential functions of the entropy. Also, our ratio is for individual states; if they come in bunches (degeneracy), then the factors must be multiplied by the numbers of states in each level. The Boltzmann factor is perhaps the most useful result of statistical mechanics, applicable to many problems, and it can even be carried over to problems expressed in classical mechanics, though it is fundamentally a counting of states.
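As a quick illustration of how strongly the factor suppresses higher-energy states, here is a minimal sketch comparing relative populations across an assumed energy gap of 0.1 eV at two temperatures (the gap and the temperatures are arbitrary choices, not from the text).

    import math

    k = 1.380658e-23                 # Boltzmann's constant, J/K
    dE = 0.1 * 1.602e-19             # assumed energy gap E2 - E1 of 0.1 eV, in joules

    for T in (300.0, 1000.0):
        ratio = math.exp(-dE / (k * T))      # p2/p1 = exp[-(E2 - E1)/kT]
        print(f"T = {T:6.0f} K:  p2/p1 = {ratio:.4f}")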

Kittel carries through this argument allowing the number of particles N in the system to be an independent variable as well. The entropy is then a function of N, and its partial derivative with respect to N is -μ/kT, where μ is another very important quantity, the chemical potential. In classical thermodynamics, this quantity is even more mysterious than the entropy, but quantum mechanics gives it a clear meaning, by means of the argument presented here. It is remarkable that the only quantum mechanical result we use directly is simply the existence of stationary states that can be counted. The thermal behavior of matter is, indeed, conclusive evidence of the existence of atoms. The Second Law is little more than a statement that more probable things are more probable than less probable things.

Partition Function

The probability that state i, of energy Ei, is occupied is Pi = C exp(-Ei/kT), which is the meaning of the Boltzmann factor. The constant C can be found by summing over all the Pi, which should give unity--the probability that some state is occupied. Thus, 1 = C Σexp(-Ei/kT), or C = 1/Z, where Z = Σexp(-Ei/kT) is called the partition function. Each state of the system is represented in Z by its Boltzmann factor.
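To see the partition function at work numerically, here is a minimal sketch with an arbitrary, illustrative set of level energies (they are not taken from the text): it forms Z as the sum of Boltzmann factors and then the normalized probabilities Pi.

    import math

    k = 1.380658e-23                 # Boltzmann's constant, J/K
    T = 300.0                        # temperature, K (illustrative)
    eV = 1.602e-19                   # joules per electron-volt

    # Illustrative level scheme, not from the text: 0, 0.01 eV, 0.05 eV.
    E = [0.0, 0.01 * eV, 0.05 * eV]

    Z = sum(math.exp(-Ei / (k * T)) for Ei in E)       # partition function
    P = [math.exp(-Ei / (k * T)) / Z for Ei in E]      # normalized probabilities

    print(f"Z = {Z:.3f}")
    print("P =", [round(p, 3) for p in P], " sum =", round(sum(P), 3))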

The internal energy U of the system can be expressed as U = ΣEiPi, the ensemble average of Ei. The ensemble is an (imaginary) collection of systems, with one in each of the possible states, associated with its corresponding probability. In this case, we are dealing with the Gibbs canonical ensemble. A change in U can be expressed as dU = ΣPidEi + ΣEidPi. The second term is the change in U due to a change in the probabilities, as, for example, caused by a change in temperature. This implies some transitions between particle states if the system is considered as composed of particles. We have already seen this when systems are placed in thermal contact, and heat flows between them. From the definition of temperature, dU = TdS in this case, so we identify this term with TdS.

The first term results from a change in the energy levels without a change in probabilities. For example, if the volume V occupied by the system changes, so do the translational energy levels. In fact, if the quantum number n remains constant, then the energy of a state is proportional to L^-2, if it is confined in a cube of side L. If we consider the state as exerting an outward pressure pi on each face of its cube, then the work done by the state if L is increased by dL is 3piL²dL = pidV, so dEi = -pidV. The term ΣPidEi is the ensemble average of the work done, or -pdV. Now we have dU = -pdV + TdS, which is called the thermodynamic identity. Changes in U can take place due to changes in occupation of states, or by changes in the energies of the states, and these changes are called heat and work, respectively. We assume that the changes associated with pdV take place with no change of entropy--that is, they are "reversible" in the usual terminology of thermodynamics. Work expressed as pdV is only one kind of work that can be done by a system; the energy of a state may depend on other parameters besides the volume, such as the magnetic field, which can be treated analogously.

We have just seen that (∂U/∂V)S = -p. This is merely another way of saying that the change in energy dU caused by a change in volume dV at constant entropy S is given by dU = -pdV (S = constant), which we have just discussed. A change in S can be expressed in terms of partial derivatives by dS = (∂S/∂U)V dU + (∂S/∂V)U dV, where S is considered as a function of U and V, as we have assumed above. Now set dS = 0, divide by dV, and replace dU/dV by (∂U/∂V)S = -p. Since (∂S/∂U)V = 1/T, we then have (∂S/∂V)U = p/T. One might be tempted to use this expression as a definition of p, as we defined T by a partial derivative of S, but that would not be legal, since p is already defined in terms of the forces exerted by the system. What we have to do is show that this externally defined p also appears in the partial derivative expression, and we have just done that. Just as in the case of temperature, we can now show that p1 = p2 in equilibrium, and that if p1 > p2, V1 increases at the expense of V2, so that system 1 does work on system 2. The systems must also be in thermal equilibrium for this reasoning to be valid. If they are not, we must have recourse to mechanical arguments, since dS will be zero in any case.

Since Pi = exp(-Ei/kT)/Z, ln Pi = -Ei/kT - ln Z, or Ei = -kT ln Pi - kT ln Z. Since ΣPi = 1, ΣdPi = 0, so multiplying the previous equation by dPi and summing over i, we find ΣEidPi = -kT ΣdPi ln Pi. The sum on the left is, as we have seen, just TdS, and d[ΣPi ln Pi] = ΣdPi ln Pi since ΣdPi = 0, so dS = d[-k ΣPi ln Pi], and thus S = -k ΣPi ln Pi. The constant of integration is zero, as we can see by considering a system occupying a single state. This is Boltzmann's expression for the entropy in terms of the probabilities.

Now we go back to the first equation in the preceding paragraph, where -kT ln Z = Ei + kT ln Pi. Multiplying by Pi and summing, which is taking the ensemble average, we find -kT ln Z = U - TS, using the expression for S that we have just derived. The function U - TS is the free energy F. It has the property that dF = dU - TdS - SdT = -pdV - SdT, so that dF = 0 in a process where dV = dT = 0, or an isothermal process at constant volume, such as a chemical reaction taking place in a fixed volume at a fixed temperature. This means that F is an extremum (usually a minimum) at equilibrium, and can be used as a criterion of equilibrium in an isothermal process as entropy is used in a process at constant energy. The fact that F = -kT ln Z makes the partition function very useful for finding the thermodynamic functions of a system. From F, we can find p = -(∂F/∂V)T, and S = -(∂F/∂T)V by differentiation, and then U = F + TS, and so on. These properties come out as functions of T and V, which is much more convenient than the expressions in terms of U and V that we have found up to this point.
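As a numerical check of F = -kT ln Z = U - TS, here is a minimal sketch using the same illustrative levels as before; U is the ensemble average of the Ei and S is computed from the probabilities.

    import math

    k = 1.380658e-23
    T = 300.0
    eV = 1.602e-19
    E = [0.0, 0.01 * eV, 0.05 * eV]          # same illustrative levels as before

    boltz = [math.exp(-Ei / (k * T)) for Ei in E]
    Z = sum(boltz)
    P = [b / Z for b in boltz]

    U = sum(p * Ei for p, Ei in zip(P, E))   # ensemble average energy
    S = -k * sum(p * math.log(p) for p in P) # S = -k sum of P ln P
    F = -k * T * math.log(Z)                 # free energy from the partition function

    print(f"F = {F:.4e} J,  U - TS = {U - T * S:.4e} J")   # the two agree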

As an example, consider a linear harmonic oscillator whose states have energies (n + 1/2)hf, where n is the quantum number, f the frequency, and h Planck's constant. We have included the zero-point energy hf/2. Then, Z = Σ exp[-(n + 1/2)(hf/kT)] = Σ exp[-(n + 1/2)x], where x = hf/kT. This series can be summed to Z = e^(-x/2)/(1 - e^(-x)) = e^(x/2)/(e^x - 1). Therefore, F = -kT[(x/2) - ln(e^x - 1)] = -hf/2 + kT ln(e^x - 1). The entropy is S = -k ln(e^x - 1) + (hf/T)e^x/(e^x - 1). Finally, U = F + TS = hf/(1 - e^(-x)) - hf/2. For kT >> hf, U → kT, while for kT << hf, U → hf/2, which are the correct limits. The specific heat is C = dU/dT = kx²e^(-x)/(1 - e^(-x))², the Einstein expression. For kT >> hf (small x), C → k. The Einstein temperature is TE = hf/k, a measure of the size of the energy quantum hf. When T = TE, then C = 0.92k, only 8% below its high-temperature limit. When T = TE/10, C = 0.0045k, and the oscillator is "frozen out." This has an important application to the vibration of molecules. The Einstein temperature of O2 is 2260 K, and of H2 6300 K. The vibration of hydrogen is frozen out at room temperature, while that of oxygen is only a little excited.
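The quoted values of the Einstein specific heat are easy to verify. A minimal sketch, using the Einstein temperatures just given (with x = TE/T):

    import math

    def einstein_C_over_k(x):
        """Einstein specific heat C/k for one oscillator, with x = hf/kT = TE/T."""
        return x**2 * math.exp(x) / (math.exp(x) - 1.0)**2

    print(einstein_C_over_k(1.0))        # T = TE       -> about 0.92
    print(einstein_C_over_k(10.0))       # T = TE/10    -> about 0.0045
    print(einstein_C_over_k(2260/293))   # O2 at 293 K  -> about 0.027, a little excited
    print(einstein_C_over_k(6300/293))   # H2 at 293 K  -> about 2e-7, frozen out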

If the zero-point energy is omitted, then Z = Σ e^(-nx) = 1/(1 - e^(-x)). We find F = kT ln(1 - e^(-x)) and S = -k ln(1 - e^(-x)) + (hf/T)/(e^x - 1). Then, U = F + TS = hf/(e^x - 1). This is appropriate for photons, which are not conserved and so cannot have a zero-point energy. We recognize here the Bose-Einstein distribution for zero chemical potential, which appears in the theory of black-body radiation. The spectrum of black-body radiation (the energy density) is this quantity times the number of modes (oscillators) in the frequency interval df.

A two-state system with a ground state of energy 0 and an excited state of energy E can also be easily handled, since the partition function is the simple expression Z = 1 + e^(-E/kT). Then, U = Ee^(-x)/(1 + e^(-x)) = E/(e^x + 1), where x = E/kT. The specific heat is C = kx²e^x/(e^x + 1)². This is the same as for the harmonic oscillator, except that a +1 replaces the -1 in the denominator, which makes a considerable difference. For x → ∞ (low temperatures), C → 0, while for x → 0 (high temperatures), C also approaches zero. The value of x for the maximum C can be found by setting the derivative of C with respect to x equal to zero. The result is the solution of 2/x = tanh(x/2), which is about x = 2.40. The peak of the C versus T curve is therefore at T = E/2.40k ≈ 0.42E/k, from which E can be estimated experimentally. This peak is called a Schottky anomaly.
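A minimal numerical check of the Schottky peak: solving 2/x = tanh(x/2) by bisection locates the maximum of C and its height.

    import math

    def C_over_k(x):
        """Two-level specific heat C/k with x = E/kT."""
        return x**2 * math.exp(x) / (math.exp(x) + 1.0)**2

    def f(x):                       # a root of f gives the maximum of C
        return 2.0 / x - math.tanh(x / 2.0)

    lo, hi = 1.0, 4.0               # f changes sign on this interval
    for _ in range(60):             # simple bisection
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid

    x_peak = 0.5 * (lo + hi)
    print(f"x_peak = {x_peak:.3f}, T_peak = {1/x_peak:.3f} E/k, C_max = {C_over_k(x_peak):.3f} k")
    # x_peak ~ 2.399, T_peak ~ 0.417 E/k, C_max ~ 0.44 k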

The entropy can be found in the usual way from F = -kT ln(1 + e^(-x)). If we set E = 1 for simplicity, we find S = -k[U ln U + (1 - U)ln(1 - U)]. This is zero for U = 0 and U = 1 (note that lim(x→0) x ln x = 0), and symmetrical about a maximum at U = 1/2, for which S = k ln 2. The temperature is given by 1/T = dS/dU = k ln[(1 - U)/U]. We must replace k by k/E if the temperature is to come out in K. For U = 0, or the upper state unoccupied, we find T = 0, which is not unexpected. For U < 1/2, the temperature is positive, approaching +∞ for U = 1/2, where the two states are equally occupied on the average. For still greater U, the temperature becomes negative, approaching 0 from below as U → E, when the lower state is unoccupied. Negative temperatures are higher than positive temperatures, with -0 K the highest temperature of all. Whenever the temperature is negative, the upper state is more populated than the lower, and there is a population inversion. If we put systems with temperatures +T and -T in contact, the result at equilibrium will be T = ∞, not zero. This is quite reasonable, since in the combined system the upper and lower levels will be equally occupied, which corresponds to infinite temperature. Population inversions are created in lasers so that the system will amplify light by stimulated emission.

The Ideal Gas

An ideal gas is a system consisting of N identical particles in a volume V. The particles are assumed to move entirely independently. Of course, in any real case they collide very frequently, and these collisions are necessary to permit the system to occupy all of its available states. Nevertheless, the particles feel each other's presence very rarely, and are almost independent. We shall totally neglect the collisions, and trust that the gas can reach equilibrium quickly when disturbed. We will consider only the translational motion of the particles, and not any internal degrees of freedom they may have. This restriction is purely for simplicity.

Let's begin by considering one molecule in a volume V, thought of as a cube of side L, such that L³ = V. The states are labelled by three quantum numbers, and the energy E depends only on the sum of the squares of the quantum numbers, n². In fact, E = h²n²/8mL², where m is the mass of the particle. In n-space, where the quantum numbers are plotted along three orthogonal axes, there is one state per unit volume, and only points in the first octant correspond to different states. This means that the number of states in (n, n+dn) is D(n)dn = πn²dn/2. Now it is easy to find Z by replacing the sum with an integral. Setting x = (n/L)(h²/8mkT)^(1/2), we have Z = [L(8mkT/h²)^(1/2)]³ (π/2) ∫(0,∞) x²exp(-x²)dx. The value of the definite integral is π^(1/2)/4, so Z = V[2πmkT/h²]^(3/2). This is the partition function for a single particle.
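As a numerical illustration, here is a minimal sketch evaluating this single-particle Z for the same hydrogen molecule in a cubic millimetre used earlier; it comes out within an order of magnitude of the rough state count made in the Entropy section, as it should, since both count the thermally accessible states.

    import math

    k = 1.380658e-23     # J/K
    h = 6.626e-34        # J s
    m = 3.32e-27         # kg, H2 molecule
    T = 293.0            # K
    V = 1.0e-9           # m^3, one cubic millimetre

    Z1 = V * (2 * math.pi * m * k * T / h**2) ** 1.5   # Z = V (2 pi m k T / h^2)^(3/2)
    print(f"Z (one H2 molecule in 1 mm^3 at 293 K) = {Z1:.2e}")
    # about 2.7e21, the same order as the rough estimate of 1.6e22 states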

The energy can be found from Z by the formula U = kT²∂(ln Z)/∂T (this follows from U = F + TS = F - T∂F/∂T, with F = -kT ln Z). The only temperature-dependent part of ln Z is (3/2)ln T, so we find U = 3kT/2. This is an expression of the famous principle of equipartition, which states that to each degree of mechanical freedom an energy kT/2 corresponds in thermal equilibrium. In an ideal gas we have three degrees of translational freedom. A harmonic oscillator has two degrees of freedom, one for the kinetic energy and one for the potential energy, and we have already seen that its average energy is kT at a high enough temperature. The pressure is given by p = -∂F/∂V, or p = kT ∂(ln Z)/∂V. The only volume-dependent part of ln Z is simply ln V, so we find p = kT/V or pV = kT, the ideal gas law for a single particle.
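A minimal sketch, with the same assumed H2 parameters, that differentiates ln Z numerically (central differences with arbitrary small steps) to recover U = 3kT/2 and pV = kT:

    import math

    k, h, m = 1.380658e-23, 6.626e-34, 3.32e-27
    T, V = 293.0, 1.0e-9

    def lnZ(T, V):
        return math.log(V) + 1.5 * math.log(2 * math.pi * m * k * T / h**2)

    dT, dV = 1e-3, 1e-15                      # small steps for finite differences
    U = k * T**2 * (lnZ(T + dT, V) - lnZ(T - dT, V)) / (2 * dT)
    p = k * T * (lnZ(T, V + dV) - lnZ(T, V - dV)) / (2 * dV)

    print(f"U  = {U:.3e} J   (3kT/2 = {1.5 * k * T:.3e} J)")
    print(f"pV = {p * V:.3e} J   (kT    = {k * T:.3e} J)")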

If we have N particles instead of just one, it would not take much argument to convince one that U = 3NkT/2 and pV = NkT, instead. These are, indeed, correct results, and would be obtained if we simply raised the single-particle Z to the Nth power. It happens, however, that if we do this, the free energy and the entropy do not come out correctly! The problem is quantum-mechanical. In the microscopic world, two electrons, or two hydrogen atoms, are not just similar, they are identical. If we have a state containing any number of them, interchanging any pair gives back exactly the same state, not a different one. That is, the number of states calculated on the basis that the particles are all distinguishable (raising Z to the power N, for example) is too large by a factor N!. This is the number of ways we can permute the particles in the state and get back the same state. It is actually a little worse than this. The states of particles like electrons, which have half-integral spins, must be antisymmetric on the exchange of any two particles (i.e., the wave function must change sign), while states of particles of integral spin must be symmetric under exchange. We will not go into the complexities of state-counting that these considerations require, but will stick with the simple N! factor.

The partition function for N particles is then Z = (V^N/N!)[2πmkT/h²]^(3N/2). In handling the N!, the Stirling approximation ln N! = N ln N - N will be accurate enough. The internal energy U and the pressure p come out as above, but F and S = -∂F/∂T are different. Most important is the expression for the translational entropy, S = (3Nk/2)ln(2πmk/h²) + (5Nk/2) + Nk ln(V/N) + (3Nk/2)ln T, which differs from the naive result by -k(N ln N - N), the contribution of the N!. This is the famous Sackur-Tetrode (Tetrode has three syllables) equation. For the cubic millimetre of hydrogen we used as an example above, the first two terms, the constant terms, contribute +8.842 x 10^-6 J/K to the entropy; the third term gives -7.63 x 10^-6 J/K (part of it is volume-dependent, the rest constant). The final term, the temperature dependence, adds 1.180 x 10^-6 J/K, for a total of 2.392 x 10^-6 J/K. This is a much more accurate value than our earlier rough estimate, but it is of the same order of magnitude. The Sackur-Tetrode equation has been thoroughly confirmed by experiment. For a monatomic gas, such as helium or argon, it gives the total entropy.
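As an illustration of that last remark, here is a minimal sketch evaluating the Sackur-Tetrode entropy for one mole of helium at 298.15 K and 1 atm; the result lands close to the tabulated standard entropy of helium, about 126 J/mol-K.

    import math

    k  = 1.380658e-23        # J/K
    h  = 6.626e-34           # J s
    NA = 6.022e23            # Avogadro's number
    m  = 4.0026 * 1.6605e-27 # kg, helium atom
    T  = 298.15              # K
    p  = 101325.0            # Pa (1 atm)

    V_per_N = k * T / p                                  # volume per atom from pV = NkT
    S = NA * k * (math.log(V_per_N * (2 * math.pi * m * k * T / h**2) ** 1.5) + 2.5)

    print(f"S(He, 298.15 K, 1 atm) = {S:.1f} J/mol-K")   # about 126 J/mol-K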

We have seen that the usual ideal gas constant R = NAk = 8.31451 J/mol-K. Note that this is for a gram-mole, for which NA = 6.02 x 10^23. It is sometimes given for a kg-mole, a unit which is never used by the sane. The Boltzmann constant is just the gas constant per molecule, and is the same constant used to get entropy from probability in S = k ln W. Since 1 cal = 4.186 J, R = 1.986 cal/mol-K, quite close to the easily-remembered round value of 2 cal/mol-K. Measurements on ideal gases are one way to determine k experimentally. As a practical matter, most gases under normal conditions are very well approximated as ideal gases, consisting of independent, classical particles, and their thermodynamic properties are easily determined.

Linear molecules have an additional two rotational degrees of freedom, and nonlinear molecules have three. Rotational energies are small, so they are not frozen out at usual temperatures as vibration is. The principle of equipartition then applies, so the internal energy is increased by RT and 3RT/2 per mole, respectively, giving U = 5RT/2 and U = 3RT for the two cases. The specific heat at constant volume is dU/dT, so CV = 3R/2 for a monatomic gas, 5R/2 for a gas of linear molecules (such as a diatomic gas) and 3R for a gas of nonlinear molecules (such as a polyatomic gas). These predictions are well confirmed by experiment.

An adiabatic process is one that takes place without heat exchange. If it is reversible, then it will be isentropic. Such processes are usually fairly rapid ones, such as the compressions involved in the propagation of sound, or the convective rise of bodies of air in the atmosphere, and are of great practical importance. For a monatomic gas, S = (3R/2)ln T + R ln V + constant terms, in J/mol-K. Therefore, S = constant implies that T^(3R/2)V^R = constant, or VT^(3/2) = constant. Since pV = RT, this can be cast into the form V(pV/R)^(3/2) = constant, or p^(3/2)V^(5/2) = constant. Now CV = 3R/2 and Cp = CV + R = 5R/2 (this latter relation is proved in the Appendix; Cp is the specific heat at constant pressure, when the gas must be allowed to expand and do work). In view of this, an isentropic process for an ideal monatomic gas is characterized by p^(Cv/R)V^(Cp/R) = constant, or pV^(Cp/Cv) = constant. It is shown in the Appendix that this is a general relation. The ratio of specific heats Cp/Cv is usually denoted by γ, so finally we have the familiar pV^γ = constant. For a monatomic gas, γ = 5/3 = 1.667, for a gas of linear molecules γ = 7/5 = 1.400, and for a gas of nonlinear molecules, γ = 8/6 = 1.333. The ratio of specific heats can be determined quite accurately by measuring the speed of sound, and these predictions are fully confirmed.

Appendix: Some Results Used in the Text

Difference in the specific heats

We begin with the thermodynamic identity, dU = TdS - pdV, and consider one mole of substance. In a change of temperature dT at constant pressure, dU = CVdT for an ideal gas, while the heat absorbed TdS = CpdT. Hence, Cp = CV + p(∂V/∂T)p, where the partial derivative is the ratio of dV to dT at constant p. Since the equation of state of one mole of an ideal gas is pV = RT, p(∂V/∂T)p = R. Therefore, Cp = CV + R. Q.E.D.

In general, the internal energy can depend on the volume as well as on the temperature. In fact, (∂U/∂V)T = T(∂p/∂T)V - p, which is zero for an ideal gas. In the more general case, Cp - CV = -T(∂V/∂T)p²(∂p/∂V)T. The necessary derivatives can be found from the equation of state.

Relation between p and V in an isentropic process

The thermodynamic identity is again the starting point, dU = TdS - pdV. In an isentropic process, dS = 0, so 0 = CVdT + (RT/V)dV, where we have used the equation of state for one mole of an ideal gas, and have made the same assumption about the internal energy as above. Then CV(dT/T) + R(dV/V) = 0. This integrates easily to CV ln T + R ln V = constant, or T^(Cv)V^R = constant, or T^(Cv/R)V = constant. From the equation of state, T = pV/R, so p^(Cv/R)V^(Cv/R + 1) = constant. Raising to the power R/Cv, we find pV^γ = constant, where γ = (Cv + R)/Cv = Cp/Cv. This result is exact for an ideal gas only. Q.E.D.

Relation between adiabatic and isothermal bulk moduli

The bulk modulus is defined by k = -V(dp/dV), where an increase in pressure dp causes a decrease in volume -dV (this k is the elastic modulus, not Boltzmann's constant). If the temperature is held constant, then pV = RT = constant, and the isothermal bulk modulus is kT = p for an ideal gas. If the process is adiabatic, then pV^γ = constant. Differentiating with respect to p gives V^γ + γpV^(γ - 1)(dV/dp)s = 0. The adiabatic (isentropic) bulk modulus is then ks = γp. The speed of sound c is given by c² = ks/ρ. For an ideal gas, this becomes c² = γp/ρ = γRT/M. The adiabatic process is "stiffer" than the isothermal. Although we have proved it only for an ideal gas, the relation ks = γkT is true in general.
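A minimal sketch of the last formula, evaluating c = sqrt(γRT/M) for air treated as a gas of linear (diatomic) molecules at 293 K; the molar mass of air, about 0.029 kg/mol, is the only number assumed here rather than taken from the text.

    import math

    gamma = 7.0 / 5.0      # ratio of specific heats for a diatomic (linear) gas
    R = 8.31451            # J/mol-K
    T = 293.0              # K
    M = 0.029              # kg/mol, approximate molar mass of air (assumed value)

    c = math.sqrt(gamma * R * T / M)
    print(f"speed of sound in air at 293 K: {c:.0f} m/s")   # about 343 m/s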

The Gas Constant in Other Units

We gave two values for the gas constant above, 8.314 J/mol-K and 2 cal/mol-K, which are convenient in some problems, but not in all. The gas constant for one mole in any units can be found from R = pV/T, where corresponding values for p, V and T are inserted. For example, p = 1 atm, V = 22.4 l and T = 273.15 K are well-known corresponding values, which give R = (1)(22.4)/(273.15) = 0.082006 l-atm/mol-K. Note that 0.0820 is a value good to better than 1 part in 1,000, and not too hard to remember. If you are converting to U.S. Engineering Units, you need to know that a pound-mole is 453.59237 times larger than a gram-mole, and that a rankine degree (°R) is 5/9 of a kelvin (0°C is 491.67°R). At STP the pound-molar volume is 358.8 cubic feet, so R = 0.73 cuft-atm/lb-mol-R. Since 1 atm is 14.696 psi, R = 10.73 psi-cuft/lb-mol-R as well.
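The same conversions can be generated from the SI value of R. In this minimal sketch the conversion factors (calorie, atmosphere, litre, cubic foot, psi, rankine, pound-mole) are standard values stated as assumptions.

    R = 8.31451            # J/mol-K

    cal   = 4.186          # J per calorie
    atm   = 101325.0       # Pa per atmosphere
    litre = 1.0e-3         # m^3 per litre
    ft3   = 0.0283168      # m^3 per cubic foot
    psi   = 6894.76        # Pa per psi
    lbmol = 453.59237      # gram-moles per pound-mole
    rank  = 5.0 / 9.0      # kelvins per rankine degree

    print(f"R = {R / cal:.3f} cal/mol-K")
    print(f"R = {R / (atm * litre):.5f} l-atm/mol-K")
    print(f"R = {R * lbmol * rank / (atm * ft3):.2f} cuft-atm/lb-mol-R")
    print(f"R = {R * lbmol * rank / (psi * ft3):.2f} psi-cuft/lb-mol-R")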

References

C. Kittel, Thermal Physics (New York: John Wiley & Sons, 1969). Chapters 1-6.

P. W. Atkins, The Second Law (New York: W. H. Freeman & Co., 1984).

M. W. Zemansky, Heat and Thermodynamics, 4th ed. (New York: McGraw-Hill, 1957). Thermodynamics texts are legion, some designed for physics students, some for engineering students, all generally pretty foggy on entropy. Thermodynamics for chemists is best avoided. Zemansky is a classical reference, with much information on the many practical aspects of thermodynamics. More modern texts are no better, and in most cases considerably worse, with bad proofreading and outright errors. Physics students now have the advantage of texts with the general outlook of Atkins and Kittel, but engineering students still struggle with mystery.



Composed by J. B. Calvert
Created 3 April 2003
Last revised