A great deal of complicated algebra, but it cannot be avoided.

- Introduction
- The Schrödinger Equation
- The Uncertainty Relation
- Propagation of the Minimum Wave Packet
- The S-Matrix
- Transition Probabilities and Cross Sections
- The Born Approximation
- References

The quantum theory of scattering was, after atomic spectroscopy, the earliest and most significant application of the new quantum mechanics. It has been studied and developed continuously ever since, as it is central not only to atomic and molecular physics, but also to the study of elementary particles. It reached a climax around 1950 with the successful incorporation of relativity, field theory and second quantization, the S-matrix and the Feynman perturbation series. Quantum electrodynamics rested on scattering theory, and is still the most accurate and satisfactory of fundamental physical theories. There have been no fundamental advances since the electroweak unification in the 1960s, only the elaboration of known theories and improved experiments. The famous Standard Model is a rather shabby structure along traditional lines that certainly does not appear to be fundamental, but is the best that can be done so far.

This article sketches the formal developments of nonrelativistic potential theory, beginning with the time-dependent Schrödinger equation and its solution with Green's functions, and ending with Fermi's Golden Rules and the Born Approximation. This theory is very mathematical and algebraically demanding, even with the restriction to nonrelativistic processes, but the reasoning behind the algebra can be understood without a complete grasp of the results. I have pared the algebra to a minimum, but it is still frightening. For convenience in writing the HTML, vectors are often represented by ordinary type, but boldface is used where the chance of confusion is great. Also, h' is used for h/2π. It is presumed that the reader is familiar with the elements of quantum mechanics (some of which are introduced in other articles on this site).

No way has yet been found to obtain exact solutions for most of the problems in the more recent physical theories, so reliance is placed on symmetry relations and first-order approximations in nearly all cases, which depend on formal methods like those outlined here. Neither nuclear structure nor, still less, the quark structure of elementary particles can be described with exact solutions. Any impression that modern physical theory is well known and understood is far from the truth. Many of those who know little like to impress those who know nothing (like politicians and newspaper writers) with their mystifications and promises of sunbeams from cucumbers. In fact, much physics is now concerned with measuring the nearly unmeasurable and speculating about what can never be known, rather than the solution of the prodigious problems before us that emphasize our ignorance. I hope that I can shed some light on matters that are mysterious even to most physicists, and some of which are even in danger of being forgotten.

The literature of this subject is huge. Only one textbook is given in the References, since I do not feel able to construct a reasonable bibliography, and would want to include field theory as well, since S-matrix theory is central to this subject. Past issues of the Reviews of Modern Physics should be searched for more complete references. To be truly proficient in this field requires a prodigious amount of familiarity with a wide range of things and years of study. There is no easy, level royal road.

This article is not yet complete; in particular, a few diagrams may be added, and applications may be discussed.

We'll base our discussion on the nonrelativistic Schrödinger equation:

ih'∂Ψ/∂t = H(p,q,t)Ψ

Ψ is the Schrödinger wave function, a function of the coordinates q_{i}, i = 1, 2, ..., N, and the time t. It contains complete information about the dynamics of the system under consideration. H(p,q,t) is the *Hamiltonian*, a function of the *operators* p and q for each of the degrees of freedom, which obey the commutation relations [p,q] = ih', and of the time. We write h' for h/2π instead of h-bar because of the limitations of the HTML character set. Given an initial value of Ψ, the equation determines uniquely the subsequent behavior of the system. This will be the central topic of the present discussion.

We put to one side the questions of how this equation was discovered, its range of validity, and the macroscopic interpretation of Ψ, except that its absolute value squared is proportional to the probability of finding the coordinates of the system in the N-dimensional volume d^{N}q, and that operators return the values of the corresponding dynamical variables in eigenstates of those operators. Any statistical or chance results are purely the consequences of these interpretative principles. There is no statistical nature to the "motion" of the system as described by the wave function. The Schrödinger equation should be the limit of a correct relativistic general theory, but the connection is very difficult to establish, and is of little help in the interpretation of the Schrödinger equation. Relativistic theories do reduce to the Schrödinger equation at low velocities, with important additions such as the electron spin and magnetic moment.

Therefore, we shall regard Ψ as a probability amplitude. The appearance of the imaginary unit i shows that *phase* is an important property, equal in importance to the absolute magnitude of the amplitude. The phase, operating in the superposition of amplitudes, is what gives the familiar wave behavior to quantum descriptions of nature, such as electron diffraction. Amplitudes superimpose linearly.

Problems in which N > 1 are difficult to solve; in fact, they are almost impossible to solve in terms of a wave function that is a direct solution of the Schrödinger equation. The wave function must be built up as a linear combination of products of wave functions for single particles, and this makes it difficult to describe many important features, such as correlations. Nevertheless, much progress has resulted from this approach. By "particle" we only mean a system describable by three degrees of freedom q, such as x, y and z. There is no intimation that anything is actually "at" the point so specified. It is so easy to picture a hard little dot that is found here and there with the probability given by the wave function, but this is a very unsatisfactory and misleading picture. A particle is a very definite thing, with its discrete coordinates, but these coordinates figure in the theory, and are not the locations of anything in the macroscopic sense. The microscopic world simply does not look like the macroscopic world, and cannot be properly viewed as a macroscopic world with funny rules.

A wave function that is a function of x,y,z and t looks very much like an ordinary (macroscopic) wave. The continual use of one-particle wave functions in many-particle problems gives them a great familiarity, and one might infer that a particle *is* the wave function, as photons are the electromagnetic field. Chemists enjoy making fuzzy pictures of electron clouds with slow shutter speeds and rotating models, and drawing pictures of chemical bonds with electron orbitals pointing in various directions. These activities are no danger to society, and at least the latter exercises are very useful in organizing facts, but generally all these excursions obscure the reality and must be interpreted with care. The wave function for two electrons is a function of six variables and time, not a function of three variables and time made by superimposing one-electron orbitals. A proper theory of many electrons can be constructed in which there is such a field by the methods of *second quantization*, but the field is not the Schrödinger wave function or amplitude.

For a single particle analogous to a single particle of classical mechanics, the Hamiltonian is **p**^{2}/2m + V(**r**,t). The first term is the kinetic energy, the second is the scattering potential. Position operators are the coordinates themselves, while **p** = ih' grad. (HTML has no del). It is easy to check that the commutation relations are obeyed. The Schrödinger equation is then ih'∂Ψ/∂t = [-(h'^{2}/2m)div grad + V(**r**,t)]Ψ. A *free particle*, with V = 0, obeys the simpler equation ih'∂Ψ/∂t = -(h'^{2}/2m) div grad Ψ. The time and space variations are easily separated by the substitution Ψ = f(t)u(**r**). ih'[1/f(t)]df(t)/dt = -(h'^{2}/2m)[1/u(**r**)] div grad u(**r**) = E, the separation constant. Therefore, f(t) = exp(-iEt/h') and div grad u + 2mE/h'^{2}u = 0. u satisfies Helmholtz's equation (div grad + k^{2})u = 0, so u is a product of three factors like exp(±ik_{x}x) for each coordinate, and k_{x}^{2} + k_{y}^{2} + k_{z}^{2} = k^{2}. This is but one choice of many solutions, but a very convenient one.
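The commutation relation can be checked numerically as well as algebraically. The following sketch (Python with NumPy; the grid and the Gaussian test function are arbitrary choices) applies [x,p] to a test function, using the sign convention p = -ih' d/dx that appears later in this article, for which [x,p] = ih':

```python
import numpy as np

hbar = 1.0
x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2.0)          # arbitrary smooth test function

def p(f):
    """Momentum operator p = -i*hbar*d/dx, via central differences."""
    return -1j * hbar * np.gradient(f, dx)

# [x, p] psi = x(p psi) - p(x psi); should equal i*hbar*psi
comm = x * p(psi) - p(x * psi)
err = np.max(np.abs(comm[10:-10] - 1j * hbar * psi[10:-10]))
print(err)
```

The residual is pure discretization error from the finite-difference derivative.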

For simplicity, let us now specialize to one dimension, x. It will be easy to generalize again to three coordinates when necessary. The function exp(-ikx) is an eigenstate of momentum belonging to p = h'k, since p exp(-ikx) = ih'(d/dx) exp(-ikx) = h'k exp(-ikx). exp(ikx) belongs to p = -h'k. Suppose for a moment that the particle is restricted to the interval x = -L/2 to L/2 of total length L, perhaps by infinite potential barriers at each end. The wave functions we have just found will not do in this case, because they are never zero, and so cannot vanish at the ends of the interval as required. In this case we can take linear combinations of functions belonging to +k and -k, sines and cosines, which do have zeros and can be made to fit. We can still use the interval of length L if we change the boundary conditions, and the interpretation. If we require that the wave function have the same value at the two ends of the interval, or exp(-ikx) = exp[-ik(x + L)], we can imagine that when the particle reaches one end of the interval, it immediately reappears at the other end. This condition, called a *cyclic boundary condition*, is kL = ±2nπ, where n is an integer. This gives us discrete states, two states for each positive integer, that are easier to work with than a continuum. It will happen that L will cancel out in the final results, so the limit L → ∞ is easy to take.

The purpose of this artifice is to make *normalization* straightforward. A wave function ψ(x) must usually be chosen so that ∫(-∞,+∞)|ψ(x)|^{2}dx = 1. With ψ = exp(-ikx), this integral is infinite, so it cannot be used in the usual way to find a multiplicative normalizing constant, since that constant would be zero. If you set up this integral for wave functions exp(-ikx) and exp(-ik'x), you will find that ∫(-∞,+∞)exp[i(k - k')x]dx = 2πδ(k - k'). This shows that the infinity is the infinity of the delta function, and that any two momentum wave functions belonging to different k are orthogonal. It is quite possible to handle a continuous spectrum of states in this way, but it is often simpler to consider box normalization and take the limit L → ∞ at the end. In that case, we extend the normalization integral only over the box of length L, and get ∫(-L/2,+L/2)(1)dx = L. Then, the box-normalized wave functions are (1/√L)exp(-ikx). We also see that two wave functions belonging to different values of k (which means different values of n) are orthogonal.
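The orthonormality of the box-normalized states is easy to confirm numerically; a minimal sketch (the box length and grid size are arbitrary choices):

```python
import numpy as np

L = 2.0 * np.pi                        # box length (arbitrary)
N = 2000
x = np.linspace(-L / 2, L / 2, N + 1)  # grid includes both endpoints
dx = x[1] - x[0]

def u(n):
    """Box-normalized momentum eigenfunction, k = 2*pi*n/L."""
    k = 2.0 * np.pi * n / L
    return np.exp(-1j * k * x) / np.sqrt(L)

# overlap integral of u_m* u_n over one box; drop the duplicated
# endpoint, since the functions are periodic on the box
for m in range(-2, 3):
    for n in range(-2, 3):
        s = np.sum(np.conj(u(m)[:-1]) * u(n)[:-1]) * dx
        assert abs(s - (1.0 if m == n else 0.0)) < 1e-8
print("orthonormal")
```

For the cyclic boundary condition, the sum over the grid is exactly the discrete orthogonality of the exponentials, so the result is accurate to roundoff.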

The momentum eigenfunctions u_{k}(x) = (1/√L)exp(-ikx) are also energy eigenfunctions belonging to E = h'^{2}k^{2}/2m, or k = (2mE/h'^{2})^{1/2}, where k = 2nπ/L. Thus, E = n^{2}h^{2}/2mL^{2}, with two states corresponding to each positive integral value of n. The energy eigenfunctions constitute a complete set of states for expanding any wave function, at any time. Suppose, then, that ψ(x',t') = Σa_{n}(t')u_{n}(x'), where the coefficients a_{n}(t') are functions of time. The Schrödinger equation gives ih'(da_{n}/dt') = E_{n}a_{n}, using the orthonormality of the u_{n}. Therefore, a_{n}(t') = a_{n}(t)exp[-iE_{n}(t' - t)/h']. Since a_{n}(t) = ∫(-L/2,+L/2)u_{n}(x)*ψ(x,t)dx, we have finally ψ(x',t') = ∫(-L/2,+L/2){Σu_{n}(x)*u_{n}(x')exp[-iE_{n}(t' - t)/h']}ψ(x,t)dx.

This important equation gives ψ(x',t') at t' in terms of ψ(x,t) at a different time t. The expression in curly brackets does the job. It is called a *propagator* because it gives the later state when integrated over the initial state. When we go to the limit L → ∞, the sum over energy states can be replaced by an integral over the energy E, which will give us a closed expression if we can do the integral. Since E = n^{2}h^{2}/2mL^{2}, dE = (h^{2}/mL^{2})n dn, and 2dn will be the number of energy states in the interval dE. We find 2dn = [√(2m)L/h]dE/√E. Substituting the u's in the summation, and replacing the summation by the integral from zero to infinity over [√(2m)/h]dE/√E, we get [√(2m)/h]∫(0,∞) exp[ik(x' - x)] exp[-iE(t' - t)/h'] dE/√E. It is convenient to make a change of variable to u = √E, so dE/√E = 2du and k = √(2m)u/h'. The integral becomes [2√(2m)/h]∫(0,∞) exp{(-i/h')(t' - t)[u^{2} - √(2m)(x' - x)u/(t' - t)]}du. This integral is easy if we complete the square on u, and find [2√(2m)/h]∫(0,∞) exp{-[i(t' - t)/h'][(u - √(2m)(x' - x)/2(t' - t))^{2} - (m/2)(x' - x)^{2}/(t' - t)^{2}]}du. Note that the L disappeared promptly from the result, the L in the number of states cancelled by the L in the normalization.

The u^{2} part integrates immediately to (√π/2)[h'/i(t' - t)]^{1/2}. The result can then be written iG_{o}(x',t';x,t) = √[m/2πih'(t' - t)] exp{[im(x' - x)^{2}]/[2h'(t' - t)]}. Then, ψ(x',t') = i∫G_{o}(x',t';x,t)ψ(x,t)dx.
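As a numerical check, the kernel iG_{o} can be discretized and applied to a Gaussian packet; the norm should be preserved, and the variance should grow as Δx^{2} + (h't/2mΔx)^{2}, the result derived below in the text for the minimum packet. A sketch, with m = h' = 1 and an arbitrary grid, and the kernel written with the sign exp[+im(x' - x)^{2}/2h'(t' - t)]:

```python
import numpy as np

hbar = m = 1.0
t = 1.0
sigma = 1.0                                   # initial Delta-x
x = np.linspace(-10.0, 10.0, 1001)
dx = x[1] - x[0]

psi0 = (2.0 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4.0 * sigma**2))

# iG_o(x',t; x,0) = sqrt(m/(2 pi i hbar t)) * exp[i m (x'-x)^2 / (2 hbar t)]
K = np.sqrt(m / (2j * np.pi * hbar * t)) * \
    np.exp(1j * m * (x[:, None] - x[None, :])**2 / (2.0 * hbar * t))
psi_t = K @ psi0 * dx                         # psi(x',t) = i * integral G_o psi dx

norm = np.sum(np.abs(psi_t)**2) * dx
var = np.sum(x**2 * np.abs(psi_t)**2) * dx / norm
print(norm, var)   # norm ~ 1, var ~ sigma^2 + (hbar*t/(2*m*sigma))^2 = 1.25
```

The small departures from the exact values are quadrature error from the oscillatory kernel; a finer grid reduces them.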

Let us consider one-dimensional states with energy eigenfunctions ψ(x) normalized on (-∞,+∞). The results will hold for box or delta-function normalization as well. Unless otherwise noted, the limits on all integrals will be -∞ and +∞. A *hermitian* operator h has the property that ∫(hψ)*ψdx = ∫ψ*(hψ)dx. The *average value* of the operator h in a state ψ is <h> = ∫ψ*hψdx. The complex conjugate of the average value is <h>* = ∫ψ(hψ)*dx = ∫(hψ)*ψdx = <h> if h is hermitian, so the average value is real. It is easy to show that the operators x and p = -ih'd/dx are hermitian. This is obvious for x because x is real. For p we have ∫(pψ)*ψdx = ∫(-ih'dψ/dx)*ψdx = ih'∫(dψ*/dx)ψdx. Integrate by parts with u = ψ, dv = (dψ*/dx)dx, so that du = (dψ/dx)dx and v = ψ*. The integrated part vanishes because ψ, ψ* both vanish at x = ±∞, and the result is -ih'∫ψ*(dψ/dx)dx = ∫ψ*pψdx. Therefore, p is also hermitian, as will be any real function of x and p (symmetrized, where necessary, in products of the two).

Let P = p - <p> and X = x - <x> be the operators for the differences of p and x from their average values. To make the algebra simple, we can often assume the average values to be zero. The commutator [X,P] = [x,p] = ih', as can be verified explicitly. The average values of the squares of these operators will be what in statistics is called the *variance*, Δx^{2} = <X^{2}> and Δp^{2} = <P^{2}>. The <> means putting the expression between ψ* and ψ and integrating from -∞ to +∞. As in statistics, <X^{2}> = <(x^{2} - 2x<x> + <x>^{2})> = <x^{2}> - <x>^{2}, and similarly for P. The *mean square fluctuation* in a variable is the difference between the average of the square and the square of the average.

*Schwarz's Inequality* for integrals is ∫|f|^{2}dx ∫|g|^{2}dx ≥ |∫f*gdx|^{2}. We apply this inequality with f = Xψ and g = Pψ. The left-hand side becomes the product of the mean square fluctuations of x and p, so that Δx^{2}Δp^{2} ≥ |∫(Xψ)*(Pψ)dx|^{2} = |∫ψ*XPψdx|^{2}. XP can be written XP = (XP - PX)/2 + (XP + PX)/2 = ih'/2 + (XP + PX)/2, so that Δx^{2}Δp^{2} ≥ |ih'/2 + (1/2)∫ψ*(XP + PX)ψdx|^{2} = h'^{2}/4 + (1/4)|∫ψ*(XP + PX)ψdx|^{2}, plus the cross terms (ih'/4)[I - I*], where I is the integral ∫ψ*(XP + PX)ψdx. The operator XP + PX is hermitian, so its average value is real, and so I* = I, which causes the cross terms to vanish.

If g = λf the inequality becomes an equality. Here, this means that Pψ = λXψ. The product of the mean square fluctuations assumes its minimum possible value if, in addition, the average value of XP + PX is zero in the state ψ. In this very limited case, ΔxΔp = h'/2. In general, then ΔxΔp ≥ h'/2. This inequality is called the Heisenberg uncertainty relation (though it has nothing to do with uncertainty). h'/2 is a very small number macroscopically, 5.27 x 10^{-35} kg-m^{2}/s, so a macroscopic state with mean square fluctuations in x and p that are undetectably small is quite possible. Where the mass is around 10^{-31} kg, the distance around 10^{-9} m, and the time around 10^{-15} s, we must compare this with 10^{-34}, and this property of quantum states will become evident.

What kind of state has the minimum uncertainty h'/2? The condition Pψ = λXψ gives a first-order differential equation for ψ. Substituting the expressions for P and X, and simplifying a little, we find that dψ/dx = [i<p>/h' + (iλ/h')(x - <x>)]ψ. This integrates easily to ψ = C exp[(iλ/2h')(x - <x>)^{2} + i<p>x/h']. In all this algebra, remember that the average values of x and p are just numbers. The second condition is that 0 = ∫ψ*(XP + PX)ψdx = ∫(λXψ)*(Xψ)dx + ∫ψ*XλXψdx = (λ* + λ)∫(Xψ)*(Xψ)dx. Since the integral is Δx^{2}, it is certainly not zero, and so λ* = - λ. Set λ = i/μ, where μ is a real constant. Now we have ψ = C exp[-(x - <x>)^{2}/2μh' + i<p>x/h'], where C is a normalizing constant.

The normalizing constant is easily found to be given in terms of μ by |C|^{2} = 1/√(πμh'). Then we can determine μ so that the mean square fluctuation in x is Δx^{2}. The result is Δx^{2} = μh'/2. These integrals are easy to do with Dwight 860.11 and 860.13, and it is convenient to put <x> = <p> = 0. Our final result is ψ = (2πΔx^{2})^{-1/4} exp[-(x - <x>)^{2}/4Δx^{2} + i<p>x/h']. This is a Gaussian envelope centred at <x> and moving with velocity <p>/m on a complex exponential wave of wave vector k = <p>/h'. If k = 0 and <x> = 0, it is a Gaussian curve centred at the origin, and ψ*ψ has a total width at the e^{-1/2} point of 2Δx. It is not a stationary state, so it does not persist in this form. We will see that it spreads with time, remaining Gaussian in form; since Δp is constant for a free particle while Δx grows, ΔxΔp = h'/2 only at t = 0. There is no macroscopic state like this ideal state, but it shows very well how quantum states behave.

As an exercise, calculate <p^{2}> using the operators and the wave function we have derived. The result is Δp^{2} = h'^{2}/4Δx^{2}, which shows that for this state, ΔpΔx = h'/2.
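The exercise can also be done numerically: with <x> = <p> = 0, one integration by parts gives <p^{2}> = h'^{2}∫|dψ/dx|^{2}dx, and the product ΔxΔp should come out h'/2. A sketch (h' = 1 and the packet width are arbitrary choices):

```python
import numpy as np

hbar = 1.0
sigma = 1.5                                  # Delta-x of the packet
x = np.linspace(-12.0, 12.0, 4801)
dx = x[1] - x[0]
psi = (2.0 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4.0 * sigma**2))

dx2 = np.sum(x**2 * psi**2) * dx             # <x^2>, with <x> = 0
# <p^2> = hbar^2 * integral of |dpsi/dx|^2 (after integration by parts)
dp2 = hbar**2 * np.sum(np.gradient(psi, dx)**2) * dx

print(np.sqrt(dx2 * dp2))  # -> h'/2 = 0.5, up to discretization error
```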

There exist states in which Δp and Δx oscillate in time, while always keeping the product ΔpΔx constant and never less than h'/2. These are called *squeezed states* and have been proposed as a method of escaping the Heisenberg uncertainty relation at a practical level by making position measurements when Δx is at its minimum. There was some excitement over this a few years ago, but no recent news has arrived.

Now suppose that we have our minimum uncertainty state at t = 0, and want to find out what it becomes at a later time. The time-dependent Schrödinger equation gives the answer to this question, but proceeding directly is not straightforward. One method is to expand the state in energy eigenfunctions with time-dependent coefficients, find the coefficients at a later time, and sum the series again. This expansion is equivalent to a Fourier transformation, and we know that the Fourier transform of a Gaussian is another Gaussian. In fact, the expansion coefficients for the minimum state are A_{k} = (8πΔx^{2}/L^{2})^{1/4} exp(-k^{2}Δx^{2}), using box normalization. When L → ∞, we get the Fourier transform as the summation is replaced by integration. Because the minimum state is a superposition of states of a range of momenta, it is called a *wave packet*. The range of k values included is on the order of 1/Δx, centered on k = 0, so the sharper the state, the more quickly it will spread.
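The Gaussian form of the coefficients can be checked by computing the continuum overlap ∫exp(ikx)ψ(x)dx, which should equal (8πΔx^{2})^{1/4}exp(-k^{2}Δx^{2}) (the box factor 1/√L stripped off). A sketch:

```python
import numpy as np

sigma = 1.0                                   # Delta-x of the packet
x = np.linspace(-15.0, 15.0, 6001)
dx = x[1] - x[0]
psi0 = (2.0 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4.0 * sigma**2))

for k in [0.0, 0.5, 1.0, 2.0]:
    # overlap of exp(-ikx) (conjugated) with the packet
    A = np.sum(np.exp(1j * k * x) * psi0) * dx
    A_exact = (8.0 * np.pi * sigma**2) ** 0.25 * np.exp(-k**2 * sigma**2)
    assert abs(A - A_exact) < 1e-6
print("momentum amplitudes are Gaussian, width ~ 1/Delta-x")
```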

Though the summation is normally not possible with a more general wave packet, in this case it does work. However, we have already found a more general solution in terms of the free-particle propagator. This was ψ(x',t') = i∫G_{o}(x',t';x,t)ψ(x,t)dx, where iG_{o} = √[m/2πih'(t' - t)] exp{[im(x' - x)^{2}]/[2h'(t' - t)]}. All we have to do is plug in ψ(x,0) = (2πΔx^{2})^{-1/4} exp[-x^{2}/4Δx^{2}], where setting <x> = <p> = 0 involves no loss of generality, and integrate over x.

Then, ψ(x',t') = √[m/2πih't'](2πΔx^{2})^{-1/4}∫exp[im(x' - x)^{2}/(2h't')] exp[-x^{2}/4Δx^{2}]dx. The integral over x can be done by completing the square on x, since x appears only in the exponents. The exponent is (im/2h't')(x'^{2} - 2xx' + x^{2}) - x^{2}/4Δx^{2} = -x^{2}(1/4Δx^{2} + m/2ih't') + 2xx'(m/2ih't') - (m/2ih't')x'^{2}. If the square on x is completed, we have -(1/4Δx^{2} + m/2ih't')(x - ...)^{2} + [(m/2ih't')^{2}/(1/4Δx^{2} + m/2ih't') - (m/2ih't')]x'^{2}. The last term becomes -[(mx'^{2}/2ih't')/(1 + 4Δx^{2}m/2ih't')] = -x'^{2}/(2ih't'/m + 4Δx^{2}). The integral on x can now be performed, and is equal to √π/√(1/4Δx^{2} + m/2ih't'). Putting it all together, we have ψ(x',t') = (2π)^{-1/4}(Δx + ih't'/2mΔx)^{-1/2} exp[-x'^{2}/(4Δx^{2} + 2ih't'/m)]. This is a normalized Gaussian packet of increased width.

The probability density is |ψ|^{2} = (2π)^{-1/2}[Δx^{2} + h'^{2}t^{2}/4m^{2}Δx^{2}]^{-1/2} exp {-x^{2}/2[Δx^{2} + h'^{2}t^{2}/4m^{2}Δx^{2}]}. This is a Gaussian with a variance Δx^{2} + h'^{2}t^{2}/4m^{2}Δx^{2} that increases quadratically with time. At large times, the "standard deviation" is about h't/2mΔx, so the width of the packet increases with a speed of h'/mΔx. This is greater the narrower the packet originally was, and the smaller the mass. For macroscopic masses and packet widths, the spreading will be very slow, but on a microscopic scale it may be rapid. An electron wave packet initially confined within 1 nm spreads at about 1.2 x 10^{7} cm/s. A proton packet spreads at only about 60 m/s, however. A 1 μg speck localized within 1 μm spreads at about 10^{-17} cm/s. This is one excellent reason for the difference between atomic and macroscopic behavior of things.
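The rate h'/mΔx is quickly evaluated for the three cases, taking Δx as the quoted confinement length; a rough order-of-magnitude sketch:

```python
hbar = 1.055e-34   # J s
cases = {
    "electron, 1 nm":   (9.11e-31, 1e-9),
    "proton, 1 nm":     (1.67e-27, 1e-9),
    "1 ug speck, 1 um": (1e-9,     1e-6),
}
for name, (m, dx) in cases.items():
    v = hbar / (m * dx)        # packet-width growth rate, m/s
    print(f"{name}: {v:.2e} m/s")
```

The electron case comes out near 10^{5} m/s and the speck near 10^{-19} m/s, illustrating the enormous range between atomic and macroscopic behavior.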

Let us now study scattering in three dimensions in the light of the time-dependent Schrödinger equation. We consider either the relative motion, or the motion of one particle relative to the centre of mass, making the necessary modifications of either the mass, to the reduced mass, or the distance in the interaction energy. The incident particle could be described by a wave packet with wave vectors **k** in a small range of magnitudes and angles, moving freely toward the region of interaction. As a result of the interaction, the wave packet may be deflected toward a direction specified by **k**', still consisting of plane waves in a small range of magnitudes and directions, and moving freely after leaving the interaction region. This behavior will approach the classical description in terms of trajectories and impact parameter. This analysis, which is rather difficult, has been carried out with the expected results.

At an atomic level, wave packets spread rapidly, giving the problem a different appearance. The incident wave packet might be much larger than the interaction region, approximating a plane wave of sharp wave vector **k** and definite energy. The state leaving the interaction region might still be mainly a plane wave of the initial energy, but accompanied by a spherical wave proceeding from the interaction region whose amplitude depends on direction, and representing the scattering. The statistical interpretation of this process, that the incident particles are uniformly and randomly distributed across the incident plane wave, and that the probability of detecting a scattered particle at a large distance is proportional to the square of the absolute value of the scattered wave, agrees with the quantum-mechanical interpretation of the experiment. In fact, the partial-wave analysis of scattering uses exactly this picture.

Let us, therefore, define free-wave states φ(r,t) that are solutions of the Schrödinger equation ih'∂φ/∂t = H_{o}φ, where H_{o} = p^{2}/2m = -(h'^{2}/2m) div grad. We are quite familiar with these states, which are the momentum eigenfunctions L^{-3/2}e^{-i(k·r + Et/h')}, with E = h'^{2}k^{2}/2m. These states can be normalized in a cube of side L, as shown, or to a delta-function. They will be used to describe the initial and final states of a scattering experiment.

The Schrödinger equation ih'∂ψ/∂t = Hψ, where H = H_{o} + V(**r**,t), can be solved by means of the Green's function to give ψ(r',t') = i∫G(r',t';r,t)ψ(r,t)dr, in terms of the initial state at time t, ψ(r,t). Here r and r' are actually vectors, but are written in normal type to simplify the HTML, and dr = d^{3}r. The "i" in front of the integral has the same function as the "i" before a Kirchhoff diffraction integral in optics, to make the phases come out reasonably. We have seen the explicit use of a Green's function in one-dimensional free propagation above, so its appearance here should be easy to understand.

If the initial state is φ_{k}(r,t), then the final state after the scattering encounter will be ψ(r',t') = i∫G(r',t';r,t)φ_{k}(r,t)dr. This wave function can now be expanded in terms of the φ_{k'}(r',t') (which are exactly the same functions as the φ_{k}(r,t)): ψ(r',t') = Σ_{k'}(φ_{k'},ψ)φ_{k'}(r',t'). The expansion coefficients S_{k'k} = (φ_{k'},ψ) form the *S-matrix*, where k, k' represent the states before and after scattering. We have then that S_{k'k} = i∫∫φ_{k'}*(r',t')G^{+}(r',t';r,t)φ_{k}(r,t)dr'dr. The S-matrix element is essentially the matrix element of G^{+} between final and initial plane-wave states.

A large flaw in our reasoning must now be corrected. The states φ extend over all space and time; they are not the limited wave-packet states we think of in scattering. They are not eigenfunctions of H, but of H_{o}, and so have no place in the Green's function. However, we are certainly not interested in the effect of V on the φ. This difficulty can be escaped by the artifice of considering V to be "turned on" at some time after an early time T_{1} and "turned off" again before some later time T_{2}. Then, our expression for the scattered state and the S-matrix will be correct so long as t < T_{1} and t' > T_{2}. This will, of course, necessitate some fiddling when the results are interpreted, but it will turn out all right in the end.

This, of course, greatly complicates the determination of the Green's function for the Hamiltonian H. However, we are never going to find this Green's function exactly, but only to approximate it using the free propagator G_{o} arising from H_{o}, and the interaction V(r). When we do this, it will be necessary to ensure that every propagator expression only propagates from an earlier time t to a later time t', giving zero otherwise. Therefore, θ(t' - t)ψ(r',t') = i∫G^{+}(r',t';r,t)ψ(r,t)dr. G^{+} is the *retarded Green's function*, the actual propagator. The term "retarded" is understood in the same sense as in the retarded potential of electromagnetic theory. It will permit us to extend time integrals all the way from t to t' without worrying about directions in time.

The method of finding G^{+} from G_{o} and V(r) was worked out by Feynman in 1949. The time interval from t to t' was subdivided into many short intervals at the instants t_{i}. At each t_{i}, the interaction V(r) was turned on for a brief interval Δt_{i}. Except for these brief intervals, there was free propagation with G_{o}, and within these intervals, the effects of H_{o} and V(r) could be simply added, neglecting higher terms. The Schrödinger equation gave the effect of V(r) as 1 - (i/h')V(r)Δt_{i}. Now, all we have to do is write this in one huge string of propagators and the expressions for the effect of V(r), and multiply it out, changing summations to integrations over time.

The result of this effort is θ(t' - t)ψ(r',t') = i∫G_{o}^{+}(r',t';r,t)ψ(r,t)dr + (i/h')∫dt"∫∫G_{o}^{+}(r',t';r",t") V(r",t") G_{o}^{+}(r",t";r,t)ψ(r,t)dr"dr + ... . Each succeeding term has one more time integration and one more factor of (1/h')V G_{o}^{+}.

It is very difficult to prove the convergence of the perturbation series, but it has given useful results. It is perhaps even easier to use in a relativistic formulation, since the space and time coordinates are treated on an equal footing. It is the basis of the method of *Feynman diagrams*, in which a directed line represents a propagator, and the nodes represent interactions.

The Hermitian adjoint of the Hamiltonian is H†. For a matrix, the Hermitian adjoint is the complex conjugate transposed. If H† = H, then the Hamiltonian is Hermitian, its eigenvalues are real, and its eigenfunctions can be made orthonormal. The Hermitian adjoint O† of an operator O has the property that (φ,Oψ) = (O†φ,ψ). If O† = O^{-1}, that is, OO† = O†O = 1, then O is said to be unitary. It can be found by direct calculation with the forms we have for the S-matrix that SS† = S†S = 1, or the S-matrix is unitary. If U is a unitary symmetry operator of the Hamiltonian, it commutes with the Hamiltonian, or [U,H] = 0. Then, using the states Uφ, and representing them by K', K, we have S_{K'K} = S_{k'k}: the S-matrix elements between the symmetry-transformed states equal those between the original states, which is the source of selection rules.

Let's now go back and write the S-matrix element in terms of ψ^{+}(r,t) from the Feynman perturbation expansion θ(t' - t)ψ^{+}(r',t') = i∫G_{o}^{+}(r',t';r,t)ψ(r,t)dr + (i/h')∫dt"∫∫G_{o}^{+}(r',t';r",t") V(r",t") G_{o}^{+}(r",t";r,t)ψ(r,t)dr"dr + ... . The + has been attached to ψ to show that the retarded propagator is used. Then, since S_{k'k} = δ_{k'k} - (i/h')∫∫φ_{k'}*(r,t)V(r,t)[φ_{k}(r,t) + (1/h')∫∫G_{o}^{+}(r,t;r',t')V(r',t')φ_{k}(r',t')dt'dr' + ...]dtdr, we can replace the series in brackets with ψ_{k}^{+}(r,t), to find that S_{k'k} = δ_{k'k} - (i/h')∫∫φ_{k'}*(r,t)V(r,t)ψ_{k}^{+}(r,t)dtdr.

Now we write V(r,t) as V(r)g(t), where g(t) rises quickly but continuously from 0 to 1 at some time after T_{1}, remains at 1 for a rather long time t_{o}, then decreases quickly but continuously from 1 to 0 at some time before T_{2}. The free-particle states will be energy eigenfunctions of the Hamiltonian H_{o}, so they can be written φ(r,t) = u(r)e^{-iωt}. The final states arising from φ will be energy eigenfunctions of H, and can be expressed as ψ^{+}(r,t) = v(r)e^{-iωt}. Of course, h'ω = E, and r stands for the position vector.

The S-matrix element becomes S_{k'k} = δ_{k'k} - (i/h')∫u_{k'}*(r)V(r)v_{k}(r)dr ∫(-∞,+∞)g(t)e^{iΔωt}dt, where Δω = ω_{k'} - ω_{k}. We have allowed the final and initial energies to be different. The time integral is the Fourier transform of g(t), which will be peaked strongly at Δω = 0, but will extend a small distance to either side on the order of the reciprocal of the rise and fall times of g(t). This artifice will have no consequences in the final results when we go to the limit, but will be a help in the derivations. The space integral is usually written T_{k'k} = ∫u_{k'}*(r)V(r)v_{k}(r)dr, the *transition matrix* element, depending only on the spatial coordinates and independent of time.
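The peaking of the time integral can be illustrated with a concrete (hypothetical) switching function with tanh edges; its Fourier transform at Δω = 0 is essentially t_{o}, and it is tiny a few inverse rise-times away:

```python
import numpy as np

tau = 1.0            # rise/fall time of the switching function
t_on = 40.0          # interval during which g is essentially 1
t = np.linspace(-80.0, 80.0, 160001)
dt = t[1] - t[0]
g = 0.5 * (np.tanh((t + t_on / 2) / tau) - np.tanh((t - t_on / 2) / tau))

def G(dw):
    """The time integral of g(t)exp(i*dw*t) appearing in S_{k'k}."""
    return np.sum(g * np.exp(1j * dw * t)) * dt

print(abs(G(0.0)), abs(G(5.0)))  # ~ t_on, and nearly zero
```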

The absolute value squared of the S-matrix element, |S_{k'k}|^{2}, is the probability that the initial state k will turn into the final state k' as a result of the interaction. (Normalization ensures that it is the probability, not just proportional to the probability.) The final states k' will be distributed quasi-continuously with a density ρ(k')dE', so we ask what the probability will be for scattering into such a group of states, not into one particular state. This probability is ∫|S_{k'k}|^{2}ρ(k')dE' = (1/h'^{2})∫|T_{k'k}|^{2}|∫g(t)e^{iΔωt}dt|^{2}ρ(k')h'dΔω. Because of the strong peaking of the Fourier transform of g(t), we can pull outside the integral everything that varies slowly with k'. |∫g(t)e^{iΔωt}dt|^{2} can be written ∫∫g*(t')e^{-iΔωt'}g(t)e^{iΔωt}dt'dt. The exponentials can then be pulled out and integrated over dΔω. They give 2πδ(t' - t), which allows one of the time integrations to be performed at once, with the result 2π∫|g(t)|^{2}dt. Since g(t) is unity except near the ends of the interval, this integral is very close to 2π times the time that V(r) is on, t_{o}. The final result of all this is that the probability of a transition from k to a group of states around k' is equal to (2π/h')|T_{k'k}|^{2}ρ(k')t_{o}.
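The key integral ∫|∫g(t)e^{iΔωt}dt|^{2}dΔω = 2π∫|g(t)|^{2}dt ≈ 2πt_{o} can be checked directly for a sharp on/off g, for which the squared Fourier transform is the familiar 4sin^{2}(Δωt_{o}/2)/Δω^{2}:

```python
import numpy as np

t_o = 30.0                                   # time the interaction is on
w = np.linspace(-200.0, 200.0, 400001)       # Delta-omega grid
dw = w[1] - w[0]
# |integral_0^{t_o} e^{iwt} dt|^2 = 4 sin^2(w t_o/2)/w^2, written via sinc
G2 = t_o**2 * np.sinc(w * t_o / (2.0 * np.pi))**2
I = np.sum(G2) * dw
print(I / (2.0 * np.pi * t_o))  # -> close to 1
```

The small deficit comes from truncating the tails of the sinc-squared function at finite Δω.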

It is quite reasonable for this probability to blow up as t_{o} → ∞, since we are then looking at the result of scattering things for an infinite length of time, which scatters an infinite number of things. What we really want is the probability that a scattering occurs in a small time interval dt, which is the probability w of scattering per unit time times dt, or wdt. The probability of scattering per unit time is just the probability of scattering in time t_{o} divided by t_{o}, which is w = (2π/h')|⟨k'|T|k⟩|^{2}ρ(k'). Now t_{o} has disappeared, and we couldn't care less about it. It is well to note that the initial energy of the k state cannot be precisely specified in any practical scattering experiment, so the idea of scattering into a range of energies, even with elastic scattering, is perfectly acceptable. In fact, this formula is well supported by experimental results.

Since the T-matrix element is taken between u_{k'} and the actual scattered state v_{k}, the formula for w is exact. The difficulty is, of course, that we do not know the scattered state, and must approximate it using the Feynman series or an equivalent. A first-order approximation to the transition probability w is made by replacing v with u, so that we have w = (2π/h')ρ(k')|∫u_{k'}*(r)V(r)u_{k}(r)dr|^{2}, and the matrix element is the matrix element of the interaction potential between free initial and final states. Fermi called this formula *Golden Rule No. 2* for its utility in practice. It can be derived in other ways, but our current derivation is much more general and satisfactory than most of the others, and the assumptions that have been made are clearer.

The scattering cross section can be found from the formula for w. Using box normalization, u(r) = L^{-3/2}e^{ik·r} (k and r are vectors). The density of states is found from the particle-in-a-box problem, with one state per quantum number: ρ(k')dE' = (mL^{3}k'/8π^{3}h'^{2})dω'. The incident flux is v/L^{3} and the final flux is v'/L^{3}, so σ(k',k)dω' = w/(v/L^{3}), or σ(k',k) = (v'/v)(mL^{3}/2πh'^{2})^{2}|⟨k'|T|k⟩|^{2}. This applies to inelastic as well as to elastic scattering. In elastic scattering, k and k' have the same magnitude, but different directions, so v' = v. L will cancel in the final result, since the T-matrix element will contain L^{-3}.
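The cancellation of L can be verified symbolically. In the sketch below (my symbol names, not the text's), I stands for the bare spatial integral in the T-matrix element without the two L^{-3/2} normalization factors, and the final wave number is k' = mv'/h':

```python
import sympy as sp

# Box-normalization bookkeeping for the cross section.  I is the bare integral
# in the T-matrix element; the normalized element is I/L³.  All names are mine.
m, hb, L, v, vp, I = sp.symbols("m hb L v vp I", positive=True)
kp = m * vp / hb                                   # k' from v' = h'k'/m
rho = m * L**3 * kp / (8 * sp.pi**3 * hb**2)       # density of final states per dω'
w = (2 * sp.pi / hb) * (I / L**3) ** 2 * rho       # golden-rule transition rate
sigma = sp.simplify(w * L**3 / v)                  # divide by incident flux v/L³
expected = (vp / v) * (m / (2 * sp.pi * hb**2)) ** 2 * I**2
print(sp.simplify(sigma - expected))               # 0: L has cancelled as claimed
```

The difference simplifies to zero, so the normalization volume drops out of the physical cross section, as the text claims.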

The Born approximation for the scattering cross section is obtained by replacing v(r) by u(r) in the T-matrix element, so that it becomes the matrix element of V(r) between plane-wave states. The result, for elastic scattering, is σ(k',k) = (m/2πh'^{2})^{2}|∫e^{-ik'·r}V(r)e^{ik·r}dr|^{2}. The matrix element is ∫V(r)e^{-i(k' - k)·r}dr, which is recognized as the three-dimensional Fourier transform of V(**r**) with respect to the wave vector (momentum) change **q** = **k'** - **k** in the scattering. Since the cross section is the absolute value squared of the scattering amplitude f(θ,φ), we can identify f(θ,φ) = (m/2πh'^{2})∫e^{-i(k' - k)·r}V(r)dr, up to a factor of magnitude unity, which happens to be -1.

In the very frequent case that V is spherically symmetric, a function of |r| only, the radial and angular variables can be separated, and we can integrate over φ. Then the matrix element is 2π∫∫r^{2}V(r)e^{-iqr cosθ}sinθ dθdr = (4π/q)∫rV(r)sin(qr)dr. Hence f(θ) = -(2m/qh'^{2})∫(0,∞)rV(r)sin(qr)dr, where q = 2k sin(θ/2). The integral is easily evaluated numerically for any reasonable potential.
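As an illustration of this numerical evaluation, the sketch below works in atomic units (m = h' = e = 1, an assumed convenience) with the screened Coulomb potential treated below, for which the integral has the closed form 2Z/(q^{2} + 1/a^{2}); the values of Z, a and k and the function names are illustrative, not from the text.

```python
import numpy as np

# Born amplitude f(θ) = -(2m/qh'²) ∫₀^∞ r V(r) sin(qr) dr, in atomic units.
# Screened Coulomb potential V(r) = -(Z/r) e^{-r/a}; illustrative parameters.
Z, a, k = 1.0, 2.0, 1.5

def f_born(theta):
    q = 2.0 * k * np.sin(theta / 2.0)
    r = np.linspace(1e-6, 40.0 * a, 200001)        # cut off where e^{-r/a} is negligible
    integrand = r * (-Z / r) * np.exp(-r / a) * np.sin(q * r)
    return -(2.0 / q) * np.sum(integrand) * (r[1] - r[0])

def f_exact(theta):
    q = 2.0 * k * np.sin(theta / 2.0)
    return 2.0 * Z / (q * q + 1.0 / (a * a))       # closed form for this potential

theta = 1.0
print(f_born(theta), f_exact(theta))               # quadrature matches closed form
```

The same quadrature works for any potential that falls off fast enough for the integral to converge; only the `integrand` line changes.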

The Born approximation was obtained by replacing the scattered wave v(r) with the plane wave u(r). The scattered wave can be expressed as u(r) + w(r), where w(r) is the result of scattering. The Born approximation will be valid when |w(r)| << |u(r)|. Since w(r) will usually be greatest near the origin, an estimate of its magnitude can be made there. The result is w(0) ≈ -(m/h'^{2}k)∫(0,∞)(e^{2ikr} - 1)V(r)dr. Here, k and r are scalars. For details, see Schiff, p. 326.

For the scattering of electrons by atoms, the interaction is approximately the screened Coulomb potential V(r) = -(Ze^{2}/r)e^{-r/a}, where a is the screening distance, roughly h'^{2}/me^{2}Z^{1/3} for moderately heavy atoms, as given by the Thomas-Fermi theory. Then, f(θ) = 2mZe^{2}/[h'^{2}(q^{2} + 1/a^{2})]. For hard collisions, when q >> 1/a, this gives the expected Rutherford Coulomb scattering result.
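The hard-collision limit can be checked arithmetically: for q >> 1/a the amplitude tends to 2mZe^{2}/h'^{2}q^{2}, whose square is the classical Rutherford cross section. A sketch in atomic units (m = h' = e = 1; all parameter values are illustrative, chosen so that qa >> 1):

```python
import math

# Hard-collision limit of the screened-Coulomb Born amplitude vs. Rutherford.
# Atomic units m = h' = e = 1; parameters chosen so q*a >> 1 (illustrative).
Z, a, k, theta = 1.0, 10.0, 5.0, 2.0

q = 2.0 * k * math.sin(theta / 2.0)
sigma_born = (2.0 * Z / (q * q + 1.0 / (a * a))) ** 2    # |f|² from the text's formula

E = k * k / 2.0                                          # kinetic energy in a.u.
sigma_rutherford = (Z / (4.0 * E)) ** 2 / math.sin(theta / 2.0) ** 4

print(q * a, sigma_born, sigma_rutherford)               # cross sections nearly equal
```

Here qa ≈ 84, well into the hard-collision regime, and the two cross sections agree to a fraction of a percent.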

It is doubly remarkable that the Born approximation gives the correct result for Coulomb scattering, when the problem would appear to be beyond its range of validity, and that classical mechanics also gives the correct result in an impact-parameter analysis. For this analysis, see Coulomb Scattering.

For the screened Coulomb potential, the estimate of w(0) is (2mZe^{2}/kh'^{2})|∫(0,∞)e^{ikr - r/a}sin(kr)(dr/r)|. For slow electrons, the condition is 2mZe^{2}a/h'^{2} << 1. For the Thomas-Fermi value of a, this is 2Z^{2/3} << 1, which is not satisfied. The Born approximation is not valid for the scattering of slow electrons by atoms. An exact solution can be found in the article Scattering, which treats this important problem. For fast electrons, the condition is (Ze^{2}/h'v)ln(ka) << 1. Because of relativity, e^{2}/h'v has a minimum value of 1/137 (the fine-structure constant). For ka = 100, and this minimum value, the condition is (Z/137)(4.6) = Z/30 << 1, so the Born approximation should be valid for lighter atoms.
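The arithmetic behind these two estimates is worth making explicit. A minimal sketch (Z = 29 for copper is my example, not the text's):

```python
import math

# Slow electrons: with the Thomas-Fermi a = h'²/(me²Z^{1/3}) in atomic units,
# the validity parameter 2mZe²a/h'² becomes 2Z^{2/3}, of order Z^{2/3}.
slow = 2.0 * 29.0 ** (2.0 / 3.0)   # Z = 29 (copper), an assumed example
print(slow)                        # ≈ 19, nowhere near << 1

# Fast electrons: (Ze²/h'v) ln(ka) at the relativistic minimum e²/h'v = 1/137,
# evaluated for ka = 100 as in the text.
factor = math.log(100.0) / 137.0   # (1/137) ln(ka), per unit Z
print(math.log(100.0), 1.0 / factor)   # ln(100) ≈ 4.6, so the condition reads Z/30 << 1
```

So the slow-electron parameter grows with Z and is never small, while the fast-electron parameter is small for light atoms, in agreement with the conclusions above.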

The Born approximation was probably first used by Rayleigh in 1881, but was applied by Born to quantum-mechanical scattering in 1926. It has been of great value because of the difficulty of finding exact solutions.

L. I. Schiff, *Quantum Mechanics*, 3rd ed. (New York: McGraw-Hill, 1968). Especially Chapter 9.

Composed by J. B. Calvert

Created 20 May 2003

Last revised 21 May 2003