Here are the tools you need to understand alternating currents for power and electronics
Circuit methods are the electrical engineer's secret weapon, and they are more useful than duct tape. The main idea is that simple parameters are used to describe complicated states, replacing many degrees of freedom by a few degrees of freedom. It is remarkable that so many systems can be treated like this. One example is the simple direct-current (DC) circuit. The battery, a complicated chemical system, is replaced by its terminal voltage E, and perhaps its internal resistance as well. The resistor, a polycrystalline substance with mobile electrons and complex interactions between the electrons and the lattice, is replaced by its resistance R. Then, the current that flows when they are connected is simply I = E/R, and the power dissipated in the resistor is just I^2R. DC circuits should be taught in grade school, and I assume that the reader is familiar with them.
Direct currents are docile and reliable, and do not trouble the mind. The quantities used to describe them are ordinary numbers, with at most the complications of plus and minus signs, and we know how to do arithmetic with them. However, alternating currents (AC) have wiggled their way into all branches of electrical engineering because of two advantages direct currents do not enjoy. First, the thoroughly excellent device of the iron-core transformer allows voltages to be changed extremely economically, so that high voltages can be used for transmission, while the necessary low voltages are available for applications. Second, alternating currents can radiate, putting their energy into waves that can move independently through space. The ubiquity of alternating currents in electrical engineering needs no other explanations than these.
It is quite possible to work with the instantaneous values of varying quantities, and often it is necessary to do so. This is, however, not easy to do even if the quantities can be expressed simply. Suppose, for example, that we have two currents, expressed as i1(t) = 10.4 sin 377t and i2(t) = 4.5 cos (377t + 28°). All right, find an expression for i1 + i2! Of course you can do it, but it takes a while and is tedious and unenlightening. There does not have to be a better way, but there is. A much better way, one that actually gives you insight.
The way is AC circuit analysis, which rests on the assumptions that the quantities vary with time in a certain way, sinusoidally, and that they are all of the same frequency. These are very strict requirements, but are characteristic of the majority of applications. One must be on guard against times when these requirements are not met, of course. A sinusoidal variation is one shaped like a sine or cosine function of time. Sines and cosines are the same shape, merely displaced in time. A sinusoidal variation can, indeed, occur with any relation to an arbitrary zero of time. A sinusoidal variation is characterized not only by its amplitude (maximum value) but also by its relation to time, called its phase. Phase is the new thing in AC circuit analysis, and it is extremely important. To specify things like AC current and voltage, two quantities are required, not one.
The way to do this is suggested by Euler's formula, e^(jωt) = cos ωt + j sin ωt. The imaginary unit is expressed by j here, since electrical engineers use i for the instantaneous current. j^2 = -1, which is an odd thing, but a useful one. ω is the angular velocity in radians per second. If the frequency is f Hz, or f complete repetitions in a second, then ω = 2πf, since there are 2π radians in a complete circle. For f = 60 Hz, ω = 377 radians per second, very closely. For f = 50 Hz, we have ω = 314, not quite so closely. These are the most popular power frequencies. We can associate a real sinusoidal variation with the complex exponential by agreeing that it is the real part, the cosine part. Therefore, i(t) = 10 cos (377t + 30°) would be represented by 10e^(j(377t + 30°)), or (10e^(j30°))e^(j377t). The factor e^(j377t) will be common to all currents and voltages in the problem, so we can forget about it, and say that i(t) is represented by the quantity 10e^(j30°). Such a quantity is a complex number, but since it represents an alternating quantity, it is called a phasor.
Engineers sometimes represent a phasor like this: 10/_30° (10 at an angle of 30°). The use of degrees in phases, while expressing angular frequency in radians per second, is common, and causes no confusion. As a complex number, a phasor can be represented in the complex plane with real and imaginary axes, and there it looks like a vector. Our phasor is now 8.66 + j5.00. Phasors will add and subtract like complex numbers, or 2-dimensional vectors. Now we can do the problem that was set above. Since sin 377t = cos (377t - 90°), i1 = 10.4/_-90° = -j10.4, and i2 = 4.5/_28.00° = 3.97 + j2.11. The sum of these two currents is then 3.97 - j8.29, or 9.19/_-64.39°. The only new thing we have to know is converting between the polar and rectangular forms. Scientific calculators have this process built in, so it is very easy to do. Look for it on your own calculator, and review how to use it.
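This little sum is easy to verify with any tool that handles complex numbers. Here is a sketch in Python, whose built-in complex type even writes the imaginary unit as j, just as electrical engineers do. Note that under the cosine-reference convention, 10.4 sin 377t is the phasor 10.4 at -90°:

```python
import cmath
import math

# Phasor sum of i1 = 10.4 sin 377t and i2 = 4.5 cos(377t + 28 deg).
# Under the cosine-reference convention, sin wt = cos(wt - 90 deg),
# so i1 becomes the phasor 10.4 at -90 degrees.
i1 = cmath.rect(10.4, math.radians(-90))
i2 = cmath.rect(4.5, math.radians(28))

total = i1 + i2
mag = abs(total)
ang = math.degrees(cmath.phase(total))
print(f"{total.real:.2f} {total.imag:+.2f}j = {mag:.2f} at {ang:.2f} deg")
```

Here cmath.rect and cmath.phase do the same polar-rectangular conversions that a scientific calculator provides.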
Phasors cannot be multiplied together, since this mixes the real and imaginary parts. However, the analog of things like I^2R in DC circuits does exist, and we will look at it below. Phasors can, however, be multiplied by complex numbers. Multiplying by a real number changes the length of a phasor, but not its phase. Multiplying by j rotates a phasor 90° anticlockwise. This is easy to see: if you have a number, say, 3, then j3 is an equal length along the positive imaginary axis--the phasor 3 + j0 has been rotated to 0 + j3. Multiplication by -j rotates 90° clockwise. Multiplication by 2/_30° doubles the length of a vector and rotates it 30° anticlockwise. All of this follows from the rules of exponents, and that j = e^(jπ/2). Play with this a while if it is not familiar. It is easy to get used to.
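These rotations can be played with directly, as suggested; a short Python sketch with arbitrary numbers:

```python
import cmath
import math

# Multiplying a phasor by j rotates it 90 degrees anticlockwise;
# multiplying by 2 at 30 degrees doubles its length and rotates it 30 degrees.
p = 3 + 0j
print(1j * p)            # 3 along the real axis becomes 3 along the imaginary axis
print(-1j * (1j * p))    # rotating back clockwise recovers the original

q = cmath.rect(2, math.radians(30)) * cmath.rect(1, math.radians(15))
print(abs(q), math.degrees(cmath.phase(q)))   # length doubled, angle 15 + 30 = 45 degrees
```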
The best thing about the exponential form is that d/dt e^(jωt) = jω e^(jωt). Since we forget the e^(jωt), differentiation with respect to time simply multiplies a phasor by jω. This makes calculus very easy. If the phasor 1 corresponds to cos ωt, then jω corresponds to -ω sin ωt (work it out, taking real parts).
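As a check on the rule, the real part of jωP e^(jωt) should match a numerical derivative of the real signal; a sketch with an arbitrary phasor P:

```python
import cmath

# Differentiation in time corresponds to multiplying the phasor by j*omega.
omega = 377.0
P = cmath.rect(10, cmath.pi / 6)    # an arbitrary phasor, 10 at 30 degrees

def signal(t):
    # The real signal represented by the phasor P.
    return (P * cmath.exp(1j * omega * t)).real

t, h = 0.001, 1e-7
numeric = (signal(t + h) - signal(t - h)) / (2 * h)        # central difference
by_phasor = (1j * omega * P * cmath.exp(1j * omega * t)).real
print(numeric, by_phasor)    # the two derivatives agree closely
```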
In DC circuits, we had the circuit element R, resistance. In AC circuits, we pick up two new ones where the rate of change is important. There is inductance, L, where e/L = di/dt. Here, e is the instantaneous voltage applied to the inductance, and di/dt is the rate of rise of the current. L is a constant, whose unit is henry = volt-sec/ampere = weber/ampere. If E and I are phasors that represent the time functions e and i, then E = jωL I. There is also capacitance, C, where i/C = de/dt. A constant current flowing into the capacitor results in a constant rate of increase of voltage. The units of the constant C are farad = amp-sec/volt or coul/volt. Here, we have I = jωC E.
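The two element laws become one-line computations in phasor form. The component and signal values below are arbitrary illustrations, not taken from the text:

```python
# E = j*omega*L*I for an inductor, I = j*omega*C*E for a capacitor.
# The component values below are arbitrary illustrations.
omega = 377.0            # 60 Hz power frequency

L = 0.1                  # henries
I = 2 + 0j               # 2 A current phasor through the inductor
E_L = 1j * omega * L * I
print(E_L)               # pure +j: the voltage leads the current by 90 degrees

C = 50e-6                # farads
E = 120 + 0j             # 120 V voltage phasor across the capacitor
I_C = 1j * omega * C * E
print(I_C)               # pure +j: the current leads the voltage by 90 degrees
```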
Suppose we have a series circuit, where the same current i flows through resistance R, inductance L and capacitance C. We add up the individual voltages to find the total voltage. In phasors, E = (R + jωL + 1/jωC) I. The quantity multiplying I is called the impedance Z. Z = R + j(ωL - 1/ωC) = R + jX. The additional term X is frequency-dependent, and is called the reactance. Impedance is not a phasor, simply a complex number. It is easy to see how we find it by adding and subtracting other complex quantities. We see that the series RLC circuit becomes an open circuit at zero and infinite frequencies, and it offers the minimum opposition to current at some intermediate frequency, where the reactance is zero. This condition is called series resonance.
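The reactance vanishes where ωL = 1/ωC, that is, at ω = 1/√(LC). A short numerical sketch, with illustrative component values chosen to put this resonance at 1000 radians per second:

```python
import math

# Series RLC: Z = R + j*(omega*L - 1/(omega*C)).
# Illustrative values chosen so that omega0 = 1/sqrt(L*C) = 1000 rad/s.
R, L, C = 10.0, 0.1, 10e-6

def Z_series(omega):
    return R + 1j * (omega * L - 1 / (omega * C))

omega0 = 1 / math.sqrt(L * C)
print(omega0)                  # about 1000 rad/s
print(Z_series(omega0))        # reactance vanishes: purely resistive, about 10 ohms
print(abs(Z_series(100.0)), abs(Z_series(10000.0)))   # much larger off resonance
```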
Now let there be a parallel circuit, with the same voltage impressed across a resistance R, a capacitance C and an inductance L. We add up the currents in each branch to find the total current. In phasors, I = (1/R + jωC + 1/jωL) E. The complex number Y = 1/R + j(ωC - 1/ωL) is called the admittance. Y is also written Y = G + jB, where G is the conductance and B the susceptance. The parallel RLC circuit becomes a short circuit at zero and infinite frequencies, and offers maximum opposition to current at some intermediate frequency, where the susceptance becomes zero. This condition is called parallel resonance. Do not be dismayed by all the names; they are all arbitrary names, and only make it clear what you are talking about, no more.
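The parallel case mirrors the series one. With the same illustrative L and C as before, the susceptance vanishes at the same frequency, ω = 1/√(LC):

```python
import math

# Parallel RLC: Y = 1/R + j*(omega*C - 1/(omega*L)).
# Illustrative values; the susceptance vanishes at omega0 = 1/sqrt(L*C).
R, L, C = 1000.0, 0.1, 10e-6

def Y_parallel(omega):
    return 1 / R + 1j * (omega * C - 1 / (omega * L))

omega0 = 1 / math.sqrt(L * C)
print(Y_parallel(omega0))            # susceptance vanishes: purely conductive
print(abs(1 / Y_parallel(omega0)))   # the impedance is greatest here, about 1000 ohms
```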
There is a pleasant duality between series and parallel circuits. They are treated in detail in textbooks, and the properties of resonance are closely examined. Useful circuit theorems (like Thévenin's Theorem) and various equivalent circuits are presented. One must be wary of equivalent circuits, however. They usually apply only to the frequency considered, and cannot be used to draw general conclusions. For example, the series RLC circuit can be replaced by an equivalent parallel RLC circuit by taking the reciprocal Y = 1/Z of the impedance. The R', L' and C' in this circuit are frequency-dependent values, and cannot be imagined as replaced by real circuit elements. They hold only for the particular frequency for which they were calculated. The general formulas, however, are quite valid.
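This frequency dependence is easy to exhibit. Here is a sketch that converts a series RL branch to its parallel equivalent by Y = 1/Z, with illustrative values:

```python
# Convert a series RL branch to its parallel equivalent via Y = 1/Z.
# The equivalent R' and L' change with frequency, as the text warns.
R, L = 10.0, 0.1      # illustrative series values

for omega in (100.0, 1000.0):
    Y = 1 / (R + 1j * omega * L)
    G, B = Y.real, Y.imag
    R_eq = 1 / G                   # parallel R'
    L_eq = -1 / (omega * B)        # parallel L' (B is negative, inductive)
    print(omega, R_eq, L_eq)       # a different R', L' at each frequency
```

At 100 rad/s the equivalents are R' = 20 ohms, L' = 0.2 H; at 1000 rad/s they are R' = 1010 ohms, L' = 0.101 H, so no fixed pair of real components could stand in for the branch at all frequencies.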
Now we have a method of describing AC circuits that is analogous to DC circuit analysis. The important quantities are now complex numbers, which must be combined by the appropriate methods. With the aid of a calculator, these methods are little harder than ordinary real-number arithmetic. E and I become phasors, while R becomes Z. Many of the circuit theorems are exactly the same, only replacing R by Z. In particular, networks can be analyzed in the same ways. In series-parallel circuits, Z behaves just as R did. The box at the right shows the Wye-Delta Transformation that is often useful. In a DC circuit, wye-delta equivalent circuits can be constructed from actual resistors, but in AC circuits the equivalent components are usually frequency-dependent, so this cannot be done. The equivalence is for a single frequency.
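As a sketch of how such a transformation is computed with complex impedances, here is a delta-to-wye conversion; the branch labeling is one common convention, assumed here for illustration, and the equivalence holds at a single frequency just as the text says:

```python
# Delta-to-wye transformation with complex impedances. The branch
# labeling below is one common convention, assumed for illustration.
def delta_to_wye(Zab, Zbc, Zca):
    s = Zab + Zbc + Zca
    Za = Zab * Zca / s    # wye arm at terminal A
    Zb = Zab * Zbc / s    # wye arm at terminal B
    Zc = Zbc * Zca / s    # wye arm at terminal C
    return Za, Zb, Zc

# Three equal delta impedances give wye arms one third as large.
Za, Zb, Zc = delta_to_wye(30 + 30j, 30 + 30j, 30 + 30j)
print(Za, Zb, Zc)
```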
Real circuit elements only approximately represent the ideal R, L and C. Any actual component acts as what it looks like only over a certain frequency range. A wire-wound resistor may have inductance, and at high frequencies may even behave like a capacitor. An iron-core inductor becomes a good capacitor at not very high frequencies. A capacitor has inductance, and may actually resonate at some frequency, acting as an inductor above this frequency and blocking the signals it was intended to bypass, as electrolytic capacitors can do. Ideally, the parameters R, L and C are supposed to be independent of frequency and signal amplitude. A nonlinear component, such as an iron-cored inductor and many others, may create signals at new and different frequencies. Sometimes this is what is wanted, sometimes it is not. AC circuit analysis depends on the linearity of the components, which allows addition and subtraction of currents and voltages, and the superposition of different frequencies.
Suppose you want to find the power, which is expressed as P = EI in DC circuits. The instantaneous power is still p = ei, but it varies at twice the frequency. (P is in watts if E is in volts and I in amperes). What we want is the average value, which is the integral of ei over a complete cycle, divided by the duration of the cycle. We can go right back to the beginning, and express e and i as sinusoids, and actually carry out the integration. If e = E cos (ωt), and i = I cos (ωt + θ), we find that P = (EI/2) cos θ. The relative phase angle is θ, and the cosine of it is called the power factor for obvious reasons. So far as the power is concerned, it does not matter whether E leads I, or I leads E (as here). We can get this result from the phasors by the rule P = Re(E*I)/2 = Re(EI*)/2. Re means take the real part, and the * means to take the complex conjugate, i.e., to change the sign of the imaginary part or the phase angle. It does not matter which factor is complex conjugated.
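The rule P = Re(EI*)/2 can be checked against the defining time average; a sketch with illustrative amplitudes and phase:

```python
import math

# Check P = Re(E * conj(I)) / 2 (peak-value phasors) against the
# time average of e(t) * i(t) over one cycle. Values are illustrative.
Epk, Ipk = 170.0, 10.0
theta = math.radians(30)      # current leads the voltage by 30 degrees
omega = 377.0

E = Epk + 0j
I = Ipk * complex(math.cos(theta), math.sin(theta))
P = (E * I.conjugate()).real / 2

# Average the instantaneous power numerically over one full period.
T = 2 * math.pi / omega
N = 10000
avg = sum(Epk * math.cos(omega * t) * Ipk * math.cos(omega * t + theta)
          for t in (k * T / N for k in range(N))) / N

print(P, avg)      # both equal (Epk * Ipk / 2) * cos(theta)
```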
The E or I in what we have just said are the peak values, or amplitudes, of the waves. Should we multiply all such amplitudes by 1/√2 = 0.707, then the power would be P = Re(EI*) = Re(IE*), without the factor of 1/2, just as for a DC circuit. The reduced amplitudes are called effective values, or rms values (root-mean-square), and are used in the actual calculations instead of the peak values. They are the values read on ammeters and voltmeters, and which appear in the specifications of electrical equipment. The usual supply voltage of 120 V in the US is an effective value. The peak value is actually 120√2 = 170 V. The power is also P = EI cos θ in terms of ordinary numbers, of course.
With respect to the voltage V, the current can be resolved into components in phase with V and in quadrature with V. V times the in-phase or energy component gives power in watts. V times the quadrature component does not represent an average flow of power, but energy surging back and forth. The product of V and the quadrature current is called reactive volt-amperes or vars, to make the very necessary distinction with watts. Apparatus handling quadrature currents is rated in kVA or kilovars instead of in kW.
In a series RC circuit, Z = R - j/ωC. The relative phase is tan^(-1)(-1/ωRC). This means that the E phasor is behind the I phasor, or that the current leads the voltage. This is reasonable; we have to put charge on the capacitor to make its voltage rise, and this takes time. Some people remember ICE for I leads E with C. Capacitive reactance is negative (but capacitive susceptance is positive). RC is the capacitive time constant in seconds. The circuit and phasor diagram are shown at the right. The circuit diagram is necessary to show the positive directions of the phasors on the phasor diagram. The phasor diagram is a vivid graphical display of what is going on in the circuit, and aids the reasoning greatly.
In a series RL circuit, Z = R + jωL. The relative phase is tan^(-1)(ωL/R). This means that the E phasor is ahead of the I phasor, or that the voltage leads the current. Again, this is reasonable, since we have to put a voltage across the inductor to get the current to increase, and it takes time to put energy in the magnetic field. Some people remember ELI for E leads I with L. Inductive reactance is positive, but inductive susceptance is negative. L/R is the inductive time constant in seconds. Circuit and phasor diagrams are shown at the left. In the problems, we introduce the dimensionless ratio Q, the ratio of the inductive reactance to the resistance of an RL branch. For a given physical inductor, Q is more constant as the frequency varies than is R. At higher frequencies, R is not simply the DC resistance of the inductor.
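Both mnemonics, and the Q of an RL branch, can be seen numerically; the component values below are arbitrary illustrations:

```python
import cmath
import math

# ICE and ELI in code: the angle of Z is the angle by which E leads I.
# Component values are arbitrary illustrations.
omega = 377.0
R = 100.0

Z_rc = R - 1j / (omega * 50e-6)      # series RC, C = 50 microfarads
Z_rl = R + 1j * omega * 0.5          # series RL, L = 0.5 H

print(math.degrees(cmath.phase(Z_rc)))   # negative angle: the current leads (ICE)
print(math.degrees(cmath.phase(Z_rl)))   # positive angle: the voltage leads (ELI)

Q = (omega * 0.5) / R                    # Q = (inductive reactance) / R
print(Q)
```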
This has been a very short introduction to AC circuits, but it includes the fundamentals and will do most of the job for you, if you don't attempt too much. The basic idea is that of complex quantities and phasors, and that is 75% of what matters. It is a tool to be used, not admired hanging on the wall.
I cannot actually recommend a good general introduction to AC circuits, but there are many textbooks going around that purport to include it. The difficulty is that most seem to be written by authors whose life work appears to have been teaching circuits to beginning engineering students, without ever having used circuits for anything profitable or amusing. A long time ago, DC circuits was a separate course, and AC circuits was treated in detail later. Students saw rotating machines in laboratories, and power engineering was dominant. Then, with the rise of electronics after World War II, DC circuits and electromagnetism were presented in a unified Introduction to Electrical Engineering course. Some of these texts (not all) were quite good, since their authors knew about the subject, having been trained in the old tradition. AC circuits was still a separate course, and still largely oriented towards power applications. Solid state electronics caused another upheaval in the 1970's, when DC and AC circuits, together with miscellanea from courses such as Transient Phenomena, and Laplace transform methods, were packed into a general circuits course. Much of the AC circuits and power applications vanished, and the people who taught the new courses never knew much about these things anyway. This explains the current lack of good texts that have gone through many editions. Today's typical graduating EE knows very little of AC circuits, and nothing at all about power factor, three-phase or machines. Faculty, not usually composed of real engineers, now work on solid state, signal analysis and computers, worrying little about circuits. This is not good, but that is how it is.
Books from the early days can still be studied with profit. For example, there is C. L. Dawes, A Course In Electrical Engineering (New York: McGraw-Hill, 1937); 2 vols.: Vol. I, Direct Currents, Vol. II, Alternating Currents. Dawes, associate professor at Harvard, had only an S.B. degree, but he knew what he was talking about and tries to explain it to you, unlike modern authors who are trying to bluff it out. The descriptions of real apparatus are enjoyable.
R. M. Kerchner and G. F. Corcoran, Alternating-Current Circuits, 3rd ed. (New York: John Wiley & Sons, 1951) is a late example of a classic text. About half the book is fundamentals, the rest applications for power (transmission line calculations, symmetrical components, short-circuit calculations) and electronics (wave filters).
P. Horowitz and W. Hill, The Art of Electronics (Cambridge: Cambridge University Press, 1980) includes the circuit theory necessary for electronics, and is a good introduction.
Composed by J. B. Calvert
Created 1 February 2001
Last revised 5 February 2001