Tensors in Euclidean spaces, and a powerful way to work with them using indices

- Introduction
- Coordinates, Components and Tensors
- Coordinate Rotations
- Index Notation
- The Antisymmetric Tensor Density
- Integral Theorems
- Conclusion
- Summary of Rules for Working With Indices
- Exercises
- References

This paper explains the concept of tensor under rotations in a Euclidean space, and the methods of calculating with indices. The reader should already know something about vectors, perhaps as introduced in a course in mechanics or electromagnetism. An acquaintance with matrices and determinants will also be helpful. The method of indices is more powerful and transparent than vector analysis, but not as well-known. These methods can be extended to tensors under Lorentz transformations in relativity, where the index method is essential for theoretical work. There are a few exercises at the end, and the reader is encouraged to work out the examples suggested in the text.

When one wishes to describe the change of position of a body in space, it is necessary to give not only the amount of displacement, but also its direction. One way to do this is to set up a rectangular coordinate system, and to specify the displacement in the three mutually orthogonal directions, often called x, y and z. Let's use numbers instead, 1, 2 and 3, since we will want to save letters for magnitudes. These numbers are just tags for the different directions. The term *vector* is assigned to an abstract object representing the displacement. The word comes from Latin, meaning "carrier," and it is thought of as carrying one point into another. We can give it a symbol, such as **x**, but we still need some way of expressing it quantitatively. This is done, of course, by giving the three orthogonal displacements, which we shall denote by x_{1}, x_{2} and x_{3}. These are the *rectangular components* of the vector. They are only one way to represent the vector concretely. It is clear geometrically that the vector also has a *magnitude* equal to the square root of the sum of the squares of the components, and that each component is the product of the magnitude and the cosine of the angle between the vector and the corresponding coordinate axis. This method of specification, by the magnitude and the *direction cosines*, is just as good as by the components. Although there are three direction cosines, the sum of their squares is unity, so there are only two independent specifications of direction. In either case, three quantities must be given to specify a vector, which is a general rule.

The fundamental property of a vector is that it can be projected along any line, as shown in the diagram on the left. The *magnitude* of the vector, written |**v**|, is the maximum value of this projection, onto a line with the same direction as the vector. The projection, or component along the line, is the product of the magnitude and the cosine of the angle between the vector and the line, as is very well known. This projection has a physical meaning for displacements, forces and other physical vectors, and is not simply a mathematical definition. In the preceding paragraph, we used the projection of a vector on the three orthogonal coordinate axes to describe the vector. It is not necessary that the three directions be orthogonal, simply that small displacements along the three directions are the edges of a parallelepiped of nonzero volume.

The diagram at the right illustrates how the components are related to the angles between the vector and the coordinate axes. Direction cosines are just the cosines of these angles. It is easy to verify that the sum of the squares of the direction cosines is unity by finding the magnitude in terms of the expressions for the components. Now, if we happen to choose different axes, still orthonormal but in different directions, we will get different components and different direction cosines. However, they still represent the same vector, and are related in a perfectly definite way. This is a profound generalization. The vector **v** is an object represented in many different, equivalent ways. In any explicit case, we must work with a representation, but which one we work with is not significant; we can always choose it to make the problem as easy as possible. Any object like this, which can be considered abstractly as an individual, but which has representations that are found according to a definite rule in any coordinate system, is called a *tensor*.

The word tensor is also from Latin, and means a "stretcher." It first arose as the name of an object that describes how a body is strained when it is distorted. Such a quantity is more complicated than a vector, requiring nine components for its specification. As in the case of a vector, the values of the components depend on the choice of coordinate system, but in a regular, systematic way. The name, however, has now been extended to cover any object, of however many components (including vectors), whose components are related between coordinate systems by a certain rule based on the rule for vector components. What we are going to present here is a method of working with any sort of tensor, not just a vector, in a symbolic, algebraic way. The tensors will be called Euclidean tensors, since they are closely related to the properties of Euclidean space (not necessarily three-dimensional). Another term is Cartesian tensor.

From the vector displacement, we can derive the vector velocity, the vector acceleration by differentiation with respect to time, and then by Newton's Laws the vector force. In electromagnetism, the fields are force fields, so we have the vector electric field and the vector magnetic field. Therefore, we meet vectors in a wide area of theoretical physics, including mechanics, statics, elasticity, hydrodynamics, and electromagnetism. Vector methods are widely used in presenting these subjects. Every physics major and engineering major is well acquainted with them. Vector analysis, however, is only a means of notation that makes the relations more transparent and reduces the amount of writing, in comparison with the use of components. For actual calculation, reference must be made to numerous rules and formulas, and often to components. In elasticity, hydrodynamics and electromagnetism, other tensors with more components than three are met with quite frequently, and they are usually treated by introducing coordinates explicitly. The advantages of vector notation are then lost, and the student is enveloped in coordinate algebra that makes it difficult to understand the subject clearly. Euclidean tensors restore the clarity and generality of presentation, and make it easy to find general relations.

A method of extending vector notation to quantities with more than three components was the use of *dyadics*, sums of terms consisting of *dyads*, or vectors written side by side with no product intended. For example, the unit tensor was represented by I = **i****i** + **j****j** + **k****k**. The dot product of I with any vector on either side gave back the vector, as one can easily see. This method was cumbersome and saved little writing in most cases. It is seldom encountered these days, but is an interesting detail.

It is good to remember the definition of a vector: **A vector is a quantity with components v_{i} that transform under coordinate rotations like the position vector x_{i}**. There is a weaker, but more general, definition of a vector as an n-tuple of numbers for which addition, subtraction and multiplication by a scalar are defined. When we add definitions for the modulus or norm of a vector, or the scalar product, we are giving the vectors additional geometric structure beyond that of a mere n-tuple.

It is very important for the understanding of what follows to be quite clear on how the relation between two orthogonal coordinate systems, called a *rotation*, can be described. In the diagram at the left, **i**, **j** and **k** are unit vectors, that is, vectors of unit magnitude, along the coordinate axes 1, 2 and 3, respectively. Similarly, **i'**, **j'** and **k'** are unit vectors along the coordinate axes 1', 2' and 3'. There are nine angles between the coordinate directions of the two systems. Only three are shown to avoid cluttering the figure. These angles are between 0 and 180°, so their cosines are between 1 and -1. These cosines are the components of each of the unit vectors in the other system. In their own system their components are (1, 0, 0), (0, 1, 0) and (0, 0, 1). Although there are nine cosines, only three parameters can completely specify the rotation. For example, two parameters can give the direction of the new 3-axis, and the third parameter the rotation about the new 3-axis that brings the 1- and 2-axes into the desired position. The nine cosines are functions of the three parameters used to define the rotation.

|        | **i** | **j** | **k** |
|--------|-------|-------|-------|
| **i'** | a_{11} | a_{12} | a_{13} |
| **j'** | a_{21} | a_{22} | a_{23} |
| **k'** | a_{31} | a_{32} | a_{33} |

Since the vectors are unit vectors, the sum of the squares of the elements in any row or column must equal 1. Since the vectors are orthogonal, the sum of the products of the elements in any two rows or any two columns must be zero. Considering the rows, we have six conditions connecting nine elements, leaving three independent values, which checks with the three parameters that specify a rotation. It is the same considering the columns. It is clear that none of the elements can be greater than unity, and that they cannot all have the same sign (or else the products could not sum to zero).

As a concrete example, consider a rotation through an angle θ about the 3-axis, as shown in the diagram on the right, that brings the 1- and 2-axes into new positions. The sine function comes from the cosine of 90° - θ, and the minus sign results from an angle greater than 90°. As an exercise, write the matrices for rotations about the 1- and 2-axes. A positive rotation about the 3-axis is taken as one that would advance a right-handed screw along the 3-axis.
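The rotation just described is easy to check numerically. The following sketch (the function name and the checks are my own, not from the text) builds the matrix for a rotation about the 3-axis, with rows equal to the primed unit vectors, and verifies that the rows are orthonormal:

```python
import math

def rot3(theta):
    """Matrix a_ij for a rotation by theta about the 3-axis.
    Row i holds the components of the i-th primed unit vector
    in the unprimed system, as in the table above."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s, 0.0],
            [-s, c, 0.0],
            [0.0, 0.0, 1.0]]

a = rot3(math.pi / 6)

# The sum of squares of each row is 1; the sum of products of
# two distinct rows is 0, as required of the direction cosines.
for i in range(3):
    for j in range(3):
        dot = sum(a[i][k] * a[j][k] for k in range(3))
        assert abs(dot - (1.0 if i == j else 0.0)) < 1e-12
```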

Let us represent the vector **x** as the vector sum x_{1}**i** + x_{2}**j** + x_{3}**k**. Using the table, replace each unit vector by its vector sum in the primed system, and collect the coefficients of **i'**, **j'** and **k'**. These coefficients are then the components of the vector in the primed system. Actually write this down, so that it is clear how the elements of the rotation matrix are used. There is a much neater way to write this down, using *indices*. The vector **x** is represented by x_{i}, where the index i takes the values 1, 2 and 3. The rotation matrix **A** is represented by a_{ij}, where i is the row index and j is the column index (as in the table above). Then, we write simply x'_{i} = a_{ij}x_{j}. When an index is repeated, as j is here, we assume a sum of terms for each possible value of j. Explicitly, then, we have x'_{1} = a_{11}x_{1} + a_{12}x_{2} + a_{13}x_{3}. An index that is summed over on the right-hand side of an equation cannot appear on the left-hand side. The letter used can be replaced by any letter you wish that is not already used in the expression. Such an index is called a *dummy* index.
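The summation convention translates directly into a loop over the dummy index. Here is a minimal pure-Python sketch (the names are my own) of x'_{i} = a_{ij}x_{j}, checked on a rotation about the 3-axis:

```python
import math

def transform(a, x):
    """x'_i = a_ij x_j : the repeated index j is summed over 1..3
    (0..2 in Python's 0-based lists)."""
    return [sum(a[i][j] * x[j] for j in range(3)) for i in range(3)]

theta = math.pi / 4
a = [[math.cos(theta), math.sin(theta), 0.0],
     [-math.sin(theta), math.cos(theta), 0.0],
     [0.0, 0.0, 1.0]]

x = [1.0, 2.0, 3.0]
xp = transform(a, x)

# A rotation about the 3-axis leaves the 3-component alone,
# and preserves the magnitude of the vector.
assert abs(xp[2] - 3.0) < 1e-12
assert abs(sum(c * c for c in xp) - sum(c * c for c in x)) < 1e-12
```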

Now let's go the other way, starting with x'_{i} (when we use an index like this, we are thinking of the set of three components, one for each value of the index) and ending with x_{i} (not necessarily the same i). When this is done, you will have x_{i} = a_{ji}x'_{j}. Note carefully the order of the indices i and j in a_{ji}: when i and j are unequal, a_{ji} is not in general the same as a_{ij}. When i and j are equal, they are, of course, the same, being simply the diagonal elements of the matrix. The transformation from the primed to the unprimed system is the *inverse* of the transformation from the unprimed to the primed system. We have just shown that the matrix of the inverse rotation is just the *transpose* of the matrix of the direct rotation. Any such matrix is called an *orthogonal* matrix, and describes some rotation. We have also learned an important rule of index manipulation: we cannot arbitrarily change the order of indices. We can change a dummy index, however, and we can choose a single index that appears on both sides of an equation arbitrarily.
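That the transpose undoes the rotation can be verified component by component. The check below (my own, for illustration) confirms a_{ji}a_{jk} = δ_{ik} for a sample rotation:

```python
import math

theta = 0.7  # an arbitrary rotation angle about the 3-axis
a = [[math.cos(theta), math.sin(theta), 0.0],
     [-math.sin(theta), math.cos(theta), 0.0],
     [0.0, 0.0, 1.0]]

# a_ji a_jk = delta_ik : applying the rotation and then its
# transpose returns every vector to where it started.
for i in range(3):
    for k in range(3):
        s = sum(a[j][i] * a[j][k] for j in range(3))
        assert abs(s - (1.0 if i == k else 0.0)) < 1e-12
```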

More general coordinate transformations can also be considered. If the transformation is not orthogonal, stretches and inclinations of the coordinate axes may be involved. Tensors still have the same fundamental behavior, but now it is necessary to distinguish between *contravariant* indices, which transform like the coordinates x_{i} and *covariant* indices, which transform like the derivatives ∂ _{i}. Contraction can occur only with one contravariant and one covariant index, and it is necessary to define *metric tensors* g_{ij} and g^{ij} to change indices from contravariant to covariant and vice versa. These complications are not necessary except in relativity, so we will not mention them further.

One extension that is often useful, however, is to *inversions*, so that the more general rotary inversions can be included in our transformations. Any rotary inversion is the combination of a rotation followed by an inversion in the origin, **r** → -**r**, where the transformation matrix is a_{ij} = -δ_{ij}. All the diagonal elements are -1, all others zero. The behavior of tensors under inversion is either to change sign or to remain unchanged. A rank-n ordinary tensor changes by a factor (-1)^{n} under inversion. A *pseudotensor* changes by the opposite sign upon inversion.

If we have three associated quantities that we assert represent a vector in some coordinate system, then we represent them abstractly by v_{i}, and they transform like v'_{i} = a_{ij}v_{j}. These are not just any old three quantities--they must respond to rotations in the way specified, and since they do, they are analogous to displacements in some sense. If we have two vectors v_{i} and u_{i}, the nine products v_{i}u_{j} transform like v'_{i}u'_{j} = a_{ik}a_{jl}v_{k}u_{l}. We can have nine quantities that transform like this that do not have to be the products of two vector components, represented by t_{ij}. Then, t'_{ij} = a_{ik}a_{jl}t_{kl}. Such a quantity was the original *tensor*, with two indices. This concept can be extended to any number of indices, and the sets of quantities are said to represent a tensor of rank n, where n is the number of indices. A vector is of rank 1. The objects we have just considered are of rank 2. A quantity that does not change at all under rotation is called a *scalar*, and is of rank 0.

A matrix has elements tagged by two indices, but it is necessarily no more a tensor than three arbitrary elements are a vector. A tensor is defined by its transformation properties, not by how it looks. A rank-2 tensor is often represented by a matrix, and matrices have interesting properties and algebra, but this relates solely to representation and manipulation. Matrices are, in fact, used to represent rank-2 tensors and to work with them. Many matrix operations are much more easily and unambiguously carried out by indices, however. For example the matrix product C = AB can be written c_{ij} = a_{ik}b_{kj}, with a sum over the dummy index k. If these are tensors, then the rank-4 tensor a_{ik}b_{lj} is said to be *contracted* to the rank-2 tensor c_{ij} by setting k = l and summing. If v_{i} and u_{i} are vectors, then the *outer product* u_{i}v_{j} is a rank-2 tensor, while the contraction u_{i}v_{i} is rank 0, a scalar, called the *inner* or *scalar* product. Contraction is one of the primary means for producing one tensor from another.
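Contraction is just a sum over a shared index, which a nested loop expresses directly. A sketch (the function names are my own):

```python
def contract_product(a, b):
    """c_ij = a_ik b_kj : the matrix product as a contraction on k."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inner(u, v):
    """u_i v_i : full contraction of the outer product, a scalar."""
    return sum(u[i] * v[i] for i in range(3))

b = [[1, 2, 0],
     [0, 1, 0],
     [0, 0, 1]]
c = contract_product(b, b)
assert c[0][1] == 4          # (b.b)_12 = b_1k b_k2 = 1*2 + 2*1
assert inner([1, 2, 3], [4, 5, 6]) == 32
```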

The Kronecker delta is the rank-2 tensor whose components are δ_{ij} = 1 if i = j, and 0 otherwise. To show that it is a tensor, we transform it: δ'_{ij} = a_{ik}a_{jl}δ_{kl} = a_{ik}a_{jk} = δ_{ij}, from the properties of the rotation matrix elements that we already know. It has the same components in every coordinate system; we can say that it is spherically symmetric. The pressure tensor in a liquid is a multiple of this tensor, expressing the property that the pressure is independent of direction. The tensor is not affected by rotation, but it is not a scalar. A scalar is a *single* element unaffected by rotation; here we have nine elements. The Kronecker delta, therefore, has a right to call itself a tensor. The matrix that represents it is the unit matrix, of course.

As an exercise, let us show that the magnitude of a vector is preserved, or invariant, under rotation. The square of the magnitude of a vector x_{i} can be expressed as the contraction x_{i}x_{i}, which, of course, implies this. However, let's show it explicitly using the transformation. Then, x'_{i}x'_{i} = a_{ir}a_{is}x_{r}x_{s}. If the magnitude is to be preserved, then a_{ir}a_{is} must equal δ _{rs}. Conversely, if the sum is δ _{rs}, then the magnitude is preserved. By working with x_{i}x_{i} instead, we can prove the same thing for a sum over the columns.

The sum of the diagonal elements of a rank-2 tensor is called its *trace* or *spur*. Since the trace is the contraction t_{ii}, a rank-0 tensor, it is immediately seen to be *invariant* under rotations. As an exercise, you may want to prove this explicitly, using the rotation matrix elements. Another scalar invariant of a rank-2 tensor is its *determinant*, which will be defined below; a third is the sum of the principal 2x2 minors. These three invariants are the coefficients of the tensor's characteristic polynomial.

If a_{ij} is a tensor, so is its *transpose* a_{ji}. So, then, are the combinations a_{ij} + a_{ji} and a_{ij} - a_{ji}. The first combination has six independent elements, and is *symmetrical* across the diagonal. The second has three independent elements, the diagonal elements are zero, and elements similarly placed on each side of the diagonal are opposite in sign. Any rank-2 tensor can be resolved into *symmetrical* and *antisymmetrical* parts in this way. Many physical quantities are symmetrical tensors, while the three elements of an antisymmetric tensor can be identified with the three components of an associated vector, as we shall see.

If a tensor is a function of position in space, the result is a tensor field. For example, the temperature T(x_{i}) in a region is a rank-0 or scalar field, since it does not depend on the orientation of the coordinate axes. The velocity **v**(x_{i}) in a liquid is a rank-1 or vector field. The stress S_{ij}(x_{i}) is a rank-2 or tensor field. Here, x_{i} stands for the usual x,y,z. We can get a vector by taking the space derivative of a scalar field, for example ∂T/∂x_{i}, which is the temperature gradient, abbreviated grad T. The partial derivative is used because there are three independent variables, and we are taking the derivative with respect to only one of them. For convenience, we usually write ∂/∂x_{i} as simply ∂_{i}. As an exercise, show that the components ∂_{i}φ, where φ is a scalar function of position, transform under rotation as ∂'_{i}φ = a_{ij}∂_{j}φ, exactly as a vector does. Differentiation can, therefore, create a tensor index. We can write this concisely by putting the new index after a comma, as v_{i,j}, which would be ∂_{j}v_{i}. The temperature gradient would then be grad T = T_{,i}. In most cases, however, we will use the partial sign instead of the comma convention.

If you are familiar with vector fields, you will recognize many old friends expressed in index notation. The gradient of a function f is ∂_{i}f = f_{,i}. The divergence of a vector **v** is div **v** = ∂_{i}v_{i} = v_{i,i} (note the sum). The Laplacian, div grad f, is ∂_{i}∂_{i}f = f_{,ii}. Again, note the sum. We can express a partial derivative with respect to time, if these quantities are considered to vary with time, by a subscript t. Time differentiation has no effect on the tensor properties. The wave equation can then be concisely written f_{tt} = c^{2}f_{,ii}.

We emphasize that there are two kinds of indices in an expression: "free" indices that identify, and "dummy" indices that are summed over. No index should appear more than twice, and if it appears twice, it is a dummy or summation index. Free indices may appear only once. A particular component is identified by an index 1 to 3. Anything that looks like a tensor index but is not, should be put in parentheses. Dummy indices can be chosen at will so they do not conflict with any other indices. The letters chosen for free indices are, of course, arbitrary, but when they appear on opposite sides of an equation they must be changed simultaneously. An index always implies tensor transformation properties. The range of an index must always be agreed, and is 1 to 3 unless otherwise specified.

Consider the quantities ε_{ijk} that are nonzero only if i, j and k are all different, and +1 or -1 as ijk is an even or odd permutation of 123. That is, 123, 231 and 312 give +1, and 213, 132 and 321 give -1. Of the 27 possible elements, only six are nonzero. This object is called the *antisymmetric tensor density*, or the *alternating tensor*. Like the Kronecker delta, it is indeed a tensor, which can be discovered by transforming it. One finds ε'_{ijk} = a_{il}a_{jm}a_{kn}ε_{lmn} = (det a)ε_{ijk}; for i,j,k = 1,2,3 the sum on the right-hand side is just the determinant of the matrix a_{ij}, and the other components follow by antisymmetry. For any rotation, this determinant is +1, so ε_{ijk} transforms into itself, as the Kronecker delta does.
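For explicit computation, ε_{ijk} can be generated from a compact closed form (a standard trick, not from the text). The sketch below tabulates all 27 components and confirms that only six are nonzero:

```python
def eps(i, j, k):
    """epsilon_ijk for indices 1..3: +1 for even permutations of 123,
    -1 for odd permutations, 0 if any index is repeated."""
    return (i - j) * (j - k) * (k - i) // 2

vals = {(i, j, k): eps(i, j, k)
        for i in (1, 2, 3) for j in (1, 2, 3) for k in (1, 2, 3)}

assert sum(1 for v in vals.values() if v != 0) == 6
assert vals[1, 2, 3] == vals[2, 3, 1] == vals[3, 1, 2] == 1
assert vals[2, 1, 3] == vals[1, 3, 2] == vals[3, 2, 1] == -1
```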

The most-used property of the antisymmetric tensor density is the result of contracting a product of antisymmetric tensor densities on one index. The product of alternating tensors, contracted on i, is ε_{ijk}ε_{irs} = δ_{jr}δ_{ks} - δ_{js}δ_{kr}. Note that the Kronecker deltas are the product of those with corresponding indices (j,r and k,s) minus the product with crossed indices (j,s and k,r). It may help to begin by permuting the indices cyclically so that the summation index is first in both tensors. This result is proved by finding all the components of the two rank-4 tensors on each side of the equality, and showing that they are the same. These components are either +1, -1, or zero, and there are 81 of them (mostly zero). Look for the nonzero components, and observe that they are the same on both sides.
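The 81-component check just suggested is easily automated. This sketch exhausts every combination of the four free indices:

```python
def eps(i, j, k):
    # +1/-1 for even/odd permutations of (1, 2, 3), else 0
    return (i - j) * (j - k) * (k - i) // 2

def delta(i, j):
    return 1 if i == j else 0

# eps_ijk eps_irs = delta_jr delta_ks - delta_js delta_kr,
# checked for all 81 values of the free indices j, k, r, s.
for j in range(1, 4):
    for k in range(1, 4):
        for r in range(1, 4):
            for s in range(1, 4):
                lhs = sum(eps(i, j, k) * eps(i, r, s) for i in range(1, 4))
                rhs = delta(j, r) * delta(k, s) - delta(j, s) * delta(k, r)
                assert lhs == rhs
```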

The contraction of tensor densities is called *Lagrange's Identity* in vector analysis, where it appears in the form (**r** x **s**)·( **t** x **u**) = (**r**·**t**)(**s**·**u**) - (**r**·**u**)(**s**·**t**). This is not as general as the index expression, which can also be used to find (**r** x **s**) x **t** directly. This can also be done using Lagrange's Identity and some creative algebra.

Perhaps this is a good place to review a little about determinants, which are closely connected with the antisymmetric tensor density. The determinant of a matrix is the sum of all products of elements that come from different rows and columns times +1 or -1, depending on which elements are used. For a 3x3 matrix, the sum is that given above using the tensor density. If two rows or columns are the same, or proportional, the determinant is zero. If two rows or columns are interchanged, the determinant changes sign. Interchanging rows and columns does not change the determinant. The determinant of the product of matrices is the product of the determinants of the matrices. Refer to an algebra book for further properties, and for the proofs of the assertions made here.
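The 3x3 determinant as a sum over the tensor density can be written out directly. A sketch (Python lists are 0-based, so the indices are shifted by one):

```python
def eps(i, j, k):
    # +1/-1 for even/odd permutations of (1, 2, 3), else 0
    return (i - j) * (j - k) * (k - i) // 2

def det3(m):
    """det(a) = eps_ijk a_1i a_2j a_3k, summed over i, j, k."""
    return sum(eps(i + 1, j + 1, k + 1) * m[0][i] * m[1][j] * m[2][k]
               for i in range(3) for j in range(3) for k in range(3))

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
swapped = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]  # two rows interchanged

assert det3(identity) == 1
assert det3(swapped) == -1                   # interchange changes the sign
assert det3([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) == 0  # dependent rows
```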

If the rotation matrix is denoted by A, then the inverse rotation matrix A^{-1} equals the transposed matrix, A'. Thus, AA' = I, where I is the identity matrix, with determinant +1. Now, det (AA') = det A x det A' = (det A)^{2} = 1, so det A = ±1. Since det I = +1, det A = +1 as well for any rotation derivable continuously from the identity. If we reverse the direction of a coordinate, we have a discontinuous transformation that is, however, still orthogonal. In this case, the determinant will become -1. The determinant of the rotation matrix is just the volume of the small cube whose sides are the unit vectors, which is, of course, 1. The tensor density is called a density because of the determinant appearing in its transformation. For rotations this is just +1, but in more general transformations, it is a volume element multiplying the tensor density. The determinant of an inversion is -1, so tensor densities change sign on inversion, while ordinary tensors do not. Tensor densities are sometimes called *pseudotensors*.

Consider the rank-1 tensor c_{i} = ε_{ijk}u_{j}v_{k}. It is a vector, or, more precisely, a vector density or pseudovector because of the presence of the tensor density. This means that it will not change sign on an inversion of axes, while an ordinary vector would be reversed. It changes sign if the vectors u and v are taken in the opposite order, and is zero if u and v are in the same direction. It is closely related to the antisymmetric tensor u_{j}v_{k} - u_{k}v_{j}. The 23 component of the antisymmetric tensor is the same as the 1 component of c, for example. This tensor is the usual *vector product* of two vectors.
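The vector product follows at once from c_{i} = ε_{ijk}u_{j}v_{k}. The sketch below (0-based lists, my own function names) checks it against familiar special cases:

```python
def eps(i, j, k):
    # +1/-1 for even/odd permutations of (1, 2, 3), else 0
    return (i - j) * (j - k) * (k - i) // 2

def cross(u, v):
    """c_i = eps_ijk u_j v_k (indices shifted for 0-based lists)."""
    return [sum(eps(i + 1, j + 1, k + 1) * u[j] * v[k]
                for j in range(3) for k in range(3))
            for i in range(3)]

assert cross([1, 0, 0], [0, 1, 0]) == [0, 0, 1]   # i x j = k
assert cross([0, 1, 0], [1, 0, 0]) == [0, 0, -1]  # sign change on reversal
assert cross([2, 5, 7], [2, 5, 7]) == [0, 0, 0]   # parallel vectors give zero
```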

A vector a_{k} can be associated with a rank-2 tensor A_{ij} by using the antisymmetric tensor density: A_{ij} = ε_{ijk}a_{k}. Write out this tensor as a 3x3 matrix to see how the components are associated. Note that it is antisymmetric and has only three independent components. It is a *pseudotensor* because of the presence of ε_{ijk}. Under an inversion, the vector **a** becomes -**a**, and so the pseudotensor changes sign. On the other hand, if it were a normal rank-2 antisymmetric tensor, then the vector it defines would be a *pseudovector*. The property of being a pseudotensor has nothing to do with the property of being antisymmetric. If the components of an antisymmetric tensor are the infinitesimal velocities of points of a fluid relative to a certain point, then the associated vector is twice the angular velocity of rotation **ω**, and is a pseudovector; the curl of the velocity field is called the *vorticity*. In vectors, this is expressed by curl **v** = 2**ω**.

The curl of a vector field **v** is (curl **v**)_{i} = ε_{ijk}v_{k,j}. Sometimes, instead of v_{k,j} one writes ∂_{j}v_{k}, where ∂_{i} = ∂/∂x_{i}. This looks more complicated than the usual vector expression using del, but it is explicit and can be used immediately for calculations, unlike the del expression that must be expanded in components first. The most important use of the antisymmetric tensor density is to express the cross product of vector analysis. Let's prove that div curl **v** = 0, a familiar result. With indices, this is just div curl **v** = ε_{ijk}v_{k,ji} = ε_{ijk}v_{k,ij} = -ε_{ijk}v_{k,ji} = 0. The first step uses the fact that the derivative does not change when the order of differentiation is reversed; the second relabels the dummy indices i and j and uses the antisymmetry of the tensor density. A quantity equal to its own negative must vanish.

This conclusion deserves to be emphasized, since it quite often arises in tensor analysis. If we have the product of a symmetric quantity s_{ij} and an antisymmetric quantity a_{kl} contracted on both indices, we have the scalar s_{ij}a_{ij}, and this scalar is identically zero. The reason is that if we permute the indices on the antisymmetric factor, the sign of the scalar changes. If we then permute the indices on the symmetric factor, there is no change--the minus sign remains. But now we can put i for j and j for i, since they are dummy indices, and we have the original scalar equal to its negative. The only way this can happen is that the scalar be zero. Contemplate this until it is clear, since it is a useful result.
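A direct numerical check of this result, with an arbitrary symmetric and antisymmetric pair (the entries are made up):

```python
s = [[1, 4, 5],
     [4, 2, 6],
     [5, 6, 3]]    # symmetric: s_ij = s_ji

a = [[0, 7, -8],
     [-7, 0, 9],
     [8, -9, 0]]   # antisymmetric: a_ij = -a_ji

# The full contraction s_ij a_ij vanishes identically:
# each off-diagonal pair cancels, and the diagonal of a is zero.
total = sum(s[i][j] * a[i][j] for i in range(3) for j in range(3))
assert total == 0
```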

In vector analysis, one meets the scalar product of three vectors **a·bxc**. In index notation, it becomes ε _{ijk}a_{i}b_{j}c_{k}, a very symmetrical result that is evidently a scalar. It changes sign if we interchange any two vectors (why?), and is zero if any two vectors are proportional (why? hint: the product of the two is a symmetric quantity). It is, in fact, just the determinant of the matrix whose rows are the components of the three vectors, in order.

There is also the vector triple product, for example **ax(bxc)**. In index notation, it is ε_{ijk}a_{j}(ε_{krs}b_{r}c_{s}), or (ε_{ijk}ε_{krs})a_{j}b_{r}c_{s}. The contraction of the two antisymmetric tensor densities is evaluated in terms of Kronecker deltas as explained above. With this result, the vector triple product reduces to b_{i}a_{j}c_{j} - c_{i}a_{j}b_{j}, or **b(a·c) - c(a·b)**, the familiar result, but obtained by straightforward algebra.
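The reduction can be confirmed numerically for sample vectors. A sketch using the ε-based cross product (the numbers are arbitrary):

```python
def eps(i, j, k):
    # +1/-1 for even/odd permutations of (1, 2, 3), else 0
    return (i - j) * (j - k) * (k - i) // 2

def cross(u, v):
    # c_i = eps_ijk u_j v_k (indices shifted for 0-based lists)
    return [sum(eps(i + 1, j + 1, k + 1) * u[j] * v[k]
                for j in range(3) for k in range(3))
            for i in range(3)]

def dot(u, v):
    return sum(u[i] * v[i] for i in range(3))

a, b, c = [1, 2, 3], [4, 5, 6], [7, 8, 10]
lhs = cross(a, cross(b, c))
rhs = [b[i] * dot(a, c) - c[i] * dot(a, b) for i in range(3)]
assert lhs == rhs  # a x (b x c) = b(a.c) - c(a.b)
```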

The divergence theorem, or Gauss's Theorem, is expressed in index notation by ∫(V)∂_{i}v_{i}dV = ∫(S)v_{i}dS_{i}. Here, dV = dx_{1}dx_{2}dx_{3} is an element of the volume V, and dS_{i} = n_{i}dS, where dS is an element of area, and n_{i} is the outward normal to the surface S enclosing the volume V. The term (∂_{1}v_{1}dx_{1})dx_{2}dx_{3} can be integrated at once to give the values of v_{1} at each end of a parallelepiped of cross section dx_{2}dx_{3}. The change in sign takes care of the opposite directions of the normals. When we integrate over the remaining two variables, we get the surface integral for this term. Doing the same for the other two contributions, the theorem is proved. This shows the general method for treating integrals like this. Gauss's Theorem, applied to electric and magnetic fields, is called Gauss's Law.

Stokes's Theorem reads ∫(S)ε_{ijk}∂_{j}v_{k}dS_{i} = ∫(C)v_{i}dx_{i}. The second integral is taken around a curve C bounding the surface S in the direction that keeps the surface to the left, while the positive direction of d**S** is in the direction a right-handed screw would advance if rotated in the direction of curve C. If S is a closed surface, then its boundary vanishes, and this integral is zero. Hence, the value of the integral depends only on its boundary C and not on the particular surface S bounded by C. This permits us to take certain easy special cases in proving the theorem.

For example, take S as lying in the 2,3-plane, so dS_{i} has only a 1-component. Then, the only terms that contribute are (∂_{2}v_{3} - ∂_{3}v_{2}) dx_{2}dx_{3}. The first term can be integrated once in strips parallel to the 2-axis, and the second in strips parallel to the 3-axis. Then each can be integrated over the remaining variable. The sign difference takes into account the opposite directions of integration along C at the two ends of each strip, and the result is obtained.

These are the most-used theorems, but there are similar ones dealing with a scalar φ instead of the vector v_{i}, which can be proved the same way. In particular, ∫(V)∂_{i}φdV = ∫(S)φdS_{i} and ∫(S)ε_{ijk}dS_{j}∂_{k}φ = ∫(C)φdx_{i}.

Green's Theorem is also easily derived using indices. We first have ∂_{i}(φ∂_{i}ψ) = ∂_{i}φ∂_{i}ψ + φ∂_{i}∂_{i}ψ. We subtract from this equation a similar one found by interchanging φ and ψ, and then apply Gauss's Theorem to get ∫(V)(φ∂_{i}∂_{i}ψ - ψ∂_{i}∂_{i}φ)dV = ∫(S)(φ∂_{i}ψ - ψ∂_{i}φ)dS_{i}. This theorem is very useful in solving potential problems, where ψ is a known potential (the Green's function) and φ is the potential desired.

We have now shown practically all of the tools used in Euclidean tensor analysis, and how the index notation relates to the more familiar, but less powerful, vector notation. When you need to prove a vector identity, it is most easily done by expressing the identity in index notation. Then, the method of proof is almost always evident and straightforward. Euclidean tensors are of special help in describing crystal properties; here, they are practically essential, since vector methods are of little aid. Tensor methods are practically indispensable in studying relativity. Because of the noneuclidean structure of space-time, these tensors are a little more complicated than the Euclidean tensors presented here, but a knowledge of Euclidean tensors will be found a great aid to understanding relativity.

For an interesting, but not very enlightening, discussion of the definition of a vector, see Hoffman (Ref. 2). A vector is best defined for analytical purposes as an ordered n-tuple of numbers, with the properties of addition and multiplication by a scalar. The parallelogram law of addition follows at once from this definition. Its ordinary properties express the Euclidean nature of space, meaning that a vector is not changed by parallel displacement or by rotation, as revealed by ordinary experience. These are the same properties expressed by the Euclidean postulates on parallels and that all right angles are equal in ordinary plane geometry. These are postulates, not axioms, incidentally. One must distinguish the abstract mathematical concept of vector from the physical concept in any case. Vectors only express certain properties of physical vector quantities.

Vector analysis was developed late in the 19th century by J. Willard Gibbs and O. Heaviside, independently, on the foundation of Hamilton's quaternions. Many vector concepts had been around for some time, however, such as the scalar and vector products. Tensor analysis was developed by Ricci and others to work with Riemannian spaces, and was applied to the theory of relativity, especially the General Theory.

- Indices are represented by lower-case subscripts, and are considered to take the values 1, 2 and 3 unless otherwise agreed.
- Repeated (dummy) indices are summed over the range of values of the indices.
- Indices imply tensor transformation of the components. The rank of the tensor is the number of indices.
- A tensor may be *contracted* on two indices by setting them equal and summing. The result is also a tensor.
- The Kronecker delta δ_{ij} is equal to 1 if the indices are equal, 0 otherwise.
- The antisymmetric tensor density ε_{ijk} is +1 if ijk is an even permutation of 123, -1 if an odd permutation, and 0 otherwise.
- The determinant of a_{ij} is a = ε_{ijk}a_{1i}a_{2j}a_{3k}.
- ε_{ijk}ε_{irs} = δ_{jr}δ_{ks} - δ_{js}δ_{kr}.
- The transformation x'_{i} = a_{ij}x_{j} with inverse x_{i} = a^{-1}_{ij}x'_{j} is a rotation or rotary inversion if a^{-1}_{ij} = a_{ji}, so that a_{ij}a_{ik} = a_{ji}a_{ki} = δ_{jk}. The matrix a_{ij} is called *orthogonal*, and its determinant is +1 for a rotation, -1 for a rotary inversion.

1. Prove that curl grad f is identically zero.

2. Find an expression for the product of four vectors **(axb)·(cxd)** that involves only the scalar products of the vectors.

3. Prove that ε_{iks}ε_{jks} = 2δ_{ij}.

4. Prove that the trace of the product of a symmetric matrix and an antisymmetric matrix is zero.

5. Write Maxwell's equations in index notation, if you know them. Find the wave equation for waves in free space using indices.

1. H. Jeffreys, *Cartesian Tensors* (Cambridge: Cambridge Univ. Press, 1961). This small volume contains not only the fundamentals, but also applications to several fields of theoretical physics.
2. B. Hoffman, *About Vectors* (New York: Prentice-Hall, 1966; republished Dover, 1975). The final chapter treats tensors, using upper and lower indices and the metric tensor, a slight extension of the treatment here. A curious treatment of vectors, with extensive quibbling that at least gives food for thought.
3. H. B. Phillips, *Vector Analysis* (New York: John Wiley & Sons, 1933). Classic text on vectors and vector fields. Includes dyadics, a way to represent second-rank tensors.
4. J. L. Synge and A. Schild, *Tensor Calculus* (Toronto: Univ. of Toronto Press, 1949). Classic text on tensor analysis in curved spaces.


Composed by J. B. Calvert

Created 18 January 2001

Last revised 17 January 2005