Calculating Matrices with the HP-48


Contents

  1. Introduction
  2. Theory of Matrices and Vectors
  3. Creating and Storing Matrices
  4. Working With Matrices
  5. Euclidean Vectors
  6. References

Introduction

The scientific pocket calculator succeeded the slide rule in the 1970s. Not only could it multiply and divide, find trig functions and logarithms, and raise to powers like the slide rule, but it could add and subtract as well, reducing the tedium of calculations and, most importantly, eliminating a great deal of human error. As the power of pocket calculators increased, features like statistical calculations, the solution of simultaneous linear equations, and even some vector and matrix operations were added. However, limited storage severely restricted the scope of these features, and input was tedious.

The HP-48, which appeared in 1993, overcame these restrictions, and is a remarkably easy-to-use and powerful matrix calculator. The two main advances were: (1) menus gave the equivalent of an expanded keyboard; and (2) the stack could hold not only numbers, but also compound objects like matrices. It is easy to define matrices and access them with a single keypress, and to work with them on the stack like individual numbers. All of this is handled quite differently than in preceding calculators, and some learning is necessary. The more I work with the HP-48, the more I admire and appreciate its features.

My recent acquisition of an HP-48 made necessary some exploration of its features and how to use them. This was a good excuse to review certain matters, among them vectors and matrices. I will give a concise review of the theory, and then describe the practical use of the computer in its application. The reader should already be familiar in general with vectors, matrices and determinants, though perhaps somewhat rusty. It is not necessary to understand the theory thoroughly to use the calculator, so the following section can be used for reference as needed. It does not contain any HP-48 details.

Theory of Matrices and Vectors

The fundamental idea of a matrix is that of a compound object that represents a linear transformation of variables. The linear transformation y_i = a_{ij}x_j, where i takes values 1, 2, ..., n and a summation is implied over the repeated index j, taking values 1, 2, ..., m, expresses the n numbers y_i in terms of the m numbers x_j. When working with such expressions algebraically, the use of indices is very powerful and always indicates exactly what is to be done without ambiguity. Symbolically, we may write the transformation as y = Ax, where A = [a_{ij}] is a rectangular matrix, and x, y are column vectors, matrices with one column. If the dimensions of A are n x m, then x must have m rows, or be an m x 1 matrix, while y must have n rows, and be an n x 1 matrix. The fact that the number of rows in x must match the number of columns in A is simply an expression of the summation to which it is equivalent, called conformability. Expressions like y = Ax mean nothing if the matrices are not conformable. Errors cannot be made if indices are used, but when you use symbols, care is required so that they make sense.
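
For readers who want to check such things away from the calculator, here is a minimal NumPy sketch (Python on a desktop, not the HP-48) of the transformation y = Ax and of conformability; the numbers are arbitrary examples.

    import numpy as np

    A = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])   # a 2 x 3 matrix: n = 2, m = 3
    x = np.array([1.0, 0.0, -1.0])    # x must have m = 3 components

    y = A @ x                         # y_i = a_{ij} x_j, summed over j
    print(y)                          # a vector with n = 2 components

    # A @ np.array([1.0, 2.0]) would raise an error: not conformable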

All the fundamental properties of matrices, that is, all the properties that every matrix possesses, come from the definition as an array of transformation coefficients. We may decide to confer further properties on certain matrices, and this introduces behavior beyond that of an ur-matrix. For example, we call matrices with one row or one column vectors, but this is only a convenience and does not confer any extra powers. Row or column matrices with three components (1 x 3 or 3 x 1) can represent displacements in three-dimensional space. If they do, they acquire important extra properties and become objects we also call vectors. We might write the first type as a vector, and the second as a VECTOR, to distinguish them from each other. In the first part of this article, we shall deal with vectors, but later on talk about VECTORS. The HP-48 has different ways of treating these different kinds of quantities, which it is instructive to compare. The HP-48 is concerned with the explicit, concrete, numerical handling of matrices, of course. The algebra of matrices is something else, but something that should always be kept in mind.

Consider two variables y_i and z_i of the same dimension n. As column vectors, they are written as a column of n numbers, the components. Suppose y_i = a_{ij}x_j and z_i = b_{ij}x_j connect them with x_j. The dimension of x_j is usually n as well, but there is no need to restrict it at this point. Let λ_1 and λ_2 be numerical coefficients: real, complex or whatever you please, just so they have the arithmetic properties of numbers. Such a thing in this context is called a scalar. Let the quantity w_i be a linear combination of y_i and z_i: w_i = λ_1 y_i + λ_2 z_i. Then it is clear that if y = Ax and z = Bx, then w = Cx, where C = [c_{ij}], c_{ij} = λ_1 a_{ij} + λ_2 b_{ij}, which we can write C = λ_1 A + λ_2 B. This defines the operations of addition, subtraction and multiplication by a scalar for matrices. This is the important property of linearity, which is shared by vectors because they are also matrices. We also note that A + B = B + A, or that addition is commutative, and (A + B) + C = A + (B + C) = A + B + C, so it is also associative.
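
A sketch of this linearity property, again in NumPy with arbitrary numbers:

    import numpy as np

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    B = np.array([[0.0, 1.0], [1.0, 0.0]])
    x = np.array([2.0, -1.0])
    l1, l2 = 3.0, -2.0                 # the scalars lambda_1, lambda_2

    C = l1 * A + l2 * B                # C = lambda_1 A + lambda_2 B
    w = l1 * (A @ x) + l2 * (B @ x)    # w = lambda_1 y + lambda_2 z

    print(np.allclose(C @ x, w))       # True: Cx = w, as claimed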

If z = Ay and y = Bx, then we naturally write z = ABx = Cx. If x, y, z are all one-dimensional, then we would have z = ay and y = bx, and so z = (ab)x. Since the matrix operation is the analogue of the multiplication in one dimension, the operation C = AB is called multiplication, though it is more complicated than that. Introducing indices, c_{ij} = a_{ik}b_{kj}, where we sum over k. If A is n x m, then B must be m x p in order to be conformable, which is completely expressed in the index notation. It is important for the reader to be able to visualize this operation. If you are foggy, write down some matrices and do some multiplication. In practical cases, the HP-48 will do this for you, but it is essential to appreciate what is going on. It is now easy to derive some further properties, such as C(A + B) = CA + CB and ABC = (AB)C = A(BC). These properties are expressed by saying that matrix multiplication is distributive with respect to addition, and is associative.
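
The index formula c_{ij} = a_{ik}b_{kj} can also be written out as explicit loops and compared with a library product; a NumPy sketch with made-up matrices:

    import numpy as np

    A = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])        # n x m = 2 x 3
    B = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])             # m x p = 3 x 2

    n, m = A.shape
    p = B.shape[1]
    C = np.zeros((n, p))
    for i in range(n):
        for j in range(p):
            for k in range(m):             # the implied sum over k
                C[i, j] += A[i, k] * B[k, j]

    print(np.allclose(C, A @ B))           # True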

If you write down the sums for the elements of the two product matrices AB and BA, it will be found that they are not the same. This means that the two products may be different, and so matrix multiplication is not commutative. First of all, it should be noted that if the matrices are n x m and m x p, then only one product is possible, since the matrices are not conformable in the other order, so the question of commutativity is moot. Nearly all the matrices found in applications will be square, that is, n x n, where n is the dimension. Such matrices are conformable both ways, so that both AB and BA exist. It is quite possible, and even common, for AB = BA. Such matrices usually have special structures or symmetries, such as diagonal matrices or commuting hermitian matrices. Nevertheless, usually AB ≠ BA. Up to this point, matrices have been seen to behave pretty much like numbers in algebra, but here is a distinct contrast. Noncommutativity allows matrices to represent more complicated quantities than mere numbers can, and it is a fundamental property of matrices.
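
Noncommutativity is easy to exhibit with almost any two square matrices chosen at random; a sketch:

    import numpy as np

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    B = np.array([[0.0, 1.0], [1.0, 0.0]])

    print(A @ B)                       # [[2 1] [4 3]]  (columns of A swapped)
    print(B @ A)                       # [[3 4] [1 2]]  (rows of A swapped)
    print(np.allclose(A @ B, B @ A))   # False: AB != BA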

Square matrices can have powers, such as A, A^2, ..., which makes them even more like algebraic quantities. However, observe that (A + B)^2 = A^2 + AB + BA + B^2, which reduces to the familiar binomial result only if A and B commute. Matrices of the same dimension are always conformable for multiplication. The matrix I = [δ_{ij}], where δ_{ij} = 0 if i ≠ j, and 1 if i = j (Kronecker delta), has the property that IA = AI = A, as is easily seen by using index notation. A diagonal matrix has nonzero elements only along the diagonal: D = [d_i δ_{ij}], where there is no sum on i, or the result would be a vector, not an n-dimensional matrix. Diagonal matrices commute, and the diagonal elements of their product are the products of the elements at corresponding positions.
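
A sketch of the identity property and of commuting diagonal matrices, with arbitrary values:

    import numpy as np

    I = np.eye(3)                          # the Kronecker delta as a matrix
    A = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 3.0],
                  [4.0, 0.0, 1.0]])
    print(np.allclose(I @ A, A), np.allclose(A @ I, A))   # True True: IA = AI = A

    D1 = np.diag([1.0, 2.0, 3.0])
    D2 = np.diag([4.0, 5.0, 6.0])
    print(np.allclose(D1 @ D2, D2 @ D1))   # True: diagonal matrices commute
    print(np.diag(D1 @ D2))                # [4 10 18]: elementwise products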

For any square matrix A, there may exist another square matrix A^{-1} such that A^{-1}A = I. It can then be shown that AA^{-1} = I as well. A^{-1} is the inverse of A. The inverse exists provided the determinant of A is nonzero, so you can divide by it. The determinant is the sum of products formed by taking one element from each row and column, with the proper sign attached. The element (i,j) of the inverse matrix is the cofactor of element (j,i) of the matrix (the cofactor is the determinant of order n - 1 formed by crossing out the row and column of the element under consideration, with a certain sign), divided by the determinant of the matrix. This is a very tedious calculation that the HP-48 will do for you at the press of the 1/x key. Symbolically, if y = Ax, then A^{-1}y = A^{-1}Ax = x, or x = A^{-1}y. The inverse is a powerful thing to know, and the ability of the HP-48 to find it so easily suggests a libation to Hermes.
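
What the 1/x key does can be imitated on a desktop; a NumPy sketch of the inverse and of the relation x = A^{-1}y, with an arbitrary nonsingular matrix:

    import numpy as np

    A = np.array([[1.0, -2.0], [3.0, 1.0]])
    Ainv = np.linalg.inv(A)                    # the analogue of the 1/x key

    print(np.allclose(Ainv @ A, np.eye(2)))    # True: A^{-1} A = I
    print(np.allclose(A @ Ainv, np.eye(2)))    # True: A A^{-1} = I as well

    x = np.array([2.0, -1.0])
    y = A @ x
    print(Ainv @ y)                            # recovers x = [2, -1]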

Since the product of any two matrices of dimension n is again a matrix of dimension n, every matrix A of nonvanishing determinant has an inverse A^{-1}, the identity matrix I exists, and matrix multiplication is associative, any collection of matrices closed under multiplication and inversion (that is, the product of any two matrices, and the inverse of any matrix, is included) forms a group, which has very interesting properties. Any finite group is homomorphic to (has the same structure as) many finite groups of matrices, each of which is a concrete representation of the group. This is one way in which matrices may assume properties in addition to the basic ones, and is one of the most important applications of matrices.

So far we have written y = Ax, and considered y and x as column vectors. If we write them as row vectors instead, denoting the row vectors by y' and x', the same transformation is given by y' = x'A', where A' is the transpose of A, in which rows and columns have been interchanged. Note carefully that since x' is 1 x n, it must precede A' to be conformable, while x must follow A. The rule for matrix products is (AB)' = B'A'. The possibilities of row or column vectors, and transposes, often introduce confusion into symbolic expressions and their proper interpretation. Of course, if done correctly, all will work out, but misunderstandings are so common that recourse to index notation is often necessary. Index notation will always give the correct answer in such cases. We might stress that the product AB means that the transformation B is done first, then transformation A. If y = ABx, then y' = x'(AB)' = x'B'A', which says the same thing. Taking the transpose apparently reverses the order of the matrices, but the order in which the transformations act is unchanged.

If A' = A, A is said to be symmetric, and if A' = -A, A is said to be antisymmetric. If A* = A, where * represents complex conjugation, A is said to be real. If A* = -A, then A is imaginary. The operation of complex conjugation may be combined with transposition to form the hermitian conjugate A'*. If A'* = A, then A is called hermitian. If A' = A^{-1}, A is called orthogonal (usually for a real matrix A), while if A'* = A^{-1}, A is said to be unitary. Matrices of all these types are important in different applications. The matrix A + A' is symmetric, while A - A' is antisymmetric. The matrix A + A'* is hermitian, the matrix A - A'* is anti-hermitian.

We may think of a matrix A as an operator turning a vector x into another vector y, y = Ax. We can also think of a matrix S as representing a change of coordinates, x" = Sx (using " to denote the vector in the new system instead of ', which we use for transpose here). Then, y" = Sy = SAx = (SAS^{-1})x". The matrix A" = SAS^{-1} represents the effect of the operator in the new system. This is called a similarity transformation of A, since one may expect that A" is quite similar to A, only differing in the way its operands are specified. If A satisfies certain conditions, then we can find a transformation S such that SAS^{-1} = D, a diagonal matrix. In this frame of reference, the effect of A is only extensions or compressions along different axes, so it can easily be comprehended. A real symmetric matrix can be diagonalized, as can a hermitian matrix. The diagonal elements of D are the eigenvalues of A, and unit vectors in the corresponding directions are the eigenvectors. The HP-48 can find eigenvalues and eigenvectors with the push of a button. Another libation to Hermes is indicated.
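
A numerical sketch of diagonalization for a real symmetric matrix, using NumPy's eigensolver as a desktop analogue of the calculator's eigenvalue functions; the matrix is a made-up example:

    import numpy as np

    A = np.array([[2.0, 1.0], [1.0, 2.0]])     # real symmetric

    vals, vecs = np.linalg.eigh(A)              # eigenvalues and eigenvectors
    S = vecs.T                                  # rows are eigenvectors; S is orthogonal

    D = S @ A @ np.linalg.inv(S)                # the similarity transformation S A S^{-1}
    print(np.round(D, 10))                      # diagonal, with the eigenvalues 1 and 3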

The determinant of a matrix, symbolically written det A, has the properties that det(AB) = det A det B, det A' = det A, and det A^{-1} = 1/det A. Similarity transformation does not change the determinant: det(SAS^{-1}) = det S det A det S^{-1} = det A (det S/det S) = det A. The determinant is said to be an invariant under similarity transformation. Another invariant is the trace, tr A = Σ a_{ii}, the sum of the diagonal elements, which is equal to the sum of the eigenvalues. Use indices to show that tr(A + B) = tr A + tr B, and tr(AB) = tr(BA). Then, tr(SAS^{-1}) = tr(S^{-1}SA) = tr A. For a 2 x 2 matrix, the determinant is the product of the two eigenvalues, and the trace is the sum, so the eigenvalues can be found easily by solving a quadratic equation. Of course, the HP-48 can compute the determinant and trace with button pushes.
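
The invariance of the determinant and trace under similarity is also easy to check numerically; a sketch with an arbitrary A and an arbitrary nonsingular S:

    import numpy as np

    A = np.array([[2.0, 1.0], [1.0, 2.0]])
    S = np.array([[1.0, 1.0], [0.0, 1.0]])      # any nonsingular matrix

    B = S @ A @ np.linalg.inv(S)                # a similarity transform of A

    print(np.linalg.det(A), np.linalg.det(B))   # both 3.0 (up to rounding)
    print(np.trace(A), np.trace(B))             # both 4.0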

If O is an orthogonal matrix, O' = O^{-1}, then OO' = I, so det O = ±1. A similar thing holds for a unitary matrix, but here we know only that |det U| = 1, since we find that (det U)* det U = 1. Orthogonal and unitary matrices have eigenvalues whose absolute value is unity. An orthogonal matrix is a transformation from one orthonormal system to another, and its rows or columns are the direction cosines between the old and new axes. Consequently, the sum of the squares of the elements in any row or column of an orthogonal matrix is unity.

Creating and Storing Matrices

The HP-48 stores arrays with the delimiters [ ]. Pressing ← [ ] gives you the delimiter pair on the stack, with the input cursor just following the initial [. In general, a matrix has double brackets for delimiters, [[ ]], so press ← [ ] twice to start entering a matrix. Each [ ] inside the outer delimiters represents a row. All rows must be the same length. A row vector is made by simply typing in the components, separating them with SPC. When all have been typed in, press ENTER to put the row vector on the stack. Save the vector by pressing ', then typing an identifier (use the α keyboard for this). An identifier cannot begin with a digit. When the identifier has been typed in, press STO. It will appear in the current menu. If you have some other menu displayed, press VAR to get the current menu. Clear the stack, then press the menu key corresponding to the row vector. It will be pushed on the stack, ready for use. When you are finished with it, enter the name using ' as you did when defining it, but press ← PURG instead, and it will disappear. This process is probably familiar to you from storing and recalling ordinary numbers, but be aware that it works with matrices, too.

To create a matrix with more than one row, move the cursor beyond the ] with the → cursor movement key. Now just enter the rest of the elements, separating them by SPC and not worrying about rows or columns or anything. Enter them in teletype order (called row-major) until you are done. When you are finished, press ENTER and the matrix will appear on the stack, all properly formatted. The matrix can be named and saved at this time. Fantastic! There can be no easier way to type in a matrix. In some cases, matrix operations will accept an array with single [ ] delimiters.

Only part of the matrix will be displayed, just enough that you can remember which one it is. To see what the whole matrix looks like, press the ↓ cursor key. What comes up is the Matrix Writer form, showing column numbers across the top and row numbers to the left. The cursor will be on element (1,1), as shown in the lower left-hand corner by 1-1. To the right of the colon is the full value of that element, in the active numerical display mode. As you move the cursor around to select different elements, the values will be displayed on this line. This is the way to examine a matrix.

If you press EDIT, the display will change and show a prompt on the last line. Select any element, and it will be shown in this line. Make any changes you want, deleting with the DEL key or the backspace, and typing in any new value. When finished, press ENTER and you will go back to the stack, with the changed matrix there. To throw away any changes, press CANCEL. It is clear that a matrix can be entered with this form from the beginning. From the stack display, press → MATRIX to get here immediately, with a clean slate and the cursor on 1-1. Type in the 1-1 element and press ENTER. The cursor will go to cell 1-2, where entry can be repeated. When you reach the end of the row, press the ↓ cursor key and the cursor will move to the first column of the next row. The form now knows the number of columns in your matrix, and will move the cursor accordingly. If you press ENTER after entering no number, you will go to the stack display, where your matrix will be seen. Use CANCEL to throw the entries away if you want. Practice entering and editing matrices until it seems easy, as the skills will often be needed. Matrices can be moved around on the stack using any of the stack commands, and they occupy only one level.

The command → MATRIX only gets you to the Matrix Writer and its associated functions, which are mainly used for creating and modifying matrices. MTH MATR gets you to the menu with many interesting matrix functions. Under MAKE you will find choices that automatically create certain kinds of matrices. SIZE pushes on the stack a list with the number of rows and columns in the matrix that was on level 1, {n m}. Not that you can't see what this is, but certain commands require the size of a matrix in this form, and this is an easy way to get it with a button press rather than a lot of typing. To create an n-dimensional identity matrix, push n on the stack and press IDN. (If you have a SIZE output like {3 3} on the stack, you will get a list of two 3-dimensional identity matrices, which is not usually what you want.) If you push the dimension and a constant on the stack (must be on different levels, not separated by a space on the command line), then CON creates an n-dimensional matrix filled with the constant. If you push a row vector on the stack, then the number of dimensions, then DIAG→ will create an n-dimensional diagonal matrix with the vector's components as the diagonal elements. If you put the dimension on the stack and press RANM, you will get a square matrix filled with random integers from -9 to 9 with 0's having twice the probability of the other digits. Heaven knows what this is good for. Finally, you can assemble a matrix from row vectors or column vectors by pushing them on the stack and then using the commands ROW ROW→ or COL COL→. The hardest thing here is just finding the proper menus. If a matrix is on level 1 when you use any of these matrix creation commands instead of an integer, the result will be of the same size, and the matrix will be redefined. If you want a matrix that is not square, use a list like {3 2} instead of a single integer. This will make a 3 x 2 matrix.

Yet another way to access and change matrix elements is with the GET and PUT functions in MTH MATR MAKE. Push the matrix on the stack, and then enter either the list { i j }, where i and j are the indices of the element, or the number of the element in row-major order. For a 3 x 3 matrix, for example, number 4 is element (2,1), and number 8 is element (3,2). GET puts the element on the stack, while PUT stores a new value in its place. Whole rows or columns may be extracted as vectors using MTH MATR ROW ROW- and MTH MATR COL COL-. Logically, ROW+ and COL+ will put them back. Before performing any of these operations, push the row or column index on the stack. ROW SWAP and COL SWAP will swap rows or columns. Put the indices of the rows or columns to swap on level 1 and level 2 of the stack. We saw above how to assemble a matrix from row or column vectors. We can disassemble a matrix the same way, using ROW →ROW or COL →COL. Note that we have just switched ends on the arrows. The +/- key changes the sign of all elements of the matrix on level 1. To change individual signs, EDIT the matrix in the Matrix Writer.

Working With Matrices

First, let's look at what can be done with a single matrix. Under NORM in the MATR menu will be found RANK, DET and TRACE (press NXT). DET finds the determinant, of course. The matrix is eaten up in the process, so to get it back, use → UNDO. TRACE finds the trace, not surprisingly. RANK finds the dimension of the largest submatrix that is not singular; that is, that has a nonzero determinant. If your matrix is not singular, RANK will just return its dimension. Make a 3 x 3 matrix with two rows identical and one different, and then use RANK; the result should be 2. Various other norms can be calculated, but they are not often used.
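
The suggested experiment can also be run on a desktop; a NumPy sketch with two identical rows (matrix_rank playing the role of RANK here):

    import numpy as np

    A = np.array([[1.0, 2.0, 3.0],
                  [1.0, 2.0, 3.0],      # a repeated row makes A singular
                  [4.0, 5.0, 6.0]])

    print(np.linalg.det(A))             # 0 (up to rounding): singular
    print(np.linalg.matrix_rank(A))     # 2, the largest nonsingular submatrix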

Also in the MATR menu is EGVL, which creates a row vector of the eigenvalues of the matrix. One can see that their product is the determinant of the matrix, and the sum is the trace. A diagonal matrix can be created if the dimension is pushed on the stack, and the DIAG→ function is used. This is the diagonal form of the matrix you started with. If you need the eigenvectors as well, use the function EGV instead. This pushes the matrix of the eigenvectors and the eigenvalue vector on the stack. If the matrix is symmetric, then the eigenvalues will all be real, and the eigenvectors will be orthonormal. The matrix of the eigenvectors will then be orthogonal (determinant ±1). See that this is true by working out an example. The amount of laborious arithmetic that is avoided by using the HP-48 is astounding.
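
A sketch verifying that the product of the eigenvalues is the determinant and their sum the trace, with a made-up symmetric example (eigh plays the role of EGV for a symmetric matrix):

    import numpy as np

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 2.0, 1.0],
                  [0.0, 1.0, 2.0]])                     # symmetric: real eigenvalues

    vals, vecs = np.linalg.eigh(A)
    print(np.isclose(vals.prod(), np.linalg.det(A)))    # True
    print(np.isclose(vals.sum(),  np.trace(A)))         # True
    print(np.allclose(vecs.T @ vecs, np.eye(3)))        # eigenvectors orthonormal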

The inverse of a matrix is found by pushing the matrix on level 1 of the stack and pressing 1/x. The transpose is found by executing the function MTH MATR MAKE TRN. In both cases, the result is left on the stack and the original matrix disappears. If the elements of the matrix are complex, TRN finds the hermitian conjugate (transposed complex conjugate). Each element of a matrix can be multiplied or divided by a scalar by pushing the matrix and then the scalar on the stack, and then pressing × or ÷. The diagonal elements of a matrix can be extracted using →DIAG from menu MTH MATR NXT.

Now let's work with two matrices, A and B. If they are of the same dimensions, with A on level 2 and B on level 1, then + and - will add or subtract them, replacing them by A + B or A - B, respectively. Now we can make linear combinations with scalar coefficients. To form the product AB, push A on the stack, then B, and press ×. The matrices must be conformable in this order, but other than that can be any shape or size. The HP-48 knows whether to matrix multiply or to multiply by a scalar from the type of the data on level 1 or the command line. If you press ÷, you will get B^{-1}A for A/B (the Owner's Guide has the wrong information). B, of course, must be square so that it has an inverse. There is no actual matrix division, of course. The complex variable function CONJ in MTH CMPL NXT will take the complex conjugate of a matrix. RE and IM will get the real and imaginary parts as matrices.

An often-performed task is solving a system of simultaneous linear equations. The system can be expressed in the form Ax = b, where x is the n-dimensional column vector of unknowns, A the m x n matrix of the coefficients, and b the m-dimensional column vector of the "constant terms." In most cases, A is a nonsingular square matrix, and m = n. Then the vector x is uniquely determined. There are pathological cases, and the HP-48 has many features to aid in that case, but we will consider only the determinate case. A solution is easily found using the inverse: x = A^{-1}b. To do this, just push b on the stack, and then the coefficient matrix A. When you press ÷, the solution vector will appear on the stack. This is so easy, it might be magic! If you haven't done it before, try x - 2y = 4, 3x + y = 5. Push [4 5] on the stack, then [[1 -2][3 1]], and press ÷. The vector [2 -1] that appears means x = 2, y = -1 is the solution. If you have pushed b, A and the solution vector on the stack, in that order, the function RSD will show you the result of calculating b - Ax, called the residual vector, which for a correctly solved problem of this kind should be zero. Note that after you have found the solution vector x, →ARG will give you the original arguments back, but then you will need the stack operation ROT to put them above x.
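
The same little system can be checked on a desktop; a NumPy sketch (solve playing the role of the ÷ key):

    import numpy as np

    A = np.array([[1.0, -2.0], [3.0, 1.0]])    # coefficient matrix
    b = np.array([4.0, 5.0])                   # constant terms

    x = np.linalg.solve(A, b)                  # better than forming A^{-1} explicitly
    print(x)                                   # [ 2. -1.]: x = 2, y = -1
    print(b - A @ x)                           # the residual: [0. 0.]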

A matrix must, in general, be delimited with [[ ]], not single square brackets. In the preceding paragraph, the HP-48 would accept an array as a column matrix, but in general a 1 x n (row) matrix must be entered as [[a b ...]] and an n x 1 (column) matrix as [[a][b][c][...]] to be acceptable for multiplication and the other matrix operations, such as transposing. The 1 x n goes on level 2, and the n x 1 on level 1, as prefactor and postfactor. If you come across this without prior warning, it is likely to be very confusing. If you get "invalid argument" try using double square brackets. This is a good idea whenever working with matrices anyway.

Gaussian elimination works with a rectangular matrix formed by adjoining b as an additional column to A. It's easy to do this using COL+ to add the vector b. If we swap rows, or add any multiple of one row to another row, the solution vector is not changed. The MTH MATR ROW menu contains the functions RSWP, which swaps two rows whose numbers are on the stack; RCI, which multiplies the row whose number is in level 1 by the constant in level 2; and RCIJ, which multiplies the row whose number is in level 2 by the constant in level 3 and adds it to the row whose number is in level 1. These operations allow you to sequentially reduce certain elements to zero to get a triangular array of zeros that permits the unknowns to be evaluated one by one. However, in MTH MATR FACTR there is a function RREF that does all this automatically, leaving you with a fully reduced matrix from which the unknowns are easily found. The FACTR menu includes several functions that factor a matrix into special forms used in certain problems. You won't need these unless you are deep into linear algebra.
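
The row operations that RSWP, RCI and RCIJ perform can be imitated directly; here is a sketch of eliminating and back-substituting on the augmented matrix of the example above, ending with the same reduced form that RREF produces:

    import numpy as np

    # augmented matrix [A | b] for x - 2y = 4, 3x + y = 5
    M = np.array([[1.0, -2.0, 4.0],
                  [3.0,  1.0, 5.0]])

    M[1] += -3.0 * M[0]     # the RCIJ step: add -3 times row 1 to row 2
    print(M)                # [[1 -2 4] [0 7 -7]], so 7y = -7, y = -1

    M[1] /= 7.0             # the RCI step: scale row 2 by 1/7
    M[0] += 2.0 * M[1]      # back-substitute: add 2 times row 2 to row 1
    print(M)                # [[1 0 2] [0 1 -1]]: fully reduced, x = 2, y = -1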

The quadratic form in three variables ax^2 + by^2 + cz^2 + 2fxy + 2hyz + 2gzx can be numerically evaluated for known coefficients a, b, c, f, g, h and values of x, y, z by matrix multiplication. First enter the row vector [[x y z]], then the symmetric coefficient matrix [[a f g][f b h][g h c]]. Copy the vector to level 1 with OVER, and then transpose it with MTH MATR MAKE TRN. Two multiplications then evaluate the quadratic form. Any number of variables can be handled this way. A quadratic form can also be diagonalized by diagonalizing its coefficient matrix, which removes the cross terms. Linear terms (which must also be transformed) can then be removed by shifting the origin. The general quadratic form can be reduced by such transformations to the simple form ax^2 + by^2 + cz^2 + C, with new coefficients.
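
The same matrix evaluation of a quadratic form, sketched in NumPy with made-up coefficients:

    import numpy as np

    a, b, c, f, g, h = 1.0, 2.0, 3.0, 0.5, 0.25, -1.0   # arbitrary coefficients
    M = np.array([[a, f, g],
                  [f, b, h],
                  [g, h, c]])            # the symmetric coefficient matrix
    v = np.array([1.0, 2.0, 3.0])        # values of x, y, z

    q = v @ M @ v                        # the two multiplications: v' M v
    direct = (a*v[0]**2 + b*v[1]**2 + c*v[2]**2
              + 2*f*v[0]*v[1] + 2*h*v[1]*v[2] + 2*g*v[2]*v[0])
    print(np.isclose(q, direct))         # True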

Euclidean Vectors

Euclidean vectors are vectors in 2 or 3 dimensions that represent displacements in a Euclidean space, and any analogous quantities, such as velocities, accelerations, forces, impulses, momenta, electric and magnetic fields, and so on. Their components refer to directions in space, and may be referred to coordinate unit vectors i = (1,0,0), j = (0,1,0) and k = (0,0,1). The vector a may be represented by (a_1, a_2, a_3) or a_1 i + a_2 j + a_3 k. These vectors have all the properties of row or column matrices, plus the special ones due to their significance. Some of the matrix properties are rarely used, but linearity is a very important property of Euclidean vectors. The matrix product a'b = c of a row vector and a column vector, yielding a scalar, is called the scalar or dot product of two vectors. Sometimes a distinction is made between covariant vectors represented as a row vector, and contravariant vectors represented as a column vector. In most cases, however, we make no such distinction. The scalar product is then the sum of the products of the components of the two vectors. The scalar product of a vector with itself, a'a, is the norm of the vector, ||a||, and its square root is its length, |a|. The scalar product of two vectors a and b is interpreted as |a||b|cos θ, where θ is the angle between the positive directions of the vectors. Euclidean vectors can always be represented as directed line segments, and this is why they are called "Euclidean." Addition of vectors is expressed by the parallelogram law.

Euclidean vectors are entered as 3-dimensional arrays delimited by [ ], the components separated by spaces. However, in addition to rectangular components, components in cylindrical coordinates (r, θ, z) and spherical coordinates (r, θ, φ) can also be used. The angle sign /_ is used in front of angles to indicate that they are angles, and a special key is provided (→SPC). A vector can be entered as [1/_45] and after entering it will appear in the display mode in effect. For vectors in a plane, z is omitted in the rectangular or cylindrical display. For spherical coordinates, the radius and two angles are used. Note that what the HP-48 likes to call θ and φ are interchanged from the normal convention. Here, φ is the polar angle, and θ is the angle from x towards y. The vector display mode is selected in →MODES. Move the cursor to COORD SYSTEM: and press CHOOS. Move the cursor to the one you want, and press OK. When you press →POLAR on the main keyboard, the display will toggle between rectangular and whatever angle mode seems appropriate, usually R/_Z (cylindrical). All [ ] objects on the stack will display this way.

Because of the complex number facility, rectangular to polar conversion is not as necessary as it once was, since the HP-48 will do complex arithmetic directly. However, it can still be done when required, and it is very easy. Just enter the 2-dimensional vector in any mode you prefer, either rectangular or polar, and press ENTER. Then, →POLAR will toggle the display mode between rectangular and polar, converting the number as necessary. There is the additional convenience that the HP-48 will handle 3-dimensional vectors as well, converting them between rectangular, cylindrical and spherical as desired.

An easy way to enter vectors is to type the components separated by SPC in the command line, and then use MTH VECTR →V2 or →V3 to assemble a two- or three-dimensional vector. The function V→ will disassemble a vector to coordinates pushed on the stack, while the assembly functions will take two or three arguments from the stack as well as from the command line. The display mode can also be easily changed in this menu after NXT. A square shows the active mode. Just press any button to change it. A unit vector in any direction can be made by creating a spherical or cylindrical vector with magnitude 1. It can then be converted to rectangular for use. The rectangular components of a spherical unit vector are direction cosines. The arc cos of the dot product of two unit vectors is the angle between them. The component of any vector in the direction specified by a unit vector is the dot product of the vector with the unit vector.

The function ABS in the MTH VECTR menu finds the length of a vector. If two vectors are on levels 1 and 2 of the stack, DOT will find their scalar product a·b. This is the sum of the products of the components, and so is commutative, and distributive with respect to addition. CROSS will find their vector, or cross, product a×b, which is again a vector. This is a strange quantity, defined only for two 3-dimensional vectors, that is really an antisymmetric 3 x 3 matrix (tensor) with 3 independent components, which transform like the components of a vector. The magnitude of the cross product is |a||b| sin θ, the area of the parallelogram of which a and b are two sides, θ being the angle between them. It is normal to the plane of the two vectors, in the right-handed screw direction when rotating from a to b. It changes sign if the order of the vectors is reversed. It is given by the well-known expansion of a determinant in which the first row is the rectangular unit vectors, and the second and third rows the components of the vectors. The triple scalar product of three vectors [abc] = a·(b×c) is the determinant of the matrix of which the three vectors form the rows or columns in the same order. Its absolute value is the volume of the parallelepiped on the three vectors. It changes sign if any two vectors are interchanged, and is unchanged if the dot and the cross are interchanged. There is no single key for the triple product, but it and any other combination of dot and cross products can be found using the elementary operations and the stack.
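
A sketch of the dot, cross, and triple scalar products, including the determinant identity for [abc], with arbitrary vectors:

    import numpy as np

    a = np.array([1.0, 0.0, 0.0])
    b = np.array([1.0, 1.0, 0.0])
    c = np.array([0.0, 0.0, 2.0])

    print(np.dot(a, b))                          # 1.0: the scalar product a.b
    print(np.cross(a, b))                        # [0 0 1]: normal to the x-y plane
    triple = np.dot(a, np.cross(b, c))           # [abc] = a.(b x c)
    print(np.isclose(triple,
          np.linalg.det(np.array([a, b, c]))))   # True: determinant of the rows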

If a and b are non-null vectors (length > 0), then a·b = 0 implies that the vectors are perpendicular. axb = 0 implies that the vectors are parallel.

References

A. C. Aitken, Determinants and Matrices, 9th ed. (Edinburgh: Oliver and Boyd, 1958). Chapters I and II. This is but one example of a wide variety of texts on vectors, matrices and linear algebra.

N. V. Yefimov, Quadratic Forms and Matrices (New York: Academic Press, 1964). Appendices I and II have brief reviews of Euclidean vectors and determinants.

HP 48G Series User's Guide (Corvallis, OR: Hewlett-Packard Co., 1993). Part No. 00048-90126. Especially sections 13 and 14.



Composed by J. B. Calvert
Created 1 May 2003