Rotation Matrices 

In two dimensions the general rotation can be expressed in terms of Cartesian coordinates by a matrix of the form 

$$R \;=\; \frac{1}{a^2+b^2}\begin{bmatrix} a^2-b^2 & -2ab \\ 2ab & a^2-b^2 \end{bmatrix} \qquad (1)$$

for any constants a and b. There is only one degree of freedom, and we can normalize by setting a^{2} + b^{2} = 1. Thus there is a constant θ such that a = cos(θ/2) and b = sin(θ/2), and so the transformation can be written in the familiar form 

$$R \;=\; \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$

Since the determinant of a rotation is unity, equation (1) relies on the Pythagorean sum-of-squares formula 

$$\left(a^2-b^2\right)^2 + \left(2ab\right)^2 \;=\; \left(a^2+b^2\right)^2$$
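As a concrete illustration of the correspondence with Pythagorean triples, here is a short numerical sketch (the parameter values a = 3, b = 2 are arbitrary choices, not from the text):

```python
from fractions import Fraction

# Sketch: integer values of a and b in equation (1) give a rational
# orthogonal matrix whose numerators form a Pythagorean triple.
def rotation_2d(a, b):
    s = Fraction(1, a*a + b*b)
    return [[s*(a*a - b*b), s*(-2*a*b)],
            [s*(2*a*b),     s*(a*a - b*b)]]

a, b = 3, 2
R = rotation_2d(a, b)

# Pythagorean triple (a^2 - b^2, 2ab, a^2 + b^2)
triple = (a*a - b*b, 2*a*b, a*a + b*b)
assert triple[0]**2 + triple[1]**2 == triple[2]**2

# Orthogonality: R times its transpose is exactly the identity
RT = [[R[j][i] for j in range(2)] for i in range(2)]
prod = [[sum(R[i][k]*RT[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
assert prod == [[1, 0], [0, 1]]
```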

Thus any rotation based on integer values of a and b corresponds to a Pythagorean triple. In fact, it corresponds to a 2×2 orthogonal orthomagic square of squares. (The term orthogonal means the row vectors are mutually perpendicular, as are the column vectors, and the term orthomagic means that each row and each column of the squared elements sums to the same constant.) In general, the component R_{ij} of a rotation matrix equals the cosine of the angle between the ith axis of the original coordinate system and the jth axis of the rotated coordinate system. In other words, the elements of a rotation matrix represent the projections of the rotated coordinates onto the original axes. Naturally this relation is reciprocal, so the inverse of a rotation matrix is simply its transpose, i.e., R^{-1} = R^{T}. The eigenvalues of (1) are 

$$\lambda_{1,2} \;=\; \frac{(a \pm ib)^2}{a^2+b^2} \;=\; \cos\theta \pm i\sin\theta \;=\; e^{\pm i\theta}$$

with the corresponding eigenvectors 

$$v_1 \;=\; \begin{bmatrix} 1 \\ -i \end{bmatrix}, \qquad v_2 \;=\; \begin{bmatrix} 1 \\ i \end{bmatrix}$$

Of course, any complex multiples of these eigenvectors are also eigenvectors. Notice that we can present two multiples of the first eigenvector as the columns, and two multiples of the second as the rows, of a 2×2 matrix as shown below 

$$\begin{bmatrix} 1 & i \\ -i & 1 \end{bmatrix}$$
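A quick numerical check of this arrangement (a sketch using numpy, taking the matrix to be [[1, i], [−i, 1]] with columns proportional to the eigenvector of e^{iθ} and rows proportional to the eigenvector of e^{−iθ}; the angle is arbitrary):

```python
import numpy as np

# Sketch: columns of M are eigenvectors of the rotation for e^{+i*theta},
# rows of M are eigenvectors for e^{-i*theta}.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
M = np.array([[1, 1j], [-1j, 1]])
lam = np.exp(1j*theta)

for col in M.T:                      # columns: eigenvalue e^{+i theta}
    assert np.allclose(R @ col, lam * col)
for row in M:                        # rows: eigenvalue e^{-i theta}
    assert np.allclose(R @ row, np.conj(lam) * row)
ok = True
```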

Letting R denote an arbitrary (two-dimensional) rotation matrix as in (1), and letting I denote the identity matrix 

$$I \;=\; \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$

it’s easy to show that 
$$\left(I - R\right)\left(I + R\right)^{-1} \;=\; \frac{b}{a}\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \qquad (2)$$

We also have the interesting identity 

$$\left(I - R\right)\left(I + R\right)^2 \;=\; \det\left(I + R\right)\, R\left(I - R\right) \qquad (3)$$

Noting that the determinant of I + R equals the scalar quantity 4a^{2}/(a^{2} + b^{2}), the above can also be written in the form 

$$\left(I - R\right)\left(I + R\right)^2 \;=\; \frac{4a^2}{a^2+b^2}\, R\left(I - R\right) \qquad (4)$$

We also note that the determinant of (I − R) is 2(1 − cos(θ)), which is nonzero for any θ other than a multiple of 2π, so except in those cases we can multiply on the right by the inverse (I − R)^{-1} to give the expression 

$$\left(I - R\right)\left(I + R\right)^2 \left(I - R\right)^{-1} \;=\; \frac{4a^2}{a^2+b^2}\, R \qquad (4')$$

This is a (scaled) similarity transformation, so any two-dimensional rotation matrix R is similar to (I + R)^{2} in this sense. In fact, since R commutes with I – R, we have 

$$\left(I + R\right)^2 \;=\; \frac{4a^2}{a^2+b^2}\, R \qquad (4'')$$

for twodimensional rotations. We’ll see below that closely analogous (but not identical) formulas apply to rotation matrices in three and even (in a restricted way) four dimensions as well. 

Extrapolating the form of equation (2), we consider a 3×3 matrix R such that 

$$\left(I - R\right)\left(I + R\right)^{-1} \;=\; \frac{1}{a}\begin{bmatrix} 0 & d & -c \\ -d & 0 & b \\ c & -b & 0 \end{bmatrix} \qquad (5)$$

for arbitrary constants a,b,c,d. Solving this for R gives 

$$R \;=\; \frac{1}{a^2+b^2+c^2+d^2}\begin{bmatrix} a^2+b^2-c^2-d^2 & 2(bc-ad) & 2(bd+ac) \\ 2(bc+ad) & a^2-b^2+c^2-d^2 & 2(cd-ab) \\ 2(bd-ac) & 2(cd+ab) & a^2-b^2-c^2+d^2 \end{bmatrix}$$

There are only three degrees of freedom in this rotation, so we can normalize by setting a^{2} + b^{2} + c^{2} + d^{2} equal to unity. The determinant of the matrix inside the brackets (without the leading factor) is the cube of the sum a^{2} + b^{2} + c^{2} + d^{2}, and since determinants are multiplicative, it isn’t surprising that the product of two such matrices is another matrix of the same form, whose parameters are related to those of the factors by the “sum-of-four-squares” formula. This is analogous to Fibonacci’s identity for the product of two sums of two squares. The real eigenvalue of the transformation is λ_{1} = 1, and the corresponding eigenvector has components proportional to (b,c,d), so this vector points along the axis of rotation. The only remaining parameter is a (divided by the square root of a^{2} + b^{2} + c^{2} + d^{2}, which we have normalized to 1), which must therefore represent the amount of rotation about that axis. Just as in the two-dimensional case, we find that a = cos(θ/2) for a rotation through the angle θ. It follows that b^{2} + c^{2} + d^{2} = sin^{2}(θ/2), again assuming normalized values for a,b,c,d. 
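The claims in this paragraph can be checked numerically. The sketch below assumes the standard quaternion-to-matrix convention for the bracketed matrix; the parameter values are arbitrary:

```python
import numpy as np

# Sketch: build the 3x3 rotation from unnormalized parameters (a,b,c,d)
# and check orthogonality, the fixed axis, and the half-angle relation.
def rot3(a, b, c, d):
    s = a*a + b*b + c*c + d*d
    return np.array([
        [a*a+b*b-c*c-d*d, 2*(b*c-a*d),     2*(b*d+a*c)],
        [2*(b*c+a*d),     a*a-b*b+c*c-d*d, 2*(c*d-a*b)],
        [2*(b*d-a*c),     2*(c*d+a*b),     a*a-b*b-c*c+d*d]]) / s

a, b, c, d = 1.0, 2.0, 4.0, 6.0
R = rot3(a, b, c, d)
assert np.allclose(R @ R.T, np.eye(3))          # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)        # proper rotation
axis = np.array([b, c, d])
assert np.allclose(R @ axis, axis)              # (b,c,d) spans the fixed axis

# a = cos(theta/2) after normalization; trace(R) = 1 + 2*cos(theta)
s = a*a + b*b + c*c + d*d
theta = np.arccos((np.trace(R) - 1) / 2)
assert np.isclose(np.cos(theta/2), a/np.sqrt(s))
ok = True
```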

The three-dimensional rotation matrix also has two complex eigenvalues, given by 

$$\lambda_{2,3} \;=\; \frac{\left(a^2-b^2-c^2-d^2\right) \pm 2ia\sqrt{b^2+c^2+d^2}}{a^2+b^2+c^2+d^2} \;=\; \cos\theta \pm i\sin\theta$$

In terms of the parameters 

$$D = b^2+c^2+d^2, \qquad L = c^2+d^2, \qquad M = b^2+d^2, \qquad N = b^2+c^2$$

the eigenvector corresponding to λ_{2} is proportional to each of the columns of the matrix 

$$\begin{bmatrix} L & -\left(bc - id\sqrt{D}\right) & -\left(bd + ic\sqrt{D}\right) \\ -\left(bc + id\sqrt{D}\right) & M & -\left(cd - ib\sqrt{D}\right) \\ -\left(bd - ic\sqrt{D}\right) & -\left(cd + ib\sqrt{D}\right) & N \end{bmatrix}$$

and the eigenvector corresponding to λ_{3} is proportional to each of the rows. Corresponding row and column entries are complex conjugates of one another. The columns of this matrix are mutually orthogonal, as are the rows (so to this extent the matrix has the properties of a rotation). The sum of the squares of the components of each vector (row and column) is zero, whereas the sums of the squared norms of the components of the rows (and of the columns) are 2LD, 2MD, and 2ND respectively, where D = b^{2} + c^{2} + d^{2}. The determinant of this matrix is zero, and the eigenvalues are 0, 0, 2D. The eigenvalues of the symmetric part are 0, D, D, and the eigenvalues of the anti-symmetric part are 0, D, −D. 
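The stated properties (zero sums of squares, squared-norm sums 2LD, 2MD, 2ND, and eigenvalues 0, 0, 2D) can be confirmed numerically; the explicit entries used below are an assumption consistent with those properties:

```python
import numpy as np

# Sketch: check the claimed properties of the complex eigenvector matrix
# for sample parameters (b, c, d), with D = b^2 + c^2 + d^2.
b, c, d = 2.0, 4.0, 6.0
D = b*b + c*c + d*d
r = np.sqrt(D)
L, M, N = c*c + d*d, b*b + d*d, b*b + c*c
W = np.array([
    [L,               -(b*c - 1j*d*r), -(b*d + 1j*c*r)],
    [-(b*c + 1j*d*r), M,               -(c*d - 1j*b*r)],
    [-(b*d - 1j*c*r), -(c*d + 1j*b*r), N]])

# each row has zero sum of squares (unconjugated) ...
for row in W:
    assert np.isclose(np.sum(row**2), 0)
# ... while the squared norms of the rows sum to 2LD, 2MD, 2ND
sums = [np.sum(np.abs(row)**2) for row in W]
assert np.allclose(sums, [2*L*D, 2*M*D, 2*N*D])
# eigenvalues are 0, 0, 2D (the matrix is Hermitian here)
assert np.allclose(sorted(np.linalg.eigvals(W).real), [0, 0, 2*D])
ok = True
```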

Eigenvectors corresponding to the three distinct eigenvalues are linearly independent, so any such triple comprises a basis. Taking one representative of each eigenvector as the columns, we can form the matrix 

$$E \;=\; \begin{bmatrix} b & L & L \\ c & -\left(bc + id\sqrt{D}\right) & -\left(bc - id\sqrt{D}\right) \\ d & -\left(bd - ic\sqrt{D}\right) & -\left(bd + ic\sqrt{D}\right) \end{bmatrix}$$

Then by the usual similarity transformation we can diagonalize the rotation matrix as follows 

$$E^{-1} R\, E \;=\; \begin{bmatrix} 1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{bmatrix} \;=\; \begin{bmatrix} 1 & 0 & 0 \\ 0 & e^{i\theta} & 0 \\ 0 & 0 & e^{-i\theta} \end{bmatrix}$$
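A numerical sketch of this diagonalization (numpy's eig supplies one representative of each eigenvector as the columns of E):

```python
import numpy as np

# Sketch: diagonalize a sample three-dimensional rotation by the usual
# similarity transformation with a matrix of eigenvector columns.
a, b, c, d = 1.0, 2.0, 4.0, 6.0
s = a*a + b*b + c*c + d*d
R = np.array([
    [a*a+b*b-c*c-d*d, 2*(b*c-a*d),     2*(b*d+a*c)],
    [2*(b*c+a*d),     a*a-b*b+c*c-d*d, 2*(c*d-a*b)],
    [2*(b*d-a*c),     2*(c*d+a*b),     a*a-b*b-c*c+d*d]]) / s

w, E = np.linalg.eig(R)
D = np.linalg.inv(E) @ R @ E            # similarity transformation
assert np.allclose(D, np.diag(w))       # diagonal matrix of eigenvalues
assert np.any(np.isclose(w, 1.0))       # the real eigenvalue is 1
ok = True
```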

It’s interesting to consider the physical significance of the complex eigenvectors for rotations in three dimensions. Since every three-dimensional rotation matrix has the real eigenvalue 1, any reorientation in space can be expressed as a pure rotation about some fixed axis. If we choose our coordinate system so that the axis of rotation coincides with one of the coordinate axes, then the rotation matrix degenerates to essentially a two-dimensional rotation. For example, for a rotation about the x axis we have c = d = 0, and the three-dimensional rotation matrix reduces to 

$$R \;=\; \frac{1}{a^2+b^2}\begin{bmatrix} a^2+b^2 & 0 & 0 \\ 0 & a^2-b^2 & -2ab \\ 0 & 2ab & a^2-b^2 \end{bmatrix}$$

In this case the real eigenvector is just (b,0,0) and the two complex eigenvectors are represented by the rows and columns (respectively) of the matrix 

$$b^2\begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & i \\ 0 & -i & 1 \end{bmatrix}$$

which of course represents just the two complex eigenvectors from the two-dimensional case. 

Incidentally, this rotation can also be expressed in terms of quaternions. In general, let v’ denote the vector given by applying to the vector v a rotation through an angle θ about the axis defined by the vector (b,c,d). Then v’ is given by the quaternion product 

$$v' \;=\; \left(a + b\mathbf{i} + c\mathbf{j} + d\mathbf{k}\right)\left(x\mathbf{i} + y\mathbf{j} + z\mathbf{k}\right)\left(a - b\mathbf{i} - c\mathbf{j} - d\mathbf{k}\right)$$

where v = (x,y,z) is treated as a pure quaternion and the parameters are normalized so that a^{2} + b^{2} + c^{2} + d^{2} = 1. 

This is an elegant formulation, but the multiplications involved in the quaternion products actually require more elementary operations than the matrix multiplication. 
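Here is a sketch of the quaternion computation of v′ compared against the matrix product, using the Hamilton product convention (an assumption consistent with the matrix form used above):

```python
import numpy as np

# Sketch: rotate a vector via v' = q v q*, with q = a + bi + cj + dk
# normalized, and compare with the 3x3 matrix form.
def qmul(p, q):
    # Hamilton product of quaternions given as (scalar, x, y, z)
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

a, b, c, d = 1.0, 2.0, 4.0, 6.0
n = np.sqrt(a*a + b*b + c*c + d*d)
q = (a/n, b/n, c/n, d/n)
qc = (q[0], -q[1], -q[2], -q[3])        # conjugate

v = (0.0, 1.0, 2.0, 3.0)                # pure quaternion (0, x, y, z)
vp = qmul(qmul(q, v), qc)
assert np.isclose(vp[0], 0.0)           # result is again a pure quaternion

s = n*n
R = np.array([
    [a*a+b*b-c*c-d*d, 2*(b*c-a*d),     2*(b*d+a*c)],
    [2*(b*c+a*d),     a*a-b*b+c*c-d*d, 2*(c*d-a*b)],
    [2*(b*d-a*c),     2*(c*d+a*b),     a*a-b*b-c*c+d*d]]) / s
assert np.allclose(R @ np.array(v[1:]), vp[1:])
ok = True
```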

The analog of equation (3) is 

$$\left(I - R\right)\left(I + R\right)^2 \;=\; \frac{\det\left(I + R\right)}{2}\, R\left(I - R\right)$$

We also have the determinant 

$$\det\left(I + R\right) \;=\; \frac{8a^2}{a^2+b^2+c^2+d^2}$$

which shows that the analog of equation (4) for three-dimensional rotations is 

$$\left(I - R\right)\left(I + R\right)^2 \;=\; \frac{4a^2}{a^2+b^2+c^2+d^2}\, R\left(I - R\right) \qquad (6)$$

However, for three-dimensional rotations, the determinant of I – R is identically zero, so in this case we cannot multiply on the right by the inverse (as we did in the two-dimensional case) to give an analog of equation (4'). We also note that the analog of (3) differs from equation (3) itself by the factor of 2 dividing the determinant on the right hand side. 
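A numerical check of equation (6) and of the singularity of I − R (assuming the forms reconstructed above; the parameter values are arbitrary):

```python
import numpy as np

# Sketch: confirm equation (6) and det(I - R) = 0 for a sample
# three-dimensional rotation.
a, b, c, d = 1.0, 3.0, 5.0, 6.0
s = a*a + b*b + c*c + d*d
R = np.array([
    [a*a+b*b-c*c-d*d, 2*(b*c-a*d),     2*(b*d+a*c)],
    [2*(b*c+a*d),     a*a-b*b+c*c-d*d, 2*(c*d-a*b)],
    [2*(b*d-a*c),     2*(c*d+a*b),     a*a-b*b-c*c+d*d]]) / s
I = np.eye(3)

assert np.isclose(np.linalg.det(I + R), 8*a*a/s)      # determinant of I + R
lhs = (I - R) @ (I + R) @ (I + R)
assert np.allclose(lhs, (4*a*a/s) * R @ (I - R))      # equation (6)
assert np.isclose(np.linalg.det(I - R), 0.0)          # I - R is singular
ok = True
```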

Given the diagonal elements of R, we can compute the normalized parameters by means of the relations 

$$R_{11} = a^2+b^2-c^2-d^2, \qquad R_{22} = a^2-b^2+c^2-d^2, \qquad R_{33} = a^2-b^2-c^2+d^2$$

together with the normalization a^{2} + b^{2} + c^{2} + d^{2} = 1. 

Solving these equations gives 

$$a = \tfrac{1}{2}\sqrt{1+R_{11}+R_{22}+R_{33}}, \qquad b = \tfrac{1}{2}\sqrt{1+R_{11}-R_{22}-R_{33}}$$
$$c = \tfrac{1}{2}\sqrt{1-R_{11}+R_{22}-R_{33}}, \qquad d = \tfrac{1}{2}\sqrt{1-R_{11}-R_{22}+R_{33}}$$

(The signs of b, c, and d relative to a are determined by the off-diagonal elements.) 
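A sketch of this recovery of the normalized parameters from the diagonal of R (the sign choices below assume all parameters positive):

```python
import numpy as np

# Sketch: recover normalized (a, b, c, d) from the diagonal of R via the
# square-root formulas above.
a, b, c, d = 1.0, 2.0, 4.0, 6.0
n = np.sqrt(a*a + b*b + c*c + d*d)
a, b, c, d = a/n, b/n, c/n, d/n       # normalized parameters
R = np.array([
    [a*a+b*b-c*c-d*d, 2*(b*c-a*d),     2*(b*d+a*c)],
    [2*(b*c+a*d),     a*a-b*b+c*c-d*d, 2*(c*d-a*b)],
    [2*(b*d-a*c),     2*(c*d+a*b),     a*a-b*b-c*c+d*d]])

r11, r22, r33 = R[0, 0], R[1, 1], R[2, 2]
assert np.isclose(a, np.sqrt(1 + r11 + r22 + r33) / 2)
assert np.isclose(b, np.sqrt(1 + r11 - r22 - r33) / 2)
assert np.isclose(c, np.sqrt(1 - r11 + r22 - r33) / 2)
assert np.isclose(d, np.sqrt(1 - r11 - r22 + r33) / 2)
ok = True
```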

We could define a rational rotation as one for which the values of a, b, c, and d are rational. These can be generated by taking integer values of the parameters, without normalizing them. For example, setting the parameters to 1,2,4,6 or to 1,3,5,6 gives the two rational rotation matrices 

$$A \;=\; \frac{1}{57}\begin{bmatrix} -47 & 4 & 32 \\ 28 & -23 & 44 \\ 16 & 52 & 17 \end{bmatrix}, \qquad B \;=\; \frac{1}{71}\begin{bmatrix} -51 & 18 & 46 \\ 42 & -19 & 54 \\ 26 & 66 & 3 \end{bmatrix}$$

Incidentally, squaring each of the elements in each bracketed matrix gives a 3×3 orthomagic square of squares. The product of these two matrices is another rational rotation 

$$AB \;=\; \frac{1}{4047}\begin{bmatrix} 3397 & 1190 & -1850 \\ -1250 & 3845 & 178 \\ 1810 & 422 & 3595 \end{bmatrix}$$

corresponding to the parameters −61, −1, 15, 10 (a quaternion and its negative give the same rotation), which are related to the parameters of the A and B matrices by the sum-of-four-squares formula: 61^{2} + 1^{2} + 15^{2} + 10^{2} = 4047 = (57)(71). 
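An exact numerical check of this product and the four-square relation (the explicit matrix entries below follow from the quaternion convention assumed in this presentation):

```python
import numpy as np

# Sketch: verify that A, B, and their product are rational rotations and
# that the denominators multiply according to the four-square identity.
A = np.array([[-47, 4, 32], [28, -23, 44], [16, 52, 17]]) / 57
B = np.array([[-51, 18, 46], [42, -19, 54], [26, 66, 3]]) / 71
C = np.array([[3397, 1190, -1850], [-1250, 3845, 178], [1810, 422, 3595]]) / 4047

assert np.allclose(A @ B, C)
assert np.allclose(C @ C.T, np.eye(3))
# sum-of-four-squares relation for the product's parameters
assert 61**2 + 1**2 + 15**2 + 10**2 == 57 * 71
ok = True
```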

It’s tempting to extrapolate from equations (2) and (5) to higher dimensions. This would lead us next to define a 4×4 matrix R by the relation 

$$\left(I - R\right)\left(I + R\right)^{-1} \;=\; \frac{1}{a}\begin{bmatrix} 0 & -b & -c & -d \\ b & 0 & -e & -f \\ c & e & 0 & -g \\ d & f & g & 0 \end{bmatrix}$$

for constants a, b, c, d, e, f, g. By analogy with the previous cases, we might hope the determinant of I + R would be 16a^{2} divided by the sum of the squares of the seven parameters, but this is the case only if we impose the condition 

$$bg - cf + de \;=\; 0$$

With this condition, we can indeed carry through the derivation of the four-dimensional rotation matrix 

$$R \;=\; \frac{1}{s}\begin{bmatrix} a^2-b^2-c^2-d^2+e^2+f^2+g^2 & 2(ab-ce-df) & 2(ac+be-dg) & 2(ad+bf+cg) \\ -2(ab+ce+df) & a^2-b^2+c^2+d^2-e^2-f^2+g^2 & 2(ae-bc-fg) & 2(af-bd+eg) \\ 2(be-ac-dg) & -2(ae+bc+fg) & a^2+b^2-c^2+d^2-e^2+f^2-g^2 & 2(ag-cd-ef) \\ 2(bf+cg-ad) & 2(eg-af-bd) & -2(ag+cd+ef) & a^2+b^2+c^2-d^2+e^2-f^2-g^2 \end{bmatrix}$$

where 
$$s \;=\; a^2+b^2+c^2+d^2+e^2+f^2+g^2$$

This matrix satisfies all the usual requirements of a rotation matrix, such as the fact that the rows are mutually orthogonal, as are the columns, and the sum of the squares of each row and of each column is unity. The seven parameters are constrained by two conditions (the normalizing condition and the special condition bg – cf + de = 0), so there are five degrees of freedom. Analogous to equations (4) and (6), we have 

$$\left(I - R\right)\left(I + R\right)^2 \;=\; \frac{\det\left(I + R\right)}{4}\, R\left(I - R\right) \;=\; \frac{4a^2}{s}\, R\left(I - R\right)$$

which is identical to the earlier equations except for the coefficient 4 dividing the determinant. It appears that this coefficient for N-dimensional rotations (restricted as necessary) equals 2^{N−2}. As in the three-dimensional case, the determinant of I – R in the four-dimensional case is identically zero, so again we cannot multiply through by the inverse to get an analog of (4'). (For more on this topic, see the note on Rotations and Anti-Symmetric Tensors.) 

As an example of a four-dimensional rotation matrix, let the parameters a through g have the values 1, 2, 4, 6, 9, 20, and 13 respectively. Notice that we have put f = (bg + de)/c, so these parameters satisfy the requirement bg – cf + de = 0. The resulting four-dimensional rotation matrix is 

$$R \;=\; \frac{1}{707}\begin{bmatrix} 595 & -308 & -112 & 196 \\ -316 & -263 & -518 & 250 \\ -128 & -554 & 175 & -382 \\ 172 & 170 & -434 & -503 \end{bmatrix}$$

The squares of the elements inside the brackets constitute a 4×4 orthomagic square of squares, with the common sum equal to the square 499849 = 707^{2}. 
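Finally, a sketch reconstructing this example from the antisymmetric matrix of parameters via the Cayley-type relation R = (aI + K)(aI − K)^{-1} (an equivalent form of the construction described above):

```python
import numpy as np

# Sketch: build the 4x4 example from the skew matrix of the parameters
# and verify the orthomagic property of the squared numerators.
a, b, c, d, e, f, g = 1.0, 2.0, 4.0, 6.0, 9.0, 20.0, 13.0
assert b*g - c*f + d*e == 0              # Pfaffian condition

K = np.array([[0, b, c, d],
              [-b, 0, e, f],
              [-c, -e, 0, g],
              [-d, -f, -g, 0]])
I = np.eye(4)
R = (a*I + K) @ np.linalg.inv(a*I - K)   # Cayley-type construction

assert np.allclose(R @ R.T, I)           # rotation matrix
sigma = a*a + b*b + c*c + d*d + e*e + f*f + g*g   # = 707
M = sigma * R                            # the bracketed integer matrix
assert np.allclose(M[0], [595, -308, -112, 196])
row_sums = np.sum(np.rint(M)**2, axis=1)
col_sums = np.sum(np.rint(M)**2, axis=0)
assert np.allclose(row_sums, 707**2) and np.allclose(col_sums, 707**2)
ok = True
```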
