Orthogonal matrices are matrices $A$ such that $A^TA = I$. Skew-symmetric matrices ($S^T = -S$) correspond to infinitesimal orthogonal matrices (the Lie algebra). That is,
if $A = I + \epsilon S$ and we work to first order in $\epsilon$, then $A^TA = I$ forces $S + S^T = 0$.
Question: How do we parametrize orthogonal matrices over rational numbers?
The standard method over the reals is to pick an orthonormal basis of columns: pick any unit vector, then move to the orthogonal hyperplane and pick the remaining orthogonal unit vectors there. But over the rationals, even starting with unit vectors with rational coordinates already puts constraints on what kind of vectors we can choose. By Gauss' theorem about numbers of the form $x^2 + y^2 + z^2$, we get congruence constraints on the common denominators of the coordinates. And then it's not clear that the orthogonal space has an orthonormal basis with rational coordinates. (The orthogonal space has a rational basis, but it's not obvious that we can choose unit vectors in it, i.e., that the unit sphere restricted to that subspace has any rational points.)
Before we describe a way to parametrize the orthogonal vectors, here is a way to see that there are some orthogonal matrices with a given rational unit vector as the first column: consider the reflection which takes a standard unit vector to the chosen rational unit vector. This is a rational matrix, and hence the image of the standard basis gives us a rational orthonormal basis with the chosen unit vector as the first column. By induction this also implies that there is a rational orthogonal matrix with any given orthonormal set of rational unit vectors as its first few columns. That is, we can extend a rational orthonormal basis of a subspace to a rational orthonormal basis of the whole space.
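To make the reflection concrete, here is a short Python sketch using exact rational arithmetic (`fractions.Fraction`); the helper name `reflection_to` is ours, and the matrix is the Householder reflection $I - 2ww^T/\langle w,w\rangle$ with $w = e_1 - v$, which fixes the identity when $v = e_1$:

```python
from fractions import Fraction as F

def reflection_to(v):
    """Rational reflection matrix sending e1 to the rational unit vector v.

    Assumes sum(x*x for x in v) == 1.  All entries stay rational.
    """
    n = len(v)
    e1 = [F(1)] + [F(0)] * (n - 1)
    w = [e1[i] - v[i] for i in range(n)]
    ww = sum(x * x for x in w)
    if ww == 0:  # v is already e1; the reflection degenerates to the identity
        return [[F(i == j) for j in range(n)] for i in range(n)]
    return [[F(i == j) - 2 * w[i] * w[j] / ww for j in range(n)] for i in range(n)]

v = (F(3, 7), F(6, 7), F(-2, 7))
H = reflection_to(v)
# the first column of H is v, and the columns of H are a rational orthonormal basis
```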
Now we describe the parametrization of the rational orthonormal basis:
In 3 dimensions, rational unit vectors correspond exactly to integer vectors with integer length. That is, by clearing denominators we get the equation
$$x^2 + y^2 + z^2 = w^2.$$
We can parametrize these solutions, or equivalently the rational solutions to $x^2 + y^2 + z^2 = 1$, by drawing lines from a fixed rational point on the sphere. (This is the same construction that produces Pythagorean triples from rational points on the circle.)
The result is a parametrization
$$(x, y, z, w) = (2pr,\ 2qr,\ p^2 + q^2 - r^2,\ p^2 + q^2 + r^2)$$
for the integer points with integer lengths, and
$$(x, y, z) = \left(\frac{2pr}{p^2+q^2+r^2},\ \frac{2qr}{p^2+q^2+r^2},\ \frac{p^2+q^2-r^2}{p^2+q^2+r^2}\right)$$
for rational points on the sphere $x^2 + y^2 + z^2 = 1$.
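As a quick sanity check, the parametrization can be evaluated with exact rational arithmetic; here is a small Python sketch (the helper name is ours):

```python
from fractions import Fraction as F

def sphere_point(p, q, r):
    """Rational point on x^2 + y^2 + z^2 = 1 from the parameters (p, q, r)."""
    n = F(p * p + q * q + r * r)
    return (2 * p * r / n, 2 * q * r / n, (p * p + q * q - r * r) / n)

# e.g. (p, q, r) = (1, 2, 3) gives the rational unit vector (3/7, 6/7, -2/7)
v = sphere_point(1, 2, 3)
```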
So start with any rational unit vector $(a, b, c)$ (just pick some values in the above parametrization), and then substitute the parametrization into the orthogonality equation $ax + by + cz = 0$ to get
$$2apr + 2bqr + c(p^2 + q^2 - r^2) = 0.$$
When $c = 0$ we get the solution $(x, y, z) = (0, 0, 1)$ by taking $r = 0$. So assume $c \neq 0$.
When we view the equation as a quadratic over $p$,
$$c\,p^2 + 2ar\,p + (c\,q^2 + 2bqr - c\,r^2) = 0,$$
we need the discriminant to be a square for rational solutions. Using $a^2 + b^2 + c^2 = 1$, a quarter of the discriminant simplifies to
$$a^2r^2 - c^2q^2 - 2bc\,qr + c^2r^2 = r^2 - (br + cq)^2,$$
so we need $r^2 - (br + cq)^2 = s^2$ for some rational $s$.
Dividing by $r^2$ and writing $t = q/r$, $u = s/r$, this becomes $u^2 + (b + ct)^2 = 1$. This is a circle with a rational point $(t, u) = \left(\tfrac{1-b}{c}, 0\right)$, so we can parametrize its rational points just as for Pythagorean triples (drawing lines through the rational point), obtaining rational expressions for $(t, u)$ in terms of the slope of the line.
From these relations we can solve for $p$, $q$, $r$, and hence find a one-parameter family of rational unit vectors orthogonal to the given vector. The third vector, orthogonal to both of these, is then determined uniquely up to sign.
To summarize:
a) Start with any rational unit vector $v_1$: we can do this by choosing values for the parameters $(p, q, r)$ in the parametrization above.
b) Now use the process shown above to find a parametrization of the rational unit vectors $v_2$ orthogonal to $v_1$.
c) Finally, the third vector can be obtained as the cross product $v_3 = v_1 \times v_2$. (Its coordinates are $2 \times 2$ determinants in the coordinates of $v_1$ and $v_2$, hence rational.)
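The three-step recipe can be sketched in Python with exact rational arithmetic. This follows the derivation above (the circle is parametrized by the slope $m$ of a line through its rational point); the helper names are ours, and it assumes the chosen first vector has $c \neq 0$:

```python
from fractions import Fraction as F

def sphere_point(p, q, r):
    """Rational unit vector (2pr, 2qr, p^2+q^2-r^2) / (p^2+q^2+r^2)."""
    n = F(p * p + q * q + r * r)
    return (2 * p * r / n, 2 * q * r / n, (p * p + q * q - r * r) / n)

def orthogonal_unit(v, m):
    """A rational unit vector orthogonal to v = (a, b, c), assuming c != 0.

    m is the slope of a line through the rational point ((1-b)/c, 0)
    of the circle u^2 + (b + c t)^2 = 1.
    """
    a, b, c = v
    t0 = (1 - b) / c                                     # rational point (t0, 0) on the circle
    t1 = (m * m * t0 - 2 * b * c - c * c * t0) / (m * m + c * c)  # second intersection
    s = m * (t1 - t0)                                    # u-coordinate: s^2 = 1 - (b + c*t1)^2
    q, r = t1, F(1)
    p = (-a * r + s) / c                                 # root of c p^2 + 2ar p + (c q^2 + 2bqr - c r^2)
    return sphere_point(p, q, r)

def cross(u, w):
    return (u[1] * w[2] - u[2] * w[1],
            u[2] * w[0] - u[0] * w[2],
            u[0] * w[1] - u[1] * w[0])

v1 = sphere_point(1, 2, 3)      # step a): (3/7, 6/7, -2/7)
v2 = orthogonal_unit(v1, F(1))  # step b): rational unit vector with v1 . v2 = 0
v3 = cross(v1, v2)              # step c): completes the rational orthogonal matrix [v1 v2 v3]
```

Varying the slope `m` over the rationals traces out the one-parameter family of second columns.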
This is for 3-dimensional orthogonal matrices. We can similarly carry out the process for $n \times n$ matrices: we have $n-1$ parameters for the first column, $n-2$ parameters for the second column, and so on, giving the $\frac{n(n-1)}{2}$-dimensional orthogonal group. We explicitly did this computation in 3 dimensions, but in general it's not clear that the lower-dimensional spheres obtained by imposing the orthogonality constraints have rational points, which we need in order to draw lines and get a rational parametrization. But as mentioned before, the reflection construction creates at least one rational orthogonal matrix extending any given choice of rational orthonormal columns, and this means that the spheres orthogonal to those columns do have rational points.
Next, we describe another way to generate the orthogonal matrices.
We have the following correspondence, due to Cayley, between skew-symmetric matrices and orthogonal matrices without eigenvalue $-1$:
$$S \mapsto A = (I - S)(I + S)^{-1}$$
and
$$A \mapsto S = (I - A)(I + A)^{-1}.$$
Proof: First note that given a skew-symmetric matrix $S$, the matrix $I + S$ is invertible (if $(I+S)v = 0$ then $v^TSv = -v^Tv$, but $v^TSv = 0$ by skew-symmetry, so $v = 0$), and $A = (I - S)(I + S)^{-1}$ satisfies
$$A^TA = (I + S)^{-T}(I - S)^T(I - S)(I + S)^{-1} = (I - S)^{-1}(I + S)(I - S)(I + S)^{-1} = I,$$
where we used that $S$ is skew-symmetric, so $(I \pm S)^T = I \mp S$, and that $I + S$ and $I - S$ commute.
Similarly, if $A$ is an orthogonal matrix without eigenvalue $-1$, then $S = (I - A)(I + A)^{-1}$ is skew-symmetric:
$$S^T = (I + A^T)^{-1}(I - A^T) = (I + A^{-1})^{-1}(I - A^{-1}) = (A + I)^{-1}(A - I) = -S,$$
where we used $A^T = A^{-1}$, factored $A^{-1}$ out of both factors in the middle step, and noted that the two factors of $S$ commute.
Check that the maps are inverses of each other: if $A = (I - S)(I + S)^{-1}$, then
$$I + A = 2(I + S)^{-1}, \qquad I - A = 2S(I + S)^{-1},$$
so $(I - A)(I + A)^{-1} = S$.
This construction is valid over any field of characteristic not $2$, and in particular over the rationals.
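As an illustration over the rationals, here is a small Python sketch of both directions of the correspondence, using `fractions.Fraction` for exact arithmetic (helper names ours):

```python
from fractions import Fraction as F

def identity(n):
    return [[F(i == j) for j in range(n)] for i in range(n)]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def madd(X, Y, sign=1):
    return [[X[i][j] + sign * Y[i][j] for j in range(len(X))] for i in range(len(X))]

def inverse(M):
    """Matrix inverse by Gauss-Jordan elimination, exact over the rationals."""
    n = len(M)
    A = [list(M[i]) + [F(i == j) for j in range(n)] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        pv = A[col][col]
        A[col] = [x / pv for x in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [A[r][j] - f * A[col][j] for j in range(2 * n)]
    return [row[n:] for row in A]

def cayley(S):
    """A = (I - S)(I + S)^{-1}; orthogonal when S is skew-symmetric."""
    I = identity(len(S))
    return matmul(madd(I, S, -1), inverse(madd(I, S)))

# a skew-symmetric matrix from three rational parameters
x, y, z = F(1, 2), F(1, 3), F(1, 5)
S = [[F(0), x, y], [-x, F(0), z], [-y, -z, F(0)]]
A = cayley(S)  # a rational orthogonal matrix
```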
So we have a method to construct orthogonal matrices provided they don't have the eigenvalue $-1$. What do we do about matrices with eigenvalue $-1$?
We just multiply by a diagonal matrix with $\pm 1$ entries to get an orthogonal matrix without eigenvalue $-1$. How do we know that such a diagonal matrix exists? Not just for orthogonal matrices: any matrix over a field (of characteristic not equal to $2$) can be multiplied by a diagonal matrix with $\pm 1$ entries to get a matrix without the eigenvalue $-1$.
Because if that's not the case, we have $\det(I + DA) = 0$ for every diagonal matrix $D$ with diagonal entries $d_1, \dots, d_n = \pm 1$. Expanding the determinant along the first row, $\det(I + DA)$ is an affine-linear function of $d_1$ whose constant term is $\det(I' + D'A')$, where $A'$ is the submatrix of $A$ obtained by removing the first row and column (and similarly for $I'$ and $D'$). Assuming inductively that $d_2, \dots, d_n$ can be chosen so that this constant term is non-zero, the affine-linear function of $d_1$ vanishes at the two distinct values $d_1 = \pm 1$, hence is identically zero, so in particular its constant term is zero, which is a contradiction.
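The lemma can be checked experimentally. Here is a small Python sketch (for the $3 \times 3$ case, with our own helper names) that searches the $2^3$ sign patterns for one making $\det(I + DA) \neq 0$; the example matrix is the rotation by $\pi$ about the $z$-axis, which has eigenvalue $-1$:

```python
from fractions import Fraction as F
from itertools import product

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def fix_eigenvalue(A):
    """Diagonal signs D with det(I + DA) != 0, i.e. DA has no eigenvalue -1."""
    for signs in product([1, -1], repeat=3):
        IpDA = [[(i == j) + signs[i] * A[i][j] for j in range(3)] for i in range(3)]
        if det3(IpDA) != 0:
            return signs
    return None

# rotation by pi about the z-axis: eigenvalues -1, -1, 1
A = [[F(-1), F(0), F(0)], [F(0), F(-1), F(0)], [F(0), F(0), F(1)]]
signs = fix_eigenvalue(A)
```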
So in this method we create a skew-symmetric matrix (which in 3 dimensions amounts to a choice of three rational numbers), then use the Cayley formula to generate an orthogonal matrix, and further multiply by a diagonal matrix with $\pm 1$ entries to reach the matrices with eigenvalue $-1$.
Remark:
In the first method we have 2 parameters for the initial vector, one parameter for the second vector, and then a choice of sign for the third vector.
But in this second method, with skew-symmetric matrices, we set all three parameters at the beginning, when defining the skew-symmetric matrix.
Explicit formula for $SO_3(\mathbb{Q})$: We can make both of the methods explicit, but we can also use the relation between quaternions and rotations to get the formula
$$\frac{1}{a^2+b^2+c^2+d^2}\begin{pmatrix} a^2+b^2-c^2-d^2 & 2(bc-ad) & 2(bd+ac) \\ 2(bc+ad) & a^2-b^2+c^2-d^2 & 2(cd-ab) \\ 2(bd-ac) & 2(cd+ab) & a^2-b^2-c^2+d^2 \end{pmatrix}.$$
The formula corresponds to the conjugation action $x \mapsto qxq^{-1}$ of the quaternion $q = a + bi + cj + dk$ on the pure quaternions (the span of $i, j, k$).
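Here is the formula as a Python sketch over exact rationals (the function name is ours); any four rational numbers, not all zero, give a rational rotation matrix:

```python
from fractions import Fraction as F

def quaternion_rotation(a, b, c, d):
    """Rotation matrix of conjugation by the quaternion a + bi + cj + dk (not all zero)."""
    n = F(a * a + b * b + c * c + d * d)
    return [[(a*a + b*b - c*c - d*d) / n, 2 * (b*c - a*d) / n, 2 * (b*d + a*c) / n],
            [2 * (b*c + a*d) / n, (a*a - b*b + c*c - d*d) / n, 2 * (c*d - a*b) / n],
            [2 * (b*d - a*c) / n, 2 * (c*d + a*b) / n, (a*a - b*b - c*c + d*d) / n]]

R = quaternion_rotation(1, 2, 3, 4)  # a rational rotation matrix
```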