danjcalderone

In this blog post, we give a symbolic Jordan decomposition of the linear Clohessy-Wiltshire-Hill (CWH) astrodynamics equations. We present this as an exercise in linear-algebraic reverse engineering from the commonly known integrated solutions. (For the basic details of these equations and their solutions, we refer to this well-explained blog post.)

Note: in this post, we use Jordan form to refer to a pseudo-Jordan form with a block-diagonal structure where, rather than solving explicitly for complex eigenvalues and eigenvectors, we leave \(2\times 2\) rotation matrices intact. This form has the advantage of representing 2D planes of rotation with a basis of real vectors (as opposed to complex-conjugate pairs of eigenvectors). It would be simple to go one step further and write the full complex Jordan form, but the author finds this pseudo-Jordan form more useful.
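To make the distinction concrete, here is a minimal pure-Python sketch (illustrative only; the value of \(\omega\) is arbitrary) checking that the real rotation generator \(\begin{bmatrix} 0 & -\omega \\ \omega & 0 \end{bmatrix}\) has the complex eigenpair \(\pm i\omega\) with conjugate eigenvectors, which is exactly the structure the full complex Jordan form would expose:

```python
w = 2.0  # arbitrary test value of omega

# Real rotation generator kept intact in the pseudo-Jordan form.
J_block = [[0.0, -w], [w, 0.0]]

# The full (complex) Jordan form would instead use the eigenpair
# +/- i*omega with conjugate eigenvectors (1, -i) and (1, +i).
lam = 1j * w
v = [1.0, -1j]  # eigenvector for +i*omega

# Check J_block v = lam v componentwise.
Jv = [J_block[0][0]*v[0] + J_block[0][1]*v[1],
      J_block[1][0]*v[0] + J_block[1][1]*v[1]]
assert abs(Jv[0] - lam*v[0]) < 1e-12
assert abs(Jv[1] - lam*v[1]) < 1e-12
print("complex eigenpair of the rotation block verified")
```

The eigenvector for \(-i\omega\) is the conjugate \((1, +i)\); working over the reals avoids carrying these conjugate pairs around.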

Linear Dynamical System

The Clohessy-Wiltshire-Hill (CWH) astrodynamics equations are given by the following autonomous linear system. Consider the state vector \(\mathbf{x} = (y,x,\dot{y},\dot{x},z,\dot{z})\). The CWH equations can be written in the form \(\dot{\mathbf{x}} = A \mathbf{x}\) where $$ \begin{aligned} \begin{bmatrix} \dot{y} \\ \dot{x} \\ \ddot{y} \\ \ddot{x} \\ \dot{z} \\ \ddot{z} \\ \end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & -2 \omega & 0 & 0 \\ 0 & 3 \omega^2 & 2 \omega & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & -\omega^2 & 0 \end{bmatrix} \begin{bmatrix} y \\ x \\ \dot{y} \\ \dot{x} \\ z \\ \dot{z} \end{bmatrix} \end{aligned} $$ As detailed in this blog post, the solutions can be computed to be the following. Let \(c = \cos \omega t\) and \(s = \sin \omega t\). The time-dependent solutions of the CWH model are given by \(\mathbf{x}(t) = e^{At} \mathbf{x}_0\): $$ \begin{aligned} \begin{bmatrix} y(t) \\ x(t) \\ \dot{y}(t) \\ \dot{x}(t) \\ z(t) \\ \dot{z}(t) \end{bmatrix} = \begin{bmatrix} 1 & 6s- 6\omega t & 4s/\omega - 3t & -2/\omega +2c /\omega & 0 & 0 \\ 0 & 4-3c & 2/\omega -2c/\omega & s/\omega & 0 & 0 \\ 0 & -6 \omega + 6\omega c & -3 + 4 c & -2s & 0 & 0 \\ 0 & 3\omega s & 2 s & c & 0 & 0 \\ 0 & 0 & 0 & 0 & c & s/\omega \\ 0 & 0 & 0 & 0 & -\omega s & c \end{bmatrix} \begin{bmatrix} y_0 \\ x_0 \\ \dot{y}_0 \\ \dot{x}_0 \\ z_0 \\ \dot{z}_0 \end{bmatrix} \end{aligned} $$ (Note the sign of the \(-\omega s\) entry in the out-of-plane block: since \(\ddot{z} = -\omega^2 z\), we have \(\dot{z}(t) = -\omega s \, z_0 + c \, \dot{z}_0\).)
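As a quick sanity check on the integrated solution, the following pure-Python sketch (illustrative; the values of \(\omega\), \(t\), and the step \(h\) are arbitrary choices) verifies by central finite differences that the state-transition matrix \(\Phi(t) = e^{At}\) written above satisfies \(\dot{\Phi}(t) = A\,\Phi(t)\):

```python
import math

w = 0.7  # mean motion (arbitrary test value)

def A():
    """CWH system matrix for the state ordering (y, x, ydot, xdot, z, zdot)."""
    return [[0, 0,       1,      0,     0,    0],
            [0, 0,       0,      1,     0,    0],
            [0, 0,       0,     -2*w,   0,    0],
            [0, 3*w*w,   2*w,    0,     0,    0],
            [0, 0,       0,      0,     0,    1],
            [0, 0,       0,      0,    -w*w,  0]]

def Phi(t):
    """State-transition matrix exp(A t) from the integrated CWH solution."""
    c, s = math.cos(w*t), math.sin(w*t)
    return [[1, 6*s - 6*w*t,   4*s/w - 3*t, -2/w + 2*c/w, 0,    0],
            [0, 4 - 3*c,       2/w - 2*c/w,  s/w,         0,    0],
            [0, -6*w + 6*w*c, -3 + 4*c,     -2*s,         0,    0],
            [0, 3*w*s,         2*s,          c,           0,    0],
            [0, 0,             0,            0,           c,    s/w],
            [0, 0,             0,            0,          -w*s,  c]]

def matmul(X, Y):
    return [[sum(a*b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

# Central finite difference: dPhi/dt should equal A Phi(t).
t, h = 1.3, 1e-6
AP = matmul(A(), Phi(t))
Pp, Pm = Phi(t + h), Phi(t - h)
for i in range(6):
    for j in range(6):
        deriv = (Pp[i][j] - Pm[i][j]) / (2*h)
        assert abs(deriv - AP[i][j]) < 1e-5
print("dPhi/dt = A Phi verified")
```

The same functions also confirm the semigroup property \(\Phi(t_1)\Phi(t_2) = \Phi(t_1+t_2)\), as expected of a matrix exponential.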

Symbolic Jordan Decomposition (Main Result)

We can compute the following Jordan decomposition \(A = \mathbf{V}J \mathbf{V}^{-1}\) and \(e^{At} = \mathbf{V}e^{Jt}\mathbf{V}^{-1}\) with $$ \begin{aligned} J = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -\omega & 0 & 0 \\ 0 & 0 & \omega & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -\omega \\ 0 & 0 & 0 & 0 & \omega & 0 \end{bmatrix}, \quad e^{Jt} = \begin{bmatrix} 1 & t & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & c & -s & 0 & 0 \\ 0 & 0 & s & c & 0 & 0 \\ 0 & 0 & 0 & 0 & c & -s \\ 0 & 0 & 0 & 0 & s & c \end{bmatrix} \end{aligned} $$ $$ \begin{aligned} \mathbf{V} = \begin{bmatrix} 1 & 0 & 0 & 2 & 0 & 0 \\ 0 & -2/3\omega & -1 & 0 & 0 & 0 \\ 0 & 1 & 2\omega & 0 & 0 & 0 \\ 0 & 0 & 0 & \omega & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & -\omega \end{bmatrix}, \ \ \mathbf{V}^{-1} = \begin{bmatrix} 1 & 0 & 0 & -2/\omega & 0 & 0 \\ 0 & -6 \omega & -3 & 0 & 0 & 0 \\ 0 & 3 & 2/\omega & 0 & 0 & 0 \\ 0 & 0 & 0 & 1/\omega & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & -1/\omega \end{bmatrix} \end{aligned} $$ (The \(-\omega\) scaling in the last diagonal entry of \(\mathbf{V}\) orients the out-of-plane rotation plane so that \(\mathbf{V}J\mathbf{V}^{-1}\) reproduces \(\ddot{z} = -\omega^2 z\).)
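The claimed decomposition can be spot-checked numerically. The sketch below (pure Python; \(\omega\) and \(t\) are arbitrary test values) compares \(\mathbf{V}e^{Jt}\mathbf{V}^{-1}\) entrywise against the integrated solution; note it uses the \(\mathrm{diag}(1,-\omega)\) convention for the out-of-plane block of \(\mathbf{V}\), which is needed so that \(\dot z(t) = -\omega s\, z_0 + c\, \dot z_0\) is reproduced:

```python
import math

w = 0.9  # omega, arbitrary test value

V = [[1, 0,         0,    2, 0, 0],
     [0, -2/(3*w), -1,    0, 0, 0],
     [0, 1,         2*w,  0, 0, 0],
     [0, 0,         0,    w, 0, 0],
     [0, 0,         0,    0, 1, 0],
     [0, 0,         0,    0, 0, -w]]

Vinv = [[1, 0,    0,    -2/w, 0, 0],
        [0, -6*w, -3,    0,   0, 0],
        [0, 3,    2/w,   0,   0, 0],
        [0, 0,    0,     1/w, 0, 0],
        [0, 0,    0,     0,   1, 0],
        [0, 0,    0,     0,   0, -1/w]]

def expJ(t):
    c, s = math.cos(w*t), math.sin(w*t)
    return [[1, t, 0, 0, 0, 0],
            [0, 1, 0, 0, 0, 0],
            [0, 0, c, -s, 0, 0],
            [0, 0, s, c, 0, 0],
            [0, 0, 0, 0, c, -s],
            [0, 0, 0, 0, s, c]]

def Phi(t):  # integrated CWH solution
    c, s = math.cos(w*t), math.sin(w*t)
    return [[1, 6*s - 6*w*t,   4*s/w - 3*t, -2/w + 2*c/w, 0,    0],
            [0, 4 - 3*c,       2/w - 2*c/w,  s/w,         0,    0],
            [0, -6*w + 6*w*c, -3 + 4*c,     -2*s,         0,    0],
            [0, 3*w*s,         2*s,          c,           0,    0],
            [0, 0,             0,            0,           c,    s/w],
            [0, 0,             0,            0,          -w*s,  c]]

def mm(X, Y):
    return [[sum(a*b for a, b in zip(r, col)) for col in zip(*Y)] for r in X]

t = 2.1
lhs, rhs = Phi(t), mm(V, mm(expJ(t), Vinv))
for i in range(6):
    for j in range(6):
        assert abs(lhs[i][j] - rhs[i][j]) < 1e-12
print("Phi(t) = V exp(Jt) V^{-1} verified")
```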

Reverse Engineering Derivation

One can of course quickly verify the above result by checking that \(\mathbf{V}^{-1}\mathbf{V} =I\), that \(A = \mathbf{V}J\mathbf{V}^{-1}\), and that \(e^{At} = \mathbf{V}e^{Jt}\mathbf{V}^{-1}\). It is, however, interesting to see how one might guess this Jordan form from the time-dependent solution that is usually derived. We detail that process here. Note that one can work from either the form of \(A\) or of \(e^{At}\) (or both) since they have the same eigenvector structure and a clearly defined relationship between their eigenvalues. We will work mostly from \(e^{At}\).

We first make several general comments about the form of \(e^{At}\), reproduced here. $$ \begin{aligned} e^{At} = \begin{bmatrix} 1 & 6s- 6\omega t & 4s/\omega - 3t & -2/\omega +2c /\omega & 0 & 0 \\ 0 & 4-3c & 2/\omega -2c/\omega & s/\omega & 0 & 0 \\ 0 & -6 \omega + 6\omega c & -3 + 4 c & -2s & 0 & 0 \\ 0 & 3\omega s & 2 s & c & 0 & 0 \\ 0 & 0 & 0 & 0 & c & s/\omega \\ 0 & 0 & 0 & 0 & -\omega s & c \end{bmatrix} \end{aligned} $$ Our first observation is that there are multiple different cosine and sine terms arranged in standard rotation-matrix structures. From this we can be sure that the matrix has complex eigenvalues that represent internal rotations. This will provide the structure for much of our guessing later on. More subtly, there are also \(t\) terms in the matrix. This is important because it indicates that the matrix is not diagonalizable and has some generalized eigenvectors. For a diagonalizable matrix, all time-dependent terms have the form \(e^{\lambda t}\) for some eigenvalue \(\lambda\). (For complex eigenvalues this translates into cosines and sines.) Terms that are powers of \(t\) can only arise from generalized eigenvectors. We can also note that the nontrivial Jordan block will only be \(2 \times 2\) since we see only first powers of \(t\), not \(t^2\) or \(t^3\), etc.

1. Rotation Matrix Structures

The first major guess that we make is that the following sub-matrices have complex eigenvalues and a general (elliptical) rotation structure. $$ \begin{aligned} \begin{bmatrix} 4-3c & 2/\omega -2c/\omega & s/\omega \\ -6 \omega + 6\omega c & -3 + 4 c & -2s \\ 3\omega s & 2 s & c \\ \end{bmatrix}, \quad \begin{bmatrix} c & s/\omega \\ -\omega s & c \end{bmatrix} \end{aligned} $$ In the case of the \(2 \times 2\) matrix, this is hardly a guess at all. For the \(3\times 3\) matrix it is more of a stretch, but we decide to proceed with this assumption and see where it leads us.

2D Rotation Matrix

We can use the second matrix's similarity to a standard rotation to get $$ \begin{aligned} \begin{bmatrix} c & s/\omega \\ -\omega s & c \end{bmatrix} = UR_{2 \times 2}U^{-1} = \begin{bmatrix} 1 & 0 \\ 0 & -\omega \end{bmatrix} \begin{bmatrix} c & -s \\ s & c \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & -1/\omega \end{bmatrix} \end{aligned} $$
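A quick numerical check of this similarity (pure Python sketch; \(\omega\) and \(t\) are arbitrary test values):

```python
import math

w, t = 0.8, 1.7  # arbitrary test values
c, s = math.cos(w*t), math.sin(w*t)

U = [[1, 0], [0, -w]]         # scaling that maps the standard rotation
Uinv = [[1, 0], [0, -1/w]]    # onto the out-of-plane block
R = [[c, -s], [s, c]]         # standard 2x2 rotation R_{2x2}

def mm(X, Y):
    return [[sum(a*b for a, b in zip(r, col)) for col in zip(*Y)] for r in X]

got = mm(U, mm(R, Uinv))
want = [[c, s/w], [-w*s, c]]  # out-of-plane block of exp(At)
for i in range(2):
    for j in range(2):
        assert abs(got[i][j] - want[i][j]) < 1e-12
print("2x2 out-of-plane block matches U R U^{-1}")
```

The check also confirms that the block has determinant 1, as any similarity transform of a rotation must.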

3D Rotation Matrix - Plane of Rotation

For the \(3 \times 3\) matrix, we make several notes. First, any \(3 \times 3\) rotation has only one complex eigenvalue pair along with an axis of rotation. The complex eigenvalue pair generates the sines and cosines, so we note that if the above matrix is a \(3 \times 3\) rotation then the sines and cosines will have a structure consistent with the following outer product $$ \begin{aligned} \begin{bmatrix} \substack{| \\ V_1 \\ | } & \substack{| \\ V_2 \\ | } \end{bmatrix} \begin{bmatrix} c & -s \\ s & c \end{bmatrix} \begin{bmatrix} - \ \small{W_1^\top } \ - \\ - \ \small{W_2^\top } \ - \end{bmatrix} \end{aligned} $$ Here we are lucky in that the cosine/sine structure of the \(3 \times 3\) matrix is fairly simple, having the following form $$ \begin{aligned} \begin{bmatrix} \substack{| \\ V_1 \\ | } & \substack{| \\ V_2 \\ | } \end{bmatrix} \begin{bmatrix} c & -s \\ s & c \end{bmatrix} \begin{bmatrix} - \ \small{W_1^\top } \ - \\ - \ \small{W_2^\top } \ - \end{bmatrix} = \begin{bmatrix} -3c & -2c/\omega & s/\omega \\ 6 \omega c & 4c & -2s \\ 3 \omega s & 2s & c \end{bmatrix} \end{aligned} $$ Note the location of the sine and cosine terms. From this we can guess that the vectors \(\{V_j\}_{j=1}^2\) and \(\{W_j\}_{j=1}^2\) will have the following form $$ \begin{aligned} \begin{bmatrix} \substack{| \\ V_1 \\ | } & \substack{| \\ V_2 \\ | } \end{bmatrix} = \begin{bmatrix} * & 0 \\ * & 0 \\ 0 & * \end{bmatrix}, \quad \begin{bmatrix} - \ \small{W_1^\top } \ - \\ - \ \small{W_2^\top } \ - \end{bmatrix} = \begin{bmatrix} * & * & 0 \\ 0 & 0 & * \end{bmatrix} \end{aligned} $$ We can now start guessing the elements of \(\{V_j\}_{j=1}^2\) and \(\{W_j\}_{j=1}^2\) according to this structure. Based on the cosine structure, we start by taking \(V_1 = (-3, 6 \omega , 0) \) and then find that \(W_1\) must be given by \(W_1 = (1,2/3\omega,0)\). Similarly, we try \(V_2 = (0,0,1)\) and \(W_2 = (0,0,1)\). We can now check whether these choices reproduce the structure above.
$$ \begin{aligned} \begin{bmatrix} -3 & 0 \\ 6 \omega & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} c & -s \\ s & c \end{bmatrix} \begin{bmatrix} 1 & 2/3\omega & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} - 3c & -2c/\omega & 3 s \\ 6 \omega c & 4 c & -6 \omega s \\ s & 2/3\omega s & c \end{bmatrix} \end{aligned} $$ We note that the cosine structure is correct, but the sine structure is off by a scaling. Further inspection shows that the row of sine terms needs to be scaled up by a factor of \(3 \omega \) and the column of sine terms needs to be scaled down by a factor of \(3\omega\). We can fix this problem by scaling our choice of \(V_2\) up by \(3 \omega \) and scaling our choice of \(W_2 \) down by \(3\omega \), which fixes the sine terms without changing the cosine term in the lower right corner. Applying this gives $$ \begin{aligned} \begin{bmatrix} -3 & 0 \\ 6 \omega & 0 \\ 0 & 3 \omega \end{bmatrix} \begin{bmatrix} c & -s \\ s & c \end{bmatrix} \begin{bmatrix} 1 & 2/3\omega & 0 \\ 0 & 0 & 1/3 \omega \end{bmatrix} = \begin{bmatrix} - 3c & -2c/\omega & s/\omega \\ 6 \omega c & 4 c & -2 s \\ 3\omega s & 2 s & c \end{bmatrix} \end{aligned} $$ which corrects the issue. We note also that we could instead have scaled \(V_1\) down and \(W_1\) up to get the same result. Applied on top of the previous fix, this amounts to scaling both \(V_1,V_2\) down by \(3 \omega\) and scaling both \(W_1,W_2\) up by \(3\omega\), which yields the slightly cleaner expression $$ \begin{aligned} \begin{bmatrix} -1/\omega & 0 \\ 2 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} c & -s \\ s & c \end{bmatrix} \begin{bmatrix} 3\omega & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} - 3c & -2c/\omega & s/\omega \\ 6 \omega c & 4 c & -2 s \\ 3\omega s & 2 s & c \end{bmatrix} \end{aligned} $$ which is also correct. It is still not obvious that this is a proper diagonalization.
We have to check that \(V_1,V_2\) and \(W_1,W_2\) can be components of two matrices that are inverses of each other. In particular we need to check that $$ \begin{aligned} \begin{bmatrix} - \ \small{W_1^\top} \ - \\ - \ \small{W_2^\top} \ - \\ \end{bmatrix} \begin{bmatrix} \substack{| \\ V_1 \\ | } & \substack{| \\ V_2 \\ | } \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \end{aligned} $$ The off-diagonal terms are immediately correct since \(V_1\) and \(W_2\) are clearly orthogonal, and similarly \(V_2\) and \(W_1\). If we write out the full computation, we get $$ \begin{aligned} \begin{bmatrix} 3\omega & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} -1/\omega & 0 \\ 2 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \end{aligned} $$ as desired. (If the multiplication had yielded a scaled version of the identity instead of the identity itself, we would have had the option of dividing that scaling factor into the eigenvalues of the diagonalization. In that case, we would have moved away from a strict rotation structure to a rotation with scaling, but we would still be able to continue with our diagonalization.)
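Both facts, the sine/cosine outer-product structure and \(WV = I\), can be confirmed numerically with a short pure-Python sketch (\(\omega\) and \(t\) are arbitrary test values):

```python
import math

w, t = 1.1, 0.6  # arbitrary test values
c, s = math.cos(w*t), math.sin(w*t)

V = [[-1/w, 0], [2, 0], [0, 1]]   # columns V1, V2 (3x2)
W = [[3*w, 2, 0], [0, 0, 1]]      # rows W1^T, W2^T (2x3)
R = [[c, -s], [s, c]]

def mm(X, Y):
    return [[sum(a*b for a, b in zip(r, col)) for col in zip(*Y)] for r in X]

# V R W should reproduce the sine/cosine part of the 3x3 block.
got = mm(V, mm(R, W))
want = [[-3*c, -2*c/w, s/w],
        [6*w*c, 4*c, -2*s],
        [3*w*s, 2*s, c]]
for i in range(3):
    for j in range(3):
        assert abs(got[i][j] - want[i][j]) < 1e-12

# W V should be the 2x2 identity.
WV = mm(W, V)
assert abs(WV[0][0] - 1.0) < 1e-12 and abs(WV[1][1] - 1.0) < 1e-12
assert abs(WV[0][1]) < 1e-12 and abs(WV[1][0]) < 1e-12
print("outer-product structure and W V = I verified")
```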

3D Rotation Matrix - Axis of Rotation

We now want to add an axis of rotation by completing the bases of left and right eigenvectors. We will then check whether this is consistent with the remaining constant terms in the \( 3 \times 3\) matrix. (If this does not work exactly, we still have the option of scaling the axis by some real number to see if that fixes the problem, but we will see that this is unnecessary.) Our immediate goal is now to find \(V_3\) and \(W_3\) that complete the appropriate bases, i.e. such that $$ \begin{aligned} \begin{bmatrix} - & \small{W_3^\top} & - \\ - & \small{W_1^\top} & - \\ - & \small{W_2^\top} & - \\ \end{bmatrix} \begin{bmatrix} | & | & | \\ V_3 & V_1 & V_2 \\ | & | & | \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \end{aligned} $$ Here we just need to pick \(V_3\) orthogonal to \(W_1,W_2\) and \(W_3\) orthogonal to \(V_1,V_2\). From this we can select \(V_3 = (2,-3 \omega, 0)\) and \(W_3 = (2,1/\omega,0)\). We also note that \(W_3^\top V_3 = 1\), which is lucky. If that were not the case, we could simply rescale one of them. We now compute $$ \begin{aligned} \begin{bmatrix} 2 \\ -3\omega \\ 0 \end{bmatrix} \begin{bmatrix} 2 & 1/\omega & 0 \end{bmatrix} = \begin{bmatrix} 4 & 2/\omega & 0 \\ -6 \omega & -3 & 0 \\ 0 & 0 & 0 \end{bmatrix} \end{aligned} $$ which is consistent with the non-sine/cosine terms in the original \(3 \times 3\) block.
We are now done; combining everything together we can write $$ \begin{aligned} \begin{bmatrix} 4-3c & 2/\omega -2c/\omega & s/\omega \\ -6 \omega + 6\omega c & -3 + 4 c & -2s \\ 3\omega s & 2 s & c \\ \end{bmatrix}= VRW = V \begin{bmatrix} 1 & 0 \\ 0 & R_{2 \times 2} \end{bmatrix} W \end{aligned} $$ with \( R_{2 \times 2} = \begin{bmatrix} c & -s \\ s & c \end{bmatrix} \) and $$ \begin{aligned} V = \begin{bmatrix} 2 & -1/\omega & 0 \\ -3 \omega & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix} , \quad W = V^{-1} = \begin{bmatrix} 2 & 1/\omega & 0 \\ 3 \omega & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix} \end{aligned} $$
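The assembled \(3 \times 3\) factorization can be checked numerically (pure Python sketch; \(\omega\) and \(t\) are arbitrary test values):

```python
import math

w, t = 0.5, 2.3  # arbitrary test values
c, s = math.cos(w*t), math.sin(w*t)

V = [[2, -1/w, 0], [-3*w, 2, 0], [0, 0, 1]]
W = [[2, 1/w, 0], [3*w, 2, 0], [0, 0, 1]]   # W = V^{-1}
R = [[1, 0, 0], [0, c, -s], [0, s, c]]      # block diag(1, R_{2x2})

def mm(X, Y):
    return [[sum(a*b for a, b in zip(r, col)) for col in zip(*Y)] for r in X]

got = mm(V, mm(R, W))
want = [[4 - 3*c, 2/w - 2*c/w, s/w],
        [-6*w + 6*w*c, -3 + 4*c, -2*s],
        [3*w*s, 2*s, c]]
for i in range(3):
    for j in range(3):
        assert abs(got[i][j] - want[i][j]) < 1e-12
print("3x3 block fully factored as V R W")
```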

2. Generalized Eigenvector Structure

We are now close to having the full Jordan form. With the current eigenvector information we can make an educated guess that the full Jordan form expansion will have the form $$ \begin{aligned} e^{At} = \begin{bmatrix} 1 & v^\top & 0 \\ 0 & V & 0 \\ 0 & 0 & U \end{bmatrix} \begin{bmatrix} 1 & tI_1^\top & 0 \\ 0 & R & 0 \\ 0 & 0 & R_{2 \times 2} \end{bmatrix} \begin{bmatrix} 1 & -v^\top W & 0 \\ 0 & W & 0 \\ 0 & 0 & U^{-1} \end{bmatrix} \end{aligned} $$ for some vector \(v \in \mathbb{R}^3\) where \(I_1 \in \mathbb{R}^3\) is the first standard basis vector. Note that the middle matrix in this expansion is in Jordan form given the structure of \(R\). Note also that we can check that the outer matrices are inverses of each other. The only real guess here is the addition of the vector \(v\) since \(U,V,W\) need to be where they are given the structure of the rotations as previously discussed. Multiplying this expression out shows that the only expression that needs to be verified is $$ -v^\top W + (tI_1^\top + v^\top R)W = \Big[ 6s - 6 \omega t, \ \ 4s/\omega - 3t, \ \ - 2/\omega + 2c /\omega \Big] $$ We need to see if we can solve this expression for \(v\). Specifically, we have $$ \begin{aligned} v^\top \Big(R-I\Big)W + tI_1^\top W & = v^\top \Big(R-I\Big)W + \Big[ 2t, \ \ t/\omega, \ \ 0 \Big] \end{aligned} $$ Comparing this with the RHS of the equation above, we see that the structure of the \(t\) terms is roughly correct but off by a factor of \(-3 \omega\). We can fix this by including this factor in \(W\). To maintain the previous diagonalization structure we have to scale up \(W\) and scale down \(V\). This gives us the new definitions of \(V\) and \(W\) as follows.
$$ \begin{aligned} V = \begin{bmatrix} -2/3\omega & 1/3\omega^2 & 0 \\ 1 & -2/3\omega & 0 \\ 0 & 0 & -1/3 \omega \end{bmatrix} , \quad W = V^{-1} = \begin{bmatrix} -6 \omega & -3 & 0 \\ -9 \omega^2 & -6 \omega & 0 \\ 0 & 0 & -3 \omega \end{bmatrix} \end{aligned} $$ We can now solve the remainder of the equation for \(v\) with the new \(W\). Moving the \(W\) to the other side of this equation gives $$ v^\top (R-I) = \Big[ 6s, \ \ 4s/\omega, \ \ - 2/\omega + 2 c /\omega \Big]V $$ Writing out this equation in more detail gives $$ \begin{aligned} \Big[v_1, \ \ v_2, \ \ v_3 \Big] \begin{bmatrix} 0 & 0 & 0 \\ 0 & c - 1 & -s \\ 0 & s & c - 1 \end{bmatrix} & = \Big[ 6s, \ \ 4s/\omega, \ \ - 2/\omega + 2 c /\omega \Big] \begin{bmatrix} -2/3\omega & 1/3\omega^2 & 0 \\ 1 & -2/3\omega & 0 \\ 0 & 0 & -1/3\omega \end{bmatrix} \\ & = \Big[ 0, \ \ -2s/3\omega^2, \ \ 2/3\omega^2 - 2c/3\omega^2 \Big] \\ & = (-2/3\omega^2)\Big[ 0, \ \ s, \ \ c - 1 \Big] \end{aligned} $$ By comparing the two sides of the equation, we see from the sine/cosine structure that we need \(v_2 = 0\) and \(v_3 = -2/3\omega^2\). The component \(v_1\) is unconstrained since the first column of \(R - I\) is zero, so we take \(v_1 = 0\). Computing \(-v^\top W\) we get that $$ \begin{aligned} v^\top = \Big[ \ 0, \ \ 0, \ \ -2/3\omega^2 \ \Big], \quad - v^\top W = \Big[ \ 0, \ \ 0, \ \ -2/\omega \ \Big] \end{aligned} $$
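The solved \(v\) can be checked against the top-row condition numerically (pure Python sketch; \(\omega\) and \(t\) are arbitrary test values):

```python
import math

w, t = 1.3, 0.9  # arbitrary test values
c, s = math.cos(w*t), math.sin(w*t)

W = [[-6*w, -3, 0], [-9*w*w, -6*w, 0], [0, 0, -3*w]]  # rescaled W
R = [[1, 0, 0], [0, c, -s], [0, s, c]]
v = [0, 0, -2/(3*w*w)]

def vecmat(x, M):
    """Row vector times matrix: x^T M."""
    return [sum(xi*M[i][j] for i, xi in enumerate(x)) for j in range(len(M[0]))]

# Evaluate -v^T W + (t I1^T + v^T R) W and compare with the target row.
vR = vecmat(v, R)
row = [a + b for a, b in zip([t, 0, 0], vR)]
got = [g - h for g, h in zip(vecmat(row, W), vecmat(v, W))]
want = [6*s - 6*w*t, 4*s/w - 3*t, -2/w + 2*c/w]
for a, b in zip(got, want):
    assert abs(a - b) < 1e-12
print("top row of exp(At) reproduced by the generalized eigenvector")
```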

3. Collecting Terms

This finally gives the following definitions for \(\mathbf{V}\) and \(\mathbf{V}^{-1}\) $$ \begin{aligned} \mathbf{V} = \begin{bmatrix} 1 & 0 & 0 & -2/3\omega^2 & 0 & 0 \\ 0 & -2/3\omega & 1/3\omega^2 & 0 & 0 & 0 \\ 0 & 1 & -2/3\omega & 0 & 0 & 0 \\ 0 & 0 & 0 & -1/3\omega & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & -\omega \end{bmatrix}, \ \ \mathbf{V}^{-1} = \begin{bmatrix} 1 & 0 & 0 & -2/\omega & 0 & 0 \\ 0 & -6 \omega & -3 & 0 & 0 & 0 \\ 0 & -9 \omega^2 & -6 \omega & 0 & 0 & 0 \\ 0 & 0 & 0 & -3\omega & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & -1/\omega \end{bmatrix} \end{aligned} $$ (The \(-\omega\) entries in the out-of-plane block come from the similarity \(U = \mathrm{diag}(1,-\omega)\); this orientation is required so that \(\ddot{z} = -\omega^2 z\) is reproduced.) As shown above, the form of \(e^{Jt}\) is given by $$ \begin{aligned} e^{Jt} = \begin{bmatrix} 1 & t & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & c & -s & 0 & 0 \\ 0 & 0 & s & c & 0 & 0 \\ 0 & 0 & 0 & 0 & c & -s \\ 0 & 0 & 0 & 0 & s & c \end{bmatrix} \end{aligned} $$ Noting the form of \(e^{Jt}\), we see that we can scale some columns of \(\mathbf{V}\) and the corresponding rows of \(\mathbf{V}^{-1}\) to make the expressions slightly more elegant. Specifically, we rescale the rotation plane of the \(3 \times 3\) block by multiplying the third and fourth columns of \(\mathbf{V}\) by \(-3\omega^2\) and dividing the third and fourth rows of \(\mathbf{V}^{-1}\) by \(-3\omega^2\). This gives $$ \begin{aligned} \mathbf{V} = \begin{bmatrix} 1 & 0 & 0 & 2 & 0 & 0 \\ 0 & -2/3\omega & -1 & 0 & 0 & 0 \\ 0 & 1 & 2\omega & 0 & 0 & 0 \\ 0 & 0 & 0 & \omega & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & -\omega \end{bmatrix}, \ \ \mathbf{V}^{-1} = \begin{bmatrix} 1 & 0 & 0 & -2/\omega & 0 & 0 \\ 0 & -6 \omega & -3 & 0 & 0 & 0 \\ 0 & 3 & 2/\omega & 0 & 0 & 0 \\ 0 & 0 & 0 & 1/\omega & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & -1/\omega \end{bmatrix} \end{aligned} $$ Comparing the form of \(e^{Jt}\) with the spectral mapping theorem gives the form of \(J\) given above.
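As a final check, the sketch below (pure Python; \(\omega\) is an arbitrary test value) verifies that \(\mathbf{V}\mathbf{V}^{-1} = I\) and that \(\mathbf{V}J\mathbf{V}^{-1}\) reproduces the original system matrix \(A\), again using the \(\mathrm{diag}(1,-\omega)\) convention in the out-of-plane block:

```python
w = 0.9  # omega, arbitrary test value

V = [[1, 0,         0,    2, 0, 0],
     [0, -2/(3*w), -1,    0, 0, 0],
     [0, 1,         2*w,  0, 0, 0],
     [0, 0,         0,    w, 0, 0],
     [0, 0,         0,    0, 1, 0],
     [0, 0,         0,    0, 0, -w]]
Vinv = [[1, 0,    0,    -2/w, 0, 0],
        [0, -6*w, -3,    0,   0, 0],
        [0, 3,    2/w,   0,   0, 0],
        [0, 0,    0,     1/w, 0, 0],
        [0, 0,    0,     0,   1, 0],
        [0, 0,    0,     0,   0, -1/w]]
J = [[0, 1, 0, 0, 0, 0],   # 2x2 nilpotent Jordan block for eigenvalue 0
     [0, 0, 0, 0, 0, 0],
     [0, 0, 0, -w, 0, 0],  # in-plane rotation block
     [0, 0, w, 0, 0, 0],
     [0, 0, 0, 0, 0, -w],  # out-of-plane rotation block
     [0, 0, 0, 0, w, 0]]
A = [[0, 0,     1,    0,    0,    0],
     [0, 0,     0,    1,    0,    0],
     [0, 0,     0,   -2*w,  0,    0],
     [0, 3*w*w, 2*w,  0,    0,    0],
     [0, 0,     0,    0,    0,    1],
     [0, 0,     0,    0,   -w*w,  0]]

def mm(X, Y):
    return [[sum(a*b for a, b in zip(r, col)) for col in zip(*Y)] for r in X]

I6 = mm(V, Vinv)
VJVinv = mm(V, mm(J, Vinv))
for i in range(6):
    for j in range(6):
        assert abs(I6[i][j] - (1.0 if i == j else 0.0)) < 1e-12
        assert abs(VJVinv[i][j] - A[i][j]) < 1e-12
print("V Vinv = I and A = V J Vinv verified")
```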