In the last post, Part I, I talked about how commuting operators share a set of eigenvectors, and how eigenvectors that belong to the same eigenvalue of one operator (degenerate in the first operator) may belong to different eigenvalues of the other operator (nondegenerate in the second operator).
This time, let me go the opposite way and show that if two operators have the same eigenvectors, then they will commute. I'll follow the explanation in Gilbert Strang's Linear Algebra and Its Applications [1].
The diagonalization matrix $S$
But first, "remember your linear algebra!" If one puts all eigenvectors of an operator into the columns of a matrix, that matrix will be the diagonalization matrix of the operator. What?
The matrix elements of an operator can be found in any (orthonormal) basis using the sandwich $\left\langle \alpha_m \left| A \right| \alpha_n \right\rangle = A_{mn}$. If the basis is chosen as the eigenvectors of the operator (and remember that, degenerate eigenvalues or not, one can build an orthonormal basis out of the eigenvectors as long as there are $n$ linearly independent ones), then the matrix representation of the operator will be diagonal. If $\left\{ \left| a_n \right\rangle \right\}$ is the set of orthonormal eigenvectors with corresponding eigenvalues $\left\{ a_n \right\}$, then $A_{mn}$ $= \left\langle a_m \left| A \right| a_n \right\rangle $ $= a_n \left\langle a_m | a_n \right\rangle$, which is $a_n \delta_{mn}$ due to orthonormality. Hence $$A_{mn} =
\begin{cases}
a_n & \text{ if } m=n \\
0 & \text{ if } m\neq n
\end{cases}$$
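To see the sandwich formula at work numerically, here is a minimal sketch; the Hermitian matrix below is an arbitrary example, not anything from the text:

```python
import numpy as np

# An arbitrary Hermitian (here real symmetric) operator for illustration.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])

# Columns of eigvecs are an orthonormal set of eigenvectors |a_n>.
eigvals, eigvecs = np.linalg.eigh(A)

# The sandwich <a_m| A |a_n> for every pair (m, n) at once:
A_mn = eigvecs.conj().T @ A @ eigvecs

# Off-diagonal sandwiches vanish; the diagonal holds the eigenvalues a_n.
print(np.allclose(A_mn, np.diag(eigvals)))  # True
```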
To express this process of 'representing an operator in its own eigenvector basis' as a single matrix multiplication, one uses a similarity transformation with the diagonalization matrix.
$$S=
\begin{pmatrix}
\uparrow & \uparrow & & \uparrow\\
\left| a_1 \right\rangle & \left| a_2 \right\rangle & \ldots & \left| a_n \right\rangle \\
\downarrow & \downarrow & & \downarrow
\end{pmatrix}
$$
whose inverse is
$$S^{-1}=
\begin{pmatrix}
\leftarrow & \left\langle a_1 \right| & \rightarrow \\
\leftarrow & \left\langle a_2 \right| & \rightarrow \\
& \vdots & \\
\leftarrow & \left\langle a_n \right| & \rightarrow \\
\end{pmatrix}$$
because their product is the identity, $S^{-1}S=I$, thanks to orthonormality. Now we see that $\tilde{A}=S^{-1}AS$ is the diagonalized version of $A$, because this product computes exactly the sandwiches $\left\langle a_m \left| A \right| a_n \right\rangle$ that gave the diagonal $A_{mn}$ above.
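As a quick numerical sketch of this construction (the matrix below is an arbitrary example): the columns of $S$ are the kets, its conjugate transpose supplies the bras, and the similarity transform comes out diagonal.

```python
import numpy as np

# Arbitrary Hermitian operator for illustration.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

eigvals, S = np.linalg.eigh(A)   # columns of S are the eigenvectors |a_n>

S_inv = S.conj().T               # rows of S^{-1} are the bras <a_n|

print(np.allclose(S_inv @ S, np.eye(2)))             # True: S^{-1} S = I
print(np.allclose(S_inv @ A @ S, np.diag(eigvals)))  # True: A-tilde is diagonal
```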
Simultaneous diagonalization of two operators [1]
It is time to show that if $A$ and $B$ have the same eigenvectors, and are hence diagonalized by the same diagonalization matrix (a fancier way of putting this is "if they are simultaneously diagonalizable"), then $\left[A,B\right]=0$.
$AB = S\tilde{A}S^{-1}S\tilde{B}S^{-1}$ $=S\tilde{A}\tilde{B}S^{-1}$. Similarly, $BA = S\tilde{B}\tilde{A}S^{-1}$. Therefore $AB-BA$ $=S\tilde{A}\tilde{B}S^{-1}$ $-S\tilde{B}\tilde{A}S^{-1}$ $=S\left(\tilde{A}\tilde{B}-\tilde{B}\tilde{A}\right)S^{-1}$ $=S\left[\tilde{A},\tilde{B}\right]S^{-1}$, which is $0$, because diagonal matrices always commute (another reason why we love diagonal matrices). (The point of using the same diagonalization matrix is that it cancels its own inverse in $S\tilde{A}S^{-1}S\tilde{B}S^{-1}$, which is what gives us the commutation in the end.)
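This argument is easy to check numerically; here is a minimal sketch where the shared eigenvector basis and the two sets of eigenvalues are chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random orthonormal basis: the columns of S (via QR of a random matrix).
S, _ = np.linalg.qr(rng.standard_normal((3, 3)))

# Two diagonal matrices: A and B share the eigenvectors sitting in S.
A_tilde = np.diag([1.0, 2.0, 3.0])
B_tilde = np.diag([5.0, 5.0, 7.0])

A = S @ A_tilde @ S.T   # S is real orthogonal, so S^{-1} = S.T
B = S @ B_tilde @ S.T

commutator = A @ B - B @ A
print(np.allclose(commutator, 0))  # True: [A, B] = 0
```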
Lifting the degeneracy
Let me show a hypothetical example of how the degeneracy is lifted by using a commuting operator.
Say $A$ is a $3\times 3$ (hermitian) matrix. When diagonalized using $S$, we get $$\tilde{A}=
\begin{pmatrix}
a_d & 0 & 0 \\
0 & a_d & 0 \\
0 & 0 & a_3
\end{pmatrix}$$ The first two eigenvalues are equal; hence the first two eigenvectors are degenerate. The set of eigenvectors, labelled according to their corresponding eigenvalues, is $\left\{ \left| a_{d}^{(1)} \right\rangle, \left|
a_{d}^{(2)} \right\rangle, \left| a_{3} \right\rangle \right\}$.
One can always find a commuting (hermitian) operator with different eigenvalues. How? Using the spectral theorem, of course! Say $B = b_1 \left| a_{d}^{(1)} \right\rangle \left\langle a_{d}^{(1)}\right|$ $+ b_d \left| a_{d}^{(2)} \right\rangle \left\langle a_{d}^{(2)}\right|$ $+ b_d \left| a_3 \right\rangle \left\langle a_3\right|$ and voila! $\tilde{B}$ in this basis is: $$
\begin{pmatrix}
b_1 & 0 & 0 \\
0 & b_d & 0 \\
0 & 0 & b_d
\end{pmatrix}$$
Now $B$ commutes with $A$ and has different eigenvalues where $A$'s coincide. Here we defined the eigenvectors of $B$ as $\left\{ \left| b_1 \right\rangle = \left| a_{d}^{(1)} \right\rangle \right.$, $\left| b_{d}^{(1)} \right\rangle = \left| a_{d}^{(2)} \right\rangle$, $ \left. \left| b_{d}^{(2)} \right\rangle = \left| a_3 \right\rangle \right\}$.
If we label these eigenvectors not by their eigenvalue under one operator but by their eigenvalues under both, we get $\left\{ \left| b_1, a_d \right\rangle \right.$ $, \left| b_d, a_d \right\rangle$ $,\left. \left| b_d, a_3 \right\rangle \right\}$. Here we have lifted the degeneracy: each eigenvector carries two corresponding eigenvalues, and each eigenvector gets a different pair. So lovely!
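The whole construction can be sketched numerically; the eigenvalues $a_d, a_3, b_1, b_d$ below are arbitrary placeholder numbers, and we work directly in the shared eigenbasis:

```python
import numpy as np

# Shared eigenbasis: |a_d^(1)>, |a_d^(2)>, |a_3> as the standard basis.
e1, e2, e3 = np.eye(3)

# A has a doubly degenerate eigenvalue a_d (arbitrary values chosen here).
a_d, a_3 = 1.0, 4.0
A = a_d * np.outer(e1, e1) + a_d * np.outer(e2, e2) + a_3 * np.outer(e3, e3)

# Spectral theorem: B is built from the same projectors, different eigenvalues.
b_1, b_d = 2.0, 5.0
B = b_1 * np.outer(e1, e1) + b_d * np.outer(e2, e2) + b_d * np.outer(e3, e3)

print(np.allclose(A @ B, B @ A))   # True: [A, B] = 0
# The eigenvalue pairs (b, a) label each basis vector uniquely:
for v in (e1, e2, e3):
    print((float(v @ B @ v), float(v @ A @ v)))
# (2.0, 1.0), (5.0, 1.0), (5.0, 4.0): three distinct pairs, degeneracy lifted
```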
Next time I will talk about CSCOs in the quantum mechanical context of the hydrogen atom, the free particle, and spin-$\tfrac{1}{2}$ systems.
For curious people, an example of a "pathological" or "defective" matrix from [1]: $\begin{pmatrix}
0 & 1 \\
0 & 0 \\
\end{pmatrix}$ Both of its eigenvalues are $0$, but all of its eigenvectors are multiples of $(1,0)$: they are not linearly independent, and they span a one-dimensional space. Hence it is impossible to construct a complete orthonormal basis from them; in other words, there is no $S$ matrix.
[1] Gilbert Strang, Linear Algebra and Its Applications, third edition, Section 5.2.