
Tuesday, February 21, 2012

Essential Linear Algebra for Projection Calculations

      
In QM, the probability of a measurement outcome, $p(R \in \Delta)$, is the squared norm of the component of the state vector on the related subspace, $\||\psi_\Delta\rangle\|^2$. The components of vectors on subspaces are found by applying a projection operator to the vector. Therefore an elementary knowledge of how these projection operators are constructed is essential. :-) It is "remember your linear algebra!" time. (In this entry I'll assume that the operator has a discrete spectrum and is finite dimensional.)
The set on the left consists of the possible outcomes of the measurement, hence the eigenvalues of the observable. The set on the right depicts the vector subspaces corresponding to those eigenvalues.
The set on the left, $\{r_n\}$, is the set of all $N$ eigenvalues of the hermitian operator $R$, related to the observable/dynamical variable $R$. Each dot represents an eigenvalue. (There may be more than one eigenvalue with the same value, which results in degeneracy; these are called degenerate eigenvalues.) $\Delta$ is a subset of the whole set of eigenvalues. A measurement of $R$ will give only one of the elements of $\{r_n\}$. We are looking for the probability that the outcome will be in the set $\Delta$. The probability distribution $p(R=r)$ (the probability that the outcome will be a certain $r$) depends both on the observable and on the quantum state.

The set on the right, $H$, is the $N$-dimensional Hilbert space. (A Hilbert space is an abstract vector space on which an inner product is defined to measure the lengths of vectors.) The state of a quantum system is described by a vector $|\psi\rangle$ that lives in $H$. For each eigenvalue $r_n$ in the set $\{r_n\}$ there is a corresponding vector $|r_n\rangle$ in $H$ which satisfies the relation $R|r_n\rangle = r_n|r_n\rangle$. That vector is called the eigenvector of $R$ (corresponding to the eigenvalue $r_n$).

Linear algebra tells us that the eigenvectors of an $N \times N$ hermitian operator $R$, $\{|r_n\rangle\}$, form a complete set that spans all of the $N$-dimensional Hilbert space: $\mathrm{Span}\{|r_n\rangle\} = H$. As we talked about previously, if $R$ is non-degenerate (all eigenvalues are distinct), then $\{|r_n\rangle\}$ is not only a complete set but a complete orthonormal set (CONS), whose elements satisfy the relation $\langle r_m|r_n\rangle = \delta_{mn}$. If $R$ is degenerate, then the eigenvectors in $\{|r_n\rangle\}$ are only linearly independent (LI) (assuming $R$ is not a pathological case), but one can always construct a CONS from an LI set (remember the Gram-Schmidt process). Any vector $|\psi\rangle$ in $H$ can be written as a linear combination of the elements of $\{|r_n\rangle\}$ or of its orthonormalized version $\{|u_n\rangle\}$.
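To see the Gram-Schmidt process in action, here is a minimal NumPy sketch (the function name `gram_schmidt` is my own; real code would typically just use a QR decomposition):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent set via the Gram-Schmidt process."""
    basis = []
    for v in vectors:
        # Subtract the components along the already-built orthonormal vectors.
        w = v - sum(np.dot(u, v) * u for u in basis)
        basis.append(w / np.linalg.norm(w))
    return basis

# A linearly independent (but not orthogonal) pair in R^3:
r1 = np.array([1.0, 1.0, 0.0])
r2 = np.array([1.0, 0.0, 1.0])
u1, u2 = gram_schmidt([r1, r2])

print(np.dot(u1, u2))                          # ~0: orthogonal
print(np.linalg.norm(u1), np.linalg.norm(u2))  # both 1: normalized
```

The orthonormalized set spans the same subspace as the original LI set, which is all the projection construction below will need.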

The set of eigenvectors $\{|r_\Delta\rangle\}$ corresponding to the subset $\Delta$ of $\{r_n\}$ cannot span the whole of $H$, but only a subset of it, which we will call $V$: $\mathrm{Span}\{|r_\Delta\rangle\} = V$. If $\Delta$ has $M$ elements then $\{|r_\Delta\rangle\}$ does too, and $V$ is an $M$-dimensional subspace of $H$. We will call the rest of the Hilbert space $V^\perp$, the orthogonal complement of $V$: $H = V \oplus V^\perp$ and $V \cap V^\perp = \{0\}$.

Any vector $|\psi\rangle$ can be written as the sum of its two components, one belonging to $V$ and the other to $V^\perp$: $|\psi\rangle = |\psi_V\rangle + |\psi_{V^\perp}\rangle$, with $|\psi_V\rangle \in V$ and $|\psi_{V^\perp}\rangle \in V^\perp$. (We could equally well denote $|\psi_V\rangle$ by $|\psi_\Delta\rangle$.)

Our aim is to construct the projection operator $M_R(\Delta)$ which will give $|\psi_V\rangle$ when applied to any $|\psi\rangle$, for a chosen operator $R$ and range $\Delta$.

Constructing the Projection Operator onto a subspace using an LI set of vectors that span that subspace

Simple Case: Projection onto a line in $\mathbb{R}^2$


First let me demonstrate this by working in $\mathbb{R}^2$ with projection onto a line $L$. Say our subspace is the 1D line $L$, which is spanned by a vector $\vec r$: $L = \{c\vec r \mid c \in \mathbb{R}\} = \mathrm{Span}\{\vec r\}$. We write $\mathrm{Proj}_L(\vec v) \equiv \vec v_L$ and $\vec v - \mathrm{Proj}_L(\vec v) = \vec v_{L^\perp}$.

We can express the projection mathematically using this relation: $\vec v - \mathrm{Proj}_L(\vec v) \perp L$. This means that the projection of $\vec v$ onto $L$ is a vector $\mathrm{Proj}_L(\vec v)$ on $L$ whose difference from $\vec v$ is perpendicular to all vectors on $L$. Any vector on $L$ can be described as a real multiple of $\vec r$; hence $\vec v_L = c\vec r$. To find the projection, we have to find this $c$.

The inner product of two perpendicular vectors is 0: $\vec a \cdot \vec b = 0$ if $\vec a \perp \vec b$. Using the perpendicularity between $\vec v - \vec v_L = \vec v_{L^\perp}$ and $L$ we can write
$$(\vec v - c\vec r)\cdot\vec r = 0 \;\Rightarrow\; \vec v\cdot\vec r - c\,\vec r\cdot\vec r = 0 \;\Rightarrow\; c = \frac{\vec v\cdot\vec r}{\vec r\cdot\vec r}$$
$$\mathrm{Proj}_L(\vec v) = \mathrm{Proj}_{\mathrm{Span}\{\vec r\}}(\vec v) = \vec r\,\frac{\vec v\cdot\vec r}{\vec r\cdot\vec r}$$
$\vec r$ is not unique; it can be any vector on $L$. Picking a unit vector will simplify the calculations. Let $\hat u = \vec r/\|\vec r\|$; the projection becomes $\mathrm{Proj}_{\mathrm{Span}\{\hat u\}}(\vec v) = \hat u(\hat u\cdot\vec v)$.

(Projection is a linear operation)
Let me show that this operation is linear.

  1. $\mathrm{Proj}_L(\vec a + \vec b) = \hat u(\hat u\cdot(\vec a + \vec b)) = \hat u(\hat u\cdot\vec a + \hat u\cdot\vec b) = \hat u(\hat u\cdot\vec a) + \hat u(\hat u\cdot\vec b) = \mathrm{Proj}_L(\vec a) + \mathrm{Proj}_L(\vec b)$
  2. $\mathrm{Proj}_L(c\vec a) = \hat u(\hat u\cdot c\vec a) = c\,\hat u(\hat u\cdot\vec a) = c\,\mathrm{Proj}_L(\vec a)$.
Hence $\mathrm{Proj}_L$ is a linear operation and therefore it can be represented by a matrix: $\mathrm{Proj}_L(\vec a) = M\vec a$. Let's find the matrix elements of $M$ by sandwiching it between unit basis vectors: $M_{mn} = \langle m|M|n\rangle$, where $|1\rangle = \binom{1}{0}$ and $|2\rangle = \binom{0}{1}$.
$$M_{mn} = \langle m|M|n\rangle = \langle m|\mathrm{Proj}_L(|n\rangle) = \vec m\cdot\big(\hat u(\hat u\cdot\vec n)\big) = (\hat u\cdot\vec n)(\vec m\cdot\hat u) = u_m u_n,$$
or
$$M_{mn} = \langle m|\big(|u\rangle\langle u|\big)|n\rangle = \langle m|u\rangle\langle u|n\rangle = u_m u_n$$
Hence $M_L = \begin{pmatrix} u_x^2 & u_x u_y \\ u_x u_y & u_y^2 \end{pmatrix} = |u\rangle\langle u|$.
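The line-projection formula above is easy to check numerically. A minimal NumPy sketch (the particular vectors $\vec r$ and $\vec v$ are just examples I picked):

```python
import numpy as np

# Projection onto the line L spanned by r in R^2: M_L = |u><u| = u u^T, u = r/|r|.
r = np.array([3.0, 4.0])
u = r / np.linalg.norm(r)            # unit vector along L
M_L = np.outer(u, u)                 # the 2x2 projection matrix

v = np.array([2.0, 1.0])
v_L = M_L @ v                        # component of v on L

print(np.dot(v - v_L, r))            # ~0: the residual is perpendicular to L
print(np.allclose(M_L @ M_L, M_L))   # True: projecting twice changes nothing
```

The idempotence check ($M_L^2 = M_L$) is the defining property of any projection matrix, and it is worth verifying whenever you construct one.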

General Case: Projection onto a $K$-dimensional subspace of $\mathbb{R}^N$

In the general case $H$ is $N$-dimensional, and the subspace $V$ is $K$-dimensional. Say $V$ is spanned by a set of $K$ LI vectors $\{\vec r_1, \dots, \vec r_K\}$. Then any vector in $V$ can be expressed as a linear combination of the $\vec r_i$'s: $\vec x \in V \Rightarrow \vec x = \sum_i^K c_i \vec r_i$.

If we think of these $c_i$'s as components of a vector $\vec c$, then this relation between $\vec x$, the $c_i$'s and $\{\vec r_i\}$ can be written as a matrix multiplication. Define the $N \times K$ matrix $A$ whose columns are the basis vectors: $A = (\vec r_1 \;\; \vec r_2 \;\; \cdots \;\; \vec r_K)$
Then, for each $\vec x \in V$, $\vec x = A\vec c$, where $\vec c$ is unique to the chosen $\vec x$. The projection of a vector $\vec v \in H$ onto $V$, called $\vec v_V$, is a vector in $V$; hence it can be expressed in the same matrix-multiplication form. $\vec v = \vec v_V + \vec v_{V^\perp}$, or $\vec v = \mathrm{Proj}_V(\vec v) + \mathrm{Proj}_{V^\perp}(\vec v)$, and $\mathrm{Proj}_V(\vec v) = A\vec c$.

According to this definition of $A$, $V$ is the "column space" of $A$: $V = C(A)$. The column space of a matrix is the subspace spanned by its column vectors. Some deep and mysterious relations of linear algebra tell us that the complementary subspace of $V$, which is $V^\perp$, is the left null space of $A$, i.e. the null space of $A$ transpose: $V^\perp = C(A)^\perp = N(A^T)$. Therefore $\mathrm{Proj}_{V^\perp}(\vec v) \in N(A^T)$.

If a vector belongs to the null space of a matrix, it means that when the matrix is applied to that vector the result is the null vector:
$$A^T\vec v_{V^\perp} = 0 \;\Rightarrow\; A^T\big(\vec v - \mathrm{Proj}_V(\vec v)\big) = A^T(\vec v - A\vec c) = A^T\vec v - A^T A\vec c = 0 \;\Rightarrow\; A^T\vec v = A^T A\vec c$$
Another linear algebra proverb says that if the columns of a matrix $A$ are linearly independent, then $A^T A$ (which is a $(K \times N)(N \times K) = K \times K$ square matrix) is invertible. Therefore $\vec c = (A^T A)^{-1} A^T \vec v$. This is the way of calculating the coefficients $c_i$ of the linear combination (in the form of $\vec c$) that expands the projection vector $\vec v_V$ in this LI basis of $C(A)$. Remember $\vec v_V = A\vec c$. Hence:
$$\mathrm{Proj}_V(\vec v) = A(A^T A)^{-1} A^T \vec v$$
$A(A^T A)^{-1} A^T$ is the projection matrix $M_V \equiv M_R(\Delta)$ that we are looking for. It depends only on the subspace, not on the basis we chose to span that subspace. Different bases will give different $A$ matrices, but $M_V$ will be the same for all of them. In QM, the eigenvectors of the observables, the hermitian matrices, are orthonormal (or the degenerate eigenvectors can be orthogonalized), so it is worth looking at the orthonormal basis case. And in general, from any linearly independent set of vectors, one can create an orthonormal basis with the Gram-Schmidt process.
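The basis independence claimed above can be checked directly. A NumPy sketch (the two bases for the plane are examples I chose; the second is built from sums and differences of the first):

```python
import numpy as np

def projection_matrix(A):
    """M_V = A (A^T A)^{-1} A^T, for a matrix A whose LI columns span V."""
    return A @ np.linalg.inv(A.T @ A) @ A.T

# V is the plane in R^3 spanned by the columns of A:
A = np.column_stack([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
M = projection_matrix(A)

# A different LI basis for the same plane (col1+col2 and col1-col2)
# gives the same projection matrix:
B = np.column_stack([[1.0, 1.0, 2.0], [1.0, -1.0, 0.0]])
print(np.allclose(projection_matrix(B), M))  # True

# M is idempotent and leaves vectors already in V unchanged:
print(np.allclose(M @ M, M))                 # True
print(np.allclose(M @ A, A))                 # True
```

(In numerical practice one would avoid the explicit inverse and use a QR or least-squares solve instead, but the formula above mirrors the derivation.)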

Assume the orthonormal basis $\{\hat u_1, \dots, \hat u_K\}$ spans the $K$-dimensional subspace $V$. This time $A^T A$ will be the identity $I$:
$$A^T A = \begin{pmatrix} \hat u_1^T \\ \hat u_2^T \\ \vdots \\ \hat u_K^T \end{pmatrix}\big(\hat u_1 \;\; \hat u_2 \;\; \cdots \;\; \hat u_K\big) = \begin{pmatrix} \hat u_1\cdot\hat u_1 & \hat u_1\cdot\hat u_2 & \cdots & \hat u_1\cdot\hat u_K \\ \hat u_2\cdot\hat u_1 & \hat u_2\cdot\hat u_2 & \cdots & \hat u_2\cdot\hat u_K \\ \vdots & \vdots & \ddots & \vdots \\ \hat u_K\cdot\hat u_1 & \hat u_K\cdot\hat u_2 & \cdots & \hat u_K\cdot\hat u_K \end{pmatrix} = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}$$

Hence the expression for the projection operator reduces to $\mathrm{Proj}_V(\vec v) = A A^T \vec v$. Let us explicitly calculate its matrix elements. Let $u_{m,n}$ be the $n$th component of $\hat u_m$:
$$A A^T = \big(\hat u_1 \;\; \hat u_2 \;\; \cdots \;\; \hat u_K\big)\begin{pmatrix} \hat u_1^T \\ \vdots \\ \hat u_K^T \end{pmatrix} = \begin{pmatrix} u_{1,1} & u_{2,1} & \cdots & u_{K,1} \\ u_{1,2} & u_{2,2} & \cdots & u_{K,2} \\ \vdots & \vdots & \ddots & \vdots \\ u_{1,N} & u_{2,N} & \cdots & u_{K,N} \end{pmatrix}\begin{pmatrix} u_{1,1} & u_{1,2} & \cdots & u_{1,N} \\ u_{2,1} & u_{2,2} & \cdots & u_{2,N} \\ \vdots & \vdots & \ddots & \vdots \\ u_{K,1} & u_{K,2} & \cdots & u_{K,N} \end{pmatrix}$$
$$= \begin{pmatrix} \sum_i^K u_{i,1}u_{i,1} & \sum_i^K u_{i,1}u_{i,2} & \cdots & \sum_i^K u_{i,1}u_{i,N} \\ \sum_i^K u_{i,2}u_{i,1} & \sum_i^K u_{i,2}u_{i,2} & \cdots & \sum_i^K u_{i,2}u_{i,N} \\ \vdots & \vdots & \ddots & \vdots \\ \sum_i^K u_{i,N}u_{i,1} & \sum_i^K u_{i,N}u_{i,2} & \cdots & \sum_i^K u_{i,N}u_{i,N} \end{pmatrix} = \sum_i^K \begin{pmatrix} u_{i,1}u_{i,1} & u_{i,1}u_{i,2} & \cdots & u_{i,1}u_{i,N} \\ u_{i,2}u_{i,1} & u_{i,2}u_{i,2} & \cdots & u_{i,2}u_{i,N} \\ \vdots & \vdots & \ddots & \vdots \\ u_{i,N}u_{i,1} & u_{i,N}u_{i,2} & \cdots & u_{i,N}u_{i,N} \end{pmatrix} = \sum_i^K |u_i\rangle\langle u_i|$$
If the individual terms of the sum are thought of as projections onto the lines spanned by the $\hat u_i$, then the projection onto the subspace is the sum of the projections onto these orthogonal lines. This seems plausible because each term gives one component of the projection vector in the $\{\hat u_i\}$ basis. Hence, using an orthonormal basis to span $V$, we get
$$M_V = A A^T = \sum_i^K |u_i\rangle\langle u_i| = \sum_i^K M_{L_i}$$

Note that summing over a full orthonormal basis of $H$ gives $M_H = \sum_i^N |u_i\rangle\langle u_i| = I$.
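Both the sum-of-outer-products form and the completeness relation are one-liners to verify numerically. A sketch using the standard basis of $\mathbb{R}^3$ as the orthonormal set:

```python
import numpy as np

# With orthonormal columns u_i, M_V = A A^T = sum_i |u_i><u_i|.
u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.0, 1.0, 0.0])
A = np.column_stack([u1, u2])

M_V = A @ A.T
M_sum = np.outer(u1, u1) + np.outer(u2, u2)
print(np.allclose(M_V, M_sum))        # True: A A^T equals the sum of line projections

# Summing over a full orthonormal basis of H gives the identity (completeness):
u3 = np.array([0.0, 0.0, 1.0])
M_H = M_sum + np.outer(u3, u3)
print(np.allclose(M_H, np.eye(3)))    # True
```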

1 The whole idea of the $A$ matrix and the calculations of the projection operators are stolen from Khan Academy's linear algebra online lectures.
