Quantum Mechanics for Scientists and Engineers Notes 6


###1. Types of Linear Operators

####1.1. Bilinear expansion of operators

We know that we can expand a function in a basis set, as in $f(x) = \sum_n c_n \psi_n(x)$ or, in Dirac notation, $|f\rangle = \sum_n c_n |\psi_n\rangle$. What is the equivalent expansion for an operator? We can deduce this from our matrix representation.

Consider an arbitrary function $f$, written as the ket $|f\rangle$, from which we can calculate a function $g$, written as the ket $|g\rangle$, by acting with a specific operator $\hat{A}$:

$$|g\rangle = \hat{A}|f\rangle$$

We expand $|f\rangle$ and $|g\rangle$ on the basis set $\{|\psi_n\rangle\}$: $|f\rangle = \sum_j c_j|\psi_j\rangle$, $|g\rangle = \sum_i d_i|\psi_i\rangle$. From our matrix representation of $\hat{A}$, we know that $d_i = \sum_j A_{ij} c_j$, and, by the definition of the expansion coefficient, we know that $c_j = \langle\psi_j|f\rangle$, so

$$d_i = \sum_j A_{ij}\langle\psi_j|f\rangle$$

Substituting this into the expansion of $|g\rangle$ gives

$$|g\rangle = \sum_i d_i|\psi_i\rangle = \sum_{i,j}|\psi_i\rangle A_{ij}\langle\psi_j|f\rangle$$

Remember that $A_{ij}$ is simply a number, so we can move it within the multiplication expression. Hence we have

$$|g\rangle = \left(\sum_{i,j} A_{ij}|\psi_i\rangle\langle\psi_j|\right)|f\rangle$$

But $|g\rangle = \hat{A}|f\rangle$, and $|f\rangle$ is arbitrary, so

$$\hat{A} = \sum_{i,j} A_{ij}|\psi_i\rangle\langle\psi_j|$$

This form is referred to as a “bilinear expansion” of the operator on the basis $\{|\psi_n\rangle\}$ and is analogous to the linear expansion of a vector on a basis. Any linear operator that operates within the space can be written this way.

Though the Dirac notation is more general and elegant, for functions of a simple variable $x$, where the operator acts as $g(x) = \int A(x, x') f(x')\,dx'$, we can analogously write the bilinear expansion in the form

$$A(x, x') = \sum_{i,j} A_{ij}\,\psi_i(x)\,\psi_j^*(x')$$

The Dirac form of the expansion contains an outer product of two vectors. An outer product expression of the form $|\psi_i\rangle\langle\psi_j|$ generates a matrix, so the bilinear expansion $\sum_{i,j} A_{ij}|\psi_i\rangle\langle\psi_j|$ is actually a sum of matrices. In the matrix $|\psi_i\rangle\langle\psi_j|$, represented on the basis $\{|\psi_n\rangle\}$, the element in the $j$th column and $i$th row is 1, and all other elements are zero.
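As a concrete illustration, here is a minimal sketch in Python with NumPy (the 3×3 matrix values are an arbitrary assumption, not from the notes) that rebuilds an operator from its bilinear expansion, with `np.outer` forming the matrices $|\psi_i\rangle\langle\psi_j|$:

```python
import numpy as np

# Arbitrary example operator on a 3-dimensional space (assumed values)
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [2.0, 0.0, 1.0]])

# Standard basis vectors |psi_i>; np.outer(e_i, e_j) is |psi_i><psi_j|,
# the matrix with a 1 in row i, column j and zeros everywhere else.
e = np.eye(3)
A_rebuilt = sum(A[i, j] * np.outer(e[:, i], e[:, j])
                for i in range(3) for j in range(3))

print(np.allclose(A, A_rebuilt))  # True: the bilinear expansion reproduces A
```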

####1.2. The identity operator

The identity operator $\hat{I}$ is the operator that, when it operates on a vector (function), leaves it unchanged. In matrix form, the identity operator is

$$\hat{I} = \begin{bmatrix} 1 & 0 & 0 & \cdots \\ 0 & 1 & 0 & \cdots \\ 0 & 0 & 1 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix}$$

In bra-ket form, the identity operator can be written as

$$\hat{I} = \sum_n |\psi_n\rangle\langle\psi_n|$$

where the $|\psi_n\rangle$ form a complete basis for the space. This statement is trivial if $\{|\psi_n\rangle\}$ is the basis used to represent the space: each $|\psi_n\rangle\langle\psi_n|$ is then a matrix whose only non-zero element is a 1 in the $n$th diagonal position, and the sum of these matrices gives the identity matrix.

Note, however, that this relation holds even if the basis used for the representation is not the set $\{|\psi_n\rangle\}$. Then a specific $|\psi_n\rangle$ is not a vector with an $n$th element of 1 and all other elements 0, and the matrix $|\psi_n\rangle\langle\psi_n|$ in general has possibly all of its elements non-zero. Nonetheless, the sum of all such matrices still gives the identity matrix $\hat{I}$.

The expression above has a simple vector meaning. In the expression $|f\rangle = \sum_n |\psi_n\rangle\langle\psi_n|f\rangle$, the number $\langle\psi_n|f\rangle$ is just the projection of $|f\rangle$ onto the $|\psi_n\rangle$ axis, so multiplying $|\psi_n\rangle$ by that projection gives the vector component of $|f\rangle$ along the $|\psi_n\rangle$ axis.
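The following sketch (an assumed 4-dimensional random example) checks both points numerically: in a basis other than the standard one, each matrix $|\psi_n\rangle\langle\psi_n|$ generally has all elements non-zero, yet the sum over the complete basis is still the identity:

```python
import numpy as np

rng = np.random.default_rng(1)
# QR factorization of a random complex matrix gives an orthonormal basis
# as the columns of Q -- generally not the standard basis.
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Q, _ = np.linalg.qr(M)

projectors = [np.outer(Q[:, n], Q[:, n].conj()) for n in range(4)]
print(projectors[0])                            # generally all non-zero
print(np.allclose(sum(projectors), np.eye(4)))  # True: sum is the identity
```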

Since the identity operator is the identity operator no matter what complete orthonormal basis we use to represent it, we can use the following trick: first, we “insert” the identity operator, written in some basis, into an expression; then, we rearrange the expression; finally, we find an identity operator we can take back out of the result.

Consider the sum of the diagonal elements of an operator $\hat{A}$ on some complete orthonormal basis $\{|\psi_n\rangle\}$:

$$S = \sum_n \langle\psi_n|\hat{A}|\psi_n\rangle$$

Now suppose we have some other complete orthonormal basis $\{|\phi_m\rangle\}$. We can therefore also write the identity operator as

$$\hat{I} = \sum_m |\phi_m\rangle\langle\phi_m|$$

In $S$, we can insert this identity operator just before $\hat{A}$, which makes no difference to the result since $\hat{I}\hat{A} = \hat{A}$, so we have

$$S = \sum_n \langle\psi_n|\left(\sum_m |\phi_m\rangle\langle\phi_m|\right)\hat{A}|\psi_n\rangle$$

Then we can rearrange the above equation in the following sequence:

$$S = \sum_{n,m}\langle\psi_n|\phi_m\rangle\langle\phi_m|\hat{A}|\psi_n\rangle = \sum_{m,n}\langle\phi_m|\hat{A}|\psi_n\rangle\langle\psi_n|\phi_m\rangle = \sum_m \langle\phi_m|\hat{A}\left(\sum_n |\psi_n\rangle\langle\psi_n|\right)|\phi_m\rangle = \sum_m \langle\phi_m|\hat{A}|\phi_m\rangle$$

where we used the fact that $\langle\psi_n|\phi_m\rangle$ and $\langle\phi_m|\hat{A}|\psi_n\rangle$ are simply numbers that can be reordered, and then took the identity operator $\sum_n |\psi_n\rangle\langle\psi_n|$ back out.

Hence the trace of an operator, $\operatorname{Tr}(\hat{A}) = \sum_n \langle\psi_n|\hat{A}|\psi_n\rangle$ (the sum of the diagonal elements), is independent of the basis used to represent the operator.
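A quick numerical check (assumed 4×4 random example): the sum of diagonal elements of $\hat{A}$ taken in a random orthonormal basis agrees with the trace in the standard basis:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

# Columns of Q form a complete orthonormal basis |phi_m>
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))

# Sum of diagonal elements <phi_m| A |phi_m> in the new basis
trace_new = sum(np.vdot(Q[:, m], A @ Q[:, m]) for m in range(4))
print(np.allclose(trace_new, np.trace(A)))  # True: trace is basis-independent
```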

####1.3. Inverse and unitary operators

For an operator $\hat{A}$ acting on an arbitrary function $|f\rangle$, the inverse operator $\hat{A}^{-1}$, if it exists, is the operator such that

$$|f\rangle = \hat{A}^{-1}\hat{A}|f\rangle$$

Since the function $|f\rangle$ is arbitrary, we can therefore identify

$$\hat{A}^{-1}\hat{A} = \hat{I}$$

Since the operator $\hat{A}$ can be represented by a matrix, finding the inverse of the operator reduces to finding the inverse of a matrix.

A unitary operator $\hat{U}$ is one for which

$$\hat{U}^{-1} = \hat{U}^\dagger$$

that is, its inverse is its Hermitian adjoint.

Note first that it can be shown generally, for two matrices $\hat{A}$ and $\hat{B}$ that can be multiplied, that

$$(\hat{A}\hat{B})^\dagger = \hat{B}^\dagger\hat{A}^\dagger$$

That is, the Hermitian adjoint of the product is the “flipped round” product of the Hermitian adjoints. Explicitly, for matrix-vector multiplication, the Hermitian adjoint of $\hat{U}|f\rangle$ is $\langle f|\hat{U}^\dagger$.

Consider the unitary operator $\hat{U}$ and vectors $|f_{\mathrm{old}}\rangle$ and $|g_{\mathrm{old}}\rangle$. We form two new vectors by operating with $\hat{U}$:

$$|f_{\mathrm{new}}\rangle = \hat{U}|f_{\mathrm{old}}\rangle, \qquad |g_{\mathrm{new}}\rangle = \hat{U}|g_{\mathrm{old}}\rangle$$

Then $\langle g_{\mathrm{new}}| = \langle g_{\mathrm{old}}|\hat{U}^\dagger$.

So,

$$\langle g_{\mathrm{new}}|f_{\mathrm{new}}\rangle = \langle g_{\mathrm{old}}|\hat{U}^\dagger\hat{U}|f_{\mathrm{old}}\rangle = \langle g_{\mathrm{old}}|f_{\mathrm{old}}\rangle$$

Hence, a unitary operation does not change the inner product. In particular, since $\langle f|f\rangle$ is the squared length of the vector, the length of a vector is not changed by a unitary operator.
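A short numeric illustration (random example with assumed dimensions): a unitary matrix leaves the inner product $\langle g|f\rangle$, and hence vector lengths, unchanged:

```python
import numpy as np

rng = np.random.default_rng(3)
# QR of a random complex matrix yields a random unitary U
U, _ = np.linalg.qr(rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5)))

f = rng.normal(size=5) + 1j * rng.normal(size=5)
g = rng.normal(size=5) + 1j * rng.normal(size=5)
f_new, g_new = U @ f, U @ g

print(np.allclose(np.vdot(g_new, f_new), np.vdot(g, f)))     # <g|f> unchanged
print(np.allclose(np.linalg.norm(f_new), np.linalg.norm(f))) # length unchanged
```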

###2. Unitary and Hermitian Operators

####2.1. Using unitary operators

Suppose that we have a vector (function) $|f_{\mathrm{old}}\rangle$ that is represented, when expressed as an expansion on the functions $|\psi_n\rangle$, as the mathematical column vector

$$|f_{\mathrm{old}}\rangle \to \begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ \vdots \end{bmatrix}$$

These numbers $c_1$, $c_2$, $c_3$, … are the projections of $|f_{\mathrm{old}}\rangle$ on the orthogonal coordinate axes in the vector space labeled with $|\psi_1\rangle$, $|\psi_2\rangle$, $|\psi_3\rangle$, …

Suppose we want to represent this vector on a new set of orthogonal axes, which we will label $|\phi_1\rangle$, $|\phi_2\rangle$, $|\phi_3\rangle$, … Changing the axes, which is equivalent to changing the basis set of functions, does not change the vector we are representing, but it does change the column of numbers used to represent the vector.

Writing the projection onto each new basis vector, $d_m = \langle\phi_m|f_{\mathrm{old}}\rangle = \sum_n \langle\phi_m|\psi_n\rangle c_n$, we get the correct transformation if we define a matrix

$$\hat{U} = \begin{bmatrix} u_{11} & u_{12} & u_{13} & \cdots \\ u_{21} & u_{22} & u_{23} & \cdots \\ u_{31} & u_{32} & u_{33} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix}$$

where $u_{mn} = \langle\phi_m|\psi_n\rangle$, and we define our new column of numbers (the $d_m$) through

$$|f_{\mathrm{new}}\rangle = \hat{U}|f_{\mathrm{old}}\rangle$$

Now we can prove that $\hat{U}$ is unitary. Writing the matrix multiplication $\hat{U}^\dagger\hat{U}$ in its sum form,

$$\left(\hat{U}^\dagger\hat{U}\right)_{ij} = \sum_m u_{mi}^*\,u_{mj} = \sum_m \langle\psi_i|\phi_m\rangle\langle\phi_m|\psi_j\rangle = \langle\psi_i|\psi_j\rangle = \delta_{ij}$$

So $\hat{U}^\dagger\hat{U} = \hat{I}$; hence $\hat{U}$ is unitary.
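As a sketch (assumed small random example), we can build $\hat{U}$ element by element from $u_{mn} = \langle\phi_m|\psi_n\rangle$ for two random orthonormal bases and confirm $\hat{U}^\dagger\hat{U} = \hat{I}$:

```python
import numpy as np

rng = np.random.default_rng(4)

def random_basis(d):
    # Columns of Q form a random orthonormal basis of a d-dimensional space
    Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return Q

psi, phi = random_basis(4), random_basis(4)   # "old" and "new" bases

# u_mn = <phi_m | psi_n>  (np.vdot conjugates its first argument)
U = np.array([[np.vdot(phi[:, m], psi[:, n]) for n in range(4)]
              for m in range(4)])

print(np.allclose(U.conj().T @ U, np.eye(4)))  # True: U is unitary
```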

Consider a number such as $\langle g|\hat{A}|f\rangle$, where the vectors $|f\rangle$ and $|g\rangle$ and the operator $\hat{A}$ are arbitrary. This result should not depend on the coordinate system, so the result in an “old” coordinate system should be the same as in a “new” coordinate system; that is, we should have

$$\langle g_{\mathrm{new}}|\hat{A}_{\mathrm{new}}|f_{\mathrm{new}}\rangle = \langle g_{\mathrm{old}}|\hat{A}_{\mathrm{old}}|f_{\mathrm{old}}\rangle$$

Note that the subscripts “new” and “old” refer to the representations, not to the vectors (or operators) themselves, which are not changed by a change of representation; only the numbers that represent them are changed. With the unitary operator $\hat{U}$ that takes us from the “old” to the “new” system, we can write

$$\langle g_{\mathrm{new}}|\hat{A}_{\mathrm{new}}|f_{\mathrm{new}}\rangle = \langle g_{\mathrm{old}}|\hat{U}^\dagger\hat{A}_{\mathrm{new}}\hat{U}|f_{\mathrm{old}}\rangle$$

so we can identify $\hat{A}_{\mathrm{old}} = \hat{U}^\dagger\hat{A}_{\mathrm{new}}\hat{U}$, or equivalently $\hat{A}_{\mathrm{new}} = \hat{U}\hat{A}_{\mathrm{old}}\hat{U}^\dagger$, since $\hat{U}\hat{U}^\dagger = \hat{U}^\dagger\hat{U} = \hat{I}$.
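Continuing the sketch (random example), transforming the vectors with $\hat{U}$ and the operator with $\hat{A}_{\mathrm{new}} = \hat{U}\hat{A}_{\mathrm{old}}\hat{U}^\dagger$ indeed leaves the number $\langle g|\hat{A}|f\rangle$ unchanged:

```python
import numpy as np

rng = np.random.default_rng(5)
U, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))

A_old = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
f_old = rng.normal(size=4) + 1j * rng.normal(size=4)
g_old = rng.normal(size=4) + 1j * rng.normal(size=4)

A_new = U @ A_old @ U.conj().T        # operator in the new representation
f_new, g_new = U @ f_old, U @ g_old   # vectors in the new representation

old = np.vdot(g_old, A_old @ f_old)   # <g|A|f> in the old representation
new = np.vdot(g_new, A_new @ f_new)   # <g|A|f> in the new representation
print(np.allclose(old, new))          # True: the number is representation-free
```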

If the quantum state $|\psi\rangle$ is expanded on the basis $\{|\psi_n\rangle\}$ to give

$$|\psi\rangle = \sum_n c_n|\psi_n\rangle$$

then $\langle\psi|\psi\rangle = \sum_n |c_n|^2$. If the particle is to be conserved, then this sum is retained as the quantum mechanical system evolves in time. Hence a unitary operator, which conserves length, describes changes that conserve the particle.

####2.2. Hermitian operators

A Hermitian operator is equal to its own Hermitian adjoint:

$$\hat{M}^\dagger = \hat{M}$$

Equivalently, it is self-adjoint. In matrix terms, Hermiticity implies $M_{ij} = M_{ji}^*$ for all $i$ and $j$; in particular, the diagonal elements of a Hermitian operator must be real.

To understand Hermiticity in the most general sense, consider the number $\langle g|\hat{M}|f\rangle$ for arbitrary $|f\rangle$ and $|g\rangle$ and some operator $\hat{M}$.

We examine its Hermitian adjoint. Since this is just a number, its Hermitian adjoint is simply its complex conjugate:

$$\left(\langle g|\hat{M}|f\rangle\right)^\dagger = \langle g|\hat{M}|f\rangle^*$$

We can also analyze $\left(\langle g|\hat{M}|f\rangle\right)^\dagger$ using the rule for Hermitian adjoints of products. So

$$\left(\langle g|\hat{M}|f\rangle\right)^\dagger = \langle f|\hat{M}^\dagger|g\rangle$$

Hence, if $\hat{M}$ is Hermitian, with therefore $\hat{M}^\dagger = \hat{M}$, then

$$\langle f|\hat{M}|g\rangle = \langle g|\hat{M}|f\rangle^*$$

even if $|f\rangle$ and $|g\rangle$ are not orthogonal.

In integral form, for functions $f(x)$ and $g(x)$, the statement above can be written

$$\int f^*(x)\,\hat{M}\,g(x)\,dx = \left[\int g^*(x)\,\hat{M}\,f(x)\,dx\right]^*$$

We can rewrite the right-hand side, and a simple rearrangement leads to

$$\int f^*(x)\,\hat{M}\,g(x)\,dx = \int \left[\hat{M}\,f(x)\right]^* g(x)\,dx$$

which is a common statement of Hermiticity in integral form.

Suppose $|\psi_n\rangle$ is a normalized eigenvector of the Hermitian operator $\hat{M}$ with eigenvalue $\mu_n$. Then by definition

$$\hat{M}|\psi_n\rangle = \mu_n|\psi_n\rangle$$

Therefore

$$\mu_n = \langle\psi_n|\hat{M}|\psi_n\rangle$$

But from the Hermiticity of $\hat{M}$ we know

$$\mu_n^* = \langle\psi_n|\hat{M}|\psi_n\rangle^* = \langle\psi_n|\hat{M}|\psi_n\rangle = \mu_n$$

And hence $\mu_n$ must be real.

Now, let’s prove the orthogonality of eigenfunctions for different eigenvalues. For eigenfunctions $|\psi_m\rangle$ and $|\psi_n\rangle$ with eigenvalues $\mu_m$ and $\mu_n$, acting with $\hat{M}$ to the right and (using Hermiticity and the realness of the eigenvalues) to the left gives

$$\langle\psi_m|\hat{M}|\psi_n\rangle = \mu_n\langle\psi_m|\psi_n\rangle = \mu_m\langle\psi_m|\psi_n\rangle$$

so $(\mu_n - \mu_m)\langle\psi_m|\psi_n\rangle = 0$. But $\mu_n$ and $\mu_m$ are different, so $\langle\psi_m|\psi_n\rangle = 0$, i.e., orthogonality, presuming we are working with non-zero functions.
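Both properties are easy to check numerically (assumed random Hermitian example). We use the generic eigensolver `np.linalg.eig`, so the realness of the eigenvalues is genuinely tested rather than assumed; for a random matrix the eigenvalues are distinct, so the eigenvectors should come out mutually orthogonal:

```python
import numpy as np

rng = np.random.default_rng(6)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2                   # construct a Hermitian matrix

vals, vecs = np.linalg.eig(H)              # generic (non-Hermitian) solver
print(np.max(np.abs(vals.imag)) < 1e-10)   # True: eigenvalues are real
# Gram matrix of the eigenvectors is the identity: mutual orthogonality
print(np.allclose(vecs.conj().T @ vecs, np.eye(4)))
```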

It is quite possible and common in symmetric problems to have more than one eigenfunction associated with a given eigenvalue. This situation is known as degeneracy. It is provable that the number of such degenerate solutions for a given finite eigenvalue is itself finite.

####2.3. Matrix form of derivative operators

Returning to our original discussion of functions as vectors, we can postulate a form for the differential operator $d/dx$ on a grid of points spaced by $\delta x$:

$$\frac{d}{dx} \simeq \frac{1}{2\,\delta x}\begin{bmatrix} \ddots & & & & \\ \cdots & 0 & 1 & 0 & \cdots \\ \cdots & -1 & 0 & 1 & \cdots \\ \cdots & 0 & -1 & 0 & \cdots \\ & & & & \ddots \end{bmatrix}$$

where we presume we can take the limit as $\delta x \to 0$.

If we multiply the column vector whose elements are the values $f(x_n)$ of the function at the grid points, then each element of the resulting vector is the centered-difference approximation to the derivative,

$$\left.\frac{df}{dx}\right|_{x_n} \simeq \frac{f(x_{n+1}) - f(x_{n-1})}{2\,\delta x}$$

Note this matrix is antisymmetric (reflection about the diagonal changes the sign of each element), and it is not Hermitian. By similar arguments, though, $d^2/dx^2$ gives a symmetric matrix and is Hermitian.

We can formally “operate” on the function $f(x)$ by multiplying it by the function $V(x)$ to generate another function:

$$g(x) = V(x)f(x)$$

Since $V(x)$ is performing the role of an operator, we can, if we wish, represent it as a (diagonal) matrix whose diagonal elements are the values of the function $V(x)$ at each of the different points:

$$V \simeq \begin{bmatrix} \ddots & & & \\ & V(x_{n-1}) & & \\ & & V(x_n) & \\ & & & \ddots \end{bmatrix}$$

If $V(x)$ is real, then its matrix is Hermitian, as required for an operator representing a real physical quantity such as the potential energy.
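A finite-difference sketch of these matrices (assumed grid size and spacing, with $\hbar = 1$): the centered first-derivative matrix is antisymmetric and not Hermitian, while $-i\,d/dx$, the second-derivative matrix, and a real diagonal $V(x)$ are all Hermitian:

```python
import numpy as np

N, dx = 8, 0.1                       # assumed small grid, for illustration

D1 = (np.eye(N, k=1) - np.eye(N, k=-1)) / (2 * dx)               # d/dx
D2 = (np.eye(N, k=1) - 2 * np.eye(N) + np.eye(N, k=-1)) / dx**2  # d^2/dx^2

print(np.allclose(D1, -D1.T))        # True: first derivative is antisymmetric
print(np.allclose(D1, D1.conj().T))  # False: d/dx alone is not Hermitian
print(np.allclose(D2, D2.conj().T))  # True: second derivative is Hermitian

P = -1j * D1                         # momentum-like operator (hbar = 1)
print(np.allclose(P, P.conj().T))    # True: -i d/dx is Hermitian

x = dx * np.arange(N)
V = np.diag(x**2)                    # an assumed real V(x) as a diagonal matrix
print(np.allclose(V, V.conj().T))    # True: real diagonal matrix is Hermitian
```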

###3. Operators and Quantum Mechanics

####3.1. Hermitian operators in quantum mechanics

For Hermitian operators $\hat{A}$ and $\hat{B}$ representing physical variables, it is very important to know whether they commute, i.e., whether $\hat{A}\hat{B} = \hat{B}\hat{A}$. Remember that, because these linear operators obey the same algebra as matrices, in general operators do not commute.

For quantum mechanics, we formally define an entity

$$[\hat{A}, \hat{B}] \equiv \hat{A}\hat{B} - \hat{B}\hat{A}$$

This entity is called the commutator.

An equivalent statement to saying $\hat{A}\hat{B} = \hat{B}\hat{A}$ is then $[\hat{A}, \hat{B}] = 0$. Strictly, this should be written

$$[\hat{A}, \hat{B}] = 0\,\hat{I}$$

where $\hat{I}$ is the identity operator, but this is usually omitted.

If the operators do not commute, then $[\hat{A}, \hat{B}] = 0$ does not hold, and in general we can choose to write

$$[\hat{A}, \hat{B}] = i\hat{C}$$

where $\hat{C}$ is sometimes referred to as the remainder of commutation or the commutation rest. (With this definition, if $\hat{A}$ and $\hat{B}$ are Hermitian, then $\hat{C}$ is Hermitian too.)
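A minimal numeric illustration using the (standard, Hermitian) Pauli matrices: the commutator is non-zero, and the commutation rest $\hat{C}$ defined by $[\hat{A},\hat{B}] = i\hat{C}$ comes out Hermitian:

```python
import numpy as np

# Pauli matrices: a standard pair of non-commuting Hermitian operators
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

comm = sx @ sy - sy @ sx            # [sx, sy]
print(np.allclose(comm, 2j * sz))   # True: the known identity [sx, sy] = 2i sz

C = comm / 1j                       # commutation rest: [A, B] = iC
print(np.allclose(C, C.conj().T))   # True: C is Hermitian when A and B are
```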

Operators that commute share the same set of eigenfunctions, and operators that share the same set of eigenfunctions commute.

Suppose that operators $\hat{A}$ and $\hat{B}$ commute, and suppose that the $|\psi_n\rangle$ are the eigenfunctions of $\hat{A}$ with eigenvalues $A_n$. Then

$$\hat{A}\hat{B}|\psi_n\rangle = \hat{B}\hat{A}|\psi_n\rangle$$

So,

$$\hat{A}\left(\hat{B}|\psi_n\rangle\right) = A_n\left(\hat{B}|\psi_n\rangle\right)$$

This means that the vector $\hat{B}|\psi_n\rangle$ is also the eigenvector $|\psi_n\rangle$, or is proportional to it; i.e., for some number $B_n$,

$$\hat{B}|\psi_n\rangle = B_n|\psi_n\rangle$$

This kind of relation holds for all the eigenfunctions $|\psi_n\rangle$. So these eigenfunctions are also the eigenfunctions of the operator $\hat{B}$, with associated eigenvalues $B_n$. Hence we have proved the first statement, that operators that commute share the same set of eigenfunctions. Note that the eigenvalues $A_n$ and $B_n$ are not in general equal to one another.

Now we consider the second statement: operators that share the same set of eigenfunctions commute. Suppose that the Hermitian operators $\hat{A}$ and $\hat{B}$ share the same complete set of eigenfunctions $|\psi_n\rangle$, with associated sets of eigenvalues $A_n$ and $B_n$ respectively. Then

$$\hat{A}\hat{B}|\psi_n\rangle = A_nB_n|\psi_n\rangle$$

and similarly

$$\hat{B}\hat{A}|\psi_n\rangle = A_nB_n|\psi_n\rangle$$

Hence, for any function $|f\rangle$, which can always be expanded in this complete set of functions, i.e., $|f\rangle = \sum_n c_n|\psi_n\rangle$, we have

$$\hat{A}\hat{B}|f\rangle = \sum_n c_nA_nB_n|\psi_n\rangle = \hat{B}\hat{A}|f\rangle$$

Since we have proved this for an arbitrary function, we have proved that the operators commute, hence proving the statement that operators that share the same set of eigenfunctions commute.
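As a numeric sketch (assumed example), two matrices built as polynomials of the same Hermitian matrix share its eigenvectors and, as the argument predicts, commute:

```python
import numpy as np

rng = np.random.default_rng(7)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2          # a Hermitian matrix

A = H @ H + 2 * H                 # polynomials of H share H's eigenvectors
B = 3 * H + np.eye(4)

print(np.allclose(A @ B, B @ A))  # True: A and B commute

vals, vecs = np.linalg.eigh(H)
v = vecs[:, 0]                    # an eigenvector of H is also ...
print(np.allclose(A @ v, (vals[0]**2 + 2 * vals[0]) * v))  # ... one of A
print(np.allclose(B @ v, (3 * vals[0] + 1) * v))           # ... and of B
```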

####3.2. General form of the uncertainty principle

First, we need to set up the concepts of the mean and variance of an expectation value. Using $\langle A\rangle$ to denote the mean value of a quantity $A$, we have, in bra-ket notation, for a measurable quantity $A$ associated with the Hermitian operator $\hat{A}$, when the state of the system is $|\psi\rangle$,

$$\langle A\rangle = \langle\psi|\hat{A}|\psi\rangle$$

Let us define a new operator $\Delta\hat{A}$ associated with the difference between the measured value of $A$ and its average value:

$$\Delta\hat{A} \equiv \hat{A} - \langle A\rangle$$

Strictly, we should write $\Delta\hat{A} \equiv \hat{A} - \langle A\rangle\hat{I}$, but we take such an identity operator to be understood. Note that this operator $\Delta\hat{A}$ is also Hermitian.

Variance in statistics is the “mean square” deviation from the average. To examine the variance of the quantity $A$, we examine the expectation value of the operator $(\Delta\hat{A})^2$. Expanding the arbitrary state $|\psi\rangle$ on the basis of the eigenfunctions $|\psi_n\rangle$ of $\hat{A}$, with eigenvalues $A_n$, gives $|\psi\rangle = \sum_n c_n|\psi_n\rangle$.

We can formally evaluate the expectation value of $(\Delta\hat{A})^2$ when the system is in the state $|\psi\rangle$:

$$\left\langle(\Delta\hat{A})^2\right\rangle = \langle\psi|(\Delta\hat{A})^2|\psi\rangle = \sum_n |c_n|^2\left(A_n - \langle A\rangle\right)^2$$

Because the $|c_n|^2$ are the probabilities that the system is found, on measurement, to be in the state $|\psi_n\rangle$, and for that state $\left(A_n - \langle A\rangle\right)^2$ simply represents the squared deviation of the value of the quantity $A$ from its average value, then by definition

$$\left\langle(\Delta\hat{A})^2\right\rangle$$

is the mean squared deviation for the quantity $A$ on repeatedly measuring the system prepared in state $|\psi\rangle$.

In statistical language, the quantity $\left\langle(\Delta\hat{A})^2\right\rangle$ is called the variance, and the square root of the variance, which we can write as $\Delta A$, is the standard deviation. In statistics, the standard deviation gives a well-defined measure of the width of a distribution.
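These definitions translate directly into a short sketch (assumed random Hermitian operator and normalized state):

```python
import numpy as np

rng = np.random.default_rng(8)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (M + M.conj().T) / 2              # Hermitian operator

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)            # normalized state |psi>

mean_A = np.vdot(psi, A @ psi).real        # <A>
mean_A2 = np.vdot(psi, A @ A @ psi).real   # <A^2>
variance = mean_A2 - mean_A**2             # <(Delta A)^2>
std_dev = np.sqrt(variance)                # Delta A, the standard deviation
print(mean_A, variance, std_dev)
```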

We can also consider some other quantity $B$ associated with the Hermitian operator $\hat{B}$ and, with similar definitions, write

$$\Delta\hat{B} \equiv \hat{B} - \langle B\rangle, \qquad (\Delta B)^2 = \left\langle(\Delta\hat{B})^2\right\rangle$$

So we have ways of calculating the uncertainty in the measurements of the quantities $A$ and $B$, when the system is in the state $|\psi\rangle$, to use in a general proof of the uncertainty principle.

Suppose two Hermitian operators $\hat{A}$ and $\hat{B}$ do not commute and have commutation rest $\hat{C}$, as defined above. Consider, for some arbitrary real number $\alpha$, the number

$$G(\alpha) = \left|\left(\alpha\,\Delta\hat{A} + i\,\Delta\hat{B}\right)|\psi\rangle\right|^2$$

By $\left(\alpha\,\Delta\hat{A} + i\,\Delta\hat{B}\right)|\psi\rangle$ we mean the vector, written this way to emphasize that it is simply a vector, so it must have an inner product with itself that is greater than or equal to zero. So,

$$G(\alpha) = \langle\psi|\left(\alpha\,\Delta\hat{A} - i\,\Delta\hat{B}\right)\left(\alpha\,\Delta\hat{A} + i\,\Delta\hat{B}\right)|\psi\rangle = \alpha^2\left\langle(\Delta\hat{A})^2\right\rangle - \alpha\langle\hat{C}\rangle + \left\langle(\Delta\hat{B})^2\right\rangle \geq 0$$

where we used $[\Delta\hat{A}, \Delta\hat{B}] = [\hat{A}, \hat{B}] = i\hat{C}$.

By a simple (though not obvious) rearrangement,

$$G(\alpha) = \left\langle(\Delta\hat{A})^2\right\rangle\left(\alpha - \frac{\langle\hat{C}\rangle}{2\left\langle(\Delta\hat{A})^2\right\rangle}\right)^2 + \left\langle(\Delta\hat{B})^2\right\rangle - \frac{\langle\hat{C}\rangle^2}{4\left\langle(\Delta\hat{A})^2\right\rangle} \geq 0$$

But the equation above must be true for arbitrary $\alpha$, so it is true for

$$\alpha = \frac{\langle\hat{C}\rangle}{2\left\langle(\Delta\hat{A})^2\right\rangle}$$

which sets the first term equal to zero, so

$$\left\langle(\Delta\hat{A})^2\right\rangle\left\langle(\Delta\hat{B})^2\right\rangle \geq \frac{\langle\hat{C}\rangle^2}{4}$$

So, for two operators $\hat{A}$ and $\hat{B}$, corresponding to measurable quantities $A$ and $B$, for which $[\hat{A}, \hat{B}] = i\hat{C}$, in some state $|\psi\rangle$ for which $\langle\hat{C}\rangle \neq 0$, we have the uncertainty principle

$$\Delta A\,\Delta B \geq \frac{\left|\langle\hat{C}\rangle\right|}{2}$$

where $\Delta A$ and $\Delta B$ are the standard deviations of the values of $A$ and $B$ we would measure.
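The inequality is easy to test numerically (assumed random example): for Hermitian $\hat{A}$ and $\hat{B}$ and a random normalized state, $\Delta A\,\Delta B$ always dominates $\tfrac{1}{2}|\langle\hat{C}\rangle|$:

```python
import numpy as np

rng = np.random.default_rng(9)

def herm(d):
    # A random d x d Hermitian matrix
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (M + M.conj().T) / 2

A, B = herm(4), herm(4)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

def std(Op):
    # Standard deviation of Op in the state psi
    m = np.vdot(psi, Op @ psi).real
    return np.sqrt(np.vdot(psi, Op @ Op @ psi).real - m**2)

C = (A @ B - B @ A) / 1j                  # commutation rest: [A, B] = iC
bound = 0.5 * abs(np.vdot(psi, C @ psi).real)
print(std(A) * std(B) >= bound)           # True: uncertainty principle holds
```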

The conclusion above is generally true. Only if the operators $\hat{A}$ and $\hat{B}$ commute, or if they do not commute but we are in a state for which $\langle\hat{C}\rangle = 0$, is it possible for both $A$ and $B$ simultaneously to have exact measurable values.

####3.3. Specific uncertainty principles

We now formally derive the position-momentum uncertainty relation. Consider the commutator of $\hat{x}$ and $\hat{p} \equiv -i\hbar\,d/dx$ (we treat the function $x$ as the operator for position). To be sure we are taking the derivatives correctly, we have the commutator operate on an arbitrary function $f(x)$:

$$[\hat{x}, \hat{p}]f(x) = -i\hbar\,x\frac{df}{dx} + i\hbar\frac{d}{dx}\left(x f(x)\right) = -i\hbar\,x\frac{df}{dx} + i\hbar f(x) + i\hbar\,x\frac{df}{dx} = i\hbar f(x)$$

Since $f$ is arbitrary, we can write $[\hat{x}, \hat{p}] = i\hbar$, and the commutation rest operator $\hat{C}$ is simply the number $\hbar$. And so, from the general uncertainty principle above, we have

$$\Delta x\,\Delta p \geq \frac{\hbar}{2}$$
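We can check this commutator on a finite-difference grid (a sketch with assumed grid parameters and $\hbar = 1$; the comparison is kept away from the grid edges, where the centered difference is invalid):

```python
import numpy as np

N, dx = 500, 0.02                          # assumed grid (hbar = 1 units)
x = dx * np.arange(N)
f = np.exp(-(x - 5.0)**2)                  # a smooth test function

D1 = (np.eye(N, k=1) - np.eye(N, k=-1)) / (2 * dx)   # centered d/dx
X, P = np.diag(x), -1j * D1                # position and momentum operators

commutator_f = X @ (P @ f) - P @ (X @ f)   # [x, p] acting on f
# Away from the edges, [x, p] f should equal i * f (i.e., i*hbar*f):
print(np.allclose(commutator_f[2:-2], 1j * f[2:-2], atol=1e-3))
```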

The energy operator is the Hamiltonian $\hat{H}$, and from Schroedinger’s equation

$$\hat{H}\Psi = i\hbar\frac{\partial\Psi}{\partial t}$$

we use $i\hbar\,\partial/\partial t$ as the operator for energy.

If we take the time operator to be just $t$, then, using essentially identical algebra to that used for the momentum-position uncertainty principle,

$$\left[i\hbar\frac{\partial}{\partial t}, t\right] = i\hbar$$

so, similarly, we have

$$\Delta E\,\Delta t \geq \frac{\hbar}{2}$$

We can relate this result mathematically to the frequency-time uncertainty principle that occurs in Fourier analysis. Noting that in quantum mechanics $E = \hbar\omega$, we have

$$\Delta\omega\,\Delta t \geq \frac{1}{2}$$