### 1. Uncertainty Principle and Particle Current
#### 1.1. Momentum, position, and the uncertainty principle
For momentum, we write an operator $\hat{\mathbf{p}}$. We postulate this can be written as
$$\hat{\mathbf{p}} = -i\hbar\nabla$$
where
$$\nabla \equiv \hat{\mathbf{x}}\frac{\partial}{\partial x} + \hat{\mathbf{y}}\frac{\partial}{\partial y} + \hat{\mathbf{z}}\frac{\partial}{\partial z}$$
and $\hat{\mathbf{x}}$, $\hat{\mathbf{y}}$ and $\hat{\mathbf{z}}$ are unit vectors in the $x$, $y$ and $z$ directions. With this postulated form, we find that
$$\frac{\hat{\mathbf{p}}^2}{2m} = -\frac{\hbar^2}{2m}\nabla^2$$
and we have a correspondence between the classical notion of the energy
$$E = \frac{\mathbf{p}^2}{2m} + V(\mathbf{r})$$
and the corresponding Hamiltonian operator of the Schroedinger equation
$$\hat{H} = -\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r})$$
This means the plane waves $\exp[i(\mathbf{k}\cdot\mathbf{r} - \omega t)]$ are the eigenfunctions of the operator $\hat{\mathbf{p}}$ with eigenvalues $\hbar\mathbf{k}$.
We can therefore say for these eigenstates that the momentum is $\mathbf{p} = \hbar\mathbf{k}$. Note that $\mathbf{p}$ here is a vector, with three components with scalar values, and it is not an operator.
For the position operator $\hat{\mathbf{r}}$, the postulated operator is almost trivial when we are working with functions of position: it is simply the position vector $\mathbf{r}$ itself. At least when we are working in a representation that is in terms of position, we therefore typically do not write the "hat", though rigorously we should. The operator for the $z$-component of position, for example, would simply be $z$ itself.
Here we illustrate the position–momentum uncertainty principle by example. We have looked at a Gaussian wavepacket before; we could write this as a sum over waves of different $k$-values, with Gaussian weights, or we could take the limit of that process by using an integration:
$$\Psi(z,t) \propto \int \exp\left[-\left(\frac{k-\bar{k}}{2\Delta k}\right)^2\right]\exp[i(kz-\omega t)]\,dk$$
We could rewrite the above equation at time $t=0$ as
$$\Psi(z,0) \propto \int \Phi(k)\,e^{ikz}\,dk \quad\text{with}\quad \Phi(k) = \exp\left[-\left(\frac{k-\bar{k}}{2\Delta k}\right)^2\right]$$
$\Phi(k)$ is the representation of the wave function in $k$-space. $|\Phi(k)|^2$ is the probability density that, if we measured the momentum of the particle (actually the $z$-component of momentum), it would be found to have value $\hbar k$.
The probability $P(k)$ of finding a value $\hbar k$ for the momentum would be
$$P(k) \propto |\Phi(k)|^2 = \exp\left[-\frac{(k-\bar{k})^2}{2(\Delta k)^2}\right]$$
This Gaussian corresponds to the statistical Gaussian probability distribution with standard deviation $\Delta k$.
Note also that $\Psi(z,0)$ is the Fourier transform of $\Phi(k)$ and, as is well known, the Fourier transform of a Gaussian is a Gaussian; specifically here
$$|\Psi(z,0)|^2 \propto \exp\left[-\frac{z^2}{2(\Delta z)^2}\right]$$
is the standard form, where the parameter $\Delta z$ would now be the standard deviation in the probability distribution for $z$. Then $\Delta z\,\Delta k = 1/2$.
From $\Delta z\,\Delta k = 1/2$, if we now multiply by $\hbar$ to get the standard deviation $\Delta p = \hbar\,\Delta k$ we would measure in momentum, we have
$$\Delta z\,\Delta p = \frac{\hbar}{2}$$
which is the relation between the standard deviations we would see in measurements of position and measurements of momentum.
This relation is as good as we can get for a Gaussian. It also turns out that the Gaussian shape is the one with the minimum possible product of $\Delta z$ and $\Delta p$, so quite generally
$$\Delta z\,\Delta p \ge \frac{\hbar}{2}$$
which is the uncertainty principle for position and momentum in one direction.
Uncertainty principles are well known in Fourier analysis. One cannot simultaneously have both a well defined frequency and a well defined time. If a signal is a short pulse, it is necessarily made up out of a range of frequencies.
The shorter the pulse is, the larger the range of frequencies.
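As a numerical sketch of this Gaussian minimum-uncertainty product, we can build the wavepacket from Gaussian weights in $k$ and check that the measured standard deviations satisfy $\Delta z\,\Delta k \approx 1/2$. The grid ranges and the value of $\Delta k$ below are illustrative choices, not values from the text.

```python
import numpy as np

dk = 2.0                        # chosen standard deviation in k (illustrative)
k = np.linspace(-25.0, 25.0, 1201)
z = np.linspace(-4.0, 4.0, 1201)
hk, hz = k[1] - k[0], z[1] - z[0]

phi = np.exp(-(k / (2 * dk)) ** 2)       # Gaussian weights, with k_bar = 0

# standard deviation of the normalized probability density |phi(k)|^2
pk = np.abs(phi) ** 2
pk /= pk.sum() * hk
delta_k = np.sqrt((((k - (k * pk).sum() * hk) ** 2) * pk).sum() * hk)

# wavepacket at t = 0: Psi(z) = sum over k of phi(k) exp(ikz)
psi = (phi * np.exp(1j * np.outer(z, k))).sum(axis=1) * hk
pz = np.abs(psi) ** 2
pz /= pz.sum() * hz
delta_z = np.sqrt((((z - (z * pz).sum() * hz) ** 2) * pz).sum() * hz)

print(delta_z * delta_k)  # close to 0.5, the minimum uncertainty product
```

Changing `dk` changes both widths, but their product stays pinned at $1/2$ for the Gaussian shape.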
#### 1.2. Particle current
In Cartesian coordinates, the divergence of a vector $\mathbf{F}$ is
$$\nabla\cdot\mathbf{F} = \frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y} + \frac{\partial F_z}{\partial z}$$
When we are thinking of flow of particles, to conserve particles we require
$$\frac{\partial n}{\partial t} = -\nabla\cdot\mathbf{j}_p$$
where $n$ is the particle density and $\mathbf{j}_p$ is the particle current density.
The minus sign is because the divergence of the flow or current is the net amount leaving the volume.
In the quantum mechanical case, the particle density is $|\Psi|^2$, so we are looking for a relation of the same form, but with $|\Psi|^2$ instead of $n$:
$$\frac{\partial |\Psi|^2}{\partial t} = -\nabla\cdot\mathbf{j}_p$$
We know that, from the time-dependent Schroedinger equation,
$$\frac{\partial\Psi}{\partial t} = -\frac{i}{\hbar}\hat{H}\Psi$$
We can also take the complex conjugate of both sides:
$$\frac{\partial\Psi^*}{\partial t} = \frac{i}{\hbar}\left(\hat{H}\Psi\right)^*$$
Then we have
$$\frac{\partial |\Psi|^2}{\partial t} = \Psi^*\frac{\partial\Psi}{\partial t} + \Psi\frac{\partial\Psi^*}{\partial t} = \frac{i}{\hbar}\left[\Psi\left(\hat{H}\Psi\right)^* - \Psi^*\hat{H}\Psi\right]$$
Presuming the potential $V$ is real and does not depend on time, and taking our Hamiltonian to be of the form
$$\hat{H} = -\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r})$$
so that $(\hat{H}\Psi)^* = \hat{H}\Psi^*$ and the potential terms cancel, our equation becomes
$$\frac{\partial |\Psi|^2}{\partial t} = \frac{i\hbar}{2m}\left(\Psi^*\nabla^2\Psi - \Psi\nabla^2\Psi^*\right)$$
Now we use the following algebraic trick:
$$\nabla\cdot\left(\Psi^*\nabla\Psi - \Psi\nabla\Psi^*\right) = \Psi^*\nabla^2\Psi - \Psi\nabla^2\Psi^*$$
(the cross terms $\nabla\Psi^*\cdot\nabla\Psi$ cancel).
Hence, we have
$$\frac{\partial |\Psi|^2}{\partial t} = -\nabla\cdot\left[\frac{i\hbar}{2m}\left(\Psi\nabla\Psi^* - \Psi^*\nabla\Psi\right)\right]$$
which is an equation in the same form as the particle conservation relation above. And so
$$\mathbf{j}_p = \frac{i\hbar}{2m}\left(\Psi\nabla\Psi^* - \Psi^*\nabla\Psi\right)$$
So we can calculate particle currents from the wave function when the potential is real and does not depend on time.
This expression applies also for an energy eigenstate. Suppose we are in the $n$th energy eigenstate
$$\Psi_n(\mathbf{r},t) = \psi_n(\mathbf{r})\,e^{-iE_n t/\hbar}$$
In the equation above, the gradient has no effect on the time factor, so the time factors can be brought to the front of the expression, where they multiply to unity: $e^{iE_n t/\hbar}e^{-iE_n t/\hbar} = 1$.
Therefore, the particle current does not depend on time. That is, for any energy eigenstate,
$$\mathbf{j}_p = \frac{i\hbar}{2m}\left(\psi_n\nabla\psi_n^* - \psi_n^*\nabla\psi_n\right)$$
Therefore, the particle current is constant in any energy eigenstate. And for real spatial eigenfunctions $\psi_n$, the particle current is actually zero.
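A minimal one-dimensional check of this current formula, using $\hbar = m = 1$ and an illustrative wavevector (the finite-difference gradient is our own numerical choice): a plane wave $e^{ikz}$ carries current $\hbar k/m$, while a real eigenfunction carries none.

```python
import numpy as np

hbar, m = 1.0, 1.0            # natural units (illustrative choice)
z = np.linspace(0.0, 10.0, 2001)
kw = 3.0                      # wavevector of the example plane wave

def current(psi, z):
    """j = (i*hbar/2m) * (psi dpsi*/dz - psi* dpsi/dz), by finite differences."""
    dpsi = np.gradient(psi, z)
    return (1j * hbar / (2 * m)) * (psi * np.conj(dpsi) - np.conj(psi) * dpsi)

j_plane = current(np.exp(1j * kw * z), z)   # plane wave exp(ikz), unit density
j_real = current(np.sin(kw * z), z)         # a real eigenfunction

print(j_plane.real[1000])            # ~ hbar*k/m = 3.0
print(np.max(np.abs(j_real.real)))   # ~ 0: real wavefunctions carry no current
```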
### 2. Functions and Dirac Notation
#### 2.1. Functions as vectors
One kind of list of arguments would be the list of all real numbers, which we could list in order as $x_1$, $x_2$, $x_3$, and so on. This is an infinitely long list, and the adjacent values in the list are infinitesimally close together, but we will regard these infinities as details.
If we presume that we know this list of possible arguments of the function, we can write out the function $f(x)$ as the corresponding list of values, and we choose to write this list as a column vector
$$\begin{bmatrix} f(x_1) \\ f(x_2) \\ f(x_3) \\ \vdots \end{bmatrix}$$
For example, we could specify the function at points spaced by some small amount $\delta x$, with $x_2 = x_1 + \delta x$, $x_3 = x_2 + \delta x$, and so on. We would do this for sufficiently many values of $x$ and over a sufficient range of $x$ to get a sufficiently useful representation for some calculation, such as an integral. The integral of $|f(x)|^2$ could be written as
$$\int |f(x)|^2\,dx \simeq \sum_i |f(x_i)|^2\,\delta x$$
where $\delta x$ is sufficiently small, and the corresponding vectors therefore sufficiently long, that we can get an arbitrarily good approximation to the integral.
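To make the discretization concrete, here is a short sketch where the integral of $|f|^2$ becomes a sum over the vector's entries; the function $\sin(\pi x)$ and the spacing are our own example choices.

```python
import numpy as np

dx = 1e-3
x = np.arange(0.0, 1.0, dx)   # sample points x1, x2, ... spaced by dx
f = np.sin(np.pi * x)         # an example function, sampled as a vector

# integral of |f|^2 as the sum |f(x1)|^2 dx + |f(x2)|^2 dx + ...
integral = (np.abs(f) ** 2).sum() * dx
print(integral)   # about 0.5, the exact value of the integral of sin^2(pi x) on [0,1]
```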
#### 2.2. Dirac notation
The first part of the Dirac “bra-ket” notation is called a “ket”, which refers to our column vector. For the case of our function $f(x)$, one way to define the “ket” is
$$|f\rangle \equiv \sqrt{\delta x}\begin{bmatrix} f(x_1) \\ f(x_2) \\ \vdots \end{bmatrix}$$
or the limit of this as $\delta x \to 0$. We put the factor $\sqrt{\delta x}$ into the vector for normalization. The function is still a vector list of numbers.
We can similarly define the “bra” $\langle f|$ to refer to a row vector
$$\langle f| \equiv \sqrt{\delta x}\,\left[\,f^*(x_1)\;\; f^*(x_2)\;\;\cdots\,\right]$$
where we mean the limit of this as $\delta x \to 0$.
Note that, in our row vector, we take the complex conjugate of all the values. Note that this “bra” refers to exactly the same function as the “ket”. These are different ways of writing the same function.
The row vector $\langle f|$ is called, variously, the Hermitian adjoint, the Hermitian transpose, the Hermitian conjugate, or simply the adjoint of the column vector $|f\rangle$.
A common notation used to indicate the Hermitian adjoint is to use the character “$\dagger$” (dagger) as a superscript:
$$\langle f| = \left(|f\rangle\right)^\dagger$$
Forming the Hermitian adjoint is like reflecting the vector about its diagonal (turning the column into a row), then taking the complex conjugate of all the elements.
The “bra” is the Hermitian adjoint of the “ket” and vice versa:
$$\langle f| = \left(|f\rangle\right)^\dagger, \qquad |f\rangle = \left(\langle f|\right)^\dagger$$
The Hermitian adjoint of the Hermitian adjoint brings us back to where we started:
$$\left(\left(|f\rangle\right)^\dagger\right)^\dagger = |f\rangle$$
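In numerical code the Hermitian adjoint is just conjugate-transpose. A tiny sketch with an arbitrary example ket (the numbers are made up purely for illustration):

```python
import numpy as np

ket = np.array([[1.0 + 2.0j], [3.0 - 1.0j], [0.5j]])   # a column vector |f>

bra = ket.conj().T        # Hermitian adjoint: reflect column into row, conjugate entries
ket_again = bra.conj().T  # the adjoint of the adjoint recovers the original ket

norm_sq = (bra @ ket).item()   # <f|f>: a 1x1 product, i.e. a single number
print(bra.shape, norm_sq)      # (1, 3) and 15.25 (real and positive)
```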
Considering $f(x)$ as a vector and following our previous result for the integral, adding bra-ket notation, we have
$$\int |f(x)|^2\,dx = \langle f|f\rangle$$
where again the strict equality applies in the limit $\delta x \to 0$.
Note that the use of the bra-ket notation here eliminates the need to write an integral or a sum: the sum is implicit in the vector multiplication. Note that $\langle f|f\rangle$ is shorthand for the vector product $\langle f||f\rangle$ of the “bra” and the “ket”. The middle vertical line is usually omitted, though it would not matter if it was still there. This notation is also useful for integrals of two different functions:
$$\langle g|f\rangle = \int g^*(x)\,f(x)\,dx$$
In general this kind of “product” is called an inner product in linear algebra. The geometric vector dot product is an inner product, the bra-ket “product” is an inner product, and the “overlap” integral is an inner product. It is “inner” because it takes two vectors and turns them into a number, a “smaller” entity. In the Dirac notation $\langle f|f\rangle$, the bra-ket gives an inner “feel” to this product: the special parentheses give it a “closed” look.
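The bra-ket inner product as a finite vector product can be sketched as follows. The $\sqrt{\delta x}$ normalization follows the definition above; the two sine functions are illustrative examples. $\langle f|f\rangle$ approximates the normalization integral and $\langle g|f\rangle$ the overlap integral.

```python
import numpy as np

dx = 1e-3
x = np.arange(0.0, 1.0, dx)

def ket(values):
    """Column vector with the sqrt(dx) normalization factor."""
    return np.sqrt(dx) * values.reshape(-1, 1)

def bra(values):
    """Row vector: the complex conjugate transpose of the ket."""
    return ket(values).conj().T

f = np.sqrt(2) * np.sin(np.pi * x)       # example functions, normalized on [0, 1]
g = np.sqrt(2) * np.sin(2 * np.pi * x)

ff = (bra(f) @ ket(f)).item()   # <f|f> approximates the integral of |f|^2, i.e. 1
gf = (bra(g) @ ket(f)).item()   # <g|f> approximates the overlap integral, i.e. 0
print(ff, gf)
```

No explicit integral sign appears anywhere: the sum is hidden inside the row-by-column product, exactly as the notation intends.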
#### 2.3. Using Dirac notation
Suppose the function $f(x)$ is not represented directly as a set of values for each point in space, but is expanded in a complete orthonormal basis $\{\psi_n(x)\}$:
$$f(x) = \sum_n c_n\,\psi_n(x)$$
We could also write the function as the “ket”
$$|f\rangle \equiv \begin{bmatrix} c_1 \\ c_2 \\ \vdots \end{bmatrix}$$
(with possibly an infinite number of elements).
In this case, the “bra” version becomes
$$\langle f| \equiv \left[\,c_1^*\;\; c_2^*\;\;\cdots\,\right]$$
When we write the function in this different form as a vector containing these expansion coefficients, we say we have changed its “representation”. The function is still the same function. The vector is the same vector in our space. We have just changed the axes we used to represent the function, so the coordinates of the vector have changed.
The result of a bra-ket expression like $\langle f|f\rangle$ or $\langle g|f\rangle$ is simply a number (in general, complex), which is easy to see if we think of this as a vector multiplication. Note that this number is not changed as we change the representation, just as the dot product of two geometric vectors is independent of the coordinate system.
Evaluating the coefficients $c_n$ in the expansion $f(x) = \sum_n c_n\,\psi_n(x)$ is simple because the functions $\psi_n$ are orthonormal. Since $\psi_n$ is just a function, we write it as a ket $|\psi_n\rangle$. To evaluate the coefficient $c_m$, we premultiply by the bra $\langle\psi_m|$ to get
$$\langle\psi_m|f\rangle = \sum_n c_n\,\langle\psi_m|\psi_n\rangle = c_m$$
Using bra-ket notation, we can write the expansion as
$$|f\rangle = \sum_n c_n\,|\psi_n\rangle = \sum_n |\psi_n\rangle\langle\psi_n|f\rangle$$
Because $\langle\psi_n|f\rangle$ is just a number, it can be moved about in the product; multiplication of vectors by numbers is commutative.
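A sketch of this expansion machinery, using the (illustrative, assumed) orthonormal basis $\psi_n(x) = \sqrt{2}\sin(n\pi x)$ on $[0,1]$ and an example function: the coefficients $c_n = \langle\psi_n|f\rangle$ are computed as sums, and summing $c_n\psi_n$ reconstructs $f$.

```python
import numpy as np

dx = 1e-3
x = np.arange(dx / 2, 1.0, dx)   # midpoint grid on (0, 1)

# orthonormal basis functions psi_n(x) = sqrt(2) sin(n pi x), a standard choice
N = 20
basis = np.array([np.sqrt(2) * np.sin(n * np.pi * x) for n in range(1, N + 1)])

f = x * (1 - x)                  # an example function to expand

# c_n = <psi_n|f> = integral of psi_n*(x) f(x) dx, done as a sum
c = basis @ f * dx

# reconstruct f as sum_n c_n psi_n(x)
f_rec = c @ basis
err = np.max(np.abs(f - f_rec))
print(err)   # small: 20 terms already match f closely
```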
In quantum mechanics, the function $\Psi$ often represents the state of a quantum mechanical system, such as the wave function. The set of numbers represented by the bra or ket vector then represents the state of the system, so we refer to $|\Psi\rangle$ as the “state vector” of the system and to $\langle\Psi|$ as the (Hermitian) adjoint of the state vector.
In quantum mechanics, the bra or ket always represents either the quantum mechanical state of the system, such as the spatial wave function $\psi(\mathbf{r})$, or some state the system could be in, such as one of the basis states $\psi_n(\mathbf{r})$.
The convention for what is written inside the bra or ket is loose; usually one deduces from the context what is meant. For example, if it is obvious what basis we are working with, we might use $|n\rangle$ to represent the $n$th basis function (or basis “state”) rather than the notation $|\psi_n\rangle$. The symbols inside the bra or ket should be enough to make it clear what state we are discussing. Otherwise there are essentially no rules for the notation.
### 3. Vector Spaces, Operators and Matrices
#### 3.1. Vector space
For a function expressed as its value at a set of points, we may have an infinite number of orthogonal axes, each labeled with its associated basis function. Just as we label the axes in conventional space with unit vectors (one notation is $\hat{\mathbf{i}}$, $\hat{\mathbf{j}}$, and $\hat{\mathbf{k}}$ for the unit vectors in the $x$, $y$ and $z$ directions), so also here we label the axes with the kets $|\psi_n\rangle$.
Our vector space has an inner product that defines both the orthogonality of the basis functions,
$$\langle\psi_i|\psi_j\rangle = \delta_{ij}$$
as well as the components of a vector,
$$c_n = \langle\psi_n|f\rangle$$
With respect to addition of vectors, our vector space is commutative and associative:
$$|f\rangle + |g\rangle = |g\rangle + |f\rangle, \qquad \left(|f\rangle + |g\rangle\right) + |h\rangle = |f\rangle + \left(|g\rangle + |h\rangle\right)$$
Our vector space is linear in multiplying by constants:
$$a\left(|f\rangle + |g\rangle\right) = a|f\rangle + a|g\rangle$$
And the inner product is linear, both in multiplying by constants and in superposition of vectors:
$$\langle h|\left(a|f\rangle + b|g\rangle\right) = a\,\langle h|f\rangle + b\,\langle h|g\rangle$$
There is a well-defined “length” to a vector, formally a “norm”:
$$\left\| |f\rangle \right\| = \sqrt{\langle f|f\rangle}$$
Any vector in the space can be represented to an arbitrary degree of accuracy as a linear combination of the basis vectors. This is the completeness requirement on the basis set. In vector spaces, this property of the vector space itself is sometimes described as “compactness”.
With complex coefficients rather than real lengths, we choose a non-commutative inner product form:
$$\langle g|f\rangle = \langle f|g\rangle^*$$
#### 3.2. Operators
An operator turns one function into another. In the vector space representation of a function, an operator turns one vector into another.
Suppose that we are constructing the new function $g(y)$ from the function $f(x)$ by acting on $f(x)$ with the operator $\hat{A}$. The variables $x$ and $y$ might be the same kind of variable, or quite different. A standard notation for writing any such operation on a function is
$$g(y) = \hat{A}f(x)$$
This should be read as $\hat{A}$ operating on $f(x)$.
For $\hat{A}$ to be the most general operation possible, it should be possible for the value of $g(y)$ at, for example, some particular value $y_1$ of $y$ to depend on the values of $f(x)$ for all values of the argument $x$.
We are interested here solely in linear operators. They are the only ones we will use in quantum mechanics, because of the fundamental linearity of quantum mechanics. A linear operator $\hat{A}$ has the following characteristics:
$$\hat{A}\left[f(x) + h(x)\right] = \hat{A}f(x) + \hat{A}h(x), \qquad \hat{A}\left[c\,f(x)\right] = c\,\hat{A}f(x)$$
for any complex number $c$.
Let us consider the most general way we could have the function $g$ at some specific value of its argument, say $g(y_1)$, be related to the values of $f(x)$ for possibly all values of the argument $x$, and still retain the linearity properties for this relation.
Think of the function $f(x)$ as being represented by a list of values $f(x_1)$, $f(x_2)$, $f(x_3)$, … . We can take the values of $x_i$ to be as closely spaced as we want, and we believe that this representation can give us as accurate a representation of $f(x)$ as we need for any calculation we want to perform.
Then we propose that, for a linear operation, the value of $g(y_1)$ might be related to the values of $f(x)$ by a relation of the form
$$g(y_1) = a_{11}f(x_1) + a_{12}f(x_2) + a_{13}f(x_3) + \cdots = \sum_j a_{1j}\,f(x_j)$$
where the $a_{1j}$ are complex constants. This form shows the linearity behavior we want, and we can conclude that it is the most general form possible if the relation between $g(y)$ and $f(x)$ is to be a linear operation.
To construct the entire function $g(y)$, if we write $f$ and $g$ as vectors, then we can write all these series at once:
$$\begin{bmatrix} g(y_1) \\ g(y_2) \\ \vdots \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & \cdots \\ a_{21} & a_{22} & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix}\begin{bmatrix} f(x_1) \\ f(x_2) \\ \vdots \end{bmatrix}$$
The above relation can be written as $|g\rangle = \hat{A}|f\rangle$, where the operator $\hat{A}$ can be written as the matrix of coefficients $a_{ij}$.
Presuming functions can be represented as vectors, then linear operators can be represented by matrices. In bra-ket notation, we can write the operation as
$$|g\rangle = \hat{A}|f\rangle$$
If we regard the ket as a vector, we now regard the (linear) operator as a matrix.
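As a concrete sketch, we can build an actual matrix for an operator acting on a sampled function: here (our own example choice) a central-difference matrix representing $d/dx$ with periodic boundaries, applied to $f(x) = \sin(2\pi x)$.

```python
import numpy as np

dx = 1e-3
x = np.arange(0.0, 1.0, dx)
f = np.sin(2 * np.pi * x)      # sample the function as a vector (example)

n = x.size
# matrix for the operator d/dx using central differences, with periodic wrap-around
D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * dx)
D[0, -1] = -1 / (2 * dx)       # periodic boundary terms
D[-1, 0] = 1 / (2 * dx)

g = D @ f                      # the operator acting on the vector
err = np.max(np.abs(g - 2 * np.pi * np.cos(2 * np.pi * x)))
print(err)   # small: D @ f closely matches the exact derivative
```

The operator is nothing but an (enormous, here 1000×1000) matrix, and "operating on the function" is an ordinary matrix–vector product.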
#### 3.3. Linear operators and their algebra
Operators do not in general commute, and this is very important in quantum mechanics. Above we expanded $f$ and $g$ on the position basis. We could have followed a similar argument on a complete orthonormal basis $\{\psi_n\}$, writing $f(x) = \sum_n c_n\psi_n(x)$ and $g(x) = \sum_n d_n\psi_n(x)$, and requiring that each expansion coefficient $d_i$ depends linearly on all the expansion coefficients $c_j$.
By similar arguments, we would deduce that the most general linear relation between the vectors of expansion coefficients can be represented as a matrix,
$$d_i = \sum_j A_{ij}\,c_j$$
and the bra-ket statement of the relation between $|f\rangle$, $|g\rangle$ and $\hat{A}$ remains unchanged as $|g\rangle = \hat{A}|f\rangle$.
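Non-commutation is easy to see once operators are matrices. Two 2×2 matrices, chosen purely as an illustrative example:

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])

AB = A @ B
BA = B @ A
print(AB)   # [[1, 0], [0, 0]]
print(BA)   # [[0, 0], [0, 1]]
# AB != BA: the order in which the operators act matters
```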
Now we will find out how we can write some operator $\hat{A}$ as a matrix; that is, we will deduce how to calculate all the elements $A_{ij}$ of the matrix if we know the operator. Suppose we choose our function $f$ to be the $j$th basis function, so $f(x) = \psi_j(x)$, or equivalently $|f\rangle = |\psi_j\rangle$. Then, in the expansion $|f\rangle = \sum_n c_n|\psi_n\rangle$, we are choosing $c_j = 1$ with all the other $c_n = 0$. We operate on this with $\hat{A}$ to get $|g\rangle = \hat{A}|\psi_j\rangle$. Suppose specifically we want to know the resulting coefficient $d_i$ in the expansion $|g\rangle = \sum_n d_n|\psi_n\rangle$. With our choice $c_j = 1$ and all the other $c$’s zero, we would have $d_i = A_{ij}$.
But, from the expansions for $|f\rangle$ and $|g\rangle$, for the specific case of $|f\rangle = |\psi_j\rangle$,
$$|g\rangle = \hat{A}|\psi_j\rangle = \sum_n d_n|\psi_n\rangle$$
To extract $d_i$ from this expression, we premultiply by $\langle\psi_i|$ on both sides to obtain
$$\langle\psi_i|\hat{A}|\psi_j\rangle = d_i$$
But we already concluded for this case that $d_i = A_{ij}$, so
$$A_{ij} = \langle\psi_i|\hat{A}|\psi_j\rangle$$
The operator $\hat{A}$ acting on the unit vector $|\psi_j\rangle$ generates a vector $\hat{A}|\psi_j\rangle$ with, in general, a new length and direction. The matrix element $A_{ij}$ is the projection of $\hat{A}|\psi_j\rangle$ onto the $|\psi_i\rangle$ axis.
We can therefore write the matrix for the operator $\hat{A}$ as
$$\hat{A} \equiv \begin{bmatrix} A_{11} & A_{12} & \cdots \\ A_{21} & A_{22} & \cdots \\ \vdots & \vdots & \ddots \end{bmatrix}$$
We have now deduced how to set up a function as a vector and a linear operator as a matrix, which can operate on such vectors.
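Finally, a sketch of computing matrix elements $A_{ij} = \langle\psi_i|\hat{A}|\psi_j\rangle$ numerically, with $\hat{A} = d^2/dx^2$ and the (illustrative, assumed) basis $\psi_n(x) = \sqrt{2}\sin(n\pi x)$ on $[0,1]$; in this basis the matrix comes out nearly diagonal, with entries close to $-(n\pi)^2$.

```python
import numpy as np

dx = 1e-4
x = np.arange(dx / 2, 1.0, dx)
N = 4
basis = np.array([np.sqrt(2) * np.sin(n * np.pi * x) for n in range(1, N + 1)])

# the operator A = d^2/dx^2 applied numerically to each basis function
Apsis = np.array([np.gradient(np.gradient(p, x), x) for p in basis])

# A_ij = <psi_i | A psi_j> as a discrete sum over the grid
A = basis @ Apsis.T * dx
print(np.round(A, 1))   # nearly diagonal, entries about -(n*pi)^2
```

Because these sine functions are the eigenfunctions of $d^2/dx^2$ with these boundary conditions, the operator is diagonal in its own eigenbasis, which is exactly what the computed matrix shows.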