We now turn to the dynamics of quantum systems with a Hamiltonian which itself changes over time. This time dependence is generally induced externally, e.g., by the dynamics in the environment of the system, or when a system is driven by external fields. However, a very similar situation arises if we start with a stationary system but focus on only some of its physical components; the time dependence then arises from the interaction with the rest of the system. Henceforth, we will call systems with time-dependent Hamiltonian driven, and refer to the explicit time dependence as the driving, irrespective of the actual origin of this time dependence. Our main concern will be to investigate how a time-dependent potential induces transitions between quantum states. The starting point will again be the time-dependent Schrödinger equation
$$i\hbar\,\frac{\partial}{\partial t}\,|\psi(t)\rangle = \hat H(t)\,|\psi(t)\rangle. \qquad (406)$$
A striking effect induced by driving is the occurrence of transitions between quantum levels. Let us consider a harmonically driven two-level system with Hamiltonian
$$\hat H(t) = \begin{pmatrix} E_1 & \gamma\, e^{i\omega t} \\ \gamma\, e^{-i\omega t} & E_2 \end{pmatrix}, \qquad (407)$$
where $\gamma$ is a real constant. In absence of the driving ($\gamma = 0$), the Hamiltonian possesses stationary eigenstates $|1\rangle = \binom{1}{0}$ and $|2\rangle = \binom{0}{1}$, with energies $E_1$ and $E_2$, respectively, and no transitions occur if the system is placed in one of these states.
For finite $\gamma$, we can find the exact dynamics of the system by stipulating
$$|\psi(t)\rangle = \begin{pmatrix} a\, e^{-i\Omega t} \\ b\, e^{-i(\Omega+\omega)t} \end{pmatrix}, \qquad (408)$$
where the constants $a$, $b$ and $\Omega$ determine the transition rates and amplitudes between the states $|1\rangle$, $|2\rangle$. These quantities are determined by inserting Eq. (408) into the time-dependent Schrödinger equation (406), which results in the two equations
$$\hbar\Omega\, a = E_1\, a + \gamma\, b, \qquad (409)$$
$$\hbar(\Omega+\omega)\, b = \gamma\, a + E_2\, b. \qquad (410)$$
This homogeneous system for the two unknowns $a$, $b$ is only solvable if the two equations are linearly dependent, which requires
$$(\hbar\Omega - E_1)\,(\hbar\Omega + \hbar\omega - E_2) = \gamma^2. \qquad (411)$$
There are two solutions,
$$\hbar\Omega_\pm = \frac{E_1 + E_2 - \hbar\omega}{2} \pm \frac{1}{2}\sqrt{(E_2 - E_1 - \hbar\omega)^2 + 4\gamma^2}, \qquad (412)$$
where $\Omega_R \equiv \Omega_+ - \Omega_- = \hbar^{-1}\sqrt{(E_2 - E_1 - \hbar\omega)^2 + 4\gamma^2}$ is the Rabi frequency. The associated amplitudes are
$$\frac{b_\pm}{a_\pm} = \frac{\hbar\Omega_\pm - E_1}{\gamma}. \qquad (413)$$
These two solutions can be superposed to find the full time dependence of the quantum state for arbitrary initial conditions. If the initial state is $|\psi(0)\rangle = |1\rangle$, then
$$|\psi(t)\rangle = \begin{pmatrix} e^{-i\bar\Omega t}\left[\cos\dfrac{\Omega_R t}{2} + i\,\dfrac{E_2 - E_1 - \hbar\omega}{\hbar\Omega_R}\,\sin\dfrac{\Omega_R t}{2}\right] \\[3mm] -\,\dfrac{2i\gamma}{\hbar\Omega_R}\, e^{-i(\bar\Omega+\omega)t}\,\sin\dfrac{\Omega_R t}{2} \end{pmatrix}, \qquad \bar\Omega = \frac{\Omega_+ + \Omega_-}{2}. \qquad (414)$$
The probabilities for occupation of states $|1\rangle$ and $|2\rangle$ follow as
$$P_2(t) = \frac{4\gamma^2}{(\hbar\Omega_R)^2}\,\sin^2\frac{\Omega_R t}{2}, \qquad P_1(t) = 1 - P_2(t). \qquad (415)$$
These probabilities vary periodically with the Rabi frequency, which therefore determines the time after which the system returns into its initial state. One such period defines a Rabi cycle. After half a cycle, the occupation probability of state 2 is maximal. It reaches unity if the system is driven at resonance, $\hbar\omega = E_2 - E_1$, where the Rabi frequency attains its minimal value $\Omega_R = 2\gamma/\hbar$. At resonance, the system periodically absorbs and emits an energy $\hbar\omega$, in close analogy to the Planck relation.
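The oscillations described above are easy to verify numerically. The following sketch (in units with $\hbar = 1$; all parameter values are illustrative choices) integrates the driven two-level Schrödinger equation with small-step propagators and compares the occupation of the second level against the Rabi formula for $P_2(t)$:

```python
import numpy as np

# Illustrative parameters (hbar = 1): level energies, driving strength, drive frequency
E1, E2, gamma, omega = 0.0, 1.0, 0.1, 0.8
hbar = 1.0

def H(t):
    """Driven two-level Hamiltonian of the form given in the notes."""
    return np.array([[E1, gamma*np.exp(1j*omega*t)],
                     [gamma*np.exp(-1j*omega*t), E2]])

def evolve(psi0, T, steps=20000):
    """Integrate the Schroedinger equation by exponentiating H at interval midpoints."""
    dt = T/steps
    psi = psi0.astype(complex)
    P2 = []
    for n in range(steps):
        w, V = np.linalg.eigh(H((n + 0.5)*dt))
        psi = V @ (np.exp(-1j*w*dt/hbar) * (V.conj().T @ psi))
        P2.append(abs(psi[1])**2)
    return np.array(P2)

# Rabi frequency and the analytic occupation probability
delta = E2 - E1 - hbar*omega
Omega_R = np.sqrt(delta**2 + 4*gamma**2)/hbar
T = 2*np.pi/Omega_R                       # one Rabi cycle
P2 = evolve(np.array([1.0, 0.0]), T)
t = (np.arange(len(P2)) + 1)*T/len(P2)
P2_exact = 4*gamma**2/(hbar*Omega_R)**2 * np.sin(Omega_R*t/2)**2
print(np.max(np.abs(P2 - P2_exact)))      # small: numerics agree with the formula
```

With the detuned parameters chosen here, the maximal transfer probability stays below unity; setting $\omega = E_2 - E_1$ (resonance) makes it reach one.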
The Rabi cycle forms the basis of electron spin resonance (ESR), muon spin resonance, and nuclear magnetic resonance (NMR), which are widely used experimental techniques in material science.
For most driven quantum systems the time-dependent Schrödinger equation (406) cannot be solved exactly as in the preceding example. A systematic treatment is still possible in terms of a suitably generalised time-evolution operator $\hat U(t, t_0)$, which is obtained by introducing two independent time arguments and stipulating
$$|\psi(t)\rangle = \hat U(t, t_0)\,|\psi(t_0)\rangle. \qquad (416)$$
The time-dependent Schrödinger equation (406) then demands that $\hat U(t, t_0)$ obeys
$$i\hbar\,\frac{\partial}{\partial t}\,\hat U(t, t_0) = \hat H(t)\,\hat U(t, t_0), \qquad (417)$$
which has to be solved with initial condition
$$\hat U(t_0, t_0) = \hat 1. \qquad (418)$$
The defining property (416) entails the multiplication rule $\hat U(t_2, t_0) = \hat U(t_2, t_1)\,\hat U(t_1, t_0)$. This can be used to construct an explicit, but somewhat formal solution of (417) which generalizes the expression obtained for stationary problems. In order to formulate this solution, we introduce the time-ordering operator $\hat T$, which acts on products of time-dependent operators according to
$$\hat T\,\hat A(t_1)\hat B(t_2) = \begin{cases} \hat A(t_1)\hat B(t_2), & t_1 > t_2, \\ \hat B(t_2)\hat A(t_1), & t_2 > t_1. \end{cases} \qquad (419)$$
This is more accurately termed a superoperator, since it does not act on the wavefunction but on operators themselves; $\hat T$ takes a product of such operators and shuffles the operator with the latest time argument to the left. We can then write
$$\hat U(t, t_0) = \hat T \exp\!\left(-\frac{i}{\hbar}\int_{t_0}^{t} \hat H(t')\,dt'\right), \qquad (420)$$
where $\hat T$ acts on the terms of the Taylor expansion of the exponential function. This time ordering is enforced because in Eq. (417), $\hat H(t)$ appears to the left of $\hat U(t, t_0)$, where $t$ refers to the latest time during the evolution from $t_0$ to $t$ (note that the operators $\hat H(t')$ and $\hat H(t'')$ at different times generally don't commute).
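The role of the time ordering can be made concrete numerically. The following sketch (with $\hbar = 1$ and an arbitrarily chosen Hamiltonian that does not commute with itself at different times) builds $\hat U(t, t_0)$ as an ordered product of short-time propagators, checks the multiplication rule, and shows that the naive unordered exponential of $\int \hat H\,dt'$ gives a different result:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(t):
    # illustrative Hamiltonian with [H(t1), H(t2)] != 0 for t1 != t2
    return np.cos(t)*sx + np.sin(t)*sz

def expmh(A, dt):
    """exp(-i*A*dt) for a Hermitian matrix A, via its eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.exp(-1j*w*dt)) @ V.conj().T

def U(t, t0, steps=4000):
    """Time-ordered product: factors with later times are multiplied on the LEFT."""
    dt = (t - t0)/steps
    out = np.eye(2, dtype=complex)
    for n in range(steps):
        out = expmh(H(t0 + (n + 0.5)*dt), dt) @ out
    return out

# multiplication rule U(t2,t0) = U(t2,t1) U(t1,t0)
U20, U21, U10 = U(2.0, 0.0), U(2.0, 1.0), U(1.0, 0.0)
print(np.allclose(U20, U21 @ U10, atol=1e-6))        # True

# the UNORDERED exponential exp(-i \int H dt') disagrees
dt = 2.0/4000
H_int = sum(H((n + 0.5)*dt) for n in range(4000)) * dt
print(np.allclose(U20, expmh(H_int, 1.0), atol=1e-3))  # False
```

The left-multiplication in the loop is exactly the content of Eq. (419): the operator with the latest time argument ends up leftmost.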
While Eq. (420) is of some practical use, there exist advanced approaches based on the time evolution operator which circumvent its explicit construction. Among the more notable ones are operator techniques (such as the Heisenberg and interaction pictures), path integrals, and semiclassical approximations. In the following we briefly discuss the Heisenberg picture, and the related Ehrenfest theorem.
We start with a general statement about expectation values. Let us consider the expectation value of an observable $\hat A$,
$$\langle \hat A\rangle = \langle\psi(t)|\hat A|\psi(t)\rangle = \langle\psi(t_0)|\,\hat U^\dagger(t, t_0)\,\hat A\,\hat U(t, t_0)\,|\psi(t_0)\rangle, \qquad (421)$$
and evaluate its time derivative using Eq. (417) (the analogous equation for $\hat U^\dagger$ follows from taking the hermitian conjugate of this equation). This then delivers the Ehrenfest theorem,
$$\frac{d}{dt}\langle \hat A\rangle = \frac{i}{\hbar}\,\langle[\hat H, \hat A]\rangle + \left\langle\frac{\partial \hat A}{\partial t}\right\rangle; \qquad (422)$$
thus, a relation between the time derivative of the expectation value and an expectation value of a commutator. In some cases, these equations are formally identical to the classical equations of motion (following from Newton’s equation). In general, however, the commutators generate new expressions, giving rise to an infinite hierarchy of equations. This problem can be circumvented by introducing time-dependent operators,
$$\hat A_H(t) = \hat U^\dagger(t, t_0)\,\hat A\,\hat U(t, t_0), \qquad (423)$$
which fulfill the Heisenberg equations of motion
$$\frac{d}{dt}\hat A_H(t) = \frac{i}{\hbar}\,[\hat H_H(t), \hat A_H(t)] + \left(\frac{\partial \hat A}{\partial t}\right)_{\!H}. \qquad (424)$$
Expectation values are then evaluated as
$$\langle \hat A\rangle = \langle\psi(t_0)|\,\hat A_H(t)\,|\psi(t_0)\rangle; \qquad (425)$$
thus, the Heisenberg picture replaces the time-evolution of the quantum state by time-evolution of operators.
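Both statements can be checked directly for a small matrix example. The sketch below (with $\hbar = 1$, a static, arbitrarily chosen two-level Hamiltonian and $\hat\sigma_x$ as the observable) confirms that the Schrödinger-picture and Heisenberg-picture expectation values coincide, and that the time derivative of $\langle\hat A\rangle$ matches $i\langle[\hat H, \hat A]\rangle$ as required by the Ehrenfest theorem:

```python
import numpy as np

# Static Hamiltonian and observable (hbar = 1); values are arbitrary
H = np.array([[0.5, 0.2], [0.2, -0.5]], dtype=complex)
A = np.array([[0, 1], [1, 0]], dtype=complex)      # sigma_x

w, V = np.linalg.eigh(H)
def U(t):
    """Time-evolution operator exp(-i H t) for the static H."""
    return V @ np.diag(np.exp(-1j*w*t)) @ V.conj().T

psi0 = np.array([1.0, 0.0], dtype=complex)
t = 1.7

# Schroedinger picture: evolve the state
psi_t = U(t) @ psi0
lhs = (psi_t.conj() @ A @ psi_t).real

# Heisenberg picture: evolve the operator instead
A_H = U(t).conj().T @ A @ U(t)
rhs = (psi0.conj() @ A_H @ psi0).real
print(abs(lhs - rhs))    # the two pictures agree

# Ehrenfest theorem: d<A>/dt = i <[H, A]>   (A has no explicit time dependence)
dt = 1e-5
expA = lambda s: ((U(s) @ psi0).conj() @ A @ (U(s) @ psi0)).real
dA_dt = (expA(t + dt) - expA(t - dt))/(2*dt)
comm = (psi_t.conj() @ (1j*(H @ A - A @ H)) @ psi_t).real
print(abs(dA_dt - comm))  # small finite-difference residual
```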
An application of the Ehrenfest theorem and the Heisenberg equations of motion is discussed as a worksheet question.
We now return to the formulation of quantum mechanics pursued so far, and discuss the most direct approach to time dependence, time-dependent perturbation theory, which provides estimates of transition rates if the driving is weak.
Time-dependent perturbation theory sets out by separating the stationary part $\hat H_0$ of the Hamiltonian from the driving $\hat V(t)$, so that $\hat H(t) = \hat H_0 + \hat V(t)$. In absence of the driving, the Hamiltonian is stationary and has energies $E_n$ and eigenstates $|n\rangle$ solving the stationary Schrödinger equation
$$\hat H_0\,|n\rangle = E_n\,|n\rangle. \qquad (426)$$
In presence of the driving, we use these states and energies to express the quantum state as
$$|\psi(t)\rangle = \sum_n c_n(t)\, e^{-iE_n t/\hbar}\,|n\rangle. \qquad (427)$$
The expansion coefficients $c_n(t)$ determine the probability $P_n(t) = |c_n(t)|^2$ to find the system in the stationary state $|n\rangle$. At a given time $t$, their values can be obtained from the scalar product
$$c_n(t) = e^{iE_n t/\hbar}\,\langle n|\psi(t)\rangle. \qquad (428)$$
The subsequent time dependence of these coefficients follows from the time-dependent Schrödinger equation (406):
$$i\hbar\,\dot c_n(t) = \sum_m V_{nm}(t)\, e^{i\omega_{nm}t}\, c_m(t), \qquad (429)$$
where $V_{nm}(t) = \langle n|\hat V(t)|m\rangle$ denotes the time-dependent matrix elements of the driving, and we defined $\omega_{nm} = (E_n - E_m)/\hbar$.
In absence of the driving, the coefficients $c_n$ are constant. If the driving is weak, they only change slowly, and we can approximate $c_m(t) \approx c_m(0)$ on the right-hand side of Eq. (429). We can then integrate to obtain the first order of time-dependent perturbation theory,
$$c_n(t) = c_n(0) + \frac{1}{i\hbar}\sum_m \int_0^t V_{nm}(t')\, e^{i\omega_{nm}t'}\,dt'\; c_m(0). \qquad (430)$$
This procedure can be repeated by inserting the improved result into the right-hand side of Eq. (429), and integrating again. This delivers the second order of perturbation theory, and in principle can be iterated to successively generate expressions of arbitrarily high order.
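The accuracy of the first-order result is easy to probe numerically. The sketch below (with $\hbar = 1$ and illustrative parameters) integrates Eq. (429) exactly for a weakly, harmonically driven two-level system and compares the resulting occupation of the second level with the first-order estimate obtained from Eq. (430):

```python
import numpy as np

# Weak off-diagonal harmonic driving of a two-level system (hbar = 1)
E1, E2, gamma, omega = 0.0, 1.0, 0.005, 0.9
w21 = E2 - E1        # transition frequency

def rhs(t, c):
    """Right-hand side of the coupled coefficient equations (429)."""
    V = np.array([[0, gamma*np.exp(1j*omega*t)],
                  [gamma*np.exp(-1j*omega*t), 0]])
    phases = np.array([[1, np.exp(1j*(E1 - E2)*t)],
                       [np.exp(1j*(E2 - E1)*t), 1]])
    return -1j * (V*phases) @ c

def integrate(T, steps=20000):
    """Classical RK4 integration of the coefficients, starting from c = (1, 0)."""
    c = np.array([1.0, 0.0], dtype=complex)
    dt = T/steps
    for n in range(steps):
        t = n*dt
        k1 = rhs(t, c)
        k2 = rhs(t + dt/2, c + dt/2*k1)
        k3 = rhs(t + dt/2, c + dt/2*k2)
        k4 = rhs(t + dt, c + dt*k3)
        c = c + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    return c

T = 30.0
c = integrate(T)
# first-order estimate: |c_2|^2 = gamma^2 * 4 sin^2((w21-omega)T/2)/(w21-omega)^2
P2_first = gamma**2 * 4*np.sin((w21 - omega)*T/2)**2 / (w21 - omega)**2
print(abs(c[1])**2, P2_first)   # close for weak driving
```

For the weak driving chosen here the two results agree to within about a percent; increasing $\gamma$ makes the first-order estimate visibly overshoot.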
If initially only a single state is populated [$c_m(0) = 1$, $c_n(0) = 0$ for $n \neq m$], the first-order result (430) estimates the transition probability into state $|n\rangle$ as
$$P_{m\to n}(t) = \frac{1}{\hbar^2}\left|\int_0^t V_{nm}(t')\, e^{i\omega_{nm}t'}\,dt'\right|^2. \qquad (431)$$
Furthermore, to this level of approximation the amplitude
$$c_m(t) \approx 1 + \frac{1}{i\hbar}\int_0^t V_{mm}(t')\,dt' \approx \exp\!\left(-\frac{i}{\hbar}\int_0^t V_{mm}(t')\,dt'\right) \qquad (432)$$
of the initially populated state only changes its phase.
Consider a system whose Hamiltonian changes abruptly from $\hat H_0$ for $t < 0$ to $\hat H_1 = \hat H_0 + \hat V$ for $t > 0$, where $\hat H_0$ and $\hat V$ are time-independent [therefore, $\hat H(t) = \hat H_0 + \theta(t)\,\hat V$, where $\theta(t)$ is the unit step function]. This change does not induce any instantaneous jump of the state at $t = 0$, but modifies the subsequent dynamics because the 'old' eigenstates $|n\rangle$ and energies $E_n$ of $\hat H_0$ differ from the 'new' eigenstates $|\tilde n\rangle$ and energies $\tilde E_n$ of $\hat H_1$. There are two natural ways to describe the effects of such a sudden perturbation:
(i) We can calculate the overlaps $\langle\tilde n|m\rangle$ between the two sets of eigenstates. These deliver the probabilities $P_{m\to\tilde n} = |\langle\tilde n|m\rangle|^2$ for transitions from an old eigenstate $|m\rangle$ to a new eigenstate $|\tilde n\rangle$.
(ii) We can work with the eigenstates $|n\rangle$ of $\hat H_0$ throughout, and examine how their population changes over time. Using the first-order approximation (431), the transition probability is estimated as
$$P_{m\to n}(t) = \frac{|V_{nm}|^2}{\hbar^2}\left\{\frac{\sin^2(\omega_{nm}t/2)}{(\omega_{nm}/2)^2}\right\}. \qquad (433)$$
The function in the curly brackets has a maximum around $\omega_{nm} = 0$ (i.e., $E_n = E_m$), and decays to small values when one deviates by $|\omega_{nm}| \gtrsim 2\pi/t$, in close resemblance to the energy-time uncertainty principle. For sufficiently large times, this function becomes sharply peaked and can be approximated as $2\pi t\,\delta(\omega_{nm}) = 2\pi\hbar t\,\delta(E_n - E_m)$, where we again encounter Dirac's delta function. Therefore, energy-conserving transitions are favored.
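The emergence of the delta function can be illustrated numerically: the kernel in the curly brackets has total area $2\pi t$ while its width shrinks as $1/t$, so it behaves as $2\pi t\,\delta(\omega_{nm})$ for large $t$. A minimal sketch (frequency window and grid are arbitrary choices):

```python
import numpy as np

def kernel(w, t):
    """sin^2(w t/2)/(w/2)^2, written via numpy's normalized sinc to handle w = 0."""
    return t**2 * np.sinc(w*t/(2*np.pi))**2

# area under the kernel divided by 2*pi*t -> 1 as t grows (tails leave the window)
ratios = {}
for t in (10.0, 100.0):
    w = np.linspace(-50, 50, 2_000_001)
    area = np.sum(kernel(w, t)) * (w[1] - w[0])
    ratios[t] = area/(2*np.pi*t)
    print(t, ratios[t])

# peak height grows as t^2 while the first zero sits at w = 2*pi/t: the peak sharpens
print(kernel(0.0, 10.0), kernel(2*np.pi/10.0, 10.0))
```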
In practical situations, we are often interested in transitions into a continuous set of extended states with $E_n \approx E_m$ (sometimes further specified by propagation direction into which the system disintegrates, spin, or similar common characteristics of the final states). The delta function is then replaced by the density $\rho(E_n)$ of the final states, and the transition rate $\Gamma = dP/dt$ into these states can be estimated as
$$\Gamma_{m\to n} = \frac{2\pi}{\hbar}\,|V_{nm}|^2\,\rho(E_n)\Big|_{E_n = E_m}, \qquad (434)$$
where we assume that $|V_{nm}|^2 \approx \mathrm{const}$ for the specified set of final states. This important expression (first derived by Dirac) is known as Fermi's golden rule.
Fermi’s golden rule can be generalised to the important case of a harmonic perturbation
$$\hat V(t) = \hat V\, e^{-i\omega t} + \hat V^\dagger\, e^{i\omega t}, \qquad (435)$$
where $\hat V$ is time-independent. The derivation follows the steps for the sudden perturbation, and simply requires to account for the additional factors $e^{\mp i\omega t}$ in the integrand of (431). The transition probability then has two maxima: one around $E_n = E_m - \hbar\omega$, associated with (stimulated) emission of an amount of energy $\hbar\omega$, and another around $E_n = E_m + \hbar\omega$, associated with absorption of an amount of energy $\hbar\omega$. The associated transition rates are given by
$$\Gamma^{(\mathrm{abs})}_{m\to n} = \frac{2\pi}{\hbar}\,|V_{nm}|^2\,\rho(E_n)\Big|_{E_n = E_m + \hbar\omega}, \qquad (436)$$
$$\Gamma^{(\mathrm{em})}_{m\to n} = \frac{2\pi}{\hbar}\,|(V^\dagger)_{nm}|^2\,\rho(E_n)\Big|_{E_n = E_m - \hbar\omega}. \qquad (437)$$
An important application of Fermi's golden rule concerns transitions between two atomic energy levels $a$ and $b$, induced by the action of an (approximately) monochromatic electromagnetic field with central frequency $\omega$ tuned close to $\omega_{ab} = (E_a - E_b)/\hbar$. Since the wavelength of the field is generally far larger than the extent of the atomic electronic orbitals, one can apply the dipole approximation, in which the interaction is of the form
$$\hat V(t) = -\hat{\mathbf d}\cdot\mathbf E_0\,\cos\omega t, \qquad (438)$$
where $\hat{\mathbf d} = -e\hat{\mathbf r}$ is the dipole operator [comparison with Eq. (435) identifies $\hat V = -\hat{\mathbf d}\cdot\mathbf E_0/2$]. Averaged over the direction of $\mathbf E_0$, the squared matrix element of the perturbation takes the form
$$\overline{|V_{ab}|^2} = \frac{1}{12}\,|\mathbf E_0|^2\,|\mathbf d_{ab}|^2, \qquad (439)$$
where
$$\mathbf d_{ab} = \langle a|\hat{\mathbf d}|b\rangle = -e\int \psi_a^*(\mathbf r)\,\mathbf r\,\psi_b(\mathbf r)\,d^3r \qquad (440)$$
is the matrix element of the dipole operator, evaluated with the two atomic wave functions involved in the transition (note that $\mathbf d_{ab}$ is a three-component vector). Fermi's golden rule (436) gives
$$\Gamma = \frac{\pi}{3\epsilon_0\hbar^2 c}\,|\mathbf d_{ab}|^2\, I(\omega_{ab}), \qquad (441)$$
where $I(\omega)$ is the EM field intensity distribution (per unit angular frequency). The corresponding energy density is $u(\omega) = I(\omega)/c$. The ratio $B = \Gamma/u(\omega_{ab})$, known as the Einstein B coefficient, is therefore approximated as
$$B = \frac{\pi\,|\mathbf d_{ab}|^2}{3\epsilon_0\hbar^2}. \qquad (442)$$
For a pair of non-degenerate levels as considered so far, the coefficients for absorption and emission are identical. In the case of degeneracy, they differ by the ratio of the degeneracies of the two levels, which enters through the density of final states.
The above considerations apply to classical radiation. When the EM field is quantised, its energy is carried by photons whose number can fluctuate according to the probabilistic principles of quantum mechanics. These fluctuations give rise to spontaneous emission. Avoiding details of the field quantisation, we obtain the rate for this process by a phenomenological generalisation of Eq. (441). For this, we assume that the field fluctuations amount to an energy $\hbar\omega$ per electromagnetic mode. Combined with the local density of photon states per unit energy interval (see Eq. (220)), these fluctuations correspond to a classical field intensity of $I(\omega) = \hbar\omega^3/\pi^2c^2$. According to Eq. (441), the spontaneous emission rate for an atomic transition between two levels with energy difference $E_a - E_b = \hbar\omega$ (also known as the Einstein A coefficient) is, thus, given by
$$A = \frac{\omega^3\,|\mathbf d_{ab}|^2}{3\pi\epsilon_0\hbar c^3}. \qquad (443)$$
This expression typically allows one to obtain highly accurate values for the lifetime of an electron in an atomic orbital. For example, for the $2p \to 1s$ transition in the hydrogen atom, the squared dipole matrix element is
$$|\mathbf d_{ab}|^2 = \frac{2^{15}}{3^{10}}\,e^2 a_0^2, \qquad a_0 = \frac{4\pi\epsilon_0\hbar^2}{\mu e^2}, \qquad (444)$$
where $\mu$ is the reduced mass of the electron. According to Eq. (443), with $\hbar\omega = \frac{3}{4}\,\mathrm{Ry} \approx 10.2\,\mathrm{eV}$, this amounts to a spontaneous emission rate $A \approx 6.3\times 10^{8}\,\mathrm{s}^{-1}$. Perturbation theory is well justified because the rate is much smaller than the frequency of the emitted radiation. The associated decay time is $\tau = 1/A \approx 1.6\,\mathrm{ns}$. We know from the discussion of the energy-time uncertainty principle that this decay results in a Lorentzian broadening of the emitted frequency intensity, with full width at half maximum $\Delta\omega = A$.
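The quoted numbers follow directly from Eq. (443). The sketch below evaluates the Einstein A coefficient for the hydrogen $2p \to 1s$ transition in SI units, using rounded CODATA values for the constants (and the ordinary Bohr radius in place of the reduced-mass-corrected one, a sub-percent approximation):

```python
import numpy as np

# Rounded CODATA values (SI units)
hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s
eps0 = 8.8541878128e-12  # F/m
e = 1.602176634e-19      # C
a0 = 5.29177210903e-11   # m (Bohr radius)

# 2p -> 1s transition: hbar*omega = (3/4) Ry
hw = 0.75 * 13.605693 * e            # J
omega = hw / hbar                    # rad/s

# squared dipole matrix element, |d|^2 = (2^15/3^10) e^2 a0^2
d2 = (2**15 / 3**10) * (e * a0)**2

# Einstein A coefficient: A = omega^3 |d|^2 / (3 pi eps0 hbar c^3)
A = omega**3 * d2 / (3 * np.pi * eps0 * hbar * c**3)
tau = 1 / A
print(A, tau)            # ~6.3e8 1/s, ~1.6 ns
```

The rate $A \sim 10^{8}\,\mathrm{s}^{-1}$ is indeed many orders of magnitude smaller than the transition frequency $\omega \sim 10^{16}\,\mathrm{rad/s}$, consistent with the perturbative treatment.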