Errors occur both in computation and in communication, and generally degrade information content. In classical computation, typical errors are flipped bits or lost data packets. A primary goal of hardware design is to make such errors unlikely, which can be done, e.g., by utilizing dissipation to stabilize the outcomes of irreversible gate operations. This goal competes with other goals such as speed, capacity, size, stability, longevity, energy consumption, and, of course, cost, which along with practical limitations means that a certain number of errors must be tolerated. To cope with them, classical computation algorithms and communication protocols introduce a certain amount of overhead (redundancy) into the information. This redundancy can be used to detect whether errors have occurred (a step called syndrome diagnosis), and it preserves enough of the information for the errors to be corrected (by recovery operations).
A simple example is the code $0_L = 000$, $1_L = 111$, which represents one logical bit of information (subscript $L$) redundantly by three physical bits. Following this scheme, only errors affecting two or all three of the physical bits will result in an error of the logical bit; a single flipped bit is detected and corrected by majority voting. A single-bit error probability $p$ therefore translates into a logical error probability $p_L = 3p^2(1-p) + p^3 = 3p^2 - 2p^3$. Classical error correction of many independent single-bit errors can be achieved when $p_L < p$, i.e., when $p < 1/2$.
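As a quick numerical check of this behavior, the following sketch (plain Python with NumPy; the function name and trial count are my own choices) simulates independent bit flips on the three-bit code and compares the majority-vote failure rate with the value $3p^2 - 2p^3$:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def logical_error_rate(p, trials=200_000):
    """Empirical failure rate of the 3-bit repetition code under
    independent bit flips with probability p, decoded by majority vote."""
    flips = rng.random((trials, 3)) < p        # True where a physical bit flips
    decoding_fails = flips.sum(axis=1) >= 2    # majority of bits corrupted
    return decoding_fails.mean()

for p in [0.01, 0.1, 0.3, 0.5]:
    analytic = 3 * p**2 - 2 * p**3
    print(f"p={p:4.2f}  simulated={logical_error_rate(p):.4f}  analytic={analytic:.4f}")
```

Below $p = 1/2$ the simulated logical error rate stays under the physical rate, in line with the threshold stated above.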
Quantum computation is sensitive to a wide range of additional types of errors that affect the amplitudes of individual or collective qubit states. The implementation of gates is delicate because of their linear and reversible nature, which prevents the use of dissipation to stabilize the outcomes. In particular, multi-qubit gate operations typically require precise control of interactions, which can also leak over to other qubits. Furthermore, multi-qubit gates tend to propagate errors—e.g., an error in the control bit of a CNOT gate will result in an error of the target bit. To a larger extent than classical codes, therefore, quantum codes must rely on a good error model.
Consider, for example, a system designed to realize the Pauli-$X$ operation on a single qubit: imprecise control will generally implement a unitary that also contains small contributions of the other Pauli operators $\mathbb{1}$, $Y$, and $Z$.
An important source of these contributions is the dynamics of the environment, i.e., all physical components that are not directly participating in the computational tasks. For example, the uncontrolled motion of charge carriers in parts of a device results in a fluctuating electromagnetic field. External influences of this kind are worrisome because they generally cause the state of the quantum register to become mixed: the register state will depend on the environment, and therefore becomes entangled with it; its reduced density matrix will therefore describe a statistical mixture. This phenomenon, called dephasing or decoherence, is undesirable because it negatively affects the usable amount of entanglement (as we have seen on worksheet 2 for the example of the equal mixture of all four Bell states). Even more directly, perturbed relative phases of quantum states also negatively affect their superposition, i.e., quantum parallelism.
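To make this mechanism concrete, here is a small NumPy sketch (my own illustration, not part of the notes) of a qubit that becomes entangled with a two-level stand-in for the environment; tracing out the environment leaves a reduced density matrix whose off-diagonal coherences are suppressed by the overlap $\langle E_0|E_1\rangle$ of the environment states:

```python
import numpy as np

# Qubit starts in the superposition (|0> + |1>)/sqrt(2); the environment
# responds differently to each branch: |0>|E0> + |1>|E1>, all normalized.
theta = 0.3 * np.pi                      # controls the overlap <E0|E1>
E0 = np.array([1.0, 0.0])
E1 = np.array([np.cos(theta), np.sin(theta)])

# Joint state on the 4-dimensional qubit (x) environment space.
psi = (np.kron([1, 0], E0) + np.kron([0, 1], E1)) / np.sqrt(2)

rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
rho_qubit = np.trace(rho, axis1=1, axis2=3)   # partial trace over environment

print(rho_qubit)
# The off-diagonal elements equal <E1|E0>/2; for orthogonal environment
# states they vanish and the qubit is left in an equal classical mixture.
```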
Consequently, error correction schemes for quantum information need to cope with a much larger variety of errors—in principle, a continuum of errors, which can affect the phase and magnitude of the amplitudes of the state. Moreover, they cannot establish redundancy by copying the information, which would violate the no-cloning theorem. Surprisingly, resilient quantum error correction strategies not only exist, but also get by with a finite set of diagnosis and recovery operations—a phenomenon known as error discretization.
In order to see how this comes about, let us specialize to single-qubit errors. Such errors can generally be represented by a unitary operator acting on the affected qubit, which can always be decomposed in the Pauli basis, $U = c_0\mathbb{1} + c_x X + c_y Y + c_z Z$.
Erroneous flips of the bit value (errors of type $X$) can be corrected by the three-qubit code $|0\rangle_L = |000\rangle$, $|1\rangle_L = |111\rangle$: pairwise parity checks on the physical qubits reveal which qubit, if any, has been flipped, without disturbing the encoded superposition, and a conditional $X$ operation on the identified qubit restores the original state.
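This decomposition is easy to carry out numerically. The sketch below (my own; it relies on the orthogonality relation $\mathrm{Tr}(\sigma_i\sigma_j) = 2\delta_{ij}$) extracts the coefficients $c_i = \mathrm{Tr}(\sigma_i U)/2$ for a slightly imperfect $X$ gate, an assumed error model in the spirit of the example above:

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Assumed error model: rotation by an angle close to pi about an axis
# tilted slightly away from x, i.e., an imperfect X gate.
axis = np.array([0.99, 0.10, 0.05])
axis /= np.linalg.norm(axis)
angle = 0.98 * np.pi
n_dot_sigma = axis[0] * X + axis[1] * Y + axis[2] * Z
U = np.cos(angle / 2) * I - 1j * np.sin(angle / 2) * n_dot_sigma

# Pauli coefficients c_i = Tr(sigma_i U) / 2; c_x dominates, but the
# imperfections show up as nonzero admixtures of 1, Y, and Z.
for name, P in [("0", I), ("x", X), ("y", Y), ("z", Z)]:
    print(f"c_{name} = {np.trace(P @ U) / 2:.4f}")
```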
Figure 20 shows how these conditional operations can be achieved without doing any measurements, but instead utilizing CNOT and Toffoli gates involving two ancilla qubits. How does this circuit cope with the continuous set of possible errors? A single-qubit error moves the quantum state into a subspace spanned by the computational basis states with distance 0 and 1 to the logical qubits, and in this basis is specified by four complex amplitudes. In the circuit, these amplitudes simply determine the probabilities that the control bits trigger the various CNOT operations. At the end of the procedure, the complex amplitudes are transferred to the two ancillary qubits, whose joint state also resides in a four-dimensional space.
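The following state-vector simulation (my own sketch; the helper functions are hypothetical, not from the notes) implements this measurement-free circuit for a continuous error $\cos t\,\mathbb{1} - i\sin t\,X$ on one physical qubit, and verifies that the data qubits end up exactly in the encoded state while the ancillas absorb the error amplitudes:

```python
import numpy as np

n = 5                       # qubits 0,1,2: data; qubits 3,4: syndrome ancillas
dim = 2 ** n

def multi_controlled_x(controls, target):
    """Permutation matrix that flips `target` iff all `controls` are 1."""
    M = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if all(bits[c] for c in controls):
            bits[target] ^= 1
        j = sum(b << (n - 1 - q) for q, b in enumerate(bits))
        M[j, i] = 1.0
    return M

def x_on(q):
    return multi_controlled_x([], q)

# Encoded state a|000> + b|111>, ancillas in |00>.
a, b = 0.6, 0.8
psi = np.zeros(dim, dtype=complex); psi[0b00000] = a; psi[0b11100] = b

# Continuous error on data qubit 1: cos(t)*1 - i*sin(t)*X.
t = 0.4
psi = np.cos(t) * psi - 1j * np.sin(t) * (x_on(1) @ psi)

# Syndrome extraction: ancilla 3 records parity(0,1), ancilla 4 parity(1,2).
for c, tgt in [(0, 3), (1, 3), (1, 4), (2, 4)]:
    psi = multi_controlled_x([c], tgt) @ psi

# Coherent recovery via Toffoli gates (0-controls realized by sandwiching X).
psi = multi_controlled_x([3, 4], 1) @ psi                      # syndrome 11
psi = x_on(4) @ multi_controlled_x([3, 4], 0) @ x_on(4) @ psi  # syndrome 10
psi = x_on(3) @ multi_controlled_x([3, 4], 2) @ x_on(3) @ psi  # syndrome 01

# The data qubits factor out in the ideal logical state, for any t.
rho = np.outer(psi, psi.conj()).reshape(8, 4, 8, 4)
rho_data = np.trace(rho, axis1=1, axis2=3)
ideal = np.zeros(8); ideal[0b000] = a; ideal[0b111] = b
print("fidelity:", np.real(ideal @ rho_data @ ideal))   # -> 1.0
```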
Analogously, errors of the type $Z$ (flips of the relative phase) act as bit flips in the rotated basis $|\pm\rangle = (|0\rangle \pm |1\rangle)/\sqrt{2}$, and can therefore be corrected by the code $|0\rangle_L = |{+}{+}{+}\rangle$, $|1\rangle_L = |{-}{-}{-}\rangle$.
Once we can cope with individual $X$ and $Z$ errors, both types can be handled simultaneously by concatenating the two codes, which yields Shor's nine-qubit code.
Note how $Y = iXZ$ amounts to a combined bit and phase flip.
Since any single-qubit error is a linear combination of $\mathbb{1}$, $X$, $Y$, and $Z$, the capability to correct this discrete set of errors suffices to correct the whole continuum of single-qubit errors.
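The identity $Y = iXZ$ used here is quickly verified numerically (a trivial sketch):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# A Y error is a combined bit flip and phase flip (up to a global phase i).
print(np.allclose(Y, 1j * (X @ Z)))   # True
```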
More general schemes—in particular, codes based on the stabilizer formalism, which utilizes group theory—make it possible to cope with complex types of errors affecting a collection of qubits, and also take care of errors accumulated by faulty gate operations. If the initial error rate falls below a certain threshold, these codes can be scaled up by adding more and more overhead, thereby making it possible (in principle) to achieve arbitrarily good (fault-tolerant) quantum computation. However, while present technology has advanced sufficiently to enable reliable quantum communication, a universal quantum computer is still far removed from reality.
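The threshold behavior can be illustrated with the standard recursion for concatenated codes: if one level of encoding maps an error rate $p$ to roughly $c\,p^2$, repeated concatenation drives the logical rate to zero whenever $p < 1/c$. A short sketch (the constant $c$ is purely illustrative, not a value from the notes):

```python
# Concatenation recursion p_{k+1} = c * p_k**2 (illustrative constant c).
c = 100.0                 # assumed: effective number of fault paths per block
p_threshold = 1 / c

for p0 in [0.005, 0.009, 0.011]:          # below / near / above threshold
    p = p0
    rates = [p]
    for _ in range(4):                     # four levels of concatenation
        p = c * p * p
        rates.append(p)
    marker = "below" if p0 < p_threshold else "above"
    print(f"p0={p0} ({marker} threshold): " +
          ", ".join(f"{r:.2e}" for r in rates))
```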
This brief concluding section juxtaposes the main technological requirements for a workable quantum computer and the key features of some specific physical implementations.
As is clear from the preceding section, quantum information processing poses serious technical challenges. E.g., fault-tolerant computation requires that the error rate of gate operations falls below a certain threshold, and can only be implemented when the system can be scaled up by adding more and more components. The various challenges have been canonized by DiVincenzo into a set of five core requirements, which are known as the DiVincenzo criteria:
Well-defined qubits. This requires identifying physical systems whose quantum dynamics is essentially constrained to two quantum levels. Examples of naturally occurring two-level systems are the spin of electrons and certain nuclei, as well as the polarization of photons. In many proposed systems, however, the reduction to two levels is only approximate. Examples are atoms in ion traps, photons stored in microcavities, the magnetic flux penetrating a superconducting ring, and electrons confined to (normal-conducting or superconducting) solid-state devices, such as quantum dots. In all these cases, care has to be taken that the system does not populate the other available energy levels (i.e., one needs to avoid leakage), which is best done by making these levels energetically inaccessible.
It is of course possible to design quantum computation schemes that are not binary, and therefore make use of more than two levels in the energetically accessible range of the spectrum.
By convention, if there is a clear energy separation between the two levels, the state with the lower energy is designated $|0\rangle$ and the state with the higher energy $|1\rangle$.
Initialization to a pure state. The quantum register must start in a well-defined state. It is sufficient to have a reliable method to prepare at least one such state, since a universal quantum computer would be able to transform this into any other state. Utilities to prepare a larger variety of states further improve the efficiency of quantum computation.
Using the convention of labeling qubit levels according to their energy, the register state $|00\ldots0\rangle$ coincides with the ground state of the system, and can therefore be prepared, e.g., by cooling.
It is not necessary, however, that the preparation process is deterministic. E.g., a viable strategy is to make a hard, complete measurement of the register, thereby forcing it into a pure state that is completely determined by the recorded measurement outcomes.
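A minimal sketch of this strategy (my own illustration): projectively measuring every qubit of an arbitrary register state in the computational basis collapses the register onto a known basis state, which can then be mapped to the desired starting state with bit flips:

```python
import numpy as np

rng = np.random.default_rng(seed=7)
n = 3

# An arbitrary (here: random) pure register state on n qubits.
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)

# Hard, complete measurement in the computational basis: sample an
# outcome with the Born probabilities and collapse the register onto it.
probs = np.abs(psi) ** 2
outcome = rng.choice(2**n, p=probs)
psi = np.zeros(2**n); psi[outcome] = 1.0   # post-measurement state: pure

print(f"recorded outcome: |{outcome:0{n}b}>")
# Applying X gates to the qubits recorded as 1 would now map the
# register deterministically to |00...0>.
```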
If initialization is not perfect, it can be combined with error correction schemes to enhance its accuracy. In particular, if the state is not entirely pure, the entropy can be transferred into ancilla qubits, so that the register state becomes purified. Such procedures also make it possible to carry out initialization in multiple steps.
Universal set of quantum gates. As discussed in section III, a universal set of quantum gates can be obtained using single-qubit rotations on the Bloch sphere, and at least one type of entangling two-qubit operation (such as CNOT). Alternative constructions use a sufficiently large number of multi-qubit gates. Given facilities to swap qubits, it is not necessary that each pair of qubits can be coupled directly (see the sketch below); still, the coupling network needs to be sufficiently interconnected, and also should be scalable to a large number of qubits. This is a severe problem for many proposed implementations.
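The swap facility invoked here needs no new primitive: a SWAP gate decomposes into three CNOTs with alternating orientation, as the following check illustrates (my own sketch):

```python
import numpy as np

# Two-qubit gates in the basis |00>, |01>, |10>, |11>.
CNOT_01 = np.array([[1, 0, 0, 0],   # control: qubit 0, target: qubit 1
                    [0, 1, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0]])
CNOT_10 = np.array([[1, 0, 0, 0],   # control: qubit 1, target: qubit 0
                    [0, 0, 0, 1],
                    [0, 0, 1, 0],
                    [0, 1, 0, 0]])
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

print(np.array_equal(CNOT_01 @ CNOT_10 @ CNOT_01, SWAP))   # True
```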
For each implementation, the precision of gate operations can be increased not only via error correction, but also using insight into the specific quantum dynamics of the system. E.g., echo and refocusing techniques in nuclear magnetic resonance employ judiciously timed magnetic-field pulses to average out the effects of spurious qubit couplings and unwanted single-qubit terms in the Hamiltonian. This exemplifies the natural tradeoff between precision and speed of gate operations, which is a general obstacle in all implementations.
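A toy model of refocusing (my own sketch, assuming each run of the experiment sees a static random detuning): free precession for a time $\tau$ scrambles the ensemble-averaged coherence, but a $\pi$ pulse about the $x$ axis inverts the accumulated phase, so that a second interval $\tau$ undoes it exactly:

```python
import numpy as np

rng = np.random.default_rng(seed=3)
tau = 1.0
detunings = rng.normal(scale=2.0, size=10_000)   # static random frequency per run

# Qubit starts in (|0>+|1>)/sqrt(2); each interval of free evolution
# multiplies the coherence <0|rho|1> by exp(-i*phi), phi = detuning * tau.
phi = detunings * tau

# Without echo: coherence after 2*tau averages over exp(-2i*phi).
no_echo = np.mean(np.exp(-2j * phi))

# With echo: the pi pulse conjugates the accumulated phase factor,
# exp(-i*phi) -> exp(+i*phi), so the second interval cancels the first.
echo = np.mean(np.exp(-1j * phi) * np.exp(1j * phi))

print(f"|coherence| without echo: {abs(no_echo):.3f}")   # ~0 (dephased)
print(f"|coherence| with echo:    {abs(echo):.3f}")      # 1.0 (refocused)
```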
Qubit-specific measurement. Ideally, to determine the
outcome of a computation one should be able to carry out ideal
measurements on each physical qubit. In practice, a finite degree of
imperfection can be tolerated. This may be because the computation
can be repeated, or because it can be carried out on many systems in
parallel. An interesting simplification occurs because algorithms
often use quantum parallelism only during the calculation, but are
designed to deliver a classical bit sequence as the final result, so that the readout only needs to distinguish the computational basis states.
Long coherence times. This statement subsumes various
requirements for the protection of the quantum register state
throughout the computation. In particular, one needs to preserve the
capacity to use superposition and entanglement as computational
resources. As discussed before, this capacity is in particular
degraded by spurious internal and external interactions. These
effects can be broadly categorized depending on whether they affect
the population probabilities or interference of the register states
(i.e., the modulus or complex phase of the amplitudes):
Relaxation (on a time scale commonly denoted $T_1$) changes the population probabilities, e.g., through the decay of the energetically higher qubit level into the lower one. Dephasing (on a time scale $T_2$) randomizes the relative phases of the amplitudes, thereby degrading superpositions and entanglement.
In most systems, $T_2$ is much shorter than $T_1$, so that dephasing sets the practical limit on the number of gate operations that can be carried out coherently.
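These two time scales can be visualized with a simple phenomenological decay model for the single-qubit density matrix (a sketch assuming plain exponential decay; the numerical values of $T_1$ and $T_2$ are arbitrary):

```python
import numpy as np

T1, T2 = 10.0, 2.0        # assumed: dephasing much faster than relaxation

def rho_t(rho0, t):
    """Phenomenological decay: excited-state population relaxes with T1,
    coherences (off-diagonal elements) decay with T2."""
    p_exc = rho0[1, 1] * np.exp(-t / T1)          # population of |1>
    coh = rho0[0, 1] * np.exp(-t / T2)            # coherence <0|rho|1>
    return np.array([[1 - p_exc, coh],
                     [np.conj(coh), p_exc]])

# Start from the superposition (|0>+|1>)/sqrt(2).
rho0 = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)
for t in [0.0, 2.0, 10.0]:
    r = rho_t(rho0, t)
    print(f"t={t:4.1f}  population(|1>)={r[1,1].real:.3f}  "
          f"|coherence|={abs(r[0,1]):.3f}")
```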
These requirements are supplemented by two additional criteria for reliable quantum communication:
Convert stationary and flying qubits. Stationary qubits reside in registers, while flying qubits propagate along quantum transmission lines. Photons make ideal flying qubits, while nuclei and atoms typically serve as stationary qubits. In this respect, electrons in solid-state devices are particularly flexible because they can move through conducting regions, but can also be confined electrostatically.
Transmit flying qubits between distant locations. This can be achieved with high fidelity for photons, but is far more challenging for electrons. In spintronics, e.g., the electronic spin can be flipped by scattering off magnetic impurities in the transmission line.
At present, none of the various physical candidate platforms scores well on all of the core requirements. The key challenge is to overcome the natural trade-off between easy access to the qubits (initialization, control, readout), a high degree of isolation (coherence), and scalability. On the other hand, this trade-off can also be exploited to balance strengths in certain areas (e.g., a long coherence time) against weaknesses in other areas (e.g., imprecise gates, whose errors can then be corrected by running a more sophisticated, time-consuming error correction scheme). That said, any viable quantum computer is likely to be a hybrid device which combines the specific strengths of the various physical platforms.