The last post! A single pdf will follow in a day or so.

Today’s post discusses a surprising connection between the Fermionic ideas we’ve been discussing up to now, and one-dimensional spin systems. In particular, a tool known as the Jordan-Wigner transform can be used to establish an equivalence between a large class of one-dimensional spin systems and the type of Fermi systems we have been considering. This is interesting because included in this class of one-dimensional spin systems are important models such as the transverse Ising model, which serves as a general prototype for quantum magnetism, provides a good basis for understanding some naturally occurring physical systems, and is also a prototype for the understanding of quantum phase transitions.

Note: This post is one in a series describing Fermi algebras, and a powerful tool known as the Jordan-Wigner transform, which allows one to move back and forth between describing a system as a collection of qubits, and as a collection of fermions. The posts assume familiarity with elementary quantum mechanics, comfort with elementary linear algebra (but not advanced techniques), and a little familiarity with the basic nomenclature of quantum information science (qubits, the Pauli matrices).

### The Jordan-Wigner transform

In this post we describe the Jordan-Wigner transform, explaining how it can be used to map a system of qubits (i.e., spin-$\frac{1}{2}$ systems) to a system of Fermions, and vice versa. We also explain a nice application of these ideas, to solving one-dimensional quantum spin systems.

Suppose we have an $n$-qubit system, with the usual state space $(\mathbb{C}^2)^{\otimes n}$, and Pauli operators $X_j, Y_j, Z_j$ acting on qubit $j$. We are going to use these operators to define a set of operators $a_1, \ldots, a_n$ acting on $(\mathbb{C}^2)^{\otimes n}$, and satisfying the Fermionic CCRs.

To begin, suppose for the sake of argument that we have found such a set of operators. Then from our earlier discussion the action of the operators $a_j$ on an occupation number state $|n_1, \ldots, n_n\rangle$ must be as follows:

• Suppose $n_j = 0$. Then $a_j |n_1, \ldots, n_n\rangle = 0$.
• Suppose $n_j = 1$. Let $\bar n$ be that vector which results when the $j$th entry of $n = (n_1, \ldots, n_n)$ is changed to $0$. Then $a_j |n_1, \ldots, n_n\rangle = (-1)^{s_j} |\bar n\rangle$, where $s_j \equiv n_1 + \cdots + n_{j-1}$.

If we identify the occupation number state $|n_1, \ldots, n_n\rangle$ with the corresponding computational basis state $|n_1\rangle |n_2\rangle \cdots |n_n\rangle$, then this suggests taking as our definition

$$a_j \equiv Z_1 Z_2 \cdots Z_{j-1} \sigma_j,$$

where $\sigma_j$ is used to denote the matrix $\sigma \equiv |0\rangle\langle 1| = (X + iY)/2$ acting on the $j$th qubit. The string of $Z$ operators supplies the sign $(-1)^{s_j}$, while $\sigma_j$ changes $n_j$ from $1$ to $0$. It is easily verified that these operators satisfy the Fermionic CCRs. This definition of the $a_j$ is known as the Jordan-Wigner transform. It allows us to define a set of operators satisfying the Fermionic CCRs in terms of the usual operators we use to describe qubits, or spin-$\frac{1}{2}$ systems.
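The claim that the $a_j$ satisfy the Fermionic CCRs can also be checked numerically by brute force. Here is a small sketch in Python with numpy (the helper names `a` and `anticomm` are my own, not standard), building each $a_j$ as a Kronecker product and testing the anticommutation relations on four qubits:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = np.array([[0, 1], [0, 0]], dtype=complex)  # |0><1| = (X + iY)/2

def a(j, n):
    """Jordan-Wigner operator a_j = Z_1 ... Z_{j-1} sigma_j (j is 1-based)."""
    return reduce(np.kron, [Z] * (j - 1) + [sigma] + [I2] * (n - j))

def anticomm(P, Q):
    return P @ Q + Q @ P

n = 4
ops = [a(j, n) for j in range(1, n + 1)]
for i in range(n):
    for j in range(n):
        # {a_i, a_j} = 0, and {a_i, a_j^dag} = delta_ij I
        assert np.allclose(anticomm(ops[i], ops[j]), np.zeros((2**n, 2**n)))
        delta = np.eye(2**n) if i == j else np.zeros((2**n, 2**n))
        assert np.allclose(anticomm(ops[i], ops[j].conj().T), delta)
print("Fermionic CCRs hold for n =", n)
```

The exponential cost of the Kronecker products limits this check to small $n$, but that is enough to catch sign errors in the $Z$ string.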

The Jordan-Wigner transform can be inverted, allowing us to express the Pauli operators in terms of the Fermionic operators $a_j$. In particular, we have

$$Z_j = a_j a_j^\dagger - a_j^\dagger a_j = I - 2 a_j^\dagger a_j.$$

This observation may also be used to obtain an expression for $X_j$ by noting that $X_j = \sigma_j + \sigma_j^\dagger$, and thus:

$$X_j = Z_1 Z_2 \cdots Z_{j-1} (a_j + a_j^\dagger).$$

Substituting in the expressions for $Z_1, \ldots, Z_{j-1}$ in terms of the Fermionic operators gives the desired expression for $X_j$ in terms of the Fermionic operators. Similarly, we have

$$Y_j = i Z_1 Z_2 \cdots Z_{j-1} (a_j^\dagger - a_j),$$

which, together with the expression for the $Z_k$ operators, enables us to express $Y_j$ solely in terms of the Fermionic operators.
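These inversion formulas — $Z_j = I - 2a_j^\dagger a_j$, $X_j = Z_1 \cdots Z_{j-1}(a_j + a_j^\dagger)$, and $Y_j = i Z_1 \cdots Z_{j-1}(a_j^\dagger - a_j)$ — are easy to verify numerically. A Python/numpy sketch (the helpers `embed`, `a`, and `zstring` are my own names):

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = np.array([[0, 1], [0, 0]], dtype=complex)  # |0><1| = (X + iY)/2

def embed(m, j, n):
    """Single-qubit operator m acting on qubit j (1-based) of n qubits."""
    mats = [I2] * n
    mats[j - 1] = m
    return reduce(np.kron, mats)

def a(j, n):
    """Jordan-Wigner operator a_j = Z_1 ... Z_{j-1} sigma_j."""
    return reduce(np.kron, [Z] * (j - 1) + [sigma] + [I2] * (n - j))

def zstring(j, n):
    """The product Z_1 Z_2 ... Z_{j-1}."""
    return reduce(np.kron, [Z] * (j - 1) + [I2] * (n - j + 1))

n = 4
for j in range(1, n + 1):
    aj = a(j, n)
    adj = aj.conj().T
    assert np.allclose(embed(Z, j, n), np.eye(2**n) - 2 * adj @ aj)
    assert np.allclose(embed(X, j, n), zstring(j, n) @ (aj + adj))
    assert np.allclose(embed(Y, j, n), 1j * zstring(j, n) @ (adj - aj))
print("inversion formulas verified for n =", n)
```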

These expressions for $X_j$ and $Y_j$ are rather inconvenient, involving as they do products of large numbers of Fermi operators. Remarkably, however, for certain simple products of Pauli operators it is possible to obtain quite simple expressions in terms of the Fermi operators. In particular, with a little algebra we see that:

$$X_j X_{j+1} = (a_j^\dagger - a_j)(a_{j+1}^\dagger + a_{j+1}).$$
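One such identity, $X_j X_{j+1} = (a_j^\dagger - a_j)(a_{j+1}^\dagger + a_{j+1})$ — the intervening $Z$ strings cancel, leaving a quadratic in the Fermi operators — can be confirmed by direct computation on a few qubits. A Python/numpy sketch (helper names are mine):

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = np.array([[0, 1], [0, 0]], dtype=complex)  # |0><1|

def embed(m, j, n):
    """Single-qubit operator m on qubit j (1-based) of n qubits."""
    mats = [I2] * n
    mats[j - 1] = m
    return reduce(np.kron, mats)

def a(j, n):
    """Jordan-Wigner operator a_j = Z_1 ... Z_{j-1} sigma_j."""
    return reduce(np.kron, [Z] * (j - 1) + [sigma] + [I2] * (n - j))

n = 4
for j in range(1, n):
    aj, ak = a(j, n), a(j + 1, n)
    lhs = embed(X, j, n) @ embed(X, j + 1, n)
    # quadratic expression: (a_j^dag - a_j)(a_{j+1}^dag + a_{j+1})
    rhs = (aj.conj().T - aj) @ (ak.conj().T + ak)
    assert np.allclose(lhs, rhs)
print("X_j X_{j+1} identity verified for n =", n)
```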

Suppose now that we have an $n$-qubit Hamiltonian that can be expressed as a sum over operators from the set $\{X_j X_{j+1}\}$ and $\{Z_j\}$. An example of such a Hamiltonian is the transverse Ising model,

$$H = -J \sum_j X_j X_{j+1} - B \sum_j Z_j,$$

which describes a system of magnetic spins with nearest neighbour couplings of strength $J$ along their $\hat x$ axes, and in an external magnetic field of strength $B$ along the $\hat z$ axis.

For any such Hamiltonian, we see that it is possible to re-express the Hamiltonian as a Fermi quadratic Hamiltonian. As we saw in an earlier post, determining the energy levels is then a simple matter of finding the eigenvalues of a matrix whose size is linear in $n$, which can be done very quickly. In particular, finding the ground state energy is simply a matter of finding the smallest eigenvalue of that matrix, which is often particularly easy. In the case of models like the transverse Ising model, it is even possible to do this diagonalization analytically, giving rise to exact expressions for the energy spectrum. Details can be found in the paper by Lieb, Schultz and Mattis mentioned earlier, or in books such as the well-known book by Sachdev on quantum phase transitions.
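As a sanity check on this reduction, the following sketch compares the ground energy of the full $2^n \times 2^n$ transverse Ising Hamiltonian (open chain) with the value obtained from $n \times n$ matrices built from the quadratic Fermi form $H = \sum_{ij} A_{ij} a_i^\dagger a_j + \frac{1}{2}\sum_{ij}(P_{ij} a_i^\dagger a_j^\dagger + \text{h.c.}) + \text{const}$, diagonalized in the Lieb-Schultz-Mattis style. The helper names and the particular open-chain bookkeeping are my own:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def embed(m, j, n):
    """Single-qubit operator m on qubit j (1-based) of an n-qubit chain."""
    mats = [I2] * n
    mats[j - 1] = m
    return reduce(np.kron, mats)

def ising_brute(n, J, h):
    """Full 2^n x 2^n matrix of H = -J sum_j X_j X_{j+1} - h sum_j Z_j (open chain)."""
    H = np.zeros((2**n, 2**n))
    for j in range(1, n):
        H -= J * embed(X, j, n) @ embed(X, j + 1, n)
    for j in range(1, n + 1):
        H -= h * embed(Z, j, n)
    return H

def ising_ground_fermionic(n, J, h):
    """Ground energy from the quadratic Fermi form.  A is the symmetric
    hopping matrix, P the antisymmetric pairing matrix; the quasiparticle
    energies are the singular values of A - P, and the ground energy is
    const + tr(A)/2 - (sum of quasiparticle energies)/2."""
    A = 2 * h * np.eye(n)          # from -h Z_j = -h (1 - 2 a_j^dag a_j)
    P = np.zeros((n, n))
    for j in range(n - 1):
        A[j, j + 1] = A[j + 1, j] = -J   # from -J X_j X_{j+1}
        P[j, j + 1], P[j + 1, j] = -J, J
    const = -h * n
    lam = np.linalg.svd(A - P, compute_uv=False)
    return const + 0.5 * np.trace(A) - 0.5 * lam.sum()

n, J, h = 8, 1.0, 0.7
E_full = np.linalg.eigvalsh(ising_brute(n, J, h))[0]
E_quad = ising_ground_fermionic(n, J, h)
print(E_full, E_quad)  # the two agree to numerical precision
```

The brute-force route costs $O(2^{3n})$, while the Fermionic route costs $O(n^3)$; this is exactly the speed-up referred to above.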

Exercise: What other products of Pauli operators can be expressed as quadratics in Fermi operators?

Problem: I’ve made some pretty vague statements about finding the spectrum of a matrix being “easy”. However, I must admit that I’m speaking empirically here, in the sense that in practice I know this is easily done on a computer, but I don’t know a whole lot about the computational complexity of the problem. One obvious observation is that finding the spectrum is equivalent to finding the roots of the characteristic equation, which is easily computed, so the problem may be viewed as being about the computational complexity of root-finding.

Finis


1. I suspect that would have something to do with the form of the matrices whose eigenvalues you want, probably because they represent a physical system. In general it’s so hard that you have to do something other than just solve a system of equations (and my applied math and optimization profs were explicit about how hard it is in general). Perhaps n is small enough that the matrices just aren’t that big, relative to our currently available computer power?

2. I didn’t state the problem very clearly. Here’s a better version.

I’m just asking what the running time of an algorithm to find the eigenvalues is, given the size of the matrix (n), and the desired accuracy (epsilon). Ideally it’d be low degree polynomial in n and in 1/epsilon. I know this is true as a function of n, and think it’s true as a function of 1/epsilon, but I don’t know the full picture.

Of course, given the size of the starting matrix – a 2^n by 2^n Hamiltonian – this is incredibly fast! The only place physics enters the picture is in the transformation allowing us to go from the 2^n by 2^n matrix to an n by n matrix.

3. The complexity of finding eigenvectors and their corresponding eigenvalues for general d-dimensional matrices to accuracy epsilon is, if I recall correctly, O(log(epsilon^(-1) d^2). However the constant in this case is huge and most people prefer an algorithm with O(log(epsilon^(-1)d^3)).

4. Thanks Dave. That polylog behaviour is pretty amazing! I presume you’re missing a power in the logarithm? Or is it really just log(1/epsilon)?

5. Oops. Those should be O(log(epsilon^(-1))d^2) and O(log(epsilon^(-1))d^3). Darned right “)”.