In this post, I will take a break from the usual problem solving and instead present a famous problem in physics: the exact solution of the two-dimensional (2D) Ising model. Many undergraduate statistical physics courses will cover solutions of the 1D Ising model and the 2D Ising model in the mean field approximation, but even graduate classes will not tackle the exact solution of the 2D Ising model. I can understand why — there are more important things to do than spend weeks trudging through the math in order to calculate a single quantity.

Nevertheless, I recently had to present on the 2D Ising model with two of my classmates. Although difficult to follow at first, the methods used to find the exact solution were surprisingly mathematically rich. On one hand, we have the original tour de force solution by Onsager in 1944, which was subsequently rewritten into modern formalism by Kaufman using spinors (Onsager’s original paper used quaternions). On the other hand, we have a novel approach introduced by Kac and Ward in 1952, which was later refined by Feynman. This approach looks at the calculation pictorially and involves counting up graphs drawn on the lattice (with typical Feynman gusto). Both approaches were equally enlightening, so I was inclined to share them here.

I plan to make this a four-part series. Part I below will give an introduction to the Ising model problem by starting with the exact solution of the 1D case and the solution of the 2D case in the mean field approximation. Parts II and III will give the Onsager-Kaufman solution of the 2D Ising model, with II aiming for readability and skipping proofs that will be included in III. Part IV will give the combinatorial solution, which is personally my favorite.

Anyway, let us begin!

The 1D Ising Solution

Here we solve the 1D Ising model exactly in the presence of an external magnetic field. This is usually given in an undergraduate class on statistical physics. We will not deviate much from the usual treatment.

Consider $N$ particles equally spaced on a straight line.

Each particle has a spin which can be either up or down. We denote the spin by $\sigma_i = \pm 1$, with $i = 1, \dots, N$ labelling the particle. We call $\sigma_i = +1$ an up spin, and $\sigma_i = -1$ a down spin.

We also impose periodic boundary conditions, which say that we wrap this line into a circle such that particles $N$ and $1$ are actually next to each other.

Each particle interacts with its two nearest neighbors via magnetic dipole-dipole interaction. We ignore all the details about magnetic moments, lattice spacing, etc. and simply say that if two neighboring particles have aligning spins, then the energy of their interaction is $-J$. Likewise, if two particles have anti-aligning spins, then their energy is $+J$. For ferromagnets, where $J > 0$, this says that it is energetically more favorable to have aligning spins, as the energy of that configuration is lower. We can write the interaction energy between spin $\sigma_i$ and its neighbor $\sigma_{i+1}$ compactly as,

$$E_{i,\, i+1} = -J \sigma_i \sigma_{i+1}$$

You can verify that we get the correct interaction energy by plugging in any combination of spins for $\sigma_i, \sigma_{i+1} = \pm 1$.

Now each particle also has an energy due to the external magnetic field. If this magnetic field is directed in the direction of the up spin, the energy of particle $i$ sitting in this magnetic field is given by,

$$E_i = -H \sigma_i$$

So it is energetically favorable for any individual spin to be pointing in the direction of the magnetic field.

Given any configuration of spins, i.e. a list of values for $\sigma_1, \dots, \sigma_N$, we can calculate the total energy of that configuration by adding up the interaction energies between neighboring particles and the energies of all particles due to the magnetic field,

$$E = -J \sum_{i=1}^{N} \sigma_i \sigma_{i+1} - H \sum_{i=1}^{N} \sigma_i$$

Recall that due to the periodic boundary conditions, we identify $\sigma_{N+1} = \sigma_1$ in the first sum.

Now when we say we want to “solve” the 1D Ising model, we are specifically asking to calculate the partition function. From the partition function we can calculate ensemble averages, like the specific heat or average magnetization, in terms of the thermodynamic variables. If we let $\beta = 1/k_B T$ be the usual inverse temperature, the partition function is defined by the sum over all configurations weighted by the Boltzmann factor,

$$Z = \sum_{\{\sigma_i\}} e^{-\beta E(\{\sigma_i\})}$$
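Since the configuration space is finite, this defining sum can be evaluated by brute force for small $N$. Below is a minimal Python sketch; the function name and the parameter values are my own illustrative choices, not from the text.

```python
import itertools
import math

# Brute-force evaluation of the 1D Ising partition function for small N,
# summing exp(-beta * E) over all 2^N spin configurations.
def ising_1d_partition(N, J, H, beta):
    Z = 0.0
    for spins in itertools.product([+1, -1], repeat=N):
        # Interaction term with periodic boundary conditions (spin N+1 = spin 1)
        interaction = sum(spins[i] * spins[(i + 1) % N] for i in range(N))
        field = sum(spins)
        E = -J * interaction - H * field
        Z += math.exp(-beta * E)
    return Z

# For N = 2 the four configurations give E(++) = -2J - 2H, E(--) = -2J + 2H,
# and E(+-) = E(-+) = +2J (the two bonds of the periodic chain coincide),
# so Z = e^{2b(J+H)} + e^{2b(J-H)} + 2 e^{-2bJ} with b = beta.
Z = ising_1d_partition(2, J=1.0, H=0.5, beta=0.3)
expected = math.exp(0.9) + math.exp(0.3) + 2 * math.exp(-0.6)
assert abs(Z - expected) < 1e-12
```

This exhaustive sum grows as $2^N$, which is exactly why the transfer-matrix machinery below is needed for the thermodynamic limit.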

We focus on simplifying this expression for $Z$ so we can evaluate it exactly. We first define the spinor representation of a spin to be the two-component vector given by the identification,

$$\sigma_i = +1 \,\leftrightarrow\, \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad \sigma_i = -1 \,\leftrightarrow\, \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$

So if particle $i$ has spin $\sigma_i = +1$, we can write its spinor as,

$$| \sigma_i \rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$$

and likewise $| \sigma_i \rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$ if $\sigma_i = -1$. Note that in this representation, we have the completeness relation,

$$\sum_{\sigma_i = \pm 1} | \sigma_i \rangle \langle \sigma_i | = I$$

Now we define transfer matrices $V_1$ and $V_2$ from their matrix elements with spinors,

$$\langle \sigma | V_1 | \sigma' \rangle = e^{\beta J \sigma \sigma'}, \qquad \langle \sigma | V_2 | \sigma' \rangle = e^{\beta H \sigma} \, \delta_{\sigma \sigma'}$$

The matrix $V_1$ basically gives the interaction energy between two spins, and $V_2$ gives a spin’s energy with the magnetic field. It is a straightforward exercise to show that if we write them explicitly as $2 \times 2$ matrices, we have,

$$V_1 = \begin{pmatrix} e^{\beta J} & e^{-\beta J} \\ e^{-\beta J} & e^{\beta J} \end{pmatrix}, \qquad V_2 = \begin{pmatrix} e^{\beta H} & 0 \\ 0 & e^{-\beta H} \end{pmatrix}$$

For instance, we can verify that the matrix element of $V_1$ between two equal spins gives $e^{\beta J}$, and between two different spins gives $e^{-\beta J}$.

Defining these matrices allows us to write the partition function compactly,

$$Z = \sum_{\{\sigma_i\}} \prod_{i=1}^{N} e^{\beta J \sigma_i \sigma_{i+1}} e^{\beta H \sigma_{i+1}} = \sum_{\{\sigma_i\}} \prod_{i=1}^{N} \langle \sigma_i | V_1 | \sigma_{i+1} \rangle \langle \sigma_{i+1} | V_2 | \sigma_{i+1} \rangle$$

where the last step is justified because $V_2$ is diagonal. Now using the completeness relation, we can eliminate all the intermediate spinors, leaving us with,

$$Z = \sum_{\sigma_1} \langle \sigma_1 | (V_1 V_2)^N | \sigma_1 \rangle = \mathrm{Tr}\left[ (V_1 V_2)^N \right]$$

So we have written the partition function as the trace of some matrix $V \equiv V_1 V_2$. If we are able to diagonalize the matrix as $V = P D P^{-1}$, where $D = \mathrm{diag}(\lambda_+, \lambda_-)$ is diagonal, then we have,

$$Z = \mathrm{Tr}\left[ (P D P^{-1})^N \right] = \mathrm{Tr}\left[ P D^N P^{-1} \right]$$

From the cyclic property of the trace, $\mathrm{Tr}[ABC] = \mathrm{Tr}[CAB]$, we have,

$$Z = \mathrm{Tr}\left[ D^N \right] = \lambda_+^N + \lambda_-^N$$

So diagonalizing $V$ will give us the partition function. This is quite straightforward, as we have already given the explicit forms of the matrices $V_1$ and $V_2$ above. Using the cyclic property of the trace, we can equivalently diagonalize $V' = V_2^{1/2} V_1 V_2^{1/2}$, which is symmetric and will turn out to be more convenient.

Now this matrix,

$$V' = \begin{pmatrix} e^{\beta(J+H)} & e^{-\beta J} \\ e^{-\beta J} & e^{\beta(J-H)} \end{pmatrix},$$

is the usual transfer matrix presented in other sources. Calculating its eigenvalues will give,

$$\lambda_\pm = e^{\beta J} \cosh(\beta H) \pm \sqrt{e^{2\beta J} \sinh^2(\beta H) + e^{-2\beta J}}$$
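The simplification behind this closed form can be checked numerically against the generic eigenvalue formula for a symmetric $2 \times 2$ matrix. A quick sketch, with arbitrary illustrative values of $J$, $H$, and $\beta$:

```python
import math

# Check the simplified closed-form transfer-matrix eigenvalues against the
# generic 2x2 symmetric eigenvalue formula.  J, H, beta are arbitrary.
J, H, beta = 1.0, 0.3, 0.7

# Entries of V' = V2^(1/2) V1 V2^(1/2)
a = math.exp(beta * (J + H))   # top-left
d = math.exp(beta * (J - H))   # bottom-right
b = math.exp(-beta * J)        # off-diagonal

# Generic eigenvalues of the symmetric matrix [[a, b], [b, d]]
disc = math.sqrt(((a - d) / 2) ** 2 + b ** 2)
lam_plus_generic = (a + d) / 2 + disc
lam_minus_generic = (a + d) / 2 - disc

# Simplified closed form from the text:
# lambda_pm = e^{bJ} cosh(bH) +- sqrt(e^{2bJ} sinh^2(bH) + e^{-2bJ})
root = math.sqrt(math.exp(2 * beta * J) * math.sinh(beta * H) ** 2 + math.exp(-2 * beta * J))
lam_plus = math.exp(beta * J) * math.cosh(beta * H) + root
lam_minus = math.exp(beta * J) * math.cosh(beta * H) - root

assert abs(lam_plus - lam_plus_generic) < 1e-12
assert abs(lam_minus - lam_minus_generic) < 1e-12
```

The two expressions agree because $\tfrac{1}{2}(a+d) = e^{\beta J}\cosh(\beta H)$ and $\left(\tfrac{a-d}{2}\right)^2 + b^2 = e^{2\beta J}\sinh^2(\beta H) + e^{-2\beta J}$.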

Now that we have the partition function, we can calculate ensemble averages. In particular, we have $\lambda_+ > \lambda_-$. This means in the $N \to \infty$ limit, we can approximate the partition function by just the largest eigenvalue,

$$Z = \lambda_+^N \left[ 1 + \left( \frac{\lambda_-}{\lambda_+} \right)^N \right] \approx \lambda_+^N$$

We can calculate the free energy per particle $f = -\frac{1}{\beta N} \ln Z$,

$$f = -\frac{1}{\beta} \ln \lambda_+ = -\frac{1}{\beta} \ln \left[ e^{\beta J} \cosh(\beta H) + \sqrt{e^{2\beta J} \sinh^2(\beta H) + e^{-2\beta J}} \right]$$

and likewise, the average magnetization per particle $m = -\partial f / \partial H$,

$$m = \frac{\sinh(\beta H)}{\sqrt{\sinh^2(\beta H) + e^{-4\beta J}}}$$

If we specialize to the ferromagnetic case $J > 0$, we can plot the magnetization $m$ as a function of magnetic field $H$ for various temperatures.

We can see that at all temperatures, as $H \to 0$ we have $m \to 0$. Therefore there is no spontaneous magnetization in the 1D Ising model. This is also the case with an anti-ferromagnetic coupling $J < 0$.
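The vanishing of $m$ at zero field can be seen directly from the closed form above. A minimal sketch (the function name and the values of $J$ and $\beta$ are illustrative choices):

```python
import math

# Average magnetization per spin of the 1D Ising chain,
# m = sinh(bH) / sqrt(sinh^2(bH) + e^{-4bJ}), with b = beta.
def magnetization(J, H, beta):
    s = math.sinh(beta * H)
    return s / math.sqrt(s * s + math.exp(-4 * beta * J))

J, beta = 1.0, 2.0  # even at fairly low temperature (large beta)...
assert abs(magnetization(J, 1e-9, beta)) < 1e-6   # ...m vanishes as H -> 0
assert magnetization(J, 10.0, beta) > 0.999       # and saturates at large H
```

At $H = 0$ the numerator $\sinh(\beta H)$ is exactly zero while the denominator stays finite (thanks to the $e^{-4\beta J}$ term), so $m = 0$ at any finite temperature.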

To summarize, we were able to calculate the partition function of the 1D Ising model exactly in the presence of an external magnetic field. We did so by introducing a transfer matrix, and reduced the problem down to solving the eigenvalues of this matrix. We will see that this strategy generalizes to the 2D problem in Part II.

The 2D Ising Mean Field Solution

Now we give an approximate solution to the 2D Ising model by taking a mean field approximation. This is also usually given in an undergraduate class on statistical physics, right after solving the 1D Ising model. It is the easiest way of showing that the 2D Ising model exhibits a phase transition at finite temperature, whereas we just saw that the 1D Ising model does not.

The set-up is quite similar to the 1D case: take $N$ particles arranged in a square lattice (the details of what the lattice looks like are not important right now). Each particle has a spin $\sigma_i$ for $i = 1, \dots, N$, which can take either up or down values $\pm 1$.

The total energy of a particular configuration is given by the sum of the interaction energies between neighboring particles and energies of each spin in the magnetic field. Using the same constants as defined in the 1D problem, we can write this as,

$$E = -J \sum_{\langle i,\, j \rangle} \sigma_i \sigma_j - H \sum_i \sigma_i$$

where $\langle i, j \rangle$ denotes that the sum is over all possible pairs of spins which are nearest neighbors of each other. In a square lattice, each particle has four nearest neighbors (up, down, left, and right). We can write this energy alternatively as,

$$E = -\frac{J}{2} \sum_i \sum_{j \ \text{n. n.} \ i} \sigma_i \sigma_j - H \sum_i \sigma_i$$

where “n. n.” means “nearest neighbors,” so the inner sum runs over the four nearest neighbors $j$ of particle $i$. The factor of $1/2$ comes in to compensate for the double-counting when writing the sum in this way.

Now the mean field approximation means that we replace $\sigma_j$ inside the sum with $m = \langle \sigma \rangle$, the average spin of the particles in our lattice. In general this is not an allowed replacement, as we are taking a dynamic quantity in our partition function and replacing it with its ensemble average. However, this is a good starting place to see what happens. After all, a particle interacting with its neighbors will see, on average, spin $m$ for each of its neighbors.

Thus, in the mean field approximation (MFA), the total energy is now,

$$E_{\text{MFA}} = -qJm \sum_i \sigma_i - H \sum_i \sigma_i$$

where $q$ is half the number of nearest neighbors a particle has (on the square lattice, this is $q = 2$). If we define the quantity $H_{\text{eff}} = H + qJm$, then our total energy takes the very simple form,

$$E_{\text{MFA}} = -H_{\text{eff}} \sum_i \sigma_i$$

This can be interpreted as the energy of non-interacting particles sitting in a magnetic field $H_{\text{eff}}$, which is the original magnetic field with a correction $qJm$ due to the “average magnetic field” created by the neighboring spins.

Now the partition function for non-interacting particles factorizes and can be evaluated easily,

$$Z_{\text{MFA}} = \prod_i \sum_{\sigma_i = \pm 1} e^{\beta H_{\text{eff}} \sigma_i} = \left[ 2 \cosh(\beta H_{\text{eff}}) \right]^N$$

We can calculate the free energy per particle $f$ and average magnetization per particle $m = -\partial f / \partial H$,

$$f = -\frac{1}{\beta} \ln \left[ 2 \cosh(\beta H_{\text{eff}}) \right], \qquad m = \tanh(\beta H_{\text{eff}})$$

This gives us an implicit equation for $m$, since $H_{\text{eff}} = H + qJm$ itself contains $m$,

$$m = \tanh\left( \beta (H + qJm) \right)$$

which can be inverted to give $H$ as a function of $m$,

$$H = \frac{1}{\beta} \tanh^{-1}(m) - qJm$$
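The inversion is a one-line algebraic step, and it can be sanity-checked by a round trip: feeding $H(m)$ back into the self-consistency relation should return the same $m$. A short sketch, with $q = 2$ for the square lattice and arbitrary values of $J$ and $T$ (using $\beta = 1/T$, $k_B = 1$):

```python
import math

# Round-trip check of the inversion H(m) = T * atanh(m) - qJm against the
# self-consistency relation m = tanh((H + qJm)/T).  q = 2 for the square
# lattice; J, T, and the test values of m are arbitrary illustrative choices.
q, J, T = 2, 1.0, 3.0

def field_from_m(m):
    return T * math.atanh(m) - q * J * m

for m in [-0.9, -0.3, 0.0, 0.5, 0.99]:
    H = field_from_m(m)
    assert abs(math.tanh((H + q * J * m) / T) - m) < 1e-12
```

Note that $H(m)$ is single-valued, whereas the forward relation $m(H)$ can have multiple branches at low temperature, which is exactly the point of the plots discussed next.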

Here we use temperature $T$ in lieu of $\beta$ for interpretability (setting units where $k_B = 1$, so $\beta = 1/T$). Now there are two interesting cases which we can plot for ferromagnets ($J > 0$). The left plot shows the graph for $T > qJ$, and the right plot shows the graph for $T < qJ$.

We see that in both cases, at zero magnetic field we have the solution $m = 0$. However, in the case where $T < qJ$, we also have two non-zero solutions $m = \pm m_0$. The value of $m_0$ cannot be given analytically, but it is the positive solution to the equation,

$$m_0 = \tanh\left( \frac{qJ m_0}{T} \right)$$

Nevertheless, we see that we can have spontaneous magnetization at low enough temperature. For the square lattice, we identify $T_c = qJ = 2J$ as the critical temperature for this phase transition. By expanding the above equation for small $m_0$, we can find the scaling near the critical temperature,

$$m_0 \approx \sqrt{3} \left( \frac{T_c - T}{T_c} \right)^{1/2}$$

We have a critical exponent of $1/2$. The plot below shows the magnetization below the critical temperature,

The plot of $m_0$ versus $T$ is continuous, but its derivative is not at the critical temperature. Therefore we have a second-order phase transition.
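Both the spontaneous magnetization and its scaling can be checked numerically by solving the self-consistency equation with fixed-point iteration. This is a sketch under the conventions above ($k_B = 1$, $T_c = qJ$; here $J = 1$, $q = 2$, so $T_c = 2$), with function names of my own choosing:

```python
import math

# Solve the mean-field self-consistency equation m = tanh(Tc * m / T)
# at H = 0 by fixed-point iteration, with Tc = qJ = 2 (J = 1, q = 2).
Tc = 2.0

def spontaneous_m(T, iters=10000):
    m = 0.9  # start from a nonzero guess to land on the m0 > 0 branch
    for _ in range(iters):
        m = math.tanh(Tc * m / T)
    return m

assert spontaneous_m(1.5) > 0.5          # below Tc: spontaneous magnetization
assert abs(spontaneous_m(2.5)) < 1e-3    # above Tc: only m = 0 survives

# Just below Tc, m0 should approach sqrt(3) * ((Tc - T)/Tc)^(1/2)
T = 1.99
predicted = math.sqrt(3 * (Tc - T) / Tc)
assert abs(spontaneous_m(T) / predicted - 1) < 0.05
```

Fixed-point iteration converges here because $|\partial_m \tanh(T_c m / T)| < 1$ near the stable solution; above $T_c$ the iteration collapses to $m = 0$, the only fixed point.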

So using the mean field approximation, we have calculated that the 2D Ising model displays spontaneous magnetization below the critical temperature, with the average magnetization scaling near the critical temperature as $m_0 \sim (T_c - T)^{1/2}$. We will see how this compares to the exact solution in the upcoming parts.

Before we end, note that this mean field approximation can be generalized to higher dimensions — the only thing that changes is the value of $q$. So in the 3D Ising model, we can also predict a phase transition at critical temperature $T_c = qJ = 3J$ (a cubic lattice has 6 nearest neighbors for each site, so $q = 3$) with the same scaling. However, this also means that the mean field approximation predicts a phase transition in the 1D problem (two nearest neighbors, so $q = 1$ and $T_c = J$), which is not the case from our exact solution above. So we should take the results from the mean field approximation with a grain of salt.

Continue on to Part II.

Last edited: 1/2/2018