One of the difficulties in observing how the immune system goes about its business is that it isn't easy to provide an environment where exactly one immune cell type is in proximity to one specific pathogen. This is compounded by the lack of imaging systems that provide an adequate combination of spatial and temporal resolution.

To overcome this problem, researchers from Harvard Medical School and Massachusetts Institute of Technology have combined forces to produce a solution in the form of a combined optical trapping and imaging system that is optimized for just this sort of problem.

The solution is not that startling, but it represents the kind of legwork that physicists, engineers, and physicians have to do together if we want to turn technologies into useful biomedical research tools. So what exactly has been done? Cut down to its most basic: the researchers took a specific type of microscope that can provide 3D images at 10-20 frames per second, and married it to an optical trap that allows them to pick up individual cells and place them in proximity to each other.

Let's backtrack a little bit and see how an optical trap works. A good laser beam doesn't just have a round shape, it also has a specific intensity or brightness profile. Typically, a profile has a single round spot in the center that is the brightest, and the intensity fades off as we move outwards. This intensity profile looks like the familiar bell curve that many of you will have been forced to face down in statistics lessons.
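That bell-shaped profile is a Gaussian, and it's easy to play with numerically. Here's a minimal sketch; the beam power and width are illustrative values I've picked, not numbers from the paper:

```python
import math

def gaussian_intensity(r, power=0.1, waist=1e-6):
    """Intensity (W/m^2) of a Gaussian beam at radial distance r (m)
    from the beam axis. 'power' (100 mW) and 'waist' (1 micron) are
    illustrative values, not parameters from the paper."""
    i0 = 2 * power / (math.pi * waist**2)  # peak intensity, on the axis
    return i0 * math.exp(-2 * (r / waist)**2)

# Brightest in the center, fading off as we move outwards:
print(gaussian_intensity(0.0))     # peak, on the axis
print(gaussian_intensity(0.5e-6))  # dimmer, half a beam-width out
print(gaussian_intensity(1.0e-6))  # dimmer still, a full beam-width out
```

The `waist` parameter sets how quickly the brightness falls off, which, as we'll see below, is what determines how hard the trap can pull.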

This intensity profile is also what allows light to trap particles. Imagine a cell sitting in water and illuminated by this light beam. Cells are often roughly spherical in nature, so they act as a weak lens, bending the light as it passes through. Since light is made up of a stream of photons that have momentum, changing their direction requires giving them a kick. And, when you kick a photon, it kicks you back, providing a force.
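To get a feel for the size of that kick: a light beam of power P carries momentum at a rate P/c, so even reversing every photon (a perfect mirror) yields a force of 2P/c. A back-of-the-envelope sketch, where the 100 mW beam power and the 1 percent deflection fraction are assumed, typical values rather than figures from the paper:

```python
C = 3.0e8    # speed of light, m/s
power = 0.1  # assumed 100 mW trapping beam (illustrative, not from the paper)

# Upper bound: a perfect mirror reverses all photon momentum -> F = 2P/c
f_max = 2 * power / C
print(f"Perfect-mirror force: {f_max * 1e9:.2f} nN")

# A cell only bends the light slightly, so it deflects a small fraction
# of the beam's momentum flux; assume ~1% for a rough scale.
f_trap = 0.01 * f_max
print(f"Typical trap-force scale: {f_trap * 1e12:.1f} pN")
```

The punchline is that optical forces on cells live in the piconewton range, which happens to be exactly the scale at which cells push and pull on their surroundings.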

So, let's consider two possibilities: the light beam hits the cell dead center, or it hits the cell off-center. When the beam hits dead center, the light is focused by the cell, but the forces that result from bending the beam all act against each other. The end result is that there is no net sideways force on the cell.

However, when the beam is off-center, the bright central part of the beam passes through one side of the cell, while only the dimmer edge of the beam passes through the other side. Bending the bright part produces a bigger kick than bending the dim part, so the forces no longer cancel, and the cell experiences a net force that pushes it toward the center line of the laser beam.
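Both cases boil down to one statement: the sideways (gradient) force follows the slope of the intensity profile, so it vanishes on the axis and points back toward it everywhere else. A toy numerical sketch, with an arbitrary length scale and force prefactor (illustrative values only):

```python
import math

WAIST = 1.0  # beam width, arbitrary units (illustrative)
ALPHA = 1.0  # force prefactor (stands in for the cell's polarizability)

def intensity(r):
    """Gaussian beam profile, peak normalized to 1."""
    return math.exp(-2 * (r / WAIST)**2)

def gradient_force(r, dr=1e-6):
    """Sideways force proportional to the intensity slope dI/dr,
    estimated with a central finite difference. For r > 0 a negative
    value means 'pushed back toward the beam axis'."""
    return ALPHA * (intensity(r + dr) - intensity(r - dr)) / (2 * dr)

print(gradient_force(0.0))  # ~0: dead center, the forces cancel
print(gradient_force(0.5))  # negative: off-center, pushed back toward r = 0
print(gradient_force(-0.5)) # positive: off-center the other way, same story
```

Near the axis this force grows linearly with displacement, which is why an optical trap behaves like a tiny spring holding the cell at the brightest point.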

The cool thing about this is that the laser beam can be moved around and, as long as it doesn't move too fast, the cells will follow it around. Using various tricks, you can trap many different cells, bring them together, and separate them again on time scales of a few milliseconds.
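"Too fast" has a concrete meaning here: the trap can only exert a few piconewtons, while dragging a cell through water costs a Stokes drag force of 6πηav. Setting the two equal gives a maximum dragging speed. A back-of-the-envelope sketch, where the cell radius and trap force are assumed typical values, not figures from the paper:

```python
import math

eta = 1.0e-3     # viscosity of water, Pa*s
radius = 5.0e-6  # assumed cell radius, 5 microns (illustrative)
f_trap = 10e-12  # assumed maximum trap force, 10 pN (illustrative)

# Stokes drag on a sphere: F = 6*pi*eta*a*v. Above v_max the drag
# exceeds what the trap can supply, and the cell falls out of the trap.
v_max = f_trap / (6 * math.pi * eta * radius)
print(f"Maximum dragging speed: about {v_max * 1e6:.0f} um/s")
```

That works out to roughly 100 microns per second, which is plenty: it means a trapped cell can be walked across an entire field of view in well under a second.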

This is exactly what the researchers did. Except, since they only wanted to manipulate two objects, they decided to use one optical trap to capture and control one of them, and simply shift the microscope stage around to control the position of the other cell.

The imaging system was also a fairly standard piece of kit. A confocal fluorescence microscope uses a laser to excite fluorescent light from a sample, which is then imaged using a camera. Using fluorescence instead of scattered light increases the contrast of the image. The "confocal" part refers to a pinhole in front of the camera, which only lets through light that originates from the focal point of the laser beam. This increases contrast further and maximizes the resolution.

Unfortunately, the pinhole also slows down the imaging process, because the laser beam has to be scanned across the sample. To increase the imaging speed, the researchers used a spinning disk of pinholes, which rapidly places at least one pinhole over every point in the image plane, allowing the image to be built up quickly without sacrificing resolution. Even though they wanted 3D images, meaning that they still had to scan the image plane up and down through the cells to obtain depth information, the researchers were able to achieve something like 10-20 frames per second, which is certainly fast enough to watch white blood cells engulf pathogens.
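The 3D frame rate follows from simple arithmetic: a spinning-disk system captures one full 2D plane per camera exposure, so a stack of N planes acquired at camera rate R yields R/N volumes per second. A sketch with assumed numbers (the camera rate and plane counts are illustrative choices, not values from the paper):

```python
def volume_rate(camera_fps, z_planes):
    """3D volumes per second when each volume needs 'z_planes' 2D frames."""
    return camera_fps / z_planes

# e.g. a camera running at 300 2D frames/s, stepping through 15-30 z-planes:
print(volume_rate(300, 15))  # 20.0 volumes/s
print(volume_rate(300, 30))  # 10.0 volumes/s
```

With numbers in that neighborhood, the 10-20 3D frames per second quoted above drops straight out of the division.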

The researchers demonstrated this by moving a pathogen called Candida albicans over to a white blood cell and watching it get engulfed. Nothing new in seeing that, but what was new was that the researchers could choose the pathogen and the white blood cell instead of just watching a big sample and waiting until something interesting happened.

They then went a bit further and coated plastic beads with antibodies (anti-CD3 antibodies, to be precise) that bind a receptor on the surface of T cells. That receptor is part of the signalling system that lets the body know that it is under attack, and T cells form what is called a synapse where the receptor is engaged. This is exactly what the researchers saw when they moved the plastic beads into proximity with the T cells. Indeed, the T cells bound the antibody-coated beads so strongly that they were able to pull the beads right out of the optical trap.

I should point out that there is nothing really new in all this; instead, this is the development of a tool. One of the things that research of this nature highlights is that general-purpose tools are adequate for many jobs, but useless for most specific ones. And the total market for this type of research machine, one combining real-time imaging with position control of cells, is fairly small, so researchers are forced to develop the tools themselves.

Unfortunately, it is hard for physicists and engineers to know which tools to develop, while biomedical researchers are largely unaware of the possibilities afforded by custom engineering. That makes collaborations like these especially important.

PLoS One, 2010, DOI: 10.1371/journal.pone.0015215