by Michael Mohammadi

Clear Lipid-exchanged Acrylamide-hybridized (Anatomically) Rigid Imaging/Immunostaining/In situ hybridization-compatible Tissue-hYdrogel.

Or just CLARITY. Whatever you call it, this newly published research technique from Karl Deisseroth and colleagues at Stanford University will allow scientists to image far deeper into fixed tissue than ever before. The images are absolutely stunning. A YouTube video (below) of the data from the first publication on CLARITY went viral last week, and a feature article appeared in the NY Times as well as on a number of science sites and blogs. Most of these give a brief overview of CLARITY and the implications of the work. Here I will focus on how scientists make things look colorful, why we do it, and what CLARITY has done to make some really cool and exciting pictures. I hope to make scientific imaging, as well as CLARITY, approachable to the non-scientist.

CLARITY first hit the scene in 2012. In October I had a chance to attend the optogenetics workshop that preceded my favorite scientific meeting of the year: the annual meeting of the Society for Neuroscience. (SfN brings some 35,000 of the top researchers in neuroscience to one city, New Orleans in 2012, and under one roof to share new findings, develop new ideas, and imbibe adult beverages with friends and colleagues.) One of the better talks I attended was presented by Dr. Karl Deisseroth, though the topic wasn't dedicated entirely to optogenetics. Instead, he gave a quick teaser of a poster his lab was presenting at the main meeting a few days later. Deisseroth described a "clearing" technique that would remove the opacity of an intact, "fixed" brain, thereby allowing very high-resolution imaging of an entire brain. The images and videos he shared were stunning, and six months later the paper has now been published in the journal Nature. Here I give a general overview of how scientists make brains (and other tissue) look bright and colorful, and why fluorescence imaging, and CLARITY, is so important to research!

The basics of imaging and fluorescence microscopy

Microscopy and imaging techniques are essential in the biological sciences. The use of optics for science dates back to Roger Bacon in the 13th century. The earliest biologists used microscopy and basic tissue staining to learn cell morphology (shapes) and to describe differences in cell types within and between tissue types. Among them, neuroscientist Santiago Ramón y Cajal won a Nobel Prize for his use of Golgi staining to produce amazingly accurate images of neural structures (like the hippocampus seen here). There have of course been many advances in technology since the 13th century, and the history of microscopy and imaging has been well explored and is beyond the scope of this article; I suggest a history of microscopy, a microscope and imaging overview, and the very comprehensive MicroscopyU.

Basic light microscopy is still widely practiced today (including dark field, differential interference contrast, phase contrast, etc.), but more and more researchers are using advanced imaging methods to visualize cells and other cellular components in live and fixed tissue/cells, as well as in whole living organisms (in vivo).

Fluorescence takes place in certain molecules (both found in nature and synthesized in labs) and refers to the absorption of light at one wavelength and the subsequent emission of light at a different, longer wavelength. When electrons absorb the energy of a photon of light they enter an excited state, which is unstable. To return to a more stable state the molecule gives off energy (very rapidly) in the form of a photon of a longer wavelength, and therefore lower energy. This shift to a lower energy state (the Stokes shift seen in the graph) is the basis of fluorescence microscopy. A precise wavelength of higher-energy light (purple line) is used to "excite" a molecule, and the resulting energy transfer produces light in a lower energy state (the longer-wavelength green line).

Roger Tsien and others won a Nobel Prize in 2008 for their groundbreaking work with Green Fluorescent Protein (GFP), which is the most widely used fluorescent probe in science. GFP is excited with light at 488 nm, and when it absorbs photons of that wavelength it emits 525 nm light. Using specific filters we can select what light makes it to our camera, and therefore we can specifically image the molecules that are tagged with GFP. Using a variety of fluorescent probes targeted to different proteins and other molecules allows researchers to look at the interactions of specific proteins, dissect signal transduction pathways, and actually observe changes in cell morphology following specific controlled manipulations. Imaging allows for direct assessment of how things interact both within and between cells.
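To put the GFP numbers above in energy terms, here is a quick sketch (in Python) of the photon-energy arithmetic behind the Stokes shift. The 488/525 nm pair comes from the text; the constants are standard physical values.

```python
# E = h * c / wavelength: shorter wavelength -> higher photon energy.

H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s

def photon_energy_ev(wavelength_nm: float) -> float:
    """Energy of a single photon at this wavelength, in electron volts."""
    joules = H * C / (wavelength_nm * 1e-9)
    return joules / 1.602e-19  # convert J -> eV

excitation = photon_energy_ev(488)  # ~2.54 eV absorbed by GFP
emission = photon_energy_ev(525)    # ~2.36 eV re-emitted as green light

# The emitted photon carries less energy; the difference is given up
# (mostly as heat) while the molecule relaxes before emitting.
assert emission < excitation
```

The gap between the two values is exactly the "shift to a lower energy state" the graph illustrates.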

I won’t go into the numerous ways one can get these fluorescent proteins (FPs) into mouse brains and other tissues/cells; a more in-depth review of the topic can be found here. I will highlight a few ways we actually visualize, or image, the FPs once we have them in our cells or tissue.

Wide-Field Fluorescence Microscopy (WFFM)– This is the most fundamental and widely used (also least expensive) method for fluorescence imaging. WFFM requires only optics (generally a microscope), a light source (Hg-arc lamp, LED, etc.) and a photodetector (usually a very sensitive CCD or sCMOS camera). In this scenario, white light (~300-1000 nm) is passed through an excitation filter (let’s say one that only lets light around 488 nm through). The excitation light illuminates the sample, and the FP (one that is excited at 488 nm) absorbs photons of the 488 nm light, undergoing a Stokes shift and giving off light at 525 nm, or green light. Through a variety of filters in the microscope we can selectively allow light of 525 nm through to the camera, and therefore have a good level of certainty that the light we are seeing is coming from our probe. WFFM has some limitations (poor signal-to-noise ratio in thick samples due to light scattering) and is the lowest-resolution method described here.
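The filter logic above can be sketched as a toy model. The specific passbands here (a 488/10 excitation filter and a 525/50 emission filter) are illustrative values I've chosen, not specs from the post, but they match common GFP filter sets.

```python
# Toy model of wide-field fluorescence filtering: a bandpass filter
# passes only wavelengths within width/2 of its center wavelength.

def passes(wavelength_nm, center_nm, width_nm):
    """True if the wavelength falls inside the filter's passband."""
    return abs(wavelength_nm - center_nm) <= width_nm / 2

# White light from the lamp spans roughly 300-1000 nm ...
lamp = range(300, 1001)
# ... but only light near 488 nm reaches the sample:
excitation = [w for w in lamp if passes(w, 488, 10)]

# GFP re-emits near 525 nm, which the emission filter lets through
# to the camera while blocking stray 488 nm excitation light:
assert passes(525, 525, 50)
assert not passes(488, 525, 50)
```

That last check is the whole trick: because the excitation wavelength is rejected on the emission side, light reaching the camera is very likely to have come from the probe.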

Confocal Microscopy (CM) – To get around some of the limitations of WFFM, confocal microscopy blocks “out of focus” light, allowing it to be used for thicker samples. The fluorescence principles are the same as in wide-field fluorescence, but rather than illuminating the entire sample with light, a single high-power laser spot is rapidly scanned across the specimen. The key to this technique is that a small pinhole is placed between the excitation and emission light paths at a conjugate image plane, which in effect only permits the light coming from the area around each point (in XYZ) to reach the photodetector (confocal microscopy uses photo-multiplier tubes). This technique gives us much better resolution and contrast, allowing for imaging deeper into the tissue, because the pinhole blocks the light that would normally scatter and make imaging difficult. More information on the principles of confocal microscopy can be found here. The Chung et al. (2013) paper uses a confocal microscope to image through very thick (though highly cleared!) tissue.
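For a rough sense of the numbers involved, here is a back-of-the-envelope pinhole calculation, assuming the common convention of setting the pinhole near one Airy Unit (the diffraction-limited spot size projected onto the pinhole plane). The 525 nm emission, 1.4 NA, and 60x magnification are illustrative values, not taken from the post.

```python
# Sketch of confocal pinhole sizing at ~1 Airy Unit (AU).

def airy_unit_um(emission_nm: float, na: float, magnification: float) -> float:
    """Airy disk diameter projected onto the pinhole plane, in microns."""
    # Airy disk diameter at the sample plane: 1.22 * wavelength / NA
    sample_plane_nm = 1.22 * emission_nm / na
    # The image is magnified before it reaches the pinhole; nm -> um
    return sample_plane_nm * magnification / 1000.0

pinhole = airy_unit_um(emission_nm=525, na=1.4, magnification=60)
# ~27 um: a smaller pinhole rejects more out-of-focus light but costs
# signal; a larger one admits more scattered light from above and
# below the focal plane, which is what confocal imaging tries to avoid.
```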

Spinning Disk Confocal Microscopy (SDCM) – Like confocal microscopy, SDCM uses a pinhole-based approach to block the out-of-focus light. But instead of a single pinhole, the laser light passes through a disk that has thousands of pinholes, decreasing the time it takes to image a large area and reducing the likelihood of phototoxicity in live cells and live tissue. The disk pattern itself is called a Nipkow pattern, and a Japanese company called Yokogawa has owned the market on this technology. Their newest version, the CSU-W1, features increased spacing between pinholes, which has improved the ability of SDCM to go deeper into tissue and thicker samples, possibly offering an advantage over traditional point-scanning confocal microscopy. A more technical overview of spinning disk technology may be found here.

Super-Resolution Microscopy (SRM) – The new kid on the block, SRM encompasses a variety of techniques (all with their own acronyms: NSOM, STED, PALM, STORM, SIM, LIMON, SOFI, GSD, SSIM, etc.). These are well beyond the focus of this brief review, but the common goal is to image things that are much smaller than the diffraction limit of normal microscopy (~200 nm), down to tens of nm. These methods tend to use individual cells or groups of cells on a cover slip or in a dish and for the most part haven’t yet been used in thick tissue preparations (though CLARITY might change this for some of the methods). A nice review of super-resolution techniques can be found here. I have a friend who is a “super-res” expert; maybe I can convince him to contribute an overview at some point. For those already using SRM, a variety of papers in Nature and its sister journals may be found here.
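The diffraction limit these methods beat can be estimated from Abbe's classic formula, d = wavelength / (2 × NA). The wavelength and numerical aperture below are illustrative values I've picked, but they show where the ~200 nm barrier comes from.

```python
# Abbe's diffraction limit: the smallest separation a conventional
# microscope can resolve, set by the wavelength and the objective's
# numerical aperture (NA).

def abbe_limit_nm(wavelength_nm: float, na: float) -> float:
    """Smallest resolvable distance for conventional optics, in nm."""
    return wavelength_nm / (2 * na)

# Green light on a high-end 1.4 NA oil-immersion objective:
d = abbe_limit_nm(520, 1.4)
# ~186 nm: no matter how good the lens, features closer together than
# this blur into one. STED, PALM, STORM and friends sidestep this
# limit to resolve structures down to tens of nm.
```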

How tissue is traditionally imaged

A major limitation to whole-tissue or whole-organ imaging is the scattering of light by lipids and other dense molecules. To overcome this, researchers take a whole organ (like a brain), slice it into very thin sections (tens of µm or less), and use those individual slices for labeling with fluorescent probes. The process of slicing brains and other tissue is widely used in science, and a variety of companies sell products (reagents as well as microtomes) that allow for very precise and delicate cutting of the tissue. Once the tissue is sliced and labeled, a researcher can take images of multiple slices and reconstruct the whole organ in 3D. This process can be quite challenging and requires significant time to prepare the slices, take the images, and reconstruct the data, and the result is still somewhat incomplete because of data lost from the sections of the slice that the cutting physically damaged or removed. In an “ideal world”, tissue would be transparent, allowing for imaging through thicker organs without the process of tissue slicing.
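The slice-and-reconstruct workflow above can be sketched with arrays standing in for real microscope images (NumPy assumed; the image sizes are small stand-ins for real camera frames).

```python
# Sketch of the traditional slice-and-reconstruct workflow: each thin
# physical section becomes one 2D image, and stacking the images in
# order rebuilds an approximate 3D volume of the organ.
import numpy as np

# Pretend we imaged 50 sections, each a 128 x 128 fluorescence image
# (tiny stand-ins for real full-resolution camera frames):
slices = [np.random.rand(128, 128) for _ in range(50)]

# Stacking along a new first axis gives a (depth, height, width) volume:
volume = np.stack(slices, axis=0)
assert volume.shape == (50, 128, 128)

# The catch the post describes: tissue damaged or lost at each cut
# leaves gaps between sections, so the reconstruction is never quite
# the whole organ -- which is exactly what CLARITY avoids.
```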

CLARITY- an “ideal world”

(This section is a bit more technical, I welcome questions and refer you to the paper)

Stanford University neuroscientist/psychiatrist Karl Deisseroth, MD/PhD (yes, the same Deisseroth of optogenetics) and his colleagues set out to discover ways to make tissue clear, reducing scattering and allowing for imaging of whole, intact brains and other organs. This idea wasn’t unprecedented in the imaging field (“Scale”, for instance, had some success), but no single method could allow for imaging of very thick tissue (500 µm to 4 mm) without some type of sectioning.

Chung et al. first infused the tissue with hydrogel monomers, formaldehyde, and other temperature-sensitive chemicals (Step 1) to cross-link the tissue as well as to link nucleic acids, proteins, and other biomolecules (essentially stabilizing the important stuff in the tissue so that it stays in place). Next, they thermally induced polymerization of the monomers (at 37 degrees C), which stabilized the linked structure (Step 2), essentially making a hydrogel lattice to which only the biomolecules were tightly bound (this basically means they use temperature to make everything stick together firmly while excluding the “not-as-relevant” stuff like lipids). Finally, they used electrophoresis to get rid of the lipids and other unwanted molecules (pulling away the lipids and other “junk”). In a nutshell, they replaced the lipids in the tissue with a hydrogel, so they could maintain structure without all the mess! The result? Clear brains!

Importantly, CLARITY also makes the tissue permeable to macromolecules like antibodies, allowing fluorescent probes to reach their specific targets. To the right we see a mouse brain before clearing (a), after CLARITY (b), and finally a brain that was processed with CLARITY and subsequently tagged with GFP (c).

What Chung et al. have accomplished is game-changing. CLARITY not only produces tissue that is clearer than previous techniques could achieve, but the authors argue that it also addresses the significant protein loss that has been reported elsewhere. In Figure 3 they show that, comparing CLARITY to other clearing methods, they observe ~8% protein loss versus >24% with the other methods.

Chung et al. go on to show the technique applied to human brain tissue (in 500 µm sections), specifically the cortex of a 7-year-old child who had autism. They very clearly show that they can follow an individual neuron through multiple layers of the cortex (Figure 5-n), and their data suggest that the technique could be essential in tracing neurons that may be implicated in disease states.

The artistic 3D reconstructions performed by Chung et al. used the industry-standard Bitplane Imaris software (which, in full disclosure, I’m proud to say is part of the Andor group!). Click here for more information on Imaris or to demo the software yourself!

Final thoughts

CLARITY will inevitably change the way scientists look at their research, providing a much-improved tool for looking deep into thick tissue and whole organs. For the first time, neuroscientists will be able to image the locations of tens of thousands of neurons, identifying different subtypes while analyzing their connections. The best part about CLARITY may be the fact that, as with his work in optogenetics, Karl Deisseroth has launched a website dedicated to sharing the technique freely with any scientist who would like to try it (http://clarityresourcecenter.org). I look forward to seeing how CLARITY changes the imaging field and am excited for the contributions the technique is sure to make in our understanding of the structure, function, and disease states of the brain and other major organs.

Cheers!

References

Chung K, Wallace J, Kim S-Y, Kalyanasundaram S, Andalman AS, Davidson TJ, Mirzabekov JJ, Zalocusky KA, Mattis J, Denisin AK, Pak S, Bernstein H, Ramakrishnan C, Grosenick L, Gradinaru V, Deisseroth K. Structural and molecular interrogation of intact biological systems. Nature, April 10, 2013. DOI: 10.1038/nature12107.