Ralph, in the movies, cryonics is portrayed in a number of different ways, but the real-life process that you’re working with over at Alcor involves freezing terminally-ill patients in liquid nitrogen in the hopes of reviving them in the future. Can you start us out by explaining this process?

Sure! Alcor currently uses liquid nitrogen to keep patients at a temperature of 77 kelvins (-196 degrees Celsius), which is cold enough that chemical reactions are effectively halted. Basically, once the patient reaches that temperature and is placed in permanent storage in our Scottsdale facility, they are in a form of stasis, and can remain that way unchanged for centuries.
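As a back-of-envelope illustration of the claim that chemistry effectively stops at liquid-nitrogen temperature, a simple Arrhenius estimate can be sketched; the activation energy below is an assumed, representative value for a biochemical reaction, not a figure from the interview:

```python
import math

# Illustrative Arrhenius estimate: how much slower a typical biochemical
# reaction runs at 77 K than at body temperature (310 K). The activation
# energy (50 kJ/mol) is an assumed, representative value.
R = 8.314          # gas constant, J/(mol*K)
EA = 50_000.0      # assumed activation energy, J/mol

def arrhenius_rate_ratio(t_cold_k: float, t_warm_k: float, ea: float = EA) -> float:
    """Ratio of reaction rates k(t_cold)/k(t_warm) for the same reaction."""
    return math.exp(-(ea / R) * (1.0 / t_cold_k - 1.0 / t_warm_k))

ratio = arrhenius_rate_ratio(77.0, 310.0)
print(f"Rate at 77 K relative to 310 K: {ratio:.1e}")
```

For this assumed activation energy the ratio comes out astronomically small (on the order of 10^-26), which is what "effectively halted" means in practice.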

Obviously the important part is minimizing damage to the human organism before and during the cooling process. In the past, freezing an organism created ice-crystals that damaged cellular structure.

It might surprise most people to learn that this really isn’t an issue in today’s cryonic processes, thanks to the introduction of cryo-protectants and ice-blocking agents that suppress ice-formation entirely, meaning that you can now cool the tissue and not form ice, which is known as vitrification. It’s making people take cryonics a lot more seriously…

That brings up an interesting question, because there are some reactions that can proceed even at those temperatures, right?

Actually, down at that temperature, although you could theoretically have chemical reactions going on, you've gone below the glass transition point, so the tissue is locked into a vitreous solid that effectively prevents all reactions. There is still some radiation damage: if you go through the calculations, you find that there's a tiny bit of accumulated damage from background radiation. And since each of our patients is stored in a stainless-steel Dewar, there is no photochemical damage, so those processes are in fact brought to a complete halt.
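The background-radiation calculation he alludes to can be roughed out as follows; the ~2.4 mSv/yr figure is the commonly cited worldwide average natural background dose, and the storage durations are arbitrary examples, not numbers from the interview:

```python
# Rough tally of accumulated background-radiation dose over long storage.
# 2.4 mSv/yr is the commonly cited worldwide average natural background dose;
# the example durations are arbitrary illustrations.
BACKGROUND_DOSE_MSV_PER_YEAR = 2.4

def accumulated_dose_sv(years: float, msv_per_year: float = BACKGROUND_DOSE_MSV_PER_YEAR) -> float:
    """Total accumulated dose in sieverts after `years` of storage."""
    return years * msv_per_year / 1000.0

for years in (100, 300, 1000):
    print(f"{years:>5} years -> {accumulated_dose_sv(years):.2f} Sv")
```

Even over centuries the accumulated dose stays modest, which is why radiation is described here as a tiny, slowly accumulating effect rather than a show-stopper.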

Now for space-applications, the astronauts traveling through space are going to be picking up a little more radiation from cosmic rays than they’d see on Earth, right?

“Bigfoot” Dewar containers at -196 degrees Celsius (Alcor)

Presumably you’d be using cryonic-suspension for interstellar travel, as it doesn’t seem very pragmatic for the shorter time-periods involved with most interplanetary trips. So for interstellar missions, the kind of extended timeframes that you’re talking about would be helped greatly by some kind of suspended animation.

I think one of the issues floating around is that cryonics today is pretty much by definition an experimental protocol, and by that I mean the following: right now we'd like to determine whether or not cryonics works, according to standard established practices, and those practices involve using clinical trials. In other words, if you want to find out whether some treatment works, you try it out and see.

The appropriate clinical trials to evaluate cryonics are very straightforward — you select a number of experimental subjects and preserve them at the temperature of liquid nitrogen using the currently best available protocols. Here’s the part that people have a hard time grasping: you have to wait for future medical technology, because right now the medical technology to revive them doesn’t exist yet.

There's an assumption at the very heart of cryonics that medical technology in maybe 100 years won't simply be an incremental or modest improvement over today's technology, but will instead be a profoundly revolutionary advance. It's this type of quantum leap in our medical capabilities that will give us the ability to reverse even the kind of injury you see in today's cryonic suspensions. As a consequence, at that point in time, we should be able to restore people to good health.

Thus, cryonics faces a problem often encountered when conducting clinical trials: before the trials are complete, people ask you whether or not the procedure will work. This is well known in the case of, say, an AIDS patient trying a new and experimental procedure, but it's posed in a much more severe way in the case of cryonics, because we're waiting for a whole new medical technology in order to reverse the process.

Functionally speaking, can you walk us through a plausible preservation & resuscitation process on a future spacecraft? How would it work, and what kind of steps would be involved, and could it be automated?

A fictional hypersleep chamber in 2001: A Space Odyssey

The type of cryopreservation that would be used on astronauts would likely involve whole-body vitrification, and would likely involve preservation at 145 kelvins. This temperature is significantly warmer than liquid nitrogen, in order to reduce the risk of fracturing, but still incredibly cold by most standards.
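To put the two storage temperatures mentioned in the interview on the same scale, a trivial unit conversion suffices (the 77 K figure for liquid nitrogen is approximate):

```python
# Convert the storage temperatures discussed in the interview to Celsius:
# liquid-nitrogen storage (~77 K) vs. the warmer 145 K point proposed
# to reduce the risk of fracturing.
def kelvin_to_celsius(k: float) -> float:
    return k - 273.15

LN2_K = 77.0            # approximate liquid-nitrogen temperature
INTERMEDIATE_K = 145.0  # warmer storage point to reduce fracturing risk

print(f"{LN2_K:.0f} K  = {kelvin_to_celsius(LN2_K):.1f} C")
print(f"{INTERMEDIATE_K:.0f} K = {kelvin_to_celsius(INTERMEDIATE_K):.1f} C")
```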

The process would begin as cryoprotectants and ice blockers are circulated through the astronaut’s body, and his temperature would then be rapidly lowered. There’s a growing body of research today focused on cryopreserving whole organs in a fully reversible fashion, and the results of that research might some day be used on astronauts.

How long do you see the resuscitation process taking to bring somebody back from being frozen in liquid nitrogen to being fully conscious and capable of performing shipboard functions in another solar system?

A guess at how long this process might take (and it’s only a guess) would be about 24 to 48 hours.

Larry Niven introduced a fictional stasis field into the Ringworld series to allow astronauts to survive damage to the craft, as well as get around the hard G's involved with acceleration and maneuvering in space. Do you think that cryonics might convey a few similar benefits to protect astronauts from radiation, acceleration, braking, and even decompression on interstellar missions?

High-energy space radiation causing damage to cells (windows2universe)

Radiation damage would still occur, whether a patient was cryopreserved or not, and a vitrified patient would likely be just as vulnerable (if not more so) to high g-forces.

The primary benefit of cryopreservation for a long voyage would be to spend the multi-decade trip in suspension, not as a means to avoid damage.

Given the role of molecular nanotechnology in reversing the cryopreservation process, by the time nanotech is advanced enough for missions to other star systems, is it possible that it will have made cryonics completely obsolete? Kurzweil talks about scanning the entire mind and brain into a machine, and using molecular nanotechnology to build the entire person from that blueprint, which in a way would let you “photocopy” astronauts. It could also be a safety measure (by storing a backup) if the craft was destroyed during the mission…

Cryonics is based on the idea that future medical technology will be a quantum leap over present medical technology. In the future, as medical technology approaches the limit of what is possible, this will no longer be the case. The need for cryonics will fade, and the use of cryopreservation will become rare.

Also, we expect to use advanced medical technologies to reverse today’s cryopreservation process because we’re forced to. I don’t think astronauts would use an experimental process that caused significant damage when better preservation technology could avoid the damage in the first place.

We have a lot of NASA readers, and for them I’m wondering if you could give us your professional insight into what they should think about cryonics for space-travel. Is this a future technology worth learning about now, or will it be more of a 22nd century technology than something worth pursuing for spaceflight in the next few decades? If you worked at NASA, would it enter into any of your thoughts on future spaceflight?

Today’s research on better cryopreservation technologies might well prove to be of great use in long space flights — trips that lasted years, decades, or perhaps even longer. I think it would be quite appropriate for NASA to pursue research on cryopreservation as part of an overall research program into interstellar travel, and perhaps for long duration interplanetary trips as well.

It seems like using cryonics on healthy astronauts brings up a whole new set of ethical issues surrounding the process. As I understand it, today’s cryonics is typically used on patients only after they’re considered dead. What are your thoughts on this?

Well yes, basically if you look at the practice of cryonics today, people who go into cryonic suspension are legally dead, and that's a requirement imposed by the social & legal environment. Being legally dead is not considered a good indicator of actual viability, and it should be distinguished from being dead by current medical criteria. You can be legally dead at a point in time when in fact you could be revived by even current medicine, which happens in the case of DNRs, or “Do Not Resuscitate” orders.

Conversely, you can also be legally alive but have no real life at all in any significant sense of the word if the centers of the brain are effectively gone but the body is still breathing. We have to distinguish the standards of today from the definitions of life and death that are going to be used 100 years from now — when we expect that people who are considered dead by today’s standards may indeed be able to be restored to good health.

Oh, I see what you mean. So as the law & technology both progress, people in this state may progress from being legally dead to being considered to be in a truly suspended state?

If you think about it, 200 or 300 years ago people were declared dead if their heart stopped, but nowadays you can call the crash-cart and apply techniques to restore heartbeat. The same is true for using CPR on someone who has stopped breathing. As a consequence, we no longer consider heartbeat or breathing to be reliable indicators of life and death.

Preparation of a deceased patient for cryopreservation (KrioRus)

As time moves forward, and as this medical technology improves, we’ve seen that our criterion for death also changes. This raises the obvious question as to whether there’s a “gold standard” or actual definition of death that’s independent of technology.

I think the answer is yes — and that as time goes by we’ll develop a mature medical nanotechnology to let us reverse all the damage that occurs, if in fact that damage can be reversed at all. Reversibility is that gold standard.

Eventually you start asking yourself rather fundamental questions about what kind of injuries can in principle actually be reversed. When you ask questions like that, you find that in principle almost any damage that still lets you identify the structure can in fact be reversed. So today if your heart stops, that's a functional change, so your heart can be started up again. If the cells in your body stop metabolizing, today we are unable to reverse that injury, but in the future, if a cell isn't metabolizing or if the cell's energy levels have fallen too low, that injury could be reversed by a sufficiently advanced medical technology. As a consequence, that person would not be dead, and therefore should be in a state that could be restored to full health.

Presumably the medical nanotechnology to revive someone is something that will evolve here on Earth, so I’m guessing it’s a safe bet that we’re not going to be launching any sleeper-ships in the near future — at least until we’ve developed technology to revive the passengers. But if this arrives in a few decades, does that make science-fiction’s “sleeper-ship” a real possibility for the 22nd-century?

Well pretty clearly I think this kind of experimental procedure isn't one that we would want to apply to someone who is healthy, and no astronaut would want to be suspended or preserved with current technology because we have no experimental feedback that tells us it's going to work. We think it might work, and we obviously think that if you're facing imminent death it's better to be frozen than not. In that case, you have a better chance of success being cryopreserved than nothing at all, and I personally think that you have a very good chance of success even for people frozen using today's technology. Until the probability of success is 100%, though, you'd want to avoid using this on a healthy person like an astronaut, and that's going to take a rigorous process of development and testing to ensure.

You know, the advances that you’ve described in Alcor’s vitrification process have really renewed my excitement about this technology. I think that most people still believe that the freezing process causes ice-crystals inside your cells that cause them to explode, but it looks like that’s changing, right?

Well, the image of cells bursting from ice inside has always been incorrect. It turns out that when you cool in the absence of a cryoprotectant, ice forms outside the cells and the cells themselves are dehydrated, and it's that dehydration that causes the cellular damage people have heard about.

If you use cryoprotectants and ice-blockers you can cool the tissue and not form ice, a process known as vitrification, which completely eliminates the concerns about ice damage. Alcor has been using these agents for years now, and has recently adopted an entirely new class of cryoprotectants and ice-blockers that suppress ice-formation entirely. This is obviously an outcome that we're very pleased with.

Ice crystals grown in the absence (left) and presence (right) of a cryoprotectant (C&EN)

Eliminating ice-formation is a tremendous breakthrough — I’d always considered this single challenge the biggest obstacle to making successful cryopreservation a reality. Do these recent successes with vitrification mean that the public is beginning to take cryopreservation more seriously?

Well certainly the advent of vitrification has been accompanied by a growing public realization that the level of damage caused by the cryopreservation process has been drastically reduced, and people are starting to show an increasing interest in the process. People have started saying to themselves “oh my, this might actually work,” which is beneficial for cryonics as serious public interest continues to grow.

I’m wondering if some of the recent advances in cryopreservation might not be side-benefits of research in other areas of medical technology, such as work in cryogenically storing and reviving human reproductive cells for use in artificial insemination?

Well certainly the research in vitrification has been carried out by a number of laboratories at this point. I think the person who's been working on it the longest has actually been looking at the vitrification of kidneys for the Red Cross for many, many years. The techniques he developed are published, and Alcor is using those techniques, among others, to provide vitrification.

Since you mentioned kidneys, I'm wondering if Alcor has considered organ donation for neuropreservation patients as a means of offsetting the cost of the cryopreservation process. After all, if only the brain is being cryogenically stored, perhaps organ donation might play another role in paying for the procedure, in addition to the life-insurance payouts currently in place for most Alcor members.

Well basically the cost structure at this point is $150,000 for preservation of the entire body, and if you wish to preserve only the brain, the cost for brain-only neuropreservation is $80,000. These are typically paid for with life insurance, so the actual payments by the individual are quite affordable.

Basically the neuropreservation procedure preserves the entire head, because if you remove the brain from the skull you're going to cause significant damage to the brain. The reason that neuropreservation is used, of course, is that we expect that future medical technology, which will be required anyway to reverse the cryopreservation process, will also be able to regrow any missing or damaged tissue, with the obvious exception of the brain. The brain itself must be preserved, because it contains the core information that really makes us who we are.

Legally, we're operating under the Uniform Anatomical Gift Act, and basically the forms that you sign state, “yes, you're making a donation to Alcor,” so in some sense we are using the ability to donate organs to facilitate the legal process.

Does the need to build a neuropreservation patient a new body mean that they can't be brought back as soon as a full-body cryopreservation patient? I'm assuming that the technology to rebuild a missing body from scratch is more advanced than the technology to simply revive a frozen patient, and I'm wondering how much additional time will be added to the entire process for patients who choose neuropreservation.

Cryopreservation will require nanotechnology to reverse cellular damage

The kind of timeframe involved will be whatever it takes for the development of true molecular nanotechnology, which is the technology that will really be required both to repair cellular damage from the cryopreservation process and to replace damaged or missing tissues throughout the body, including the whole body itself, if that's necessary.

The timeframes involved are going to be on the order of decades, not centuries, so it would be reasonable to assume that within several decades we'll have the technology in place to rebuild tissue and replace missing or damaged tissues.

Of course, it's not entirely clear that we'll be using the biological model for the replacement of tissue. The biological model involves, for example, tissue engineering of some type. We might also use some constructive technique to build certain types of tissue, or indeed we might build an artificial body: a synthetic body that's functionally equivalent but doesn't actually follow the details of biological systems, more of a bionic body. So there are a lot of options at that point, and I'm content to let the future sort out which of the numerous possibilities proves to be the most cost-effective and most desirable.

I think that in today’s world, a lot of people would look at having an artificial body or uploading your consciousness as being a “step down” from the body they were born with, but that by the time technology reaches the point where this is achievable, something on this order could in fact be far preferable to a “normal” human body, which by then might seem antiquated.

There are a lot of possibilities, and this definitely gets into a lot of philosophical issues, but I think it's sufficient to point out that one could expect a technology that could restore people in their full biologically-accepted form, at the very least. Then, if some other technology proves desirable, and after thinking about it for a suitable period of time, people who want to go in that direction would have that option available as well.

This certainly takes us squarely into the realm of molecular nanotechnology, and I guess that the cross-over in terms of your areas of interest and expertise is that cryonics in its present form almost seems to require some form of nanotechnology in order to revive preserved people in the future. What are your thoughts on this?

I think that’s true — obviously you’re looking at quantum leaps in medical technology, and the kind of quantum leap that we’re looking at is the ability to really go in at the molecular and the cellular level and to reverse that damage. That kind of ability is really only available if you have a full-blown nanotechnology to enable that kind of repair. So yes, cryonics is a bet that medical technology is going to be developed in more or less the form that we believe is feasible to repair the damage caused by the preservation process itself.

Well, in terms of nanotechnology, my impression from following the media is that nanotech was really hurt by the dot-com bust a few years ago. Do you think that's the case?

Certainly the dot-com bust reduced the amount of money available for a variety of R&D activities. In general, the dot-com bust wasn’t a good thing for nanotechnology, but I think the level of interest in nanotechnology has been increasing, and is fairly widespread at this point.

Nanotech is a buzzword these days, and there are a ton of commercial products claiming to be nanotech such as lipstick, fabrics, etc. Is it fair to reclassify everything using nanoscale molecules into the “nanotech category”, or is there a hard & fast definition that cuts through some of the marketing hype?

Nanotechnology involves building machines on a molecular level (NNI.gov)

Well yes, this hype really is a problem. Basically, I think the kind of nanotechnology that’s really interesting — the type that was really defined by Eric Drexler many years ago — is now more specifically called “molecular nanotechnology” or “molecular manufacturing”.

This really takes us back to a talk by Richard Feynman, the Nobel Prize-winning physicist, who in 1959 proposed that we should be able to develop a manufacturing technology that would let us arrange atoms in most of the ways consistent with physical law, and I think that definition and image of nanotechnology is really the one that provides all of the high-payoff, long-term objectives and goals. “Nanotechnology” has become a buzzword, of course; as you've already stated, nearly any type of research is now labeled nanotechnology research, and it's not entirely clear that such broad usage of the term is useful in any particular sense.

My impression has been that Eric Drexler & Richard Feynman seem to be competing a bit for the “father of nanotechnology” title. Personally, I think that Drexler should get it, as Feynman made a casual observation about nature rather than developing a focused concept the way Drexler did. However, in your view, which of these gentlemen really deserves the title — or does it belong to someone else entirely?

I think you’re correct. I think that basically, Feynman’s very visionary talk in 1959 was something that we can look back on and be amazed that he even considered the possibility at all, but it is certainly the case that Drexler’s work — and the work of several others subsequently — has gone into much more detail about what such molecular manufacturing systems might look like, what the basic principles and fundamental limitations are, and how they actually might function. Indeed, I think it was Drexler who provided a synthesis of the work of both Feynman and Von Neumann.

As you’ll recall, Von Neumann was talking about manufacturing systems, universal constructors, and manufacturing systems that could function to build other manufacturing systems and things like that.

Well historically speaking, do you think Drexler will end up being considered the true father of nanotech, then?

K. Eric Drexler, author of “Engines Of Creation”

Well I think that's the case. I think that you have to look at this and say, “what's going to be happening, and what will the winning concepts in this emerging technology ultimately be?” One of the core concepts that Drexler describes is positional assembly and the use of this concept at the molecular scale. I think that as that technology comes to fruition, as people find that they can indeed build molecular structures by using positional devices, whether it be a scanning-probe microscope today or some descendant of it in the future, then I think we'll find that Drexler will increasingly be credited, because he basically described how that would work and what the possibilities were.

Speaking of Drexler, I've heard that the recent mainstream interest in nanotech is starting to displace his visionary role, and critics are suggesting that maybe this is the end of the creative vision and the beginning of bureaucracy for the nanotech industry. Do you think this is the case? Does it mean that this is the beginning of nanotech as an industry, or is the field just getting complicated by hangers-on who want to associate themselves with the latest fad technology?

Speaking pragmatically, there is a lot of funding out there for something called “nanotechnology”, and obviously people who want to pursue funding or who are advertising commercial products are going to want to be associated with it. The term has become so widespread that really it’s no longer particularly useful unless you qualify it. In the discussion we’re having, obviously we’re talking about molecular nanotechnology, which gets back to the kind of statements that Feynman and Drexler have made.

The broader concept of nanotechnology includes anything where a critical dimension in its physical size is less than 100 nanometers, and the scope of what that encompasses is really so broad at this point that it’s not entirely useful.

You know, nanotech benefits from a convergence of emerging trends in materials science, computing, communications & optics, medicine, biology, and several other fields of study. It’s interesting that Bill Joy wrote about a nanotech “grey goo” eating up the Earth, because I get the real feeling that nanotech as a science has become so encompassing that it’s eating up our scientific textbooks as well. Is everything nanotech these days?

Yeah, it really is an area where you have to be careful about what it is you mean when you say nanotechnology.

What’s a good degree program for younger people interested in a career in nanotech?

Information on nanotech education paths is available on Prof. Merkle’s website.

That's one of the most frequently asked questions addressed on my nanotechnology webpage. There's really no good answer, other than to suggest that it would be very useful to learn molecular mechanics, which allows you to understand how atoms and molecules interact with each other.

You should also understand basic chemistry, because if you're going to talk about manufacturing out of atoms, and you're rearranging atoms, then it sounds suspiciously like basic chemistry. Beyond that, there are a number of other areas that can be pursued depending on your particular proclivities, abilities, desires, and interests. An in-depth understanding of almost any other area of interest can be something you use to contribute to the development of nanotechnology.

Are any of the degrees being hyped as good choices ones that really won't apply as the field matures? In the early days of computing, everybody pushed assembly language in the educational system, but it seems archaic in today's programming world. Will we see the same thing in nanotech?

It’s hard to say which way the technology’s going to go and which particular bits of information will remain useful, but on the other hand, assembly language is a case where even if you don’t use it, knowing how the machine works is a useful bit of knowledge that helps you understand on a conceptual level what’s going on. So even if we do find ourselves in the future using a set of design rules for nanotechnology to simplify the design process, understanding the basic principles of just how atoms and molecules interact is going to remain valuable for I think a very long time. The world is made out of atoms and molecules, and understanding how they interact and how to use them to build more complex structures is going to remain useful knowledge in the foreseeable future.

Well you’ve made an excellent point about knowing the fundamentals, but I’m wondering if the building blocks themselves might not become larger, modular components as this field matures. Maybe instead of working with atoms the fundamentals will be predesigned objects from nanoassemblers?

I think there will be simplifications in the design and manufacturing process, and certainly one can envision a manufacturing process that uses maybe a couple of dozen basic building blocks, where each of those blocks has hundreds, thousands, or even tens of thousands of atoms in it. In that case, the assembly of those basic building blocks is the primary task, and the integration of those blocks into structures is what gives people the ability to design and build more complex systems. You have a similar kind of layered control of complexity in computer science, where successive levels of abstraction allow you to deal with very complex structures and yet maintain the understandability of the software in a way that lets you deal with it intelligently and reasonably.
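The layered-abstraction idea described above can be sketched as a simple hierarchy, in which a designer composes prefabricated blocks of atoms into larger assemblies without ever touching individual atoms. All of the names and atom counts here are hypothetical, chosen only to illustrate the structure:

```python
from dataclasses import dataclass, field

# Toy sketch of layered design abstraction: a small vocabulary of
# prefabricated building blocks, each hiding thousands of atoms, composed
# into larger assemblies. Names and counts are hypothetical.

@dataclass
class BuildingBlock:
    name: str
    atom_count: int  # atoms hidden inside this block

@dataclass
class Assembly:
    name: str
    parts: list = field(default_factory=list)  # BuildingBlocks or sub-Assemblies

    def atom_count(self) -> int:
        # Total atoms, computed recursively; the designer only ever
        # manipulates whole blocks, never individual atoms.
        total = 0
        for p in self.parts:
            total += p.atom_count() if isinstance(p, Assembly) else p.atom_count
        return total

strut = BuildingBlock("strut", 4_000)
joint = BuildingBlock("joint", 1_500)
truss = Assembly("truss", [strut, strut, joint])   # 9,500 atoms
frame = Assembly("frame", [truss, truss, joint])   # 20,500 atoms

print(frame.atom_count())
```

This mirrors the software analogy in the answer: each level of the hierarchy exposes a small, understandable interface while hiding the enormous complexity below it.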