On November 12th, Lawrence Krauss and James Dent wrote a paper, 'The late time behavior of false vacuum decay', that caused quite a kerfuffle in the media.

Most of the paper is quite technical, but on November 22, the New Scientist took a couple of sentences and blew them out of proportion in a story with a far-out title: Has observing the universe hastened its end?

On November 24, Krauss and Dent changed those sentences to something a bit more reasonable.

The bulk of the paper remains unchanged… but nobody ever talked about that part. It’s a cute example of how sensationalism amplifies the least reliable aspects of science, while ignoring the solid stuff.

Details follow…

For the most part, Krauss and Dent’s paper is an unremarkable analysis of what might happen if the universe were in a ‘false vacuum state’ — that is, a state with a bit more energy density than it would have in its ‘true ground state’.

Astronomers believe the Universe contains about a billionth of a joule of ‘dark energy’ per cubic meter. Could we be in a ‘false vacuum state’? If so, the false vacuum could decay into true vacuum! The universe could be unstable! But, it could last a long time.

A similar but much simpler problem is the decay of a radioactive atom. An atom of uranium-238 has a bit more energy before it decays than afterwards. Why doesn’t it decay right away? Because the nucleus must tunnel through a classically forbidden ‘barrier’ before it can shoot out an alpha particle and decay. Metaphorically, the nucleus sits in a dip of an energy landscape, separated by a barrier from the lower-energy region outside.

The nucleus tunnels through this barrier in a random sort of way, with a half-life of about 4.5 billion years. So, if you occasionally look at an atom of this isotope, the chance of it still being U-238 decreases exponentially… but very slowly.
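For concreteness, here's what the exponential law says numerically (a minimal sketch in Python; the half-life figure is the standard one for U-238):

```python
import math

# Survival probability of a single nucleus under an exact exponential
# decay law. The half-life is the standard figure for uranium-238.
U238_HALF_LIFE_YEARS = 4.468e9

def survival_probability(t_years, half_life=U238_HALF_LIFE_YEARS):
    """Probability that the nucleus has not yet decayed after t_years."""
    decay_rate = math.log(2) / half_life  # per year
    return math.exp(-decay_rate * t_years)

# After exactly one half-life the survival probability is 1/2,
# after two half-lives it's 1/4, and so on.
```

So after the roughly 4.5-billion-year age of the solar system, about half of the primordial U-238 is still around.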

In theory, our universe could be in a similar situation. If it ‘decayed’ to a state of lower energy density, that could be really bad. What if the half-life were, say, 15 billion years? Then it might go poof! any day now. Or in just a few billion years!

This is way, way, way down on my list of worries. But, it’s a fun subject for theoretical physics papers.

There are lots of subtleties my simplified description overlooks. For one thing, the ‘decay’ probably wouldn’t happen everywhere at once — we’re talking quantum field theory here, not quantum mechanics. For another, we’re talking about quantum field theory on curved spacetime.

Krauss and Dent focus on another subtlety. I said that radioactive decay proceeds exponentially. But, this is only approximately true! There are some theorems that say there must be deviations from this exponential law. Under some reasonable assumptions, people think the approximate exponential decay switches over to an approximate power-law decay at late times.
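To get a feeling for what such a crossover looks like, here is a toy calculation (all numbers made up, not derived from any Hamiltonian) that locates the time at which a tiny power-law tail would overtake an exponential:

```python
import math

def exponential_part(t, rate=1.0):
    """The familiar exponential decay term."""
    return math.exp(-rate * t)

def power_law_tail(t, coeff=1e-8, power=2.0):
    """A made-up power-law tail; coefficient and exponent are
    purely illustrative."""
    return coeff / (1.0 + t) ** power

def crossover_time(rate=1.0, coeff=1e-8, power=2.0, t_max=100.0):
    """Bisect for the time at which the tail overtakes the exponential."""
    f = lambda t: exponential_part(t, rate) - power_law_tail(t, coeff, power)
    lo, hi = 0.0, t_max
    assert f(lo) > 0 and f(hi) < 0  # tail loses early, wins late
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With these toy numbers the tail takes over after a couple dozen mean lifetimes, long after the exponential has made survival overwhelmingly unlikely, which is one reason the deviations are so hard to see.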

This is a little-known and somewhat controversial fact, mainly because the deviations are very small in practice. As far as I know, they’ve never been seen!

Krauss and Dent give a nice review of these slight deviations and then apply the idea to cosmology — that’s what ‘The late time behavior of false vacuum decay’ means.

They might be right, they might be wrong. Either way, it’s not the sort of thing you’d normally find huge crowds of people arguing about on the internet. You need to understand the Paley–Wiener theorem to really follow this stuff, for god’s sake!

But the last sentence of their abstract was this:

Several interesting open questions are raised, including whether observing the cosmological configuration of our universe may ultimately alter its mean lifetime.

Could just looking at the Universe speed or slow the decay of a false vacuum state? That’d sound completely crazy if you’d never heard of the quantum Zeno effect, where repeatedly observing a quantum system can keep it from changing its state. If you understand this effect, I think you’ll conclude that while it’s not completely crazy, it’s just wrong to worry about human observations of the cosmos affecting the decay of a false vacuum state. For one thing, the relevant concept of “observation” in the quantum Zeno effect is not at all anthropomorphic. The universe was “observing” itself long before we ever showed up.
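The textbook version of the quantum Zeno effect is easy to compute. A sketch, assuming the simplest possible setup (a two-level system undergoing Rabi oscillations, with parameters chosen purely for illustration):

```python
import math

def zeno_survival(total_time, n_measurements, rabi_freq=1.0):
    """Probability that a two-level system undergoing Rabi oscillations
    is found in its initial state by every one of n equally spaced
    projective measurements over total_time.

    Between measurements the survival amplitude is cos(omega*dt/2), so
    each measurement succeeds with probability cos^2(omega*dt/2)."""
    dt = total_time / n_measurements
    p_step = math.cos(rabi_freq * dt / 2.0) ** 2
    return p_step ** n_measurements

# Over half a Rabi period an unwatched system (n = 1) ends up entirely
# in the other state, but frequent measurement 'freezes' it near the
# initial state: the survival probability approaches 1 as n grows.
```

The point is that "measurement" here just means coupling to something that records which state the system is in; no human need be involved.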

Krauss and Dent don’t say much about this, except in the last two sentences of the paper:

If observations of quantum mechanical systems reset their clocks, which has been observed for laboratory systems, then by measuring the existence dark energy [sic] in our own universe have we reset the quantum mechanical configuration of the universe so that late time will never be relevant? Put another way, could internal observations of the state of a metastable universe affect its longevity?

But then the folks at New Scientist got ahold of this and ran with it. Who cares that nobody knows if we’re in a false vacuum state. And forget the quantum Zeno effect! What if observing the universe made the false vacuum decay faster? Then we could blame cosmologists for hastening the end of the Universe!

Now that’s a story! So, the New Scientist published this:

Has observing the universe hastened its end? 22 November 2007

Marcus Chown

Have we hastened the demise of the universe by looking at it? That’s the startling question posed by a pair of physicists, who suggest that we may have accidentally nudged the universe closer to its death by observing dark energy, which is thought to be speeding up cosmic expansion. Lawrence Krauss of Case Western Reserve University in Cleveland, Ohio, and colleague James Dent suggest that by making this observation in 1998 we may have caused the universe to revert to a state similar to early in its history, when it was more likely to end. “Incredible as it seems, our detection of the dark energy may have reduced the life-expectancy of the universe,” says Krauss. […]

Clearly Krauss shares a lot of the blame if he actually said that sentence.

Then folks at newspapers like the Telegraph read the New Scientist article and processed it into something even more sensational. Instead of a question, the headline is now a bald statement of ‘fact’:

Mankind ‘shortening the universe’s life’ By Roger Highfield, Science Editor Forget about the threat that mankind poses to the Earth: our very ability to study the heavens may have shortened the inferred lifetime of the cosmos. [read on - the version you’ll see has been edited to make it less nutty than the original]

Then the blogosphere kicked into action, with Slashdot advertising the story and Peter Woit pouring some much-needed cold water on it.

Krauss got embarrassed, and on November 24 he wrote on Woit’s blog:

Hi. I wanted to chime in with an apology of sorts regarding the confusion in the press regarding our work. Our paper was in fact about late-decaying false vacuum decay and its possible cosmological implications. Needless to say, the explosion of press interest, prompted by the final two sentences of the paper, misrepresented the work, which was not intended to imply causality, but rather to ask the question of whether by cosmological measurements we constrain the nature of the quantum state in which we find ourselves, inferring perhaps that we are not in the late-decaying tail. However, I do take responsibility in part for the flood, as I was undoubtedly glib in talking to the new scientist reporter who read the paper on the arxiv. I have learned that one must be extra careful in order not to cause such misrepresentations in the press, and I should know better. In any case, the last two sentences of the paper have been revised so that it should be clear to the press that causality will not be implied. mea culpa

He changed the last sentence in the paper’s abstract from this:

Several interesting open questions are raised, including whether observing the cosmological configuration of our universe may ultimately alter its mean lifetime.

to this:

Several interesting open questions are raised, including whether observing the cosmological configuration of a metastable universe can constrain its inferred lifetime.

Note how ‘alter’ becomes ‘constrain’. And, he changed the last two sentences in their paper to this:

Have we ensured, by measuring the existence dark energy [sic] in our own universe, that the quantum mechanical configuration of the universe is such that late time decay is not relevant? Put another way, what can internal observations of the state of a metastable universe say about its longevity?

This isn’t terribly clear. But the tempest in the teacup has blown over — apparently with no one the wiser about the actual contents of the paper, or the fascinating issue of deviations from exponential decay.

To begin remedying the last point, here’s an old post of mine on sci.physics.research, written on May 23, 1992:

Carlo Graziani writes:

Exponential decay is in fact a theoretical necessity. It is a generic quantum mechanical feature of problems in which you have a discrete state (e.g. an excited atomic or nuclear state) coupled to a continuum of states (e.g. the atomic or nuclear system in the ground state and an emitted photon flitting around somewhere). There is nothing ad hoc about it. The original paper is Weisskopf & Wigner, 1930, Z. Physik, 63, 54. If you can’t get a translation from German (or don’t speak German), see Gasiorowicz, “Quantum Physics”, 1974 (Wiley), pp. 473–480, or Cohen-Tannoudji, Diu, & Laloe, “Quantum Mechanics”, 1977 (Wiley), vol. 2, pp. 1343–1355.

The essence of the result is the effective modification of the energy of the excited state by a small complex perturbation,

E ↦ E + (dE − iR/2)

where dE is the small radiative energy correction (Lamb shift) and R is the decay rate. The time-dependent phase factor is thus also modified:

exp(−iEt) ↦ exp[−i(E + dE)t] exp[−Rt/2].

This is the source of the decay; probabilities, which go as the square of the amplitudes, will exhibit a time dependence exp[−Rt].

This is indeed the conventional wisdom. Let me begin by saying:

1) I agree that the exponential decay law is backed up by theory in this sort of situation and is far from an ad hoc “curve fitting” sort of thing.

2) The exponential law is apparently an excellent approximation, and as far as I know no deviations from it have ever been observed. Here I am not talking about the (necessary) deviations due to finite sample size. I am talking about deviations present in the limit as the sample size approaches infinity.

3) If you ever wanted someone to actually calculate a decay rate for you, I’m sure Graziani would do a whole lot better job than I would.

What follows has nothing to do with the important job of getting an answer that’s good enough for all practical purposes. It is a matter of principle (my specialty). There’s no real conflict.

Okay. So, Graziani has offered the conventional wisdom, what everyone knows about radioactive decay, that it is a “theoretical necessity”. It’s precisely because this is so well-entrenched that I thought I should point out that one can easily prove that quantum-mechanical decay processes cannot be EXACTLY exponential. There are approximations in all of the arguments Graziani cites.

Let me just repeat the proof that decay processes aren’t exactly exponential. It uses one mild assumption, but if the going gets rough I imagine someone will raise questions about this assumption. It’d be nice to get a proof with even weaker assumptions; I vaguely recall that one could use the fact that the Hamiltonian is bounded below to do so. This is just the proof that Robert Israel gave a while ago (an improved version of mine).

Let ψ be the wavefunction of a “new-born radioactive nucleus”, together with whatever fields are involved in the decay. Let P be the projection onto the space of states in which the nucleus has NOT decayed. Let H be the Hamiltonian, a self-adjoint operator. The probability that at time t the system will be observed to have NOT decayed is

||P exp(−itH)ψ||²

The claim is that this function cannot be of the form exp(−kt) for all t > 0, where k is some positive constant. Just differentiate this function with respect to t and set t = 0.

First, rewrite the function as

⟨exp(−itH)ψ, P exp(−itH)ψ⟩,

and then differentiate to get

⟨−iH exp(−itH)ψ, P exp(−itH)ψ⟩ + ⟨exp(−itH)ψ, −iPH exp(−itH)ψ⟩

and set t = 0 to get

⟨−iHψ, ψ⟩ + ⟨ψ, −iHψ⟩ = 0

Here we are using Pψ = ψ. Since we get zero, the function could not have been equal to exp(−kt) for k nonzero.

That should satisfy any physicist. A mathematician will worry about why we can differentiate the function. This is a simple issue if you know about unbounded self-adjoint operators. (Try Reed and Simon’s Methods of Modern Mathematical Physics, vol. I: Functional Analysis, and vol. II: Fourier Analysis and Self-Adjointness.) For the function to be differentiable it suffices that ψ is in the domain of H. For physicists, this condition means that ‖Hψ‖ < ∞.

[Let me put in a digression only to be read by the most nitpicky of nitpickers, e.g. myself. An excited state ψ, while presumably an eigenvector for some “free” Hamiltonian which neglects the interactions causing the decay, is not an eigenvector for the true Hamiltonian H, which of course is why it doesn’t just sit there. One might worry, then, that the eigenvector ψ of the “free” Hamiltonian is not in the domain of the true Hamiltonian H. This is a standard issue in perturbation theory and the answer depends on how singular the perturbation is. Certainly for perturbations that can be treated by Kato–Rellich perturbation theory any eigenvector of the free Hamiltonian is in the domain of the true Hamiltonian H, cf. Thm X.13, vol. II, R&S.

But I claim that this issue is a red herring, the real point being that any state we can actually prepare has ‖Hψ‖ < ∞. Instead of arguing about this, I would hope that any mathematical physicists would just come up with a theorem with weaker hypotheses.]

As Israel pointed out, this argument shows what’s going on: when you are SURE the nucleus has not decayed yet (i.e., it’s “new-born”), the decay rate must be zero; the decay rate then can “ramp up” very rapidly to the value obtained by the usual approximate calculations.

Physicists occasionally mistrust mathematicians on matters such as these. Arcane considerations about the domains of unbounded self-adjoint operators probably only serve to enhance this mistrust, which is ironic, of course, since the mathematicians are simply trying to avoid sloppiness.

In any event, just to show that this isn’t something only mathematicians believe in, let me cite the paper:

Grotz and Klapdor, Time scale of short time deviations from exponential decay, Phys. Rev. C 30 (1984), 2098–2100.

“In this Brief Report we discuss critically whether such quantum mechanically rigorously demanded deviations from the usual decay formulas may lead to observable effects and give estimates using the Heisenberg uncertainty relation. It is easily seen that the exponential decay law following from a statistical ansatz is only an approximation in a quantum mechanical description. [Gives essentially the above argument.] So for very small times, the decay rate is not constant as characteristic for an exponential decay law, but varies proportional to t. […] Equations (2) and (3) tell us that for sufficiently short times, the decay rate is whatever [arbitrarily - these guys are German] small. However, to make any quantitative estimate is extremely difficult.

Peres uses the threshold effect to get a quantitative estimate for the onset of the exponential decay […] Applying this estimate to double beta decay yields approximately 10⁻²¹ sec, which is much too small to give any measurable effect. [They then go on to argue with Peres.]”

This is all I want to say about this, unless someone has some nice theorems about the allowed behavior of the function ‖P exp(−itH)ψ‖² when H is bounded below and ψ is not necessarily in the domain of H. (This would probably involve extending t to a complex half-plane.)
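The conclusion of this argument is easy to check numerically in a finite-dimensional caricature (made-up parameters: one 'undecayed' state coupled to a band of levels). At short times the decay probability grows quadratically rather than linearly, so the decay rate at t = 0 is zero:

```python
import numpy as np

# One 'undecayed' state (index 0) coupled to a band of N levels: a
# finite-dimensional caricature with illustrative parameters, just to
# see the short-time behavior of the survival probability.
N = 50
energies = np.linspace(-2.0, 2.0, N)
coupling = 0.05

H = np.zeros((N + 1, N + 1))
H[1:, 1:] = np.diag(energies)
H[0, 1:] = coupling
H[1:, 0] = coupling

# H is real symmetric, so one exact diagonalization suffices.
evals, evecs = np.linalg.eigh(H)
psi0 = np.zeros(N + 1)
psi0[0] = 1.0                # the 'new-born' state: P psi = psi
c = evecs.T @ psi0           # psi0 in the eigenbasis

def survival(t):
    """||P exp(-iHt) psi||^2, with P the projection onto index 0."""
    amp = evecs[0, :] @ (np.exp(-1j * evals * t) * c)
    return abs(amp) ** 2

# 1 - survival(t) grows like t^2 for small t: doubling t quadruples
# the decay probability, so the law cannot be exactly exponential.
```

Of course this toy model dodges all the hard analytic questions about unbounded operators; it only illustrates the vanishing of the initial decay rate.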

I sound like quite a showoff in that post — am I still that bad, 15 years older? I hope not. I probably am.

My post was far from definitive. The argument I gave uses the assumption Pψ = ψ — it assumes that at the start of the experiment, we’re sure the atom has not decayed. Do we need that? Also, it treats the situation as a closed system. What if we treat it as an open system? Also, Grotz and Klapdor talk about short-time deviations from exponential decay. What about long-time deviations? How does the Paley–Wiener theorem get into the act?

And: has anyone ever seen deviations from exponential decay for radioactive nuclei or similar systems?

There are lots of interesting questions. You can learn a lot about some of them — though not, alas, the last — from the paper by Krauss and Dent. But you’d never know that from the media kerfuffle.