THE business of gaining understanding of the world about us rarely follows a simple path from A to B. False starts, dead ends and U-turns are part of the journey. Science’s ability to accept those setbacks with aplomb – to say “we got it wrong”, to modify and abandon cherished notions and find new ideas and explanations that better fit the emerging facts – is what gives it incomparable power to make sense of our surroundings.

It also means we must be constantly on our toes. While revolutionary new ideas such as evolution by natural selection, or quantum physics, are once-in-a-generation occurrences, the sands of science are continually shifting in less dramatic ways. In the following, we focus on nine recent examples – a tweak of a definition here, a breaking or weakening of a once cast-iron concept there – that together form a snapshot of that process in action…

The periodic turntable

Atoms don’t always weigh the same

We like to think of the periodic table of the elements as immutable. It isn’t. Its nether regions have for some time been filling up with new elements that physicists have forged from smaller atoms. Now even its more mundane areas, populated by familiar, everyday elements, are undergoing a fundamental change: elements are losing their precisely defined atomic weights.

Atomic weight expresses the average mass of an element’s atoms relative to those of other elements. It is not to be confused with atomic number, the unvarying number of protons found in the nucleus of atoms of a particular element. Atomic weight adds the tally of neutrons to this proton count, and that’s where the problems start: elements may come in different forms, known as isotopes, whose atoms contain different numbers of neutrons.

To reflect that, the guardians of the periodic table, the International Union of Pure and Applied Chemistry (IUPAC), calculated an average atomic weight based on the relative abundances of an element’s natural isotopes. Most hydrogen atoms, for example, have a nucleus that contains a single proton and nothing else, but a very few have one or two neutrons, too, leading to an official atomic weight of 1.00794 – till now.
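That official figure is just an abundance-weighted average of the isotope masses. A minimal sketch, using standard reference values for hydrogen’s two stable isotopes (quoted here purely for illustration):

```python
# Abundance-weighted atomic weight of hydrogen (illustrative values).
# Masses in unified atomic mass units (u), paired with natural abundances.
isotopes = [
    (1.007825, 0.999885),  # hydrogen-1 (protium): one proton, no neutrons
    (2.014102, 0.000115),  # hydrogen-2 (deuterium): one proton, one neutron
]

atomic_weight = sum(mass * abundance for mass, abundance in isotopes)
print(f"{atomic_weight:.5f}")  # prints 1.00794
```

Shift the abundances even slightly – as nature does from sample to sample – and the fifth decimal place of the result shifts with them, which is exactly the problem.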

The problem with this approach, says Tyler Coplen of the US Geological Survey’s Reston Stable Isotope Laboratory in Virginia, is that it perpetuates a misconception. “Teachers are teaching their students that atomic weights are fundamental constants of nature,” he says. They are not: the ratio of the different isotopes of a particular element depends on the processes that created, transported or aggregated the material of which it forms part.

As water vapour circulates through Earth’s atmosphere from the equator to the poles, for example, molecules containing heavier isotopes of hydrogen fall back into the sea earlier. So the average weight of hydrogen atoms tends to be slightly higher in tropical waters than in seas near the poles. For different reasons, the average weight of the carbon atoms in a hydrocarbon called crocetane, seeping through the ocean floor off the coast of Alaska, is 0.01 per cent greater than the periodic table suggests it should be.

A continuous stream of new isotopic measurements meant constantly changing atomic weights. “It was driving us crazy,” says Coplen. And so it is all change. In December 2010 IUPAC stripped 10 of the most troublesome elements – including hydrogen, lithium, boron, carbon, sulphur and nitrogen – of their falsely precise atomic weights. Their weights now come as an upper and lower bound taking into account the spread in isotopic ratios in all known terrestrial samples. Hydrogen, for example, is “H [1.00784; 1.00811]” (Pure and Applied Chemistry, vol 83, p 359).

Some elements will not be affected by this ongoing switch. Fluorine, aluminium, sodium, gold and 17 other elements have only one stable isotope, so their atomic weight really is a constant of nature. And some highly radioactive elements exist too fleetingly for their atomic weights even to be defined.

Confusion over nuclear fission

We’ve built the bomb. We’ve built the reactors that provide us with vast amounts of low-carbon power. If that seems remarkable, it becomes all the more so when you realise that the whole enterprise of nuclear fission is based on a misunderstanding.

This much we thought we knew: when a susceptible element undergoes fission, it will split into roughly equal parts, and if it doesn’t, it is down to “magic” numbers. These numbers spring from an elaborate, but slightly shaky, construction for understanding atomic nuclei. It starts off by imagining a nucleus as a drop of a strangely viscous liquid. When this doesn’t quite deliver the desired results, it adds on “shells” that, like the electron shells envisaged to form an atom’s outer coat, can each hold a certain number of protons and neutrons.

Just as an atom with a full outer electron shell is a peculiarly unreactive noble gas, an outer shell with the right number of protons and neutrons makes a nucleus magically stable. So if an atom doesn’t split in exact halves, it will preferentially split to make a magic nucleus or two.

Last year, these ideas were put to the test at ISOLDE, a facility for making rare radioactive isotopes at CERN near Geneva, Switzerland, where they were used to predict the outcome of fissioning mercury-180. Dividing mercury-180 evenly gives two zirconium-90 nuclei, which just happen to have a magic number of neutrons and an almost magic number of protons. Given all that, says Phil Walker of the University of Surrey in Guildford, UK, to expect exactly that outcome is “a no-brainer”.

Sadly, mercury-180 doesn’t play by the rules. It divides asymmetrically into the distinctly unmagical nuclei ruthenium-100 and krypton-80 (Physical Review Letters, vol 105, p 252502).
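The bookkeeping behind both the prediction and the surprise is easy to lay out. This sketch (the helper function and labels are ours, for illustration) checks that both splits conserve protons and nucleons, and that only the expected symmetric split lands on a magic number from the standard shell-model list:

```python
# Nucleon bookkeeping for the two candidate splits of mercury-180
# (Z = 80 protons, A = 180 nucleons). Magic numbers from the shell model.
MAGIC = {2, 8, 20, 28, 50, 82, 126}

def describe(name, Z, A):
    N = A - Z  # neutron count
    tags = [t for t, n in (("magic Z", Z), ("magic N", N)) if n in MAGIC]
    return f"{name}: Z={Z}, N={N}" + (f" ({', '.join(tags)})" if tags else "")

# Expected symmetric split: two zirconium-90 nuclei
print(describe("zirconium-90", 40, 90))   # N = 50 is magic; Z = 40 is near-magic
# Observed asymmetric split: ruthenium-100 + krypton-80 (no magic numbers)
print(describe("ruthenium-100", 44, 100))
print(describe("krypton-80", 36, 80))

# Both channels balance the books on protons and nucleons:
assert 40 + 40 == 80 and 90 + 90 == 180
assert 44 + 36 == 80 and 100 + 80 == 180
```

Conservation alone, in other words, cannot choose between the two channels – which is why the shell model’s magic numbers were expected to settle it.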

“It’s surprising that a process as basic as fission so obviously does not agree with what is expected,” says Walker. The forgotten factor, the ISOLDE team proposes, is time. As a nucleus splits, it elongates and a neck appears between two lobes. Some nuclei, perhaps, simply cannot reach a symmetrical equilibrium before that neck breaks. But as for what nuclear factors determine that – there, the experts are split.

Hydrogen bonds in a bind

THERE is a reason why ice floats on water, and it is called the hydrogen bond. Whatever that is.

Nobel laureate Linus Pauling thought he knew. In fact, the International Union of Pure and Applied Chemistry (IUPAC), which concerns itself with such things, still bases its official definition on the one that appears in Pauling’s classic 1939 book The Nature of the Chemical Bond.

A hydrogen bond, in this picture, is what forms when a hydrogen atom that is already stably bound into one molecule finds itself attracted to a highly electronegative atom – one like oxygen, nitrogen or fluorine that likes to suck in electrons and turn into a negatively charged ion – elsewhere in the same molecule or in a nearby molecule.

Take good old H2O. The two hydrogen atoms of a water molecule are bound covalently, through shared electrons, to its central oxygen atom. But should a second water molecule come near, the electron orbiting one of the hydrogen atoms can be drawn towards the second molecule’s electron-hungry oxygen.

Ice is less dense than liquid water because, when water molecules are cold and still, weak hydrogen bonds between them keep them consistently at arm’s length. In free-flowing water, however, the bonds are continually breaking and reforming, allowing the molecules to jostle closer together.

That is all fine and dandy. But this traditional picture also implies a strict range of admissible hydrogen-bond strengths, and over the past 40 years reams of evidence have come to light of much weaker bonds, including ones between hydrogen and elements such as carbon that are not very electronegative.

Six years ago, IUPAC set up a committee to clear up the confusion. Its conclusion, set out in a seven-page draft redefinition published last year, is that the hydrogen bond is a far fuzzier entity than we thought. “It is not an interaction with sharp boundaries,” says Gautam Desiraju from the Indian Institute of Science in Bangalore, a member of the IUPAC committee.

This is about more than just semantics, Desiraju says. A new definition will counter a widespread misconception among chemists about when and where hydrogen bonds can occur, and encourage them to consider the bond’s influence in new situations – for example, in allowing organic molecules to form and react in ways never thought possible. Exploring such avenues could help steer us away from our current dependence on toxic and expensive catalysts containing precious metals towards cheaper, greener organic-based alternatives.

Microscopes without frontiers

Microscopes are good, but not that good. The view through them gets prohibitively fuzzy when you try to look at things smaller than half the wavelength of the light used for imaging; for visible light, that is anything below a few hundred nanometres. Many things we would like to see in intimate detail, such as the processes that sustain life, are at far smaller scales than that.
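As a rough illustration of that half-wavelength rule (the full Abbe formula also divides by twice the numerical aperture of the lens; this sketch uses only the simple λ/2 estimate described above):

```python
# Rule-of-thumb diffraction limit: features smaller than about half the
# illuminating wavelength blur together (numerical aperture ignored).
def diffraction_limit_nm(wavelength_nm):
    return wavelength_nm / 2

for colour, wavelength in [("violet", 400), ("green", 550), ("red", 700)]:
    limit = diffraction_limit_nm(wavelength)
    print(f"{colour} light ({wavelength} nm): ~{limit:.0f} nm resolution limit")
```

Even at the violet end of the visible spectrum, then, anything much below 200 nanometres – a typical virus, say – is out of reach of a conventional light microscope.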

We used to think of this “diffraction limit” as a fundamental physical barrier, caused by the bending and spreading out of light waves whenever they encounter an obstacle such as the lens of a microscope. Not any more. The rot started with electron microscopes, which exploit the tiny wavelengths of electrons to image objects just a few nanometres across. Unfortunately, living cells cannot survive being bombarded by electrons. So to expose life’s little secrets, we need to break light’s diffraction limit using light itself.

The near-field scanning optical microscope, invented in 1984, does just that. It harnesses short-lived light waves that form along a material’s surface when it is illuminated. These “evanescent waves” do not have a chance to diffract, and capturing them before they disappear brings the size of the object that can be viewed down to about 50 nanometres. The downside is that to do that, the microscope’s aperture must stick very close to the sample, so you can only see a bit of it at any one time.

A far zippier solution is stimulated emission depletion (STED) microscopy. Laser beams are shot at a sample to produce distinctive patterns of fluorescence with a resolution of just 5 nanometres – only twice the width of a DNA molecule. That works whether the sample is living or dead. “The beauty is you can image anything with STED,” says Garth Simpson of Purdue University in West Lafayette, Indiana.

The cutting edge now is diffraction-busting superlenses made from nano-engineered “metamaterials”, which could exploit evanescent waves while allowing a variable focus over a larger area. But even as we leave the diffraction limit behind, a more formidable barrier comes into view. As we enter the quantum realm, the notorious uncertainty principle, which limits any measurement’s accuracy, threatens to irrevocably blur our sight.

Magnetic north without south

“THERE ARE NO MAGNETIC MONOPOLES”. The garish pink capitals in which the lecturer chalked those words up on the blackboard remain etched in my mind, an indelible memory from my first year as an undergraduate physicist. That was 1997. How the world has changed.

Or not. The cosmic monopole remains as elusive as ever. This freely moving particle, predicted by many grand theories of the universe, is thought to carry a single quantum of magnetic “charge”, rather as an electron carries a single unit of electric charge. As far as we can tell, though, nature only supplies magnetic charges, or poles, in pairs – the inseparable north and south poles of the bar magnets beloved of school science demonstrations, for example. Why, we are not quite sure.

But it turns out we can make our own monopoles (New Scientist, 9 May 2009, p 29). If imbued with a quantum-mechanical property known as spin, individual atoms act as tiny bar magnets with north and south poles. Get the atoms’ polar axes to align, and the material itself becomes magnetic.

Now here’s the trick. At very low temperatures, a class of exotic materials known as spin ices exist in a “frustrated” magnetic state. Their atoms would dearly love to align magnetically, but they are corralled into a tight crystal structure that stops them from doing so – unless, that is, you raise the temperature just a little. That enables a single atom to flip its poles into the right alignment, setting off a domino effect of further flips that can pass through the solid crystal (see YouTube video at bit.ly/j7hcYs). “In all practical senses, that amounts to a freely propagating magnetic charge,” says Steve Bramwell of University College London.

In March this year, he and his colleagues announced that they had managed to store long-lived monopole current in the magnetic equivalent of a capacitor, a first step towards fully fledged “magnetronic” circuitry (Nature Physics, vol 7, p 252). At the moment, such devices remain a curiosity, but that doesn’t mean they won’t be useful in the future, says Bramwell. After all, “for a long time, electricity had no obvious use”.

Einstein’s cosmological fudge

Albert Einstein’s towering reputation is only enhanced by his self-styled biggest blunder. It might not have been a blunder after all.

At stake is the fate of the universe. In 1915, Einstein derived the equations of general relativity that describe the workings of a gravity-dominated cosmos. He added a fudge factor called the cosmological constant to ensure that, in keeping with contemporary tastes, the universe described neither expanded nor contracted. Soon after, though, Edwin Hubble showed that distant galaxies were receding from us, blowing the static universe apart. Einstein reputedly disowned his idea.
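In the standard notation (which the article itself does not use), the fudge enters as a single extra term, conventionally written Λ, on the geometric side of Einstein’s field equations:

$$ R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu} $$

With Λ = 0 the equations are pure 1915 general relativity; a small positive Λ acts as a repulsion that grows with distance, which is what lets it either prop up a static universe or, today, mimic dark energy.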

He might now want to disown the disowning. The discovery in 1998 that very distant supernovae appear to be not just receding but accelerating away from us suggests the presence of a mysterious “dark energy” that counteracts gravity’s pull (The Astronomical Journal, vol 116, p 1009). And it turns out that a good way to reproduce this effect is to add the fudge back into Einstein’s cosmological recipe.

That is not to everyone’s taste, largely because no one knows what dark energy might be. Some cosmologists favour other solutions. If Earth were at the heart of a giant cosmic void, for instance, that too would create the illusion that the distant cosmos is flying away from us. But that would involve abandoning an idea we have held dear for centuries: the “Copernican principle” which says that Earth’s place in the universe is not at all special (New Scientist, 15 November 2008, p 32).

Working out the true story may take some time. But if the evidence collected on these pages is anything to go by, science rarely shies away from slaughtering its sacred cows.

http://www.newscientist.com/special/rewriting-the-textbooks
