
Summer is upon us and you are almost certainly planning at least one trip to the beach. This year, as you lie back in the sun, put down your book or magazine and sift the sand through your fingers - and take a moment to reflect upon how much of the world economy is built on the stuff.

I don't mean "built on sand" in a philosophical sense, however true that may be. I'm talking about three technological revolutions that are literally based on sand, one of which is only just beginning and, if it lives up to its potential, has mind-boggling implications.

You've probably already guessed the element at the heart of these revolutions - silicon, the main component of sand.

The original silicon revolution was of course, glass. Man first began to explore its properties a million and a half years ago - that's when our ancient ancestors discovered that obsidian, the almost jet black glass which is sometimes formed when lava cools rapidly, was useful.

Obsidian breaks leaving a very keen edge, so it was good for weapons and tools including, in some ancient cultures, knives used for ritual circumcisions.


But it wasn't until the first civilizations arose in the plains of Mesopotamia that we learned to actually make glass.

The recipe is simple, but must be followed carefully. The main ingredient, silicon dioxide or silica, is everywhere. Three-quarters of the earth's crust is made up of this compound of oxygen and silicon. Silica is the basis of most rocks; the reason they all seem so different is the different processes by which they are formed and the different crystals silica creates with other compounds.

Having got your silica, usually in the form of sand, you heat it until it melts - at about 1,600C. You melt in a little soda ash and a dash of limestone and then cool the mixture fairly quickly.

With any luck - and it has taken 5,000 years to perfect the process - you'll have created an "amorphous solid", which is what glass is.

What that means is that the atoms in glass are locked in place but, instead of forming neat orderly crystals, they are arranged randomly - so glass is rigid, like a solid, but has the disordered arrangement of molecules of a liquid.

Once we'd discovered we could create this incredibly tough-but-see-through stuff there was no stopping us. Think what life would be like without glass windows, windscreens and bottles - and what would the world's scientists do without the lenses in their microscopes or telescopes?

Image caption: Flemish or German miniature depicting a 15th-Century glass-blowing factory

And chemistry is very dependent on glass too, as Prof Andrea Sella of University College London - the king of our chemical sandcastle - was keen to point out when I made my now-ritual visit to his lab.

He ushers me into the university's glass workshop, where the glass beakers and burettes, and test tubes and pipettes, are created by UCL's glassblower-in-chief, John Cowley. But while he inflates a spectacular ball flask over a roaring gas jet, Sella explains that the next silicon revolution was based on a very different form of the element.

We are talking, of course, about the computing revolution driven by microprocessors etched into silicon chips. The silicon in these chips starts as silica that has been stripped of its two oxygen atoms and refined into one of the purest materials on the planet.

Silicon key facts

The word silicon comes from the Latin silex, which means pebble, stone, or flint

Second most abundant element in the Earth's crust

Its atomic structure makes it an extremely important semiconductor

Silica is the main ingredient of glass

Symbol Si, atomic number 14

Source: Encyclopedia Britannica

Sella pulls a silver-grey metallic-looking lump from a box by his side. It is, he tells me with a note of awe in his voice, 99.9999999% pure silicon - the standard level of purity in the microprocessor industry.

But why silicon?

The answer lies in the fact that it is a semiconductor - a substance whose electrical conductivity can be manipulated. Computer chips are in essence tiny assault courses for electrons.

"The whole of the semiconductor industry," explains Sella, "is based on deliberately adding impurities to tweak the behaviour of the silicon. These impurities create tiny obstacles for the electrons to negotiate. You can turn the obstacles on and off to tweak the behaviour of the electrons and vast numbers of these obstacles allow us to do all of the logic functions associated with a computer processor."

The obstacles are known as transistors.
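Sella's point - that switching vast numbers of obstacles on and off is enough to build every logic function a processor needs - can be illustrated with a toy sketch. The Python below is purely illustrative (the function names are mine, not anything from the chip industry): it models a pair of transistors as a NAND switch, then composes the other logic functions from NAND alone.

```python
# Toy model: two transistors in series act as a NAND gate - the output
# is pulled low only when BOTH "obstacles" are switched on.
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

# Every other logic function can be composed from NAND alone,
# which is why on/off obstacles suffice for a whole processor:
def not_(a):     return nand(a, a)
def and_(a, b):  return not_(nand(a, b))
def or_(a, b):   return nand(not_(a), not_(b))

print(and_(True, False))  # False
print(or_(True, False))   # True
```

Real gates are built from complementary pairs of transistors rather than Python functions, but the composition principle is the same.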

The first integrated circuit - as computer chips were originally known - was a relatively simple affair. It was created by an engineer named Jack Kilby at Texas Instruments and demonstrated on 12 September 1958. Kilby's chip was made of germanium, another semiconducting element.

But within months a team led by Robert Noyce at a rival company, Fairchild Semiconductor, created a chip based on silicon. The entire modern computing industry can trace its lineage back to this one chip, though modern chips are millions of times more complex.

Indeed, the miracle of the modern microprocessor is the vast number of transistors the industry has learned to pack on to a tiny wafer of silicon. It's why even tiny devices can have incredible computing power these days.


It was a colleague of Noyce's, Gordon Moore, who first realised how quickly the power of computers would multiply. A few years after that first silicon chip was created, he predicted that the number of transistors on a chip would double roughly every two years.

Even he didn't expect what became known as Moore's Law to hold for more than a couple of decades, but it has - thanks, in good part, to the innovations of the company that he and Noyce founded, Intel, the biggest computer chip manufacturer in the world in terms of revenue.

In the company's in-house museum there is a display that graphically illustrates Moore's law in action. The first chip, produced in 1969, contains 1,200 transistors. By 1972 that had almost doubled to 2,500. It went on doubling and then doubling again - Intel's latest chips have two billion transistors or more packed on to a single tiny chip of silicon - and almost 50 years after he first formulated his law we are still asking how long this incredible miniaturisation can continue.
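For the curious, the compounding behind that museum display can be sketched in a few lines of Python. This is a back-of-envelope idealisation, not Intel's actual roadmap: it takes the figures quoted above (1,200 transistors in 1969) and assumes a strict doubling every two years.

```python
# Moore's Law as a simple exponential, using the article's figures:
# 1,200 transistors in 1969, doubling roughly every two years.
def transistors(year, base_year=1969, base_count=1_200, doubling_years=2):
    """Projected transistor count under an idealised two-year doubling."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# A strict doubling slightly overshoots the ~2,500 actually achieved by 1972
print(f"1972: {transistors(1972):,.0f}")
# ...and, kept up for 45 years, lands comfortably in the billions
print(f"2014: {transistors(2014):,.0f}")
```

The point is not the exact numbers but the shape of the curve: a steady doubling turns thousands into billions within a working lifetime.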

"I've been in the industry long enough to remember when the experts were saying you cannot make devices smaller than 100 nanometres," says Mark Bohr, the man in charge of working out how Intel can pack even more transistors on to even smaller slices of silicon. "Now we are making devices that are 10 nanometres in size and we don't see an end to it yet."

To give you a sense of scale, a human red blood cell is about 4,000 nanometres across.

But he admits that operating at this nano scale does produce weird and fascinating new challenges. One is a phenomenon called "quantum tunnelling". That happens when the circuits are so small that you can't say with any certainty where an electron is - you can only attach a probability to where it might be.

It means electrons "jump" or "tunnel" across all those carefully created obstacles. And this creates all sorts of problems. It can lead to power draining or "leaking" from chips, it can stop your chip working at all, and it can make your chips get very hot.

In response, the computer industry has had to completely redesign the transistor. New materials have been introduced - including the element hafnium - along with new, more complicated layered structures.

But Bohr acknowledges that tackling these challenges isn't delivering the same increases in processing speed we used to see. That's why, as you may have noticed, desktop computers haven't got much better in the last 10 years or so, but you can now cram the same processing power into a much smaller device, like a smartphone.

Jack's chips

Image caption: Jack Kilby's original notebook and the first two integrated circuits, manufactured from germanium

Jack St Clair Kilby came up with the idea of the integrated circuit during the summer of 1958. According to Texas Instruments, most of his colleagues had left for the traditional two-week holiday period, but Kilby "as a new employee with no vacation, stayed to man the shop".

Left to his own devices at work, Kilby decided to try and crack the "tyranny-of-numbers" issue facing the industry

Kilby's first integrated circuit was about half the size of a paper clip - it's now possible to get about 100 million transistors in the same space

The invention led Texas Instruments to win a contract to supply chips for Minuteman intercontinental ballistic missiles

Completely new technologies will be needed, he says, if chips are to continue to shrink. New devices may not be based on the flow of large numbers of electrons - as they are today - but on modulating the spin of electrons, for example.

"If I think ahead 20 years from now," he says, "it's not so much how many transistors you can pack on a chip but how many transistors you can pack into a cubic volume."

Until now all microchips have been circuits etched on to a two-dimensional plane. If it becomes possible to manufacture a three-dimensional chip, that would open up the possibility of far greater connectivity. The chip would function more like a human brain, Bohr says.

But he wonders whether silicon will retain its stranglehold on the world of high technology. There is a lot of research into alternative materials - compounds containing gallium, carbon, molybdenum, indium and arsenic among others. But the low costs and easy availability of silicon mean it may still be the foundation on to which these new elements are deposited.

Silicon also remains the foundation of the final of my three technological revolutions - the only one you probably haven't already guessed. This technology is already one of America's fastest-growing industries, creating tens of thousands of jobs, and it is no coincidence that, like the computer industry, it has been nurtured in Silicon Valley.

The first consumers weren't techno-nerds, however, but a bunch of cannabis-growing hippies, way up in the California hills... They were entranced by solar panels.

The man who turned them on to the sun's power was John Schaeffer, a counter-culturalist who had left San Francisco in the 70s for a hippy commune in the backwoods. Pretty soon he realised his fellow adventurers in alternative lifestyles needed somewhere to buy the shovels and lentils with which new utopias are constructed, so he opened up a general store - and one morning a guy drove up in a Porsche and asked if he was interested in a new line of business. Curious, John walked over to the car.

Image caption: Blocks of silicon being prepared for the manufacture of solar panels

"He pulled out a couple of photovoltaics he'd rescued from the Space Programme," John recounts with a chuckle. "We hooked them up, and pretty soon all these hippies from the woods started filtering into the store and started going crazy about these things."

Those first solar panels were pretty modest by modern standards.

At just nine watts they would barely power even the tiniest torch, and they were pricey - he sold them at $900 a pop - but his customers were rich from the profits of their pot plantations and full of tree-hugging zeal. Within a few weeks he'd sold more than 1,000 panels.

"Before long these executives from Arco Solar, the first solar company down in LA, flew up in Lear Jets in business suits to this hippy general store in Willets find out what was going on," John remembers, still laughing at the ridiculousness of it all.

But unfortunately the rest of the world was neither as wealthy nor as idealistic, and after that initial flurry of interest the development of the industry was to be a long slow slog.

The problem was the first solar panels were just too expensive.

We've already learned how Moore's Law powered an exponential increase in the number of transistors on a silicon chip. Well, another exponential law explains why the price of solar panels has fallen a hundredfold since that man in the Porsche first pulled up outside John's store.

The law was first formulated by an engineer and photovoltaic pioneer called Dick Swanson, who I met in the factory down in Silicon Valley where his company SunPower constructs solar panels.

Swanson explains that, like the computer chip, photovoltaic cells exploit the fact that pure silicon is a semiconductor.

"Silicon has the interesting property that when light hits it, it can knock loose the electrons that are bonding the atoms together," he says. Those electrons are now free to wander around.

They can go from one edge of the wafer to another, he explains. "They have no idea where they are supposed to go, so they wander aimlessly."

It's his job, as a photovoltaic cell designer, to give those electrons a sense of purpose.

"We put materials on the surface of the cell so that if an electron gets close to it, it pulls it out," he says. Every electron the sunlight knocks out leaves what he calls "a hole".

"The whole game in designing an efficient solar cell is to convince the electrons to come out one wire and the holes to come out the other," he says.

And the challenge for engineers is to do that progressively more efficiently and more cheaply.


Swanson flushes with embarrassment when I mention "Swanson's Law". He doesn't want to claim credit, but he was the first to realise just how quickly solar prices might fall.

"If you look at refrigerators, pencils or aeroplanes", he says, "they all tend to come down in cost as you make more of them, because you learn how to drive costs out and you get efficiencies as volumes increase."

Just as Moore's Law forecast an exponential increase in the number of transistors on a chip, Swanson's Law predicts an exponential decrease in the price of solar. He forecast that every time the number of solar cells in the world doubles, the cost of making one falls by 20%.

And this law too has proved remarkably accurate. Prices have fallen like a stone since - from $100 per watt in the 70s, down to less than $1 per watt now.
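That hundredfold fall is exactly what a steady 20% "learning rate" produces. The short Python sketch below checks the arithmetic; it is an idealised model (the function names are mine, and real prices wobble around the trend), not industry data.

```python
import math

# Swanson's Law as stated above: each doubling of cumulative solar-cell
# production multiplies the cost per watt by 0.8 (a 20% fall).
def price_after(doublings, start=100.0, learning_rate=0.20):
    """Price per watt after a given number of production doublings."""
    return start * (1 - learning_rate) ** doublings

# How many doublings does it take to go from $100/W (1970s) below $1/W?
needed = math.log(1 / 100) / math.log(0.8)
print(f"{needed:.1f} doublings")      # about 20.6
print(f"${price_after(21):.2f}/W")    # about $0.92 - under a dollar
```

Twenty-odd doublings of cumulative production is roughly a millionfold increase in installed cells - which is the scale of growth the industry has actually seen since those first hippy rooftops.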

That exponential fall in price is why the solar industry is beginning to really take off now. To use the jargon, solar has reached "grid parity".

What that means is that in sunny places with relatively high electricity costs - like Hawaii, California, Japan or Italy - the cost of supplying a watt of electricity from a solar cell to the electricity grid is now very similar to the cost of generating power from coal, gas or nuclear energy.

And according to Dick Swanson there's room to cut prices still further. He thinks the price per watt could fall to 35 cents.

That means solar could pretty soon deliver cheap, plentiful power without - it almost goes without saying - all the polluting consequences of other forms of electricity generation.

Now that really would be revolutionary - and all thanks to the potential locked away in the sand of your holiday beach.
