Humans have survived ice ages and deadly pandemics to become the dominant species on Earth, even if our reign over the planet barely represents a blip in a geological record that has seen countless living organisms come and go. We have adapted to live almost anywhere, and have harnessed the power of nature by splitting atoms and splicing DNA to reshape the world. Yet those same technologies could also doom humanity to extinction if misused.

Can humans survive?

A few doom prophets say no. More experts say yes, but caution that humans must learn to wield technology more wisely to fend off natural threats such as asteroids. Wisdom can also teach humans to avoid destroying themselves with biotechnology or nanotechnology run amok.

"As we move to a civilization that's so much more powerful in terms of controlling nature and manipulating nature, and becoming ever more powerful in our tools and capabilities, there's an inherent risk in that," said Benny Peiser, a social anthropologist and director of the Global Warming Policy Foundation in London, England.

Technology has given humans a better shot at long-term survival today than at any point in their history, Peiser said. He added the cautionary note that humans still face a risky transition before being able to responsibly use such technology.

People once faced extinction at their own hands during the Cold War, when the United States and the Soviet Union pointed hundreds of nuclear weapons at one another and stood ready for mutually assured destruction at a moment's notice. But nuclear weaponry doesn't represent the only double-edged technology around today.

From biotech to nanotech

"The biggest challenges we have are that technology is growing at an exponential rate, which means the ability to fundamentally affect the world and affect large populations has gone from something that only governments can do to something that individuals and small groups can do," said Peter Diamandis, chairman and CEO of the X Prize Foundation.

As a leading innovator and entrepreneur, Diamandis has supported both individual and collaborative solutions for some of the world's greatest challenges through the X Prizes. He noted that technology will enable almost anyone to change the world in the near future, for better or for worse.

"On the positive side, this means small DIY groups can develop and fly ships in space, or develop new medicines or drugs," Diamandis told LiveScience. "On the negative side, these same exponentially growing technologies enable terrorists to do incredibly powerful things."

For instance, digital maps of DNA sequences and ever-cheaper lab equipment could eventually enable so-called garage biologists to design new synthetic organisms that could revolutionize medicine and usher in a new era of clean energy.

But they could also allow individuals to unleash new, deadlier infectious diseases upon the world, such as a reengineered version of the 1918 influenza virus that killed an estimated 50 million people.

Peiser recalled the late science fiction writer Arthur C. Clarke once telling him about the idea of putting a computer chip in people's brains to keep them from turning terrorist, so to speak. But Peiser pointed to political or cultural solutions as the more realistic way to keep technology in check.

"There is no technological fix [for super-tech]," Peiser said.

Biotechnology will pose the greatest challenges during the next decade, according to Diamandis. The subsequent rise of nanotechnology and artificial intelligence, or AI, could raise fresh challenges of its own.

Nanotechnology's focus on manipulating molecules at the tiniest scales has evoked the apocalyptic "gray goo" vision of self-replicating nanomachines running amok, at least in the popular imagination. Yet a likelier challenge may come from AI becoming self-aware and perhaps rivaling humans as a second intelligent species.

Rise of the robots

Military experts do occasionally warn of the so-called "Terminator" scenario, given that thousands of rolling, crawling and flying robots now roam the battlefields. But their concerns reflect a more practical worry: how to keep today's robots from killing the wrong targets because of a malfunction or system error.

Indeed, today's AI falls far short of sentience and still struggles to learn how to perceive and navigate the real world, not to mention detect the social behaviors and emotions necessary for complex interactions with humans.

Most AI outside the lab serves as the specialized brains behind certain technologies found in factories, homes and cars. That partnership may still serve humanity well in tackling future threats posed by rogue individuals.

Better AI could figuratively crawl across the Internet, searching for seemingly unrelated pieces of data that together form a trail to would-be perpetrators such as bioterrorists, Diamandis said. Such systems might also trigger automatic defenses to head off natural or man-made disasters, without relying on an error-prone human.

"We will soon have large sensor networks that are sensing the air and scanning for bacteria and viruses that you might breathe out, identifying those and shutting them down," Diamandis explained, referring to bioweapons.

But if AI does truly become an intelligent rival in the future, humans may end up facing a situation not unlike that of advanced extraterrestrials descending upon Earth, except that the AI might already control the world by default.

They came from outer space

People ranging from science fiction authors to famed British physicist Stephen Hawking have long pondered the idea of Earth at the mercy of aliens. Researchers continue to fiercely debate whether extraterrestrial life is widespread in the universe, or exists at all.

A more certain threat from outer space exists in the form of giant asteroids or comets. One such space rock spelled doom for the dinosaurs, which ruled Earth for more than 160 million years, and scientists say it's only a matter of time before another planet-killer heads for Earth.

A loose coalition of ground- and space-based observatories already watches for incoming danger, even if astronomers still wish for better coverage of the sky. Better instruments placed farther out from Earth could also provide the advance warning needed to prepare a response.

Diamandis remained more concerned about the man-made threats from technologies emerging over the next several decades. But he acknowledged the asteroid threat, and also pointed to the many benefits of humans spreading out beyond Earth.

"When I got a chance to talk with Stephen Hawking, Hawking said [he didn't] think humanity has a future if it doesn't get off the planet because of all the exponential dangers," Diamandis recalled. "I do believe it's a moral imperative for the human race to get off the biosphere."

Clearing the doomsday climate

Escaping the Earth could also ease the strain that energy-hungry humans have placed on the planet. Experts remain divided about whether humans have pushed Earth beyond its environmental and climate tipping points, but at least one scientist predicted last month that humans would go extinct within 100 years.

Frank Fenner, a microbiologist at Australian National University who helped eradicate smallpox, told The Australian that he believed overpopulation, environmental destruction and especially climate change would seal humanity's fate.

His views deviate sharply from those of most experts, who don't view climate change as the end for humans. Even the worst-case scenarios discussed by the Intergovernmental Panel on Climate Change don't foresee human extinction.

"The scenarios that the mainstream climate community are advancing are not end-of-humanity, catastrophic scenarios," said Roger Pielke Jr., a climate policy analyst at the University of Colorado at Boulder.

Humans have the technological tools to begin tackling climate change, if not quite enough yet to solve the problem, Pielke said. He added that doom-mongering did little to encourage people to take action.

"My view of politics is that the long-term, high-risk scenarios are really difficult to use to motivate short-term, incremental action," Pielke explained. "The rhetoric of fear and alarm that some people tend toward is counterproductive."

Searching for solutions

One technological solution to climate change already exists through carbon capture and storage, according to Wallace Broecker, a geochemist and renowned climate scientist at Columbia University's Lamont-Doherty Earth Observatory in New York City.

But Broecker remained skeptical that governments or industry would commit the resources needed to slow the rise of carbon dioxide (CO2) levels, and predicted that more drastic geoengineering might become necessary to stabilize the planet.

"The rise in CO2 isn't going to kill many people, and it's not going to kill humanity," Broecker said. "But it's going to change the entire wild ecology of the planet, melt a lot of ice, acidify the ocean, change the availability of water and change crop yields, so we're essentially doing an experiment whose result remains uncertain."

Others seemed more sanguine about humanity maintaining a happier existence on Earth. For instance, X Prize founder Diamandis expressed confidence about humanity solving its energy and environment issues.

Similarly, social anthropologist Peiser called for a sober assessment of the risks ahead, but also kept an optimistic outlook.

"Obviously we need a little bit of luck in terms of time, where we can have perhaps a couple of centuries to prepare for big comet or asteroid impact," Peiser said. "But apart from that, I think it's really in our hands."