It’s a dark reality of the digital age: Every year, security breaches, privacy hacks, and other forms of cyber warfare grow more frequent and more invasive, and leave more destruction in their wake. Last year, the Sony Pictures hackers didn’t just expose executive and employee emails, medical records, salaries, performance reviews, and Social Security numbers—they claimed to be in sole possession of more than 100 terabytes of stolen data. The hack even triggered an international incident with North Korea that required President Obama to weigh in.

Information security and privacy experts, however, know that hackers’ methods are evolving as fast as technology itself—and that new waves and new forms of attacks are on the horizon. “With so many things that happen, the attacker has an advantage,” says Bruce Schneier, privacy specialist and author of the recent New York Times bestseller Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World. “We have to get used to a future with more exposure. Keeping things private is getting harder and harder.”

Prevention is similarly challenging—that’s the other, darker reality of the digital age. Could Sony have prevented its own disaster? “No,” says Ralph Echemendia, CEO of cyber-security consulting firm Red-e Digital. “There’s no such thing as a completely protected system.”

So what’s worrying our top security gurus these days? What kinds of scenarios look not just alarming but scarily plausible?

The Next-Gen Corporate Hack

The attack on Sony was an unusual one, and not just because of the immediate damage it caused. Marc Rogers, principal security researcher for CloudFlare and director of security for DefCon, the world’s longest-running and largest underground hacking conference, notes that in a hack so thorough, it’s unusual for the people who have infiltrated the system to show their hand. As odd as it may sound, Sony got lucky.

“There’s very little value to the guys involved in doing that,” he explains. “When you dump that much information online, you can’t use it. … If the bad guys had held onto that they could have used that control to deliver a whole generation of malware, and very few people would have been able to do anything about it.”

What does that mean? Picture a situation, Rogers says, in which attackers have taken complete control of a company’s systems, but do so in a way that reveals neither themselves nor their actions: they plant the malware and let it do its thing. With a public company, they could conceivably gain access to sensitive data that could sink the company’s share price if made public.

Here’s another variation on the theme: A piece of software the company publishes could be quietly infected with the hacker’s malware, opening a much broader channel to breach the computers of potentially millions of people. It takes companies 205 days, on average, to identify a breach, according to research from cyber-security company FireEye. That gives hackers plenty of lead time to steal information and inflict damage.

A Black-Market Boom in Medical Data

The move to electronic medical records means better coordination of care among healthcare providers, but it also means that valuable patient data—from the databases of drugstores to those of hospitals and HMOs—are growing more vulnerable to hacks, according to Echemendia.

“You rarely hear healthcare as the focus of the cyber-security industry,” Echemendia says. “With the Sony hack, an entire corporation was taken completely down. Nobody could go to work. If you do that to a hospital, people die.”

Today, security concerns surrounding patient data stem from a growing black market, where hackers hold information for ransom or use it for identity theft. One study, released in February, found that two-thirds of medical identity-theft victims pay—on average, $13,500 per incident—to resolve the theft.

Future hacks could cause lethal, not just financial, damage. Imagine a patient in the ICU, Echemendia explains, hooked up to machines that monitor vitals and communicate with systems that contain results from blood lab tests, X-rays, MRIs, prescription information and physician reports. “You modify something like a decimal point in a drug prescription in a computer system, then you can kill somebody,” he says. “By the time you find out, it’s too late.”

If the motivation for a cyber-terrorist is to directly affect human lives, there’s hardly a more mission-critical system to attack than that of a hospital or healthcare provider.

Robotic Hijacking

The boom in Internet of Things technologies and robotics has exposed one of the biggest and broadest security threats of all—the vulnerability of billions of newly connected devices and machines (each packing a hacker’s primary target, software) to every type of hack imaginable.

Security researchers, for instance, have already been able to hijack a surgical robot, confirming that if this happened during an actual operation, hackers could either render the robot useless or order it to perform procedures of their own choosing, while the surgeon in charge of the procedure, hundreds of miles away, watched helplessly.

Even a hack that falls short of taking full control of the robot could have disastrous consequences, as delays to the surgeon’s actions leave a patient suffering further injury or death. Similarly, while volumes have been written about the snooping potential of drones, the vulnerability of the software powering them means a drone’s owner may have no idea it is being used for nefarious purposes.

In January, corporate security expert Rahul Sasi discussed how he had successfully infected a Parrot AR.Drone 2.0 with malware. “We buy drones from so many countries,” he explained at an industry conference in February. “People buy drones from their neighbors. What if the drone that we bought is back-doored?”

Sasi’s malware was a silent install, meaning the drone’s owner wouldn’t have had any idea it had happened. Once the malware is in place, Sasi says, it gives the hacker complete control of the drone’s flight controls and camera. A hacked drone can then easily be used to spy on pretty much anyone—with complete anonymity for the hackers.

Whether it’s drones or commercial robots, the ability of hackers to gain access to these devices opens up a world of liability questions, with no answers on the near horizon. “In earlier generations of technology, we adopted a contractual approach where we were presented with warnings, clicked ‘yes’ and there was an assumption we were using that code at our own risk,” says Andrea Matwyshyn, Microsoft visiting professor at the Center for Information Technology Policy at Princeton University. “But as more advanced devices use code, the calculation of that risk is dramatically altered. The question of whether legal responsibility needs to be ratcheted up a notch in the software creation process is something we’ll need to address in the near future.”

Echemendia likens this emerging world of privacy and security risk to our relationship with earthquakes. Hacks and attacks occur as naturally as hurricanes or earthquakes, he says, but while we know when a hurricane is coming, how big it is, and how to prepare for landfall, an earthquake always catches us off guard. We’re equally unprepared for hacks, Echemendia says. So rather than being reactive, or even preventive, he argues, resiliency is the key to seeing our way through. “We can’t fully mitigate risk,” he says. “We have to learn to operate even while being hacked.”

Share your thoughts on privacy and security in the comments below. #maketechhuman