I’ve been thinking (and writing) a lot lately about the intersection of hardware and software, and how standing at that crossroads does not fit neatly into our mental models of how to approach the world. Previously, there was hardware and there was software, and the two didn’t really mix. When I tried to describe my thinking to a colleague at work, the best description I could find was that the world is becoming “oobleck,” the mixture of cornstarch and water that isn’t quite a solid but isn’t quite a liquid, named after the Dr. Seuss book Bartholomew and the Oobleck. (If you want to know how to make it, check out this video.)

One of the reasons I liked the metaphor of oobleck for the melding of hardware and software is that it can rapidly change on you when you’re not looking. It pours like a liquid, but it can “snap” like a solid. If you place the material under stress, as might happen if you set it on a speaker cone, it changes state and acts much more like a weird solid.

This “phase change” effect may also occur as we connect highly specialized embedded computers (really, “sensors”) to the Internet. As Bruce Schneier recently observed, patching embedded systems is hard. As if on cue, a security software company published a report that thousands of TVs and refrigerators may have been compromised to send spam. Embedded systems are easy to overlook from a security perspective because at the dawn of the Internet, security was relatively easy: install a firewall to sit between your network and the Internet, and carefully monitor all connections in and out of the network. Security was spliced into the protocol stack at the network layer. One of my earliest projects in understanding the network layer was working with early versions of Linux to configure a firewall with the then-new technology of address translation. As a result, my early career took place in and around network firewalls.

Most firewalls at the time worked at the network and transport layers of the protocol stack, and would operate on IP addresses and ports. A single firewall could shield an entire network of unpatched machines from attack, and in some ways, made it safe to use Windows on Internet-connected machines. Firewalls worked well in their heyday of the 1990s. Technology innovation took place inside the firewall, so the strict perimeter controls they imposed did not interfere with the use of technology.

By the mid-2000s, though, services like Skype and Google apps contended with firewalls as an obstacle to be traversed. (2001 even saw the publication of RFC 3093, a firewall “enhancement” protocol to allow routed IP packets through the firewall. Read it, but note the date before you dive into the details.) Staff inside the firewall needed to access services outside the firewall, and these services now included critical applications like Salesforce.com. Applications became firewall aware and started working around them. Had applications not evolved, it is likely the firewall would have throttled much of the innovation of the last decade.

To bring this back to embedded systems: what is the security model for a world with loads of sensors? (After all, remember that Bartholomew almost drowned in the Oobleck, and we’d like to avoid that fate.) As Kipp Bradford recently pointed out, when every HVAC compressor is connected to the Internet, that’s a lot of devices. By definition, real-time automation models assume continuous connectivity to gather data, and more importantly, send commands down to the infrastructure.

Many of these systems have proven to have poor security; for a recent example, consider this report from last fall about vulnerabilities in electrical and water systems. How do we add security to a sensor network, especially a network that is built around me as a person? Do I need to wear a firewall device to keep my sensors secure? Is the firewall an app that runs on my phone? (Oops, maybe I don’t want it running on my phone, given the architecture of most phone software…) Operating just at the network and transport layers probably isn’t enough. Real-time data is typically transmitted using UDP, so many of the unwritten rules we have for TCP-based networking won’t apply. Besides, the firewall probably needs to operate at the API level to guard against any action I don’t want taken. What kind of security needs to be put around the data set? A strong firewall and cryptography that protects my data in flight are no good if it’s easy to tap into the data store it all reports to.

Finally, there’s the matter of user interface. Simply saying that users will have to figure it out won’t cut it. Default passwords — or even passwords as a concept — are insufficient. Does authentication have to become biometric? If so, what does that mean for registering devices the first time? As is probably apparent, I don’t have many answers yet.

Considering security with these types of deployments is like trying to deal with oobleck. It’s almost concrete, but just when you think it is solid enough to grasp, it turns into a liquid and runs through your fingers. With apologies to Shakespeare, this brave new world that has such machines in it needs a new security model. We need to defend against the classic network-based attacks of the past as well as new data-driven attacks or subversions of a distributed command-and-control system.

If you are interested in the collision of hardware and software, and other aspects of the convergence of physical and digital worlds, subscribe to the free Solid Newsletter.