I was once put in charge of one of the first Coca-Cola machines on the Internet. This was the late 20th century at MIT, where we thought it was pretty awesome that you could, in theory, make the machine dispense a Coke from your desktop computer without having to walk over to it. (Of course you still had to walk there to pick up your Coke in the end.)

Everybody knew then that it was inevitable that more and more “things” would end up connected to the Internet. Now they are: Fridges, smoke detectors, thermostats, furnaces, cars, light bulbs and toasters are all being networked. The benefits of all this connectivity will probably be greater than our Internet Coke machine. But there’s also a pretty good argument that the Internet of Things is going to be a security and privacy disaster.

It’s not just paranoia — it’s more of a business reality. It is a software truism that anything connected to the Internet needs to be patched regularly, or else it becomes vulnerable to vandals. The lack of regular fixes is already a problem with a $500 smartphone; the maker typically loses interest in supporting it and patching it after maybe 18 months. It’s going to be a much bigger problem with a $50 device or a $5 device that lingers in your house for years. Inevitably, bad guys will have their way with them. So somebody far away will be able to turn on your oven when you’re on vacation. Your lawnmower will be part of a botnet sending spam. The fridge of the future will offer to reorder your preferred groceries, because it’s been scanning the barcodes on everything you put inside. That’s great, until bad guys figure out how to read the barcodes off bottles of antiretroviral drugs and learn who has HIV.

To address the technical issues and make these systems more robust and secure, we have started the Secure Internet of Things Project, a collaboration among Stanford, the University of California at Berkeley and the University of Michigan.

On the policy front, it’s not yet clear what role Washington will be able to play in addressing the upcoming risks. But a good start would be for political leaders and the public to recognize the importance of communications transparency. I’d even suggest that policymakers consider a new consumer right: “the right to eavesdrop on what our Things are saying about us.”

What is communications transparency, and why should Washington start paying attention to it?

Your computer and cellphone are already tracking and sending a lot of data about you, and although you may not realize it, you can listen in. (You can generally do it by installing something called a “root certificate.”) Even though only a few people might do this, their findings benefit us all. For example, in 2012, it was a Stanford graduate student, Jonathan Mayer, who first publicized the fact that Google was circumventing the security system in Apple’s Safari browser to track users across the Web, contrary to Google’s prior statements to consumers. Mayer documented this by reading what Google’s servers were saying to the Safari browser on his computer and what Safari was sending back. Mayer reported this, Google stopped doing it, and the company eventually paid $39.5 million in civil penalties to the federal and state governments. Without the Web’s communications transparency, documenting this security hole — and the fact that it was being exploited by a major advertiser — would have been vastly more difficult.
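What an eavesdropper of this kind actually sees, once a root certificate lets them decrypt their own traffic, is ordinary request text. As a purely hypothetical illustration (the captured request below is invented, not Mayer’s actual data), here is a minimal Python sketch of the kind of inspection involved: parsing one decrypted HTTP request and listing the identifying cookie it carries.

```python
# Minimal sketch: summarize what one captured (already-decrypted) HTTP
# request reveals about you. The request text below is invented for
# illustration; real interception tools do far more than this.

def summarize_request(raw: str) -> dict:
    """Return the request line plus any Cookie header values."""
    lines = raw.strip().splitlines()
    request_line = lines[0]
    cookies = {}
    for line in lines[1:]:
        if line.lower().startswith("cookie:"):
            # Split "Cookie: a=1; b=2" into individual name=value pairs.
            for pair in line.split(":", 1)[1].split(";"):
                name, _, value = pair.strip().partition("=")
                cookies[name] = value
    return {"request": request_line, "cookies": cookies}

captured = """GET /track?event=page_view HTTP/1.1
Host: ads.example.com
Cookie: uid=abc123; session=xyz"""

print(summarize_request(captured))
```

Even this toy summary makes the point: the `uid` cookie is a persistent identifier being sent to a third party, which is exactly the sort of thing a curious owner can only discover if the traffic is inspectable.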

The problem is that today’s Internet of Things devices are different: For the most part, you can’t eavesdrop. Manufacturers are shipping devices as sealed-off products that will speak, encrypted, only with the manufacturer’s servers over the Internet. Encryption is a great way to protect against eavesdropping from bad guys. But when it stops the devices’ actual owners from listening in to make sure the device isn’t tattling on them, the effect is anti-consumer.

It’s likely that bad guys will still learn of bugs they can exploit: Organized criminals and even governments are said to be paying millions of dollars to learn about vulnerabilities in mass-produced products. But good guys will be less likely to find the same bugs and help manufacturers fix them.

Stymieing security research is one result of a lack of communications transparency. Another consequence is to make it difficult for consumers and businesses to buy and install their own independent checks on security and privacy. One such check is called an Intrusion Prevention System: a separate box that acts like a firewall, auditing incoming communications and making sure nobody is telling a device to do bad stuff, and auditing outgoing communications to stop a device from divulging too much. It’s not possible to set up the box if it can’t eavesdrop on what the device is saying.
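The outbound half of such a check is conceptually simple, provided the box can actually read the device’s messages. Here is a toy Python sketch, assuming the traffic has been decrypted and parsed into labeled fields; the field names and the blocklist are invented for illustration, not drawn from any real product.

```python
# Toy outbound audit for an intrusion-prevention-style check.
# Assumes the device's traffic can be decrypted and parsed into a dict;
# the field names and SENSITIVE_FIELDS policy are hypothetical.

SENSITIVE_FIELDS = {"barcode", "audio", "location"}

def audit_outgoing(message: dict) -> tuple[bool, str]:
    """Return (allowed, reason) for one outbound message."""
    leaked = SENSITIVE_FIELDS & set(message)
    if leaked:
        return False, f"blocked: message contains {sorted(leaked)}"
    return True, "allowed"

# A routine status report passes.
print(audit_outgoing({"temperature": 3.5, "door_open": False}))

# A scanned barcode is held back before it leaves the house.
print(audit_outgoing({"barcode": "0123456789012"}))
```

The sketch also shows why encryption-only-to-the-manufacturer defeats this design: if the box cannot parse the messages at all, there is nothing for the policy to inspect.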

At a technical level, my colleagues and I don’t yet have a concrete proposal for exactly how to build and enforce a system of communications transparency. And there are many concerns with the Internet of Things that won’t be solved just by eavesdropping on what our devices are saying. The growing power of machine learning means that companies can infer things about us even if nothing tells them straight out, just by looking for patterns across a whole population of customers. That’s a real concern as data become more pervasive and get saved forever. Meanwhile, the algorithms used to analyze that data will only get more sophisticated over time.

This is part of a much larger battle going on, about the power and perils of Big Data and the cloud: Do the makers of these products really need the ability to collect data from the households of all the device owners and hold on to it? Voice, video, everything I ever put in my fridge — you can learn a lot about me from recording all that and saving it forever. Something I said in my TV’s presence in 2015 could be used against me in 2025. Google already keeps a record of everywhere I’ve been at every moment since I started carrying a smartphone — to their credit, their website lets me review and delete that record if I choose (but some of Google’s competitors aren’t so transparent, and anyway, how do I know it’s really been deleted?). These broader questions of personal data and who should control it may be among the most vexing that our society faces.

Even if we don’t have the answers today, policymakers can start to think about what best practices in the industry might look like. Should there be an “Underwriters Laboratories” that audits the software on Internet of Things devices for prudent security and privacy practices? What should happen when a manufacturer stops supporting a networked device that’s in 30 million homes?

As our policies evolve, it’s important to remember that communications transparency — or the “right to eavesdrop” — has been a big part of the practical success of PCs, smartphones and the Web. Knowing what is being said about you is one of the major checks against security and privacy problems. This ought to be preserved as Internet-connected devices become even more intimately wound into our lives. An Internet of Things where your fridge is telling mobsters about the medicine you just put inside, and you don’t even know about it, would make for a scary future. Let’s not head there.

Keith Winstein is an assistant professor of computer science and, by courtesy, of law, and a Robert N. Noyce Family Faculty Scholar at Stanford University. He is a member of the Secure Internet of Things Project, a collaboration among computer-science and electrical-engineering faculty at Stanford University, the University of California at Berkeley and the University of Michigan. From 2007 to 2010, Winstein was a staff reporter at The Wall Street Journal. Thanks to Jonathan Mayer for reading a draft of this essay.


