I’m a mechanical engineering professor at MIT, and 17 years ago, with my colleagues David Brock, Kevin Ashton and Sunny Siu, I helped launch the research effort that laid some of the groundwork for the Internet of Things. As you might imagine, my life is pretty connected.

A few years ago, before the Nest thermostat, a friend and I wired my house to make it easily controllable. Within a few months, we had dozens of switches, motion sensors and thermostats, all on a network running through wireless routers and the power lines within the house. A computer controlled the lights, turning them on and off when we traveled to make the house look occupied, and ran complex heating schedules in the winter that anticipated the habits of the family. The next step was going to be connecting my home to the Internet.

And then I killed the project.

I realized that anyone could plug into the outlet on my deck and take control of my house.

Although I’m broadly optimistic about the wider potential of the world of networked things to help with everything from the food and medical supply chain to missing-plane searches, one concern has only grown in my mind as it develops and expands into more corners of society: security.

When people talk about security threats in this environment, they tend to use broad terms — Chinese hackers, malicious trolls. As someone closely familiar with the technology, I can be a little more specific about where I see the issue and how it’s different from the Internet security questions we tend to think about.

THE TRUTH IS, the Internet of Things isn’t some futuristic thing. We are already surrounded by hundreds of systems that are “networks of things.” If your car is relatively modern, it has more than 100 sensors, all connected over an internal network. A factory may have thousands of sensors.

The problem isn’t the IoT per se, but the pell-mell rush to build systems any which way. Consider a valve that has been hastily turned into an IoT object. A motor that turns the valve on and off has now been networked, and the plant is connected to the Internet. In principle, there is security in some form to prevent a malicious user from turning the valve on or off. A firewall can reduce the risk of an intrusion into the system. Access control will in principle prevent an unauthorized agent from messing with the valve. Devices inside the network will need to authenticate themselves before their information is accepted into the decision-making process — to ensure that a “Trojan horse” has not been inserted into the system. Encryption will prevent others from overhearing conversations within the system.

All this involves cryptography — which means many cryptographic keys, taking careful inventory of all the systems and diligent “hygiene.” If all goes well, and the protocols being used today — Bluetooth or Zigbee, for example — are bulletproof, then the system will be safe.
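To make the authentication step above concrete, here is a minimal sketch of one common approach — each device signs its messages with a per-device secret key, and the controller verifies the signature before trusting the data. The device name, key registry and message format are all hypothetical, purely for illustration:

```python
import hashlib
import hmac
import json

# Hypothetical registry of per-device secret keys, provisioned at install time.
DEVICE_KEYS = {"valve-17": b"key-provisioned-at-install"}

def sign_reading(device_id: str, payload: dict, key: bytes) -> dict:
    """Device side: attach an HMAC tag so the controller can verify origin."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(key, body, hashlib.sha256).hexdigest()
    return {"device_id": device_id, "payload": payload, "tag": tag}

def accept_reading(message: dict) -> bool:
    """Controller side: reject messages from unknown or tampered-with sources."""
    key = DEVICE_KEYS.get(message["device_id"])
    if key is None:
        return False  # unknown device -- the "Trojan horse" case
    body = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

msg = sign_reading("valve-17", {"state": "open"}, DEVICE_KEYS["valve-17"])
print(accept_reading(msg))          # a genuine message is accepted
msg["payload"]["state"] = "closed"  # tampering in transit
print(accept_reading(msg))          # a tampered message is rejected
```

Even this toy version hints at the "hygiene" problem: every device needs a key, every key must be provisioned, stored and eventually rotated, and one leaked key compromises that device's channel.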

However, building and maintaining such a complex system is difficult. The practices, the procedures and the safeguards aren’t in place yet. Will a security expert accompany every valve technician or electrician on every trip out into the plant?

The underlying challenge is that there are no clear, agreed-upon architectures for building such systems. Your light switch might have one level of encryption and your remote another. One may use Zigbee, another Bluetooth, and yet another Wi-Fi. Bridges to connect across them will abound. Even if independent systems are secure, we will cobble together systems, and the chain will only be as strong as the weakest link.

Besides lack of security, these patchwork systems are creating other problems: They’ll be hard to maintain and hard to upgrade or improve. This is not so different from software today. Many companies have such a patchwork of legacy systems that it is virtually impossible to replace any one without a wholesale replacement of all. Security breaches are merely one symptom, and this will only become a bigger issue in the future.

IT MIGHT SEEM hard to create a secure order out of such a hodgepodge, but in fact solutions can be crafted. The U.S. is terrific at laying out and embracing new procedures, new standards and best practices. This requires cooperation among competitors and would benefit from a clear policy push by the government. Overall, I see three main needs as the Internet of Things grows:

First, there needs to be some agreement on system architecture. The Internet itself, for instance, has a clear hourglass architecture with Internet Protocol at its core. The electric power grid has its own architecture of alternating current with step-up and step-down transformers. Great architectures are simple, modular, decentralized and tolerant of failure. Most importantly, they must enable an easy design metaphor and innovation. The Internet of Things is today an abstract collection of uses and products with little anyone can agree on or, worse yet, disagree on. So everyone does it their own way, often poorly. It is sorely lacking an established paradigm of implementation and use.

Second, we need open standards that reflect the best architectural choices. Today, there are standards for things to talk to one another — but too many of them, and each does a different thing. The result is a series of walled gardens, which at best are individually trustworthy but don’t necessarily work well together. Since my home-automation experiment, I’ve installed a commercial Nest thermostat, which I trust for the time being because it does one thing and does it well — but it is also opaque and difficult to integrate with other systems outside the Nest ecosystem and its chosen partners. I would push for an open standard, rather than a series of private ones; open standards receive the scrutiny and consensus that are necessary for true security.

Third, and most important, the world needs a test bed in which all these best practices can be incubated and perfected. While the first two activities are best handled by industry, the test bed is best created by the government — and this is a huge opportunity. The modern Internet would not have existed without the leadership of ARPA (now called DARPA) in incubating the network with a number of academic institutions, labs and companies. For example, what if the government created an Internet of Things test bed in national parks, with different standards at work to manipulate sensors and actuators for maintenance? Perhaps companies could cooperate to control electronic signage, watering systems, cameras and so on — with an agency or an industrial committee chosen by the government monitoring issues such as security, privacy, and emerging practices and standards. Only the government has the breadth to make something like this happen.

In the quest for a smart, broad-based standard, my own preference is for something I call the “cloud of things.” The idea is to create an avatar for everything and to have the avatars talking to one another. It’s easier to update the way avatars talk than to update the objects themselves, and the cloud is quite robust — we’ve learned how to take things to the cloud directly and securely and braved attack after attack.
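One way to picture the avatar idea is as a cloud-side object that stands in for each physical device, so that other systems talk only to avatars and policy changes happen in the cloud rather than in device firmware. The sketch below is purely illustrative — the device, avatar class and safety rule are all hypothetical:

```python
class Thermostat:
    """Stand-in for a physical device with a fixed, low-level command set."""
    def __init__(self):
        self.setpoint_c = 20.0

    def raw_set(self, value: float):
        self.setpoint_c = value

class ThermostatAvatar:
    """Cloud-side avatar: validation and integration logic live here,
    so updating the rules never requires touching the device itself."""
    def __init__(self, device: Thermostat):
        self._device = device

    def set_temperature(self, celsius: float):
        # Hypothetical safety policy, enforced entirely in the cloud layer.
        if not 5.0 <= celsius <= 30.0:
            raise ValueError("setpoint out of safe range")
        self._device.raw_set(celsius)

    @property
    def temperature(self) -> float:
        return self._device.setpoint_c

avatar = ThermostatAvatar(Thermostat())
avatar.set_temperature(21.5)
print(avatar.temperature)  # 21.5
```

Because every interaction passes through the avatar, upgrading how avatars talk to one another — new protocols, new policies, new partners — is a cloud deployment, not a firmware update to millions of installed objects.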

Whatever the final standard, the key thing is to seed its development now. It is easy for policymakers to look at the private sector's huge enthusiasm, and all the momentum in the tech industry, and assume things will just work themselves out. But if leaders don't think this through, and don't create a framework for it to succeed, there’s a real chance that the full potential of the Internet of Things could be compromised. They need to understand the role government can play — and think as big as the opportunity itself.


