Every time a major Internet-connected product is released, we come back to the debate over security vs. convenience. The progression of arguments goes something like this:

One group expresses outrage/skepticism/ridicule, arguing that the product doesn't need to be connected to the Internet;

Another group argues that the benefits outweigh the risks and/or that the risks are overblown;

News stories appear on both sides of the issue, and the debate soon dies down as people move on to the next thing; and

Most users are left wondering what to believe.

As a security researcher, I often wonder whether the conveniences offered by these Internet-connected devices are worth the potential security risks. To meaningfully understand the nuances of this ecosystem, I consciously made these devices a part of my daily life over the past year. One thing immediately stood out: there seems to be no proper mechanism to help users understand the ramifications of the risk/reward tradeoffs around these commonly used “personal” Internet-connected devices, which makes it difficult for them to form any effective understanding of their risks. I pointed out the same in a recent CNN Tech article about Amazon Key, where I also said:

A simple rule of thumb here could be to visualize the best case, average case, and worst case scenarios, see how each of those affects you, and take a call on whether you are equipped to deal with the fallout, and whether the tradeoffs are worth the convenience.
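To make the rule of thumb above concrete, here is a deliberately crude sketch of what that reasoning might look like if written down. Everything here is hypothetical and illustrative: the scenario names, the 0–10 impact scale, the `max_acceptable_impact` threshold, and the self-assessed numbers are all made up for the example, not part of any real methodology.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str          # e.g. "best case", "average case", "worst case"
    impact: int        # 0 (no harm) .. 10 (catastrophic), self-assessed
    recoverable: bool  # could you realistically deal with the fallout?

def worth_the_convenience(scenarios, convenience, max_acceptable_impact=7):
    """Crude heuristic: the device is 'worth it' only if no scenario is
    both high-impact and unrecoverable, and the perceived convenience
    (also self-rated 0..10) at least matches the worst impact absorbed."""
    if any(s.impact > max_acceptable_impact and not s.recoverable
           for s in scenarios):
        return False
    worst = max(s.impact for s in scenarios)
    return convenience >= worst

# Example: an Internet-connected door lock, with made-up self-assessments.
scenarios = [
    Scenario("best case: remote access works flawlessly", 0, True),
    Scenario("average case: occasional outages lock me out", 3, True),
    Scenario("worst case: credentials stolen, home burgled", 9, False),
]

print(worth_the_convenience(scenarios, convenience=6))  # False: the worst case is a dealbreaker
```

The point of the sketch is not the arithmetic but the shape of the exercise: the hard part, as the rest of this post argues, is that users have no principled way to produce those impact numbers in the first place.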

Without knowing a user’s specific needs, this is probably as close as it gets to any sort of “useful advice” any security professional could give. But this is still only a semi-useful platitude, because it doesn’t answer a very important question:

How could users meaningfully determine what the best case, average case, and worst case scenarios are, without truly understanding the ramifications of the security/convenience tradeoffs they make?

It turns out that we need to answer a few other questions before we can even get to this seemingly obvious one. And these other questions are often not obvious themselves. So until we figure out what they are and what their answers could be, I'm afraid the best any security professional can do is offer semi-useful platitudes like the one I gave.

Well, semi-useful platitudes suck. But this is also a broad and complicated problem. Given its scope and complexity, I'll address it in three parts: in the first part, I define what exactly we are trying to solve for, and how Personal Threat Models are pertinent to the solution. In the second part, I show how Personal Threat Models currently work, why they are inadequate to solve our (now clearly defined) problem, and what needs to change. In the third part, I discuss how we could rethink our approach to Personal Threat Models so that we can perhaps offer something more than semi-useful platitudes.

IoT Risk and an Undying Debate

Irrespective of how they are marketed, smart devices like Amazon Echo, Amazon Key, Google Home, etc. are "Lifestyle Products" aimed at improving convenience—how necessary these products are depends on how meaningfully they integrate into one's lifestyle.

Hence, whether it is "worth" compromising some security/privacy to reap the conveniences offered by these products is a very personal and subjective decision. In some cases there is genuine improvement to one's quality of life (e.g. voice assistants are quite useful for people with certain disabilities, and the convenience outweighs the privacy concerns for most people in this context), but in other cases, these Internet-connected products just add to the number of avenues that could be used to compromise one's security (these “avenues” are formally called attack vectors).

So how do we decide what products are "safe?" In other words, what is "acceptable risk" in the tradeoff between security and convenience? Also, "safe," "trust," "risk," etc. mean different things to different people. How do we even define/formalize these terms?

Clearly, there are no "right" (or standard) definitions here, but until we decide what these terms should mean in this context, we will keep coming back to the same debate every time a new Internet-connected product is released.

Further, the Internet of Things (IoT) ecosystem includes a broad variety of devices and device-systems, such as power plants, vehicles, and home appliances. Risk assessment in the IoT ecosystem is fairly complicated owing to, among other things, the heterogeneity of the underlying platforms (which gives rise to ecosystem-specific challenges w.r.t. data management, authentication/authorization protocols, etc.).

Given this diversity, there is little value in defining/adopting the same terminology and risk assessment metrics for, say, an Internet-connected speaker for domestic use and a wireless sensor for crop monitoring. In other words, although all these IoT devices share the unifying theme of being connected to the Internet, threats associated with "Internet Connected Lifestyle Products" need to be visualized differently.

Further, given the fragmented nature of the Internet Connected Lifestyle Products ecosystem (different types of users, lifestyles, requirements, hardware, protocols, data storage, etc.), there is no objective, generalized way to definitively determine what level of risk is "acceptable." The only option is to analyze each case where security would be compromised for convenience and determine what tradeoffs would be acceptable for each user in each of those cases. At best, we could group similar cases and give some general best practices, but this is not nearly enough given how catastrophically some of these devices can compromise one's security (often due to suboptimal/erroneous risk assessment).

Thus, in the context of security vs. convenience, a lot boils down to one's personal definitions of "safe" and "trust," and then to one's Personal Threat Model (and consequently, risk assessment) resulting from those definitions. Unless we define the scope clearly, come up with a meaningful way to formalize some of these terms, address any implicit assumptions, and assess/quantify risk in a way that makes sense in this specific ecosystem, we will keep having variants of the same debate.

Further, even if we assessed the potential attack vectors (and risks) associated with whatever Internet-connected device is the flavor of the week, doing so might not matter if there is no meaningful way to evaluate those risks within the scope of the user's Personal Threat Model.