iRobot, the maker of the popular automated vacuum the Roomba, could share the data it collects about people's houses with tech companies that create "smart" home tools: corporations like Amazon, Apple or Google. As Reuters described, the data iRobot might share elsewhere is "of the spatial variety" – that is, information such as the distance between walls or furniture, the sort of detail that might make a device that heats a room more efficient, or might allow someone to market missing items to future customers. Is that weird?

Privacy advocates think so. Jim Killock, the executive director of the Open Rights Group, a U.K. online rights nonprofit, told the Guardian that iRobot's decision to share data is a "particularly creepy example of how our privacy can be undermined by companies that want to profit from the information that smart devices can generate about our homes and lives." Companies ought to treat data about people's homes as if it were personal data, he said, "and ensure that explicit consent is sought to gather and share this information."

But even if it were treated as personal data, would anyone be more careful with it?

Reading the terms and conditions

The Roomba's terms and conditions already contain a clause stating that owners allow collected data to be shared with "other parties in connection with any company transaction, such as a merger, sale of all or a portion of company assets or shares," as well as in a few other instances. Do company assets include the data the vacuum cleaner collects? Probably. Is that enough of a hint to tell Roomba owners what might happen to the mapping data their vacuum cleaner will collect? Most consumers would likely say no.

Since the initial Reuters story surfaced, iRobot's CEO, Colin Angle, has clarified that "iRobot will never sell your data," and emphasized that customers have control over their data. Which is true – but only to whatever extent the terms and conditions allow.

Still, the Roomba data case is a good reminder that the rules that govern many aspects of our lives, both online and – increasingly – off, are not, as Killock put it, "explicit." The "creepy" implications are just that: implied. We are left largely unaware of what those implications might actually mean in practice – if we even bother to read user agreements in the first place (which we don't). That needs to change.

How? The forward-facing language of the agreement itself could be altered. One study by academics at Berkeley and TU Dresden found that when users were presented with language other than a simple "I accept" box (such as "take part"), they were 26 per cent less likely to click.

The ubiquitous default boxes on user agreements, the researchers concluded, "have trained even privacy concerned users to click on 'accept' whenever they face an interception that reminds them of [end-user license agreements]." That automatic behaviour, they wrote, "thwarts the very intention of informed consent."

A project called "Terms of Service; Didn't Read" has been around since 2012, pushing to solve what it calls "the biggest lie on the web" (the phrase: "I have read and agree to the terms"). Users can download a browser extension that, on certain sites, provides an easy-to-digest version of the terms of service. Author R. Sikoryak took that concept one step further, creating a book of comic strips featuring the late Apple founder Steve Jobs reimagined as various famous characters, in which the only dialogue spoken by anyone in the entire book is the text, start to finish, of the iTunes terms of service.

Finding a solution

Yet no silver-bullet solution for clarifying end-user agreements or simplifying their terms has been found. That is why 22,000 people recently found they had, in agreeing to the terms of use for free Wi-Fi access, also signed up for 1,000 hours of community service. And while stories like that one and the Roomba data news create a stir, they fail to sustain movement toward a real solution.

The reasons for this are simple enough. For one, tech companies are reluctant to push for change. A recent attempt by the European Union to force U.S. corporations like Google and Apple to simplify their terms of use is a good case study. The EU request has been largely met with apathy, prompting the EU's consumer rights commissioner to say she is "becoming very impatient."

Secondly – and more importantly – we have yet to collectively come to grips with what our data is worth. We understand, in an abstract sense, why our data needs to be collected – that ad-based internet revenue relies on it – but it seems we don't really grasp what that means.

In short, it means that your private decisions – where you want to put your couch in relation to your TV or fridge, for instance – are, to some extent, no longer private. Moreover, in an extreme scenario, those decisions might then be used as data points within a larger profile to determine the best way to sell you something. Consumer choices have long been framed by data, but we may want to consider the limits.

And, of course, there is another sort of worry: that the information about the layout of your house rests in some server somewhere, waiting to be hacked.

Is that scaremongering? Perhaps. But the point is these are the sorts of possibilities that we entertain without thought on a daily basis. Yes, we receive products that are either free or available at low cost, and they make our lives easier. But, as the scholars of the user agreement language study suggest, we have been lulled into dangerous complacency.

This column is part of CBC's Opinion section. For more information about this section, please read this editor's blog and our FAQ.