Hashtag Data Positive

I’ll be honest. I’m conflicted. I’ve got this idea that I want to share, but I’m reluctant to do so. I’m worried that I might be wrong, or that introducing it may do more harm than good.

Since I tend to overthink these things, I’ll just get right to it: I believe that we need a phrase that emboldens us to become the masters of our digital destinies, and that serves as a bulwark against the fear produced by the endless news cycles that sensationalize hackers, data breaches, and our apparent loss of privacy.

We are entering a new world with new threats, but that doesn’t mean we must succumb to them, nor allow them to force us to live in paranoia and fear. Indeed, there are plenty of ways to practice safer computing, just as there are ways to have safer sex. It’s simply a matter of replacing nails-on-chalkboard topics like “security”, “privacy”, “authentication”, “authorization”, “permissions”, and the like with something more seductive.

I propose that the set of attitudes that promote the active, enthusiastic, and responsible use of data-rich services, coupled with practices and safeguards necessary to continue enjoying these services, be called data positivity. The social movement to advance data positivity should be called the Data Positive movement. Hashtag Data Positive.

There, I said it.

The case for data positivity

So is this a terrible idea? I don’t know, maybe.

Is it unnecessary? No, I don’t think so.

Privacy, safety, and security aren’t sexy topics, and neither are condoms and STIs. But understanding them is critically important to being able to enjoy sex responsibly. And sex positivity, as a social construct, recast “consensual sexual activities as fundamentally healthy and pleasurable, [encouraging] sexual pleasure and experimentation.”

Safer sex isn’t antithetical to good or interesting sex; good sex is safe sex. And practicing safe, consensual sex makes you sex positive. You can have your cake and eat it too!

What worries me is that ignorance of digital privacy will rise with the rapid expansion of mobile computing. Having worked on digital identity, I know how hard it is to engage mass audiences on this subject. And right now, the mainstream narrative is “you’re not safe and the hackers are going to get you, it’s just a matter of time.” That’s a terribly frightening proposition, considering how much we rely on devices very few of us deeply understand.

Left unanswered, this narrative will cause people to 1) turn away from cloud services over time and/or 2) become apathetic about setting intentional privacy boundaries and taking charge of their digital safety.

I don’t like either of these eventualities.

The cloud can be just as secure as non-cloud solutions, if not more so. But it takes a data positive attitude to demand answers to what should be straightforward questions about how cloud systems are configured and secured. Users should demand to know what account protections are available before they sign up. Asking these questions was once difficult and would invite a condescending response; no longer. As the media stirs up paranoia about digital security, the answers to these questions are now equally relevant to everyone.

Meanwhile, design has become the differentiating aspect of successful modern businesses, and as a result, features that protect users have become more user-friendly and ubiquitous. Even Apple, the leading light of effective user experience, now provides two-factor authentication where once it refused. Commensurate with this increasing support for account security, the shame that once met questions about safe computing should be banished.

Data Positivity is about the future, not the past

Let me put this into historical perspective, because I anticipate considerable skepticism for this idea.

Remember these old beasts?

Those of us who grew up with 17" CRTs, mice, and keyboards learned computing on fully equipped workstations: products intended for academic and corporate use. They may have been called “PCs”, but there was nothing personal about them; they were designed for work. On those infinitely capable machines, we eventually deduced that those Nigerian scammers’ offers were indeed too good to be true, our email spam filters got better anyway, we installed antivirus software, stopped downloading attachments, and so on, until eventually we felt safe.

Kind of like how they must have felt in the Free Love 60s and 70s, right after birth control but right before AIDS.

So too did we have this exceedingly brief vespertine moment of naïveté in the 2000s, culminating in Steve Jobs’s bombshell. Then it all changed.

So imagine that your first and only computing device was an iPhone. Imagine that iCloud was built into your operating system from the outset, and that storing things on it wasn’t unusual.

You installed any app you desired without a thought. You went about creating accounts, setting profile photos, typing in your email address, and re-using the same password over and over mindlessly, the way your mind automatically tells you to apply the brake when you come upon a red light. Your security disposition would be lax compared to that of us neckbeards from the mouse and keyboard era of computing.

I see young people flit from one app to the next, careening over UX speed bumps intended to protect them, heedless of the access they’re granting to their contacts, their location, their passwords. They fly by as though stop signs don’t apply to them. And maybe they don’t, at least until there are consequences.

I want data positivity to reframe these screens as interfaces of empowerment! These screens, ugly and confounding and in need of simplification (and trust me, not for lack of trying), give us a chance to decide whether an app gets access to our hard-won data. Once we grant access, it ain’t easy to turn back.

Those permission screens are the condom-thin layer that prevents apps from revealing your location, spamming your contacts, or leaking your password to a hacking ring when their unencrypted, unsalted user database is compromised.
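To make “unsalted” concrete, here’s a minimal Python sketch (my illustration, not any particular service’s code) of what responsible password storage looks like: a deliberately slow key-derivation function plus a random, per-user salt. A service that skips both steps hands every password to attackers almost for free when its database leaks.

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # deliberately slow, to blunt brute-force attacks

def hash_password(password, salt=None):
    """Return (salt, digest); a service stores these, never the raw password."""
    if salt is None:
        salt = os.urandom(16)  # unique random salt for each user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    """Re-derive the digest with the stored salt and compare in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)
```

Because every user gets a unique salt, identical passwords produce different digests, precomputed lookup tables become useless, and an attacker has to grind through each account individually.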

Consider Snapsaved.com: they promised to discreetly save Snapchat users’ pics, and when they were hacked, they Deepwater-Horizon-spilled 13GB of photos and videos all over the net. Who in their right mind would use such a thing, you might ask? Well, 98,000 of your peers, apparently. But I’m not the least bit surprised. The defining feature of Snapchat (i.e. “disappearing” photos) is also a glorious target for hacking. And because the way we talk about user security and privacy turns so many users off, the message clearly isn’t sinking in.

The Data Positive Person

How might a data positive person approach Snapsaved.com? Ah, now we get into the specifics!

As you know, I’m a strong believer in the determinism of language. The way we talk about the world shapes how we encounter it. “Security” and “privacy” are defensive words; they imply that we need to defend against something that might be taken away from us: our liberty, our peace of mind, etc. To be “data positive” is to resist the inherently anxious posture imposed by such defensive language. So let’s try an exercise…

I am data positive when…

…I get context before connecting.

If I’m data positive, I use services that get better the more they know me, through my contacts, my location, and my attention (i.e. via notifications). But I’m also careful. I read the reviews, check out other apps by the same company, check out the app’s website, search social media, and generally attempt to gauge the trustworthiness of the app before I install it or let it touch any of my goods.

If the app has low ratings or complaints, or the parent company’s other apps have bad ratings, or if the app’s website looks suspicious or doesn’t have a privacy policy (what, you don’t read those?), or if reviews on social media are negative (or if no one’s said anything at all!), then I’ll reconsider whether the risk is worth the potential upside. If not—I’m out. No app installed, no access granted.

…I actively set privacy boundaries.

App makers have gotten savvier about asking for only the permissions they truly need: their MVP (minimally viable permissions). Most apps these days will also tell you that they won’t post on your behalf without your express permission. This is relatively new, and a good thing.

But other access grants don’t produce any evidence of use, making them seem like no big deal. As I said, I’ve watched plenty of friends blow by permission screens without reading them, not because they don’t care, but because they’re distracted or focused on completing a specific task. Totally understandable, but unsafe computing is still unsafe computing, even if it’s just once. Comprende?

So the next time you’re confronted with a request for access, take a moment to consider what you’re being asked. Bear in mind that these interfaces are designed to get a “yes” from you, whether you comprehend their implications or not. It’s up to you to stop and consider what’s at stake. The first time an app asks you to grant it access is a getting-to-know-you moment; if it’s asking for too much too soon, say no. Good relationships take time, and a patient app maker will only ask for restricted access when necessary, taking the time to explain why it’s needed. Anything less is shady.

…I revoke access with impunity.

Let’s say I’ve evaluated an app’s credibility and decided to use it, granting it the access it needs. But then I want to change my decision. What do I do? Fortunately, it’s now fairly easy to restrict app access after the fact, either through my device’s system settings, or through a service’s website (e.g. on Google, Facebook, Twitter).

I do this pretty regularly, especially if an app sends me just one useless notification. My attention and my data are too valuable to let an app waste my time. If an app doesn’t treat the access I’ve granted it with sufficient respect, there are plenty of others that will.

…I self-audit my data exhaust regularly.

Data exhaust consists of app activities, automated access receipts, and other streams of information that result from me using an app or service.

For example, whenever I charge something to my credit card, my bank makes a record of that transaction that I can access online. Social networks do this too, giving you access to an audit log of the apps that have read from or written to your account.

Some of the contributors to my data exhaust are private, and some are not. Taken together, my data exhaust paints a clear picture of what I do, where I do it, and with whom. Some of this stuff I want to hold on to, as it can be useful later (or prove interesting with apps like Timehop or Gyroscope). Some of it, though, I want removed. In those cases, I’ve got to put in the effort to review my data exhaust, or else it’ll stick around permanently.
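What does a self-audit actually look like in practice? Here’s a toy Python sketch; the export file name and record fields are hypothetical (every real service has its own export or “takeout” tool), but the idea of flagging stale records for a keep-or-delete decision carries over:

```python
import json
from datetime import datetime, timedelta, timezone

# Hypothetical export format: a JSON list of records like
#   {"service": "...", "action": "...", "timestamp": "2014-10-12T08:30:00+00:00"}
with open("activity_export.json") as f:
    records = json.load(f)

cutoff = datetime.now(timezone.utc) - timedelta(days=365)

# Flag anything older than a year for review instead of letting it linger forever.
for r in records:
    if datetime.fromisoformat(r["timestamp"]) < cutoff:
        print("{timestamp}  {service}: {action}  -> review for deletion".format(**r))
```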

…I practice good data hygiene.

Good data hygiene is both reactive and proactive, requiring that I clean up my existing data and produce rich and accurate information as I go. Good data hygiene leads to better and more personalized experiences, including better ads and recommendations.

Reactively, I use tools like Google History to review my Google search activities and remove the irrelevant items that I don’t want Google to retain. This helps to improve my Google experience by narrowing the information they use to deliver suggestions to me. Similarly, on Foursquare I add my tastes and write reviews for the locations it suggests so that its recommendations get better.

When I create new posts, I proactively add location, tag relevant people, use hashtags (though I’m ironically inconsistent), and otherwise add as much context and metadata as possible. Why? Because these are all signals that either the service can use to learn about what’s important to me, or that I can use to search for this stuff later. For example, if I’m looking for that sweet photo I took from a hot air balloon in Cappadocia, I just need to visit my Instagram photo map and there it is.

How else could I have found it?
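That photo map works because of the location metadata riding along inside the photo itself. As an illustration of what your pictures carry around, here’s a small Python sketch using Pillow (the filename is made up, and this assumes a reasonably recent Pillow) that reads the GPS coordinates a phone embeds in a photo’s EXIF data:

```python
from PIL import Image             # Pillow
from PIL.ExifTags import GPSTAGS

def gps_coords(path):
    """Return (latitude, longitude) from a photo's EXIF GPS tags, if present."""
    exif = Image.open(path).getexif()
    gps_ifd = exif.get_ifd(0x8825)  # 0x8825 is the GPSInfo sub-directory
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}
    if "GPSLatitude" not in gps:
        return None  # no location metadata attached

    def to_degrees(dms, ref):
        # EXIF stores coordinates as (degrees, minutes, seconds) rationals
        deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
        return -deg if ref in ("S", "W") else deg

    return (to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"]),
            to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"]))

print(gps_coords("cappadocia_balloon.jpg"))  # hypothetical filename
```

The same tags that let me find my own photo later are available to any app I hand the photo to; that’s the trade, and being data positive means making it knowingly.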

…I’m mindful of what I tweet, Google, selfie, & say.

Building on good data hygiene for my private data, I am intentional about what I post publicly. First, because I assume no shared context on the internet, and second, because what I post will be used to target, optimize, personalize, advertise, segment, retarget, synthesize, and otherwise analyze me to discern my preferences so that algorithms can put things in front of me that I might want to see.

This is not, fundamentally, a bad thing. But it does mean that I reap what I sow. What I click and what I say will be used to determine which content and ads are shown to me — sometimes with eerie omniscience (i.e. whenever I’m retargeted).

That’s not to say that I don’t post ridiculous, stupid, or otherwise casual commentary. But I also post links to things that I like — because I want to train the algorithms to serve me.

And take note: computer vision has gotten very good. The photos and videos that we share are just as useful as the text posts; the algorithms don’t care. They know exactly what’s in our selfies!

…I look out for my friends, family, and co-workers.

It’s not enough to take care of my own app access and data hygiene; being data positive, like being sex positive, is a social movement. When I see my friends, family, and co-workers sharing things that they may not realize they’re sharing, I let them know. And when I think they might be overreacting to fears about their privacy being invaded or their accounts being hacked, I help them understand what’s going on and what their options are, and I help them weigh both sides of the decision. I find it helpful to explain why an app may be asking for access, how the data being accessed may be used, and what the inherent risks and benefits may be.

It’s not my job to make these decisions for the people in my life, but considering how much time I’ve spent weighing my own decisions, I figure it’s the least I can do to pass along the insights I’ve gained.

…I promote data positivity.

So even if I don’t make the decisions for my friends, family, or co-workers, I do have a perspective, which is that the future will benefit those who start capturing, storing, and improving their data exhaust like… yesterday.

Just think about your credit report (*shudder*), but with a much higher degree of fidelity and accuracy. Trust me when I say that you want the stuff in there to be good, and to genuinely represent you. Whether you want it or not, a digital identity report is being created for each of us, and the more accurate it is, the better off you’ll be (presuming you’re a mostly law-abiding person).

Helping others understand all this (the people you know, as well as the service providers who make the apps you use) is part and parcel of advancing data positivity.

The #datapositive future

So, there it is. Hopefully you’ll agree with me that this idea is worthwhile, and worth talking to people about. We’re still early enough in the digital revolution that we have the chance to set the terms by which we participate. But that window is closing, rapidly.

On the whole, I think that most companies would prefer to do the right thing. They’d be willing to include their customers in that process if there were a way to find common ground. But without the right language or a constructive attitude, all this talk of “security” and “privacy” and “hackers” and “breaches” gets bland pretty fast.

As I see it, the #datapositive future offers something much, much sexier.

This is the second of two posts on privacy and being #datapositive. The inspiration came from my Thoughts on Google+. For the last decade, I’ve worked on internet identity, security, and social web technologies at Google, Mozilla, and the OpenID and OpenWeb Foundations.

☞ If you’re interested in hearing more from me in the future, sign up for my newsletter or follow me on Twitter.

☞ If you found this interesting, provocative, or useful, please click “Recommend” below. This will help to promote this piece to others.