Even in the quietest spaces of the internet – locked, semi-moribund LiveJournals, ephemeral Snapchat duets – we talk because we want someone to hear. It’s easy to lay that instinct at the feet of modern technology, but I think it’s pretty fundamental. Humans, for the most part, ache to be heard.

But we also want to be protected. We want to be heard by the people to whom we intend to speak, but not by others. Carefully policing the boundaries of what we share – checking for changes in privacy policies, scrubbing identifiable data from secret images and files, meticulously locking down potentially sensitive information, poring through every end-user license agreement looking for ways a company is trying to screw us before signing – is one way of guaranteeing that safety on social media. But it doesn’t feel safe. It feels paranoid and uneasy. And it conflicts with the desire to get talking, get sharing, get connected.

A few weeks ago, I watched some brave saps share their teenage diaries onstage as part of “Mortified”, a regular reading series of embarrassingly overwrought juvenilia held in cities across America. I’ve been to a number of these events, and it’s almost always clear that the writers’ teen selves envisioned, if not in precisely this manner, some future in which their intimate writings would be available for public consumption – perhaps assigned to schoolchildren after their tragic death, perhaps published as part of their “collected works” in the wake of inevitable fame. In our most private diaries, in texts whose exposure would humiliate and betray us, we always write as though for publication. We always write as though someday, someone will hear.

When privacy leaks come to light, I tend to think about these diaries, because they usually draw out a strain of familiar responses: If you didn’t want it out there, why did you share it? But sharing isn’t the issue: the problem is when a technology company hands you a diary, gets you to write as though someone is maybe listening ... and then pushes you out onstage, without warning, to read it to an audience you didn’t choose.

There is a constant tension in our online diaries – our Twitter accounts, Facebook profiles, Snapchats, Vines, Instagrams, Secrets, Whispers and whatever is next – between wanting to be open and wanting to be safe, between the limited openness and the complete safety we imagine ... and the openness and safety that exist in reality.

These tensions cause many of the strains that mar our relationship with internet privacy. Sometimes it’s the tension between people who think they’re talking to their friends on Twitter and reporters who note that those words are technically public. Sometimes it’s between users who expect complete anonymity from companies like Whisper and apps that require that privacy to be actively reasserted. Sometimes it’s texters seeking an intimate environment versus $10bn startups like Snapchat whose storage solutions are large-scale and vulnerable. In all cases, the hurt and the discord come from a mismatch between the platform we want and the platform we have, between the privacy we believe we should have and the privacy that companies believe we can reasonably expect.

It’s become a chestnut that startups like Twitter and Facebook think of users as product, not clients. Let’s say, for the sake of argument, “Fine.” Let’s assume that, at some degree of popularity, social networks become like factory farms, aiming to process as much living flesh into money as possible in a day’s work.

Most factory farms now know that this is best done by keeping the cattle calm – this insight is the linchpin of animal scientist Temple Grandin’s work. Grandin uses her understanding of cows’ fears and feelings to design slaughterhouses that don’t scare them. That is the kind of consultancy in which industrial farms are interested, because cows treated humanely are easier to deal with and stir up less outside controversy. But social media companies don’t have to slaughter us to profit from us – it’s actually best for them if we stick around indefinitely, being fitter and happier and more productive users. What do the apps have to lose from trying to understand our desires and anxieties?

I don’t mind if social media companies profit from users, as long as everything is transparent and above-board. (I mean, I mind it as much as I mind capitalism in general, but I don’t find it unusually evil – or notably different from, say, TV stations profiting from viewership.) Only one of our naturalized internet urges – the urge to share – can be directly monetized, so a venture capitalist might be tempted to dismiss our urge for privacy as somehow unnatural. But you have to keep an eye on the balance if you don’t want us “products” to get fed up, make a fuss and sometimes walk away.

Managing that balance means offering transparent, simple, granular control to users. That means control over whom they’re talking to – not burying the most basic of human protections in interminable agreements, not shuttling between opt-in and opt-out, not changing privacy settings on the sly. Show users both the stage and the curtain, and then let us raise it when, and if, we are ready.