The union Faceban saga took a new and depressing turn this week, after many people found they were prohibited from sharing a link to j30strike.org, the activist site supporting unions’ industrial action on 30th June. We’re now contending with Facebook pre-bans!

When they tried to post the link (and later even short-link redirects to it, or posts merely discussing the site), they got a popup message saying the site had been reported and couldn’t be shared. This persisted for some time before Facebook relented and let users share the site again.

This isn’t a new development – vexatious complaints to Facebook (or indeed pretty much any other commercial social network) can be ludicrously powerful. Facebook’s revenue per user is minuscule, so its legions of users can only be serviced on the cheap. A few years back it had only around 100 customer service staff to moderate the content of tens of millions of active users, and I imagine if the situation has changed, it’s for the worse.

These beleaguered souls are kept busy removing the alleged paedophiles and terrorists who could seriously cause problems for the company, so decisions over the rest of us are taken in the first instance by machines. This is how a machine mistook Derek for a spammer, and vexatious reports got Ms Wagstaff evicted over her ID. A few trolls or political opponents decide to badmouth a piece of content they don’t like, and it’s gone, regardless of how many good users it annoys – quite frankly, it’s cheaper to lose a hundred users than to spend fifteen minutes having a human resolve a dispute.

MIT’s Chris Peterson has a very interesting post on what he thinks are the mechanics behind this one, and it seems very plausible indeed. It looks like the Facebook auto-banning process has become a lot more efficient – once it’s made a decision (right or wrong), it now works to tie off the possibility of that problem arising again. It makes sense: if something genuinely offended people, it’ll likely offend others the next time it’s posted. Facebook might as well save its users the upset by pre-banning the content for subsequent shares.

I recently (rather slackly) moderated a large union event community on Facebook, and it’s the first time I’ve really had to deal with a severe case of troll-rot: people who seemingly have nothing to do all day but revisit a page to give the same right-wing perspective on it over and over, even going to the huge effort of creating new, identically presented accounts nearly as fast as you ban them. Clicking ‘report’ for the sake of it isn’t exactly hard to do, so once you add to this gibbering mass the unscrupulous political (or employer) opponents of your campaign, activist content on Facebook is in a very vulnerable place.

I can’t really see a practical way out of this in the commercial framework. A community-run network like Wikipedia can rely on a kind of user court to help weed out vexatious complaints, but I doubt a commercial network like Facebook could get the goodwill from power users to undertake this kind of work unpaid (though they did manage to get their translations done for them, so I concede I might be very wrong!). But the commercial relationship you enter into with Facebook is getting clearer all the time, as they strip out user-generated apps in favour of more passive, predictable and easily monetised content, and as their efforts to monetise your data by (generally rather clumsy) stealth never cease to surprise.

We pay Facebook by commoditising our identities and relationships for them. We can’t really complain (even if I’m in the first rank doing so…) when we come up against the economic realities of their service – their need to turn a profit and buy larger yachts simply trumps concerns about free speech for the millions who’ve over-invested their communications in the network.

So I can’t really see a way out of this (short of getting our own First Amendment this side of the pond). Facebook is shaky turf for activists, and getting shakier, as a mixture of the vulnerabilities of algorithms and the profit drive pushes us towards consuming cat photos and fan pages, and away from hard news and activism. It’s one more worrying facet of the Filter Bubble argument (I’m still shuddering from Eli Pariser’s compelling London lecture on his scary new book earlier this week). As Peterson puts it: “Think about the incredible, suffocating centralized power the Facebook filter represents to controversial opinions”.

And I guess the Wikipedia model works in any case because the number of updates (and hence complaints) is much, much smaller (still colossal, obviously, but not on Facebook’s almost unimaginable scale). Will a Diaspora node manager (or UnionBook’s dedicated volunteer crew), with their smaller turf and smaller resources, ultimately fare that much better than Facebook’s thin blue line of customer service moderators?