“Follow me, and I will make you fishers of men”. Thus spake Yeshua, of BC/AD fame — legendary figure with genuine cult following and literal godlike status. Here enjoining his would-be disciples to spread a subsequently wildly successful meme into the world.

“Join the Internet Research Agency, and ye shall be phishers of men”, the of course lesser known clarion call signalling an alleged Russian interference campaign leading up to the 2016 US elections, as outlined by Renée DiResta. The notable phishy spear hauling a flappy John Podesta (of email infamy) aboard — fuelling fireworks and WikiLeaks hijinks to great Trumpian effect.

The big one aside, on Sam Harris’ podcast DiResta goes on to sketch a far more sinister and intriguing gambit, namely the wholesale manipulation of vast swathes of the US populace via the tools of social media. Its effects bled out of cyberspace into meatspace, from physical altercations to the magical X mark on ballots. The extent to which these carryings-on had a material effect on said election (or indeed Brexit) may be an interesting question, but is not the one which concerns us here. Neither is Facebook’s (un)witting complicity in facilitating massive manipulation and disinformation campaigns — or their leaking of enormous troves of private user data, as concerning and arresting as these points may be.

Our gaze turns instead towards the cannon fodder and unwitting contagion vectors of the story, the substrate upon which all manner of human-cognition chemical reaction was performed — namely, us. You and me, and your friends and your family.

“Not even the most heavily-armed police state can exert brute force to all of its citizens all of the time. Meme management is so much subtler; the rose-tinted refraction of perceived reality, the contagious fear of threatening alternatives.” ― Peter Watts, Blindsight

As the apocryphal adage goes, the users of the internet’s bounty of free services are in fact “the product”. The analogous quip in the 1970s — “Television delivers people” — betrayed the fact that the game was then already about getting eyeballs to advertisers — the real customers. Roger McNamee, early Facebook investor and Zuckerberg adviser, now turned social media Cassandra, has gone further, saying “You are not the customer. You are not the product. You are the fuel they burn to run their profit engines”.

This gets closer to how I see our substrate role in this Attic tragedy. To borrow more from the imagery which will permeate this piece, we’re not the customers or the product, but rather the servers upon which the relevant automated scripts run. We are the meatspace platform upon which the cyberspace tools — on behalf of any number of well-heeled actors — ply their unending, tireless and extremely profitable trade.

Stories abound of Silicon Valley moguls not allowing their children too much access to many of the tech tools and gizmos they have created for us, the baying masses. These point to insider knowledge of the dangers of the casino-inspired, engagement-maximising addictive design and dark UX patterns which continue to drive many of Tech’s business models. In the attention economy, mindshare is the critical metric and, as with love and war, all is fair in its steady accumulation — including hacking the very brains hosting the minds whose share so many actors are interested in claiming.

Breathless articles were once written about how cellphones & tablets would replace laptops and other digital media. Instead the fight for mindshare means all we have done is increase our total usage, as Mary Meeker’s research continues to show.

In the business sphere, mindshare maximises customer lifetime value (by dint of eyeballs served ads), justifies the customer acquisition costs and turbo-charges market capitalisations. It serves political ends also, acting as a multiplier for more power and clout. Just as exploiting vulnerabilities in the code behind the smart contracts powering The DAO could lead to extraordinary profits, so exploiting the vulnerabilities of human brains can lead to massive economic and political gain. So much so that untold resources are now spent performing this very feat, and spending will only increase as we continue to pile into the cyberworld en masse — exposing age-old evolved vulnerabilities to the crafty exploits of those with the mind to do so (and that’s before the AIs get involved!).

If we are the server, then ideally access to said server should at the very least be secure. Yet Big Tech has spent the better part of the last decade ensuring that this couldn’t be further from the truth. Given our evolutionary history, human minds are largely optimised for an environment which no longer exists — and thus our cognition is imperfectly suited to the modern world. It contains bugs and vulnerabilities which often make us not just irrational but predictably irrational (as Dan Ariely explores in his book of the same name). If you know our cognitive weaknesses you can exploit them, either in benign fashion, like the joy of a magic trick playing on misperception and misdirection, or for darker effect.

Folks like Tristan Harris, who used to work as a design ethicist at Google, were well versed in how companies draw from the literature on addiction, performative magic, social engineering, persuasive design and behavioural economics to nudge unsuspecting users into ultimately destructive (yet insanely profitable) patterns of behaviour.

“The average person checks their phone 150 times a day. Why do we do this? Are we making 150 conscious choices? One major reason why is the #1 psychological ingredient in slot machines: intermittent variable rewards . . . Addictiveness is maximized when the rate of reward is most variable.” — Tristan Harris

In short, Big Tech has spent years exploiting the vulnerabilities in the human operating system and developing the infrastructure for continued future exploitation of these vulns. The problem is that, as with any vulnerability discovered in the wild and exploited by one party, it becomes open to other parties with different reasons for wanting to exploit it. The Valley created the attention economy and now other actors are exploiting it — as many on Sand Hill Road grapple with conscience and the implications of their early decisions.

The Hack

A few steps often characterise the compromise of a network:

- The adversary (often the thin end of the wedge in a much broader, multi-faceted attack perpetrated by numerous actors) gains access to a system via social engineering or phishing. Perhaps learning about an organisation, then targeting its members and trying to coax them into unwittingly giving up their credentials by navigating to a fictitious website, or into executing malicious code on their system by opening a file they shouldn’t have.

- Once access is gained to a node on the target network, the hacker will then look to exploit any known (or fresh — aka 0day) vulnerabilities in order to gain administrative access to the system via privilege escalation or more social engineering.

- The adversary may install a rootkit or RAT (remote access tool) to maintain privileged (yet secret) access to the system in future, so as to conduct operations over a longer timescale — including mapping out the network further and looking to gain access to other subsystems within it.

- Over time the adversary carries out their nefarious ends (e.g. stealing emails, monitoring keystrokes to access crucial payment systems, etc.)

- They may elect to cover their tracks and leave the system without a trace or, depending on their motives, cause damage to the system — even holding its critical operations and contents ransom for cryptocurrency payment.
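In code terms, the kill chain above can be sketched as a simple state machine. This is a toy model; the stage names and the `Intrusion` class are purely illustrative, not any real tool’s API:

```python
# A toy model of the intrusion "kill chain" described above.
# Stage names and the Intrusion class are illustrative only.

KILL_CHAIN = [
    "phish",          # social engineering gains a foothold
    "escalate",       # exploit a known or 0day vuln for admin rights
    "persist",        # install a rootkit/RAT for long-term covert access
    "operate",        # steal emails, log keystrokes, map the network
    "exfil_or_burn",  # cover tracks, or ransom/damage the system
]

class Intrusion:
    def __init__(self):
        self.stage = -1  # not yet on the network

    def advance(self):
        """Move the adversary one step along the chain."""
        if self.stage < len(KILL_CHAIN) - 1:
            self.stage += 1
        return KILL_CHAIN[self.stage]

attack = Intrusion()
steps = [attack.advance() for _ in KILL_CHAIN]
print(steps)  # the full chain, in order
```

The point of the model is only that each stage depends on the one before it — which is also why the essay’s human analogue (phish first, persist later) maps so neatly.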

You can see the same flow happen with humans as well — we can have vulnerabilities exploited via meatspace 0days, and have memetic rootkits uploaded for future exploitation. Not so much in the sense of a Manchurian candidate, activated as a remote agent years later by a word sequence (à la Bucky Barnes in Marvel lore), but in far subtler and more difficult to detect ways.

To wit: on the 12th of September 2017, a British multinational PR, reputation management and marketing company headquartered in London called Bell Pottinger declared bankruptcy and went into administration as the consequence of a scandal arising from their activities in South Africa. Specifically, they were shown to have embarked on a two-year campaign sowing racial discord to facilitate the state capture of that country by powerful business interests. This included the creation of fake websites, fake bloggers, amending Wikipedia and other public entries favourably for their client, writing speeches for ANC Youth League members and generally promulgating a propaganda campaign playing on socio-economic divisions.

Notably, not only did they use Twitter bot-farms to push particular narratives into the public sphere, but they also helped to shape discourse and push a term into the public consciousness — White Monopoly Capital. Once this term had been safely installed via a Zeitgeist firmware update, it could be remotely invoked by whichever political figure required it — causing a predictable reaction amongst many members of the populace. In the aftermath of the scandal, even with the term having been shown to have been uploaded into the consciousness of South Africans (bleeding out of cyberspace into meatspace via newspapers, radio and conversation), it persists as a contemporary term to this day — with a catchy acronym, WMC.

Knowledge of the exploit isn’t enough to stop its effectiveness.

Another popular technique of the erstwhile internet ne’er-do-well is installing special code on a number of internet-connected devices (e.g. “IoT” gizmos like smart fridges, thermostats and home speakers) which, when invoked, gives the adversary so-called command and control over a large number of devices. These can then be used to send a flood of requests or spam to a particular website or computer — overwhelming it to the point of rendering it useless or incapable of fulfilling its basic function. This is known as a distributed denial of service (DDoS) attack and it can be devastating, rendering even major websites and services unavailable to anyone wanting to use them. This network of connected devices under the adversary’s control is called a botnet — from robot network — and it has interesting meatspace parallels.
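The mechanics of the flood can be shown in a purely in-memory toy simulation — no real network traffic, and the `Server` class and its numbers are invented for illustration:

```python
# Toy, in-memory simulation of a DDoS flood. No real traffic is sent;
# the Server class and all figures are illustrative assumptions.

class Server:
    def __init__(self, capacity_per_tick):
        self.capacity = capacity_per_tick
        self.served = 0
        self.dropped = 0

    def tick(self, requests):
        # Requests beyond capacity are dropped: the "denial of service".
        self.served += min(requests, self.capacity)
        self.dropped += max(0, requests - self.capacity)

legit_traffic = 50            # normal users per tick
botnet = [10] * 1000          # 1,000 compromised devices, 10 requests/tick each

target = Server(capacity_per_tick=100)
target.tick(legit_traffic)                 # normal load: everything served
target.tick(legit_traffic + sum(botnet))   # flood: almost everything dropped

print(target.served, target.dropped)  # prints: 150 9950
```

Note that the legitimate users lose access not because the server is broken, but because it is saturated — which is exactly the effect the human-botnet analogue below achieves with attention rather than packets.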

Bell Pottinger and others, as evidenced by the Brexit and Trump campaigns, also found a way to install carefully curated and placed scripts in the public consciousness which could be triggered when necessary — resulting in the activation of what is essentially a human botnet on social platforms like Twitter. The nefarious actor in this case can remotely activate an online mob responding to a trigger (e.g. WMC) and run a DDoS attack on opponents, silencing them and effectively banning certain views & voices from being expressed on public nets — through intimidation, doxxing, threats of violence or worse.

Of course, this self-same botnet can be activated not just online but increasingly IRL — whether that be directing a mob to picket a journalist’s home, damaging H&M stores or, at the extremes, turning up to commit actual murders, as has been the case in Myanmar.

We thought that when it came to “cyberattacks” and the exploitation of cyberspace to wreak societal havoc, it would come by way of attacks on physical infrastructure — whereas it has become increasingly clear that the territory on which much of the battle rages is ourselves. With respect to the new information wars, unless you’re directing the resources or doing the warring, you’re now the territory upon which the skirmishes take place. The players contending for mindshare range from governments to non-state actors like ISIS to the big money makers. Perhaps we may have to add AIs to that list soon enough, but for now — as far as we know — they don’t act of their own volition.

“Manufacturing consent begins by weaponizing the meme and utilizing the censorship algorithms of Google, Facebook, Twitter and YouTube.” ― James Scott, Senior Fellow, The Centre for Cyber Influence Operations Studies

As with traditional antivirus software, detection and subsequent eradication of the latest memetic malware and rootkits appears fairly ineffective, what with some of the code seeming to be polymorphic — that is to say, it mutates and changes over time evading our archaic System 1 detection engines. Yet as our minds’ attack surface continues to increase via our interactions with the digital world, detection and purging becomes ever more vital.
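The failure mode of signature-based detection against polymorphic code is easy to demonstrate. A minimal sketch, where the payloads, the blocklist and the helper names are all invented for illustration:

```python
import hashlib

# Toy illustration of why signature-based detection fails against
# polymorphic payloads. Real antivirus engines are far more sophisticated;
# payloads and helper names here are invented for illustration.

def signature(payload: bytes) -> str:
    """A 'signature' here is just a hash of the exact bytes."""
    return hashlib.sha256(payload).hexdigest()

known_bad = {signature(b"same old divisive meme")}

def detected(payload: bytes) -> bool:
    return signature(payload) in known_bad

original = b"same old divisive meme"
mutated = b"same old divisive meme, now rephrased"  # same idea, different bytes

print(detected(original))  # True: exact signature match
print(detected(mutated))   # False: a trivial mutation evades the signature
```

The memetic parallel: a narrative that has been publicly debunked in one phrasing simply circulates in the next, and our pattern-matching never quite catches up.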

There is the oft-heard lament of the pained computer security professional regarding humans being the weakest point in network security, and their various strategies for “installing antivirus on users’ brains”. While true, this has always been predicated on adversaries wanting to gain access to the systems which users interact with. The new modality is about gaining access to the users themselves — for monetisation, political clout and real-world action. Humans may be good at manipulating each other; algorithmically-driven, scaled mass-manipulation, however, is 100x more effective.

How then to protect oneself from the onslaught? It’s a non-trivial challenge to be sure, one which begins with the admission that one is indeed housing a myriad of potential cognitive vulnerabilities, and with taking steps to patch these — making it more difficult for the existing exploits to take root. Essentially, taking active steps to minimise one’s own attack surface. Getting pwned by exploits for which mindspace patches are available is just bad opsec.

One way is by establishing trusted sources and secure connections. We’ve all seen the “Are you sure you want to download this file?” message when a file’s contents can’t be independently verified. Ensuring what you’re consuming isn’t disinformation is becoming extraordinarily important (ask many Indians…or folks in Africa). And that’s just text-based “fake news” — we’re already entering the realm of synthesised voice and video “deep fakes”.

Twitter blue-ticks had, by and large, been a strong signal that the person tweeting is indeed who they say they are. “So then why is Elon Musk telling me to send him cryptocurrency?” you may ask. That heuristic, evidently, has until further notice been somewhat compromised — requiring additional vigilance when seeing a message “signed” by, say, someone with a blue tick.

On the subject of trusted sources then — those purveyors of information and perceived nuggets of wisdom who are beyond reproach and whom we trust implicitly. When you’re a child, your parents — and as you get a bit older, almost any adult around you — serve as trusted sources. This list then shrinks over time. You don’t subject information coming from a trusted source to the same level of scrutiny as information from other sources. And yet, one never knows if the supposed truth packet coming from a trusted source may contain a nasty memetic trojan.

Still — while the infrastructure of the internet is slowly rebuilt on cryptographic and privacy principles, we are in a period with a tendency towards what Venkatesh Rao calls the Cozyweb (and Yancey Strickler calls The Dark Forest) — the more closed-form pyjama-web of handshake-signed WhatsApp groups, internet newsletters, Slack and Telegram channels, invite-only message boards and so on. These are spaces which Strickler says are becoming the few where “depressurized conversation is possible because of their non-indexed, non-optimized, and non-gamified environments.” Though this goes some way towards addressing the trusted source problem, it remains to be seen how it can be addressed at scale and in the public square.

Try also to stop yourself from being an attack vector, or part of a botnet, with a few simple actions — for instance, asking yourself a question or two before you hit that retweet button next time:

· Who put this information out and for what purpose?

· Who benefits from my reacting in one way or the other?

· Will the world be worse off if I hold off on my hot-take for another 24–48 hours while more data is gathered? (Think of forming strong views/opinions like a dot painting, where the picture only emerges as each additional point is committed.)

· Am I acting as signal or noise right now?

· How did I arrive at this conclusion?

That’s limiting the additional damage you may be doing as part of a broader net. What about periodic self-system scanning? There’s an internet meme invoked when you’re, say, in the middle of telling an arresting story or answering a question and completely lose your train of thought: “Brain.exe has stopped working”. Repurposing said meme, one needs to periodically run some mental antivirus on Brain.exe to purge it of existing infections. This is best achieved through self-reflection and asking honest questions of oneself.

An example would be periodically restating and rechecking your assumptions on any topic of import to your life. Another is running the sub-routine: expose self to adversarial opinions which I do not hold, and see if my long-held belief stands up. A great algorithm and mental model here is the concept of steel-manning. This is essentially the opposite of the strawman logical fallacy — the one where a person refutes not their opponent’s actual argument in a debate but rather a modified version of it which they find easier to attack. Steel-manning involves fully understanding and appreciating the other side’s point of view, then trying to improve it to its best possible version before looking to engage it. If your refutation still holds water against that strongest version, you can be quite sure your basis for holding your opinion remains sound.

Further, ask yourself “Am I in a filter bubble here?” i.e. in a situation where you may have become separated from information that disagrees with your viewpoints and become effectively isolated in your own cultural and/or ideological bubble. Is there a firewall around the bubble, further insulating your views from potentially antagonistic ones? Disabling the firewall may be as straightforward as dispassionately consuming news/media from alternative sources periodically — or indeed attempting to do some ideological steel-manning as a sanity test.

Fundamentally, the idea is about knowing how you’ve arrived at a conclusion and interrogating what evidence (if any!) could potentially shake that belief. In the cybersecurity world, one now assumes “the hackers are already inside” — so the question becomes: how do you stop them from doing further damage? How do you stop yourself from helping them?

That’s the question left for us… because the brain hackers are already inside. The answer, as the Greeks intimated millennia ago, starts with knowing thyself — for the unexamined life leaves the mind running unpatched legacy firmware, ripe for exploitation by any who care to try.