
#FutureFest is known for its eclectic and passionate speakers, but there were few at this year’s event as effervescent as Douglas Rushkoff. The New York City native’s delivery, powered by the paradoxical pairing of nervous energy and righteous conviction, fitted the divisive content of his talk like a glove.

Given the possible consequences of his view of the future — digital apocalypse, planetary destruction, societal isolation — both nerves and confidence are valuable assets in their own way. Rushkoff’s speech centred on two competing ideas of what the future actually is, mirroring something of a political battle.

What is the future?

On one side, you have the vision of the future as something happening in the moment — the future as an act, as something society creates by doing. While developing an academic interest in the subject amidst the rave culture of the Nineties, Rushkoff remembers “realising that the future was this thing we were building in the present moment, together.”

From this perspective the future is simultaneously happening and being created in the moment, in the things that we are doing and their consequences for our society. It is something to participate in as a human collective, to make it the best we can: “the future is a team sport.”

Opposing this, you have the notion of the future that Rushkoff considers to be “a place for speculation” rather than a creative space. This vision sees the future as an event in itself; a “pre-ordained” reality that culminates in society’s collapse. Once society has crumbled and ‘the future’ is here, those who have placed winning bets will have the best tools to maintain their position at the top of what is left.

Those who subscribe to this view are seeking advice from people like Rushkoff as to how their position can be sustained — or at least how they can survive — after the futurist tsunami hits the placid beach of civilisation. Do I buy land in New Zealand or Alaska to survive in after the Event? Can I use technology to maintain control over my security guards or are robots needed to defend the perimeter?

To these individuals, the future isn’t something to be created and shaped; it’s something to be survived. Rushkoff captures this mentality in ‘the Insulation Equation’: “how much money do I have to earn to insulate myself from the reality that I am creating while I am earning that money?”

As for the idea that the money they earn could be used to create a reality that they don’t have to insulate themselves from… well, “they don’t really have much faith in that.”

So how does this apply to tech?

This isolationism isn’t really surprising, given the distribution of wealth across the West. In 2016, the wealthiest 10% of individuals in the UK controlled over 50% of the country’s wealth; in 2010, the wealthiest 25% in the US controlled a terrifying 87%.

There is no shortage of examples of the selfish individualism that allows such people to acquire and hoard wealth, with “humanity [seen] as the problem and technology as the solution”; for every Bill Gates, there are ten Martin Shkrelis.

You can see this vividly in the technology that we develop. Some people create 3D-printed prosthetic limbs for children, but many more put lives at risk by price-gouging EpiPens. Climate change, vaccinations, sustainable energy, weapons development… there are countless battlefields where the humanist potential of technology is pitted against human self-interest.

It would be overly simplistic to reduce this battle to a binary decision between humanist success and personal gain, and tech has near-limitless potential to provide both. It is perfectly possible for all of us to be on the same team, if we believe in trying to better humanity as a group as opposed to our little corner of it.

Rushkoff acknowledges this, too: “find the others that agree, and then find the real others… because they’re not others. They’re just us, thinking some weird shit.”

Overcoming this individualism and working collectively can only make the group stronger — and if that group is Team Human, it follows that we can make life better for everyone with the technology we develop. Rushkoff’s analogies for this are certainly food for thought: “Evolution favours the herd… the cattle that walks off to express his individuality is the one who gets picked off.”

If the megalomaniacs retreating to their post-apocalyptic New Zealand compounds end up lynched by disaffected security guards for refusing to hand over the combination to the food stores, they have only themselves to blame for climbing onto such a vulnerable pedestal.

It certainly seems logical — at least, from our lowly standpoint as part of Team Human — that working collectively to better the reality we live in has the potential to make such an exercise entirely unnecessary.