We catch up with the PokerStars Pro to discuss futurology, AI and the Libratus poker matches.

Liv Boeree

We have a shared interest in futurology – do you think we are heading towards utopia or the Matrix?

Liv Boeree: So by Matrix, I assume you mean some kind of dystopian future where we spend all of our time (against our will) in a fake simulated environment? I think that specific outcome is fairly unlikely compared to all other possible futures, although I could imagine that further development of VR could result in more and more people voluntarily spending more of their time in some kind of second virtual world and less in our “real” world, which would be kinda Matrix-y, I guess.

Obviously I hope we're heading towards a utopia (which for clarity, I'll define as “a future of maximum happiness, minimal suffering for all sentient beings”), but I don't think that's an especially likely outcome either, because there are a lot of things we'll need to figure out for that to happen, and we don't have much time.

We are at a very interesting tipping point in terms of societal stability. We're doing a great job of screwing up our natural resources and environment right now, and seeing as that isn't really getting fixed, we're going to need some smart technologies to solve the problems that will arise from it. But we also need to develop those technologies safely enough that some kind of accident or misuse doesn't occur.

So to summarise, I think we're unlikely to end up in either the Matrix or utopia anytime soon – more likely somewhere in between. It's definitely an exciting time to be alive to see how it all goes down.

"AI could fall into the wrong hands"

"The future is exciting and scary"

What technological changes should we be the most scared of, and the most excited about?

Liv Boeree: I'm excited about any tech that will enable us to be healthier, smarter, more sustainable and more peaceful. That's a long list, but to name a few, here's some that'll hopefully become mainstream soon:

efficient atmospheric carbon recapture to mitigate climate change

gene editing for the removal of genetic disorders

VR/AR improvements – think of the educational and fun possibilities if VR really takes off

cultured meat – to eradicate the suffering and environmental damage caused by current animal agriculture

Technologies that I just would love to exist at some point but are still far away:

cold nuclear fusion – a potential clean free energy source

supersonic electric planes – to make travel quicker and less environmentally damaging

neural laces – because being able to connect mentally to the internet would be awesome

a superintelligent AI (i.e. smarter-than-human AI) that's really, really benevolent and knows how to make us all work together happily to build utopia (wouldn't that be nice)

Technologies I'm scared of: some of the above, because they could either fall into the wrong hands or be handled poorly and cause some kind of terrible accident. This especially applies to superintelligent AI, which I'll talk more about later.

"I'm not concerned about AI in poker for the next few years at least"

"I'm not surprised Libratus won"

What are your thoughts about Artificial Intelligence in poker, and perhaps the Libratus matches?

Liv Boeree: I'm not surprised that Libratus won; it was just a matter of time before an AI would be able to calculate better, closer-to-GTO strategies than a human. However, I'm not too concerned for the future of online poker, especially not for the next few years. They only created a heads-up NLH AI, and it required a huge multi-million-dollar computer and a team of computer scientists to achieve it. The main PR and funding incentives for another team of researchers to tackle full ring etc. are now gone, because the Carnegie Mellon team have already taken the accolades of being first.

Of course, as computation costs come down, it may become easier for individuals to cost-effectively build a similarly good one themselves. How long that will take I don't know, but I expect it to be well over five years away still, and even then we're only talking about heads-up bots, so I don't think it's something to worry about for a while.

"AI could solve all our problems"

"How can we control a superintelligence?"

What are your concerns about AI in general?

Liv Boeree: So the limiting factor holding back advancement is human intelligence – the smarter we are, the easier it is to achieve our goals. What this means is that we as a species are hugely incentivised to make ourselves more intelligent. That's why AI is so appealing, and why many teams around the world are racing to build one.

Right now, most of us are working with an IQ somewhere between 80 and 140. Imagine if you could create a superintelligence with an equivalent IQ of over 500, and you had it at your command. You could use it to understand and figure out things our brains can't even begin to comprehend. It (and you) would be the most powerful agent on Earth, and could solve many, if not all, of our problems if it's done right.

The trouble is, when something is way, way smarter than everything else, controlling it becomes almost impossible for the less intelligent creature. Can chimpanzees (IQ of around 40) control and outsmart humans (IQ of around 100)? Sure, a chimp is physically stronger than a human, and yet we can still build cages around them, tame them to trust us, or kill them with one of our many weapons if we so choose. Chimpanzees as a species can do none of these things to us – they can never control us.

So how can we confidently expect to control something whose IQ gap to us could be far bigger? We could be mere earthworms in comparative intelligence to it. Because of this inevitable control problem, it's absolutely crucial that a superintelligence's goals are aligned with ours before it becomes smarter than us. If it isn't programmed in exactly the right way, a superintelligence can and will come up with ideas and outcomes we cannot imagine, and many of those may not be what we want. I don't think it's that likely that someone will intentionally build an “evil” AI (although that is a possibility too), but it is likely that some kind of accident could happen that would make things very bad for biological life on Earth.

That's why, despite all the other existential risks we're facing over the next few decades – nuclear war, climate change, disease etc. – I still think the AI safety issue is the biggest and most neglected problem. The incentives to build are so strong, progress is happening so fast, and money is pouring in to expedite it even more, yet comparatively little research is being funded to make sure it's done safely.

"Many poker players understand the problem better than most"

"AI is a small, but dangerous, gamble"

At first I was surprised that REG charity supported a number of charities that aim to highlight the dangers of AI, but then it really made sense. Can you tell me a bit more about these charities?

Liv Boeree: So there are only a handful of research organisations/charities that are focussed on the AI safety problem: MIRI, the Foundational Research Institute, and the Future of Humanity Institute in Oxford.

Because the AI safety problem is so hard for most people to conceptualise, it is a very neglected and underfunded cause area, and because the potential impact is so large, these organisations' research has huge potential value. Since REG's core philosophy is about maximising the effectiveness of donations, it would be a mistake for REG not to fundraise for them.

I guess the hardest sell for charities like these is convincing people there is even a problem?

Liv Boeree: Absolutely – intuitively, it's a very difficult concept for people to grasp, because it's hard to visualise how an unknown entity could be more intelligent than us, or how it could pose a danger. Even then, many people who can conceptualise it still don't think it's worth worrying about, because it feels far away and/or not that likely to ever happen. However, some poker players do grasp the magnitude of the problem, because a) they're more technologically savvy than average, and b) they're familiar with EV thinking. Deciding whether or not to donate to an AI safety research org is an EV calculation, where the win/loss isn't just money: it's the impact of your money on averting a global catastrophe sometime in the future.

Good poker players know that even if the probability of something is very small, the potential impact can still be large enough to make a gamble profitable.
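That EV logic can be sketched in a few lines of code. The numbers below are entirely made up for illustration; the point is only that a probability-weighted payoff can be positive even when the probability of winning is tiny.

```python
def expected_value(p_win: float, win_amount: float, loss_amount: float) -> float:
    """EV of a gamble: probability-weighted gain minus probability-weighted loss."""
    return p_win * win_amount - (1 - p_win) * loss_amount

# Poker example (hypothetical hand): calling 100 to win a 2,000 pot
# with only a 10% chance of winning.
poker_ev = expected_value(p_win=0.10, win_amount=2_000, loss_amount=100)
print(poker_ev)  # 110.0 -> a profitable call, even though you usually lose

# Donation analogy (illustrative numbers only): a very small chance of
# helping avert an enormous loss can still dominate the calculation.
donation_ev = expected_value(p_win=1e-6, win_amount=1e12, loss_amount=1_000)
print(donation_ev > 0)  # True
```

The same formula covers both cases; only the magnitudes change, which is why a small probability attached to a catastrophic-scale outcome can still make the "gamble" worthwhile.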

Stay tuned to PokerStrategy.com as we will be talking to Liv again soon about her work with REG Charity.

What are your thoughts on AI in poker and life in general? Let us know in the comments: