Host: Benjamin Thompson

Welcome to the 600th episode of the Nature Podcast. This week, we’re talking about improving post-earthquake predictions…

Host: Nick Howe

And the issues with deep learning in artificial intelligence. I’m Nick Howe.

Host: Benjamin Thompson

And I’m Benjamin Thompson.

[Jingle]

Interviewer: Nick Howe

On 24 August 2016, a magnitude 6.2 earthquake struck central Italy. The earthquake caused the deaths of 297 people, but this event wasn’t the end of the quakes that struck the region. Two months later, the area was hit by an even bigger magnitude 6.6 earthquake. Many buildings and important monuments were destroyed by this larger earthquake but fortunately, the hardest hit areas had been mostly evacuated, following the previous tremors. When an earthquake hits, there’s always the chance that a larger one could follow, but working out the probability of that happening is not easy. To get an idea, seismologists looked back through records to see what happened after previous earthquakes and used this data to estimate the likelihood that an even bigger earthquake will follow an initial one. But such events aren’t common. For instance, after a moderate earthquake the chance of an even larger one happening is estimated to be around 5%.

Interviewee: Stefan Wiemer

Most of the time, people then don’t take immediate action because 5% chance is a little bit hard to say what to really do within a certain time, so I don’t think people are totally unprepared but they are not well enough prepared.

Interviewer: Nick Howe

This is Stefan Wiemer, a seismologist at ETH Zurich. He’s been trying to work out if, after an earthquake hits, there’s a way to better predict if something bigger is coming. And this week in Nature, he’s co-authored a paper that offers a way to make predictions about upcoming tremors more certain, by using something called the b value.

Interviewee: Stefan Wiemer

It’s a parameter that seismologists know well and they can measure it with a certain degree of precision and it’s also related to the stress level, so when you have higher stresses in the Earth’s crust, in a very simplified way, you expect that this b value is actually lower and when you have lower stresses, so it’s not so critically stressed, you have higher b values.
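For listeners curious about the mechanics: the b value Stefan describes is the slope of the Gutenberg–Richter frequency–magnitude relation, and a standard maximum-likelihood estimator for it (Aki’s formula) fits in a few lines. This is a generic textbook sketch, not necessarily the estimation procedure used in the paper:

```python
import math

def estimate_b_value(magnitudes, completeness_mag):
    """Maximum-likelihood b-value estimate (Aki's formula):
    b = log10(e) / (mean(M) - Mc), using only events at or above
    the catalogue's magnitude of completeness Mc."""
    above = [m for m in magnitudes if m >= completeness_mag]
    mean_mag = sum(above) / len(above)
    return math.log10(math.e) / (mean_mag - completeness_mag)
```

A catalogue dominated by small quakes has a mean magnitude close to the completeness threshold and therefore a high b value; a catalogue with relatively more large quakes gives a lower b value, which, as Stefan explains, points to higher stress in the crust.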

Interviewer: Nick Howe

After an earthquake hits, there’s a lot of seismic activity in the region. By using this to find the b value in historical data, Stefan was able to find a pattern. If the b value increased after an earthquake then the likelihood of a big follow-up event would be lower.

Interviewee: Stefan Wiemer

But the more interesting thing is really that the sequences that are unusual, that have a lower b value where this parameter drops, they are then the ones that are followed by even bigger events.

Interviewer: Nick Howe

This predictable change in b value allowed Stefan to create a traffic light system to suggest the threat of a bigger follow-up earthquake. As you might expect, red means that a bigger quake coming is more likely.

Interviewee: Stefan Wiemer

If the b value really has dropped by more than 10% from before, we would call it a red event. If it has increased by more than 10%, we’ll call it a green phase. And sort of in between, we call it an orange phase where it’s still a bit undecided how things will develop.
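The threshold rule Stefan describes maps directly onto a few lines of code. A minimal sketch of the classification step only (the 10% thresholds come from the quote above; the hard part in practice is estimating the b values themselves from noisy, incomplete aftershock catalogues):

```python
def traffic_light(b_before: float, b_after: float) -> str:
    """Classify a sequence by the relative change in b value:
    a drop of more than 10% is red (elevated risk of a bigger event),
    a rise of more than 10% is green, anything in between is orange."""
    change = (b_after - b_before) / b_before
    if change < -0.10:
        return "red"
    if change > 0.10:
        return "green"
    return "orange"
```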

Interviewer: Nick Howe

Stefan tested this traffic light system on data from 58 previous earthquakes and found it was pretty good at predicting what was going to happen next.

Interviewee: Stefan Wiemer

From these events we only missed one – so that’s only one that we didn’t get right – so in total we call it a classification accuracy of 95 per cent, which is a pretty good value, I would say.

Interviewer: Nick Howe

Stefan also showed that this system can be employed quickly. Sometimes, as with the case in Italy, these quakes can be months apart. But even when they were hours apart, the traffic light system still worked. But, of course, Stefan’s traffic light system has only been tested on archive data, and would have to be tried out in the messy reality of a real earthquake, where quality data may be hard to come by. Emily Brodsky, a seismologist at the University of California, Santa Cruz, who wasn’t associated with this work, thinks we won’t really know whether Stefan’s approach works until it’s used on a real earthquake.

Interviewee: Emily Brodsky

I think that as is common in earthquake science, they had to base this analysis on very, very few examples and so it’s really not clear yet whether or not it’s going to work.

Interviewer: Nick Howe

Stefan agrees that the real test will be with an actual earthquake. He also points out that not everywhere in the world is currently equipped to use these traffic light predictions as it requires a comprehensive seismic monitoring network. He suggests that there are only really good enough networks present in California and Japan and perhaps some parts of Europe. Also, although the traffic light system got it right on almost all of the historical data, the occasions where a bigger earthquake followed a smaller event were rare, so there were only a few red events in the dataset. Emily points out that this is a small number of test cases.

Interviewee: Emily Brodsky

I mean basically, they had two cases that it really appeared to work for in historical retrospective analysis, and they had a third case – Tohoku earthquake – where it might have worked but it wasn’t clear.

Interviewer: Nick Howe

The other major limitation of Stefan’s system is that even if it can predict that a bigger quake will happen, when it will happen is completely unknown. However, Stefan hopes that this will help inform the incredibly difficult decisions that authorities have to make following an earthquake.

Interviewee: Stefan Wiemer

Decisions for civil protection is a difficult thing because you need to consider many things for people – what are the buildings that are there, are the buildings safe in the first place or not so safe, what’s the population density, what are the alternatives, is it cold and snowy outside or not, can you put people somewhere? So, civil defence decisions are very difficult and we don’t do them. We only can give input for civil defence that may help them to take decisions, and the best way we can help them is to describe what we know in probabilities or scenarios of things to come in the future so that we can then say, okay, this is a more likely scenario. It’s more likely that there will be another strong event up to the north in the next so many days and weeks, so take this into consideration when you take your decision.

Interviewer: Nick Howe

That was Stefan Wiemer of ETH Zurich. You also heard from Emily Brodsky of the University of California, Santa Cruz. You can find Stefan’s paper along with a News and Views article written by Emily over at nature.com.

Host: Benjamin Thompson

Later in the show, we’ll be discussing the winners of this year’s Nobel Prizes – that’s coming up in the News Chat. Right now, though, it’s time for the Research Highlights, read this week by Anna Nagle.

[Jingle]

Anna Nagle

The Central American country of Costa Rica has recorded outbreaks of vampire-bat-transmitted rabies every year since 1985. Rabies is a lethal viral disease that affects both humans and livestock, and the vampire bat is a significant reservoir of the virus. Not much is known about how rabies persists in Costa Rican vampire bat colonies. To get a better idea, researchers studied genetic sequences taken from domestic animals in the country that had died of the virus over a fourteen-year period. They found that the virus is maintained in the country by repeated introductions of different viral lineages from North or South American countries that circulate briefly in bat populations before disappearing. The team behind the work suggest that these recurring rabies introductions mean that current culling efforts in Costa Rica are unlikely to prevent rabies transmission. Read more on that over at the Proceedings of the Royal Society B.

[Jingle]

Anna Nagle

The pink octopuses of the northern Pacific Ocean follow a simple rule: the deeper they live, the wartier they are. Scientists had puzzled over the dramatically different skin textures seen in the octopods, which had led some to doubt whether they even belonged to the same species. To test a theory that geography was somehow at the root of these differences, scientists analysed 50 specimens collected from the region, noting every wart and sucker from the top of the creatures’ heads to the ends of their arms. They found that octopuses living in shallower waters were generally larger and had smoother skin, while deep water dwellers were smaller, wartier and had fewer arm suckers. DNA analysis revealed that all were members of the same species, so quite why these differences exist is still something of a mystery, but could have something to do with differences in oxygen saturation or food availability. Dive into that research in the Bulletin of Marine Science.

[Jingle]

Host: Benjamin Thompson

Whether you’re aware of them or not, AIs are everywhere. They help recommend you songs or movies that you might like, they’re the brains in your smart speakers and they can even recognise your face to unlock your phone. In many cases, AIs work via a process called deep learning. Using this technique, they can recognise patterns, as freelance writer William Douglas Heaven explains.

Interviewee: William Douglas Heaven

I mean let’s talk about cats and dogs. You show it lots of images of cats and dogs, it will recognise commonalities that all of the cat images have versus the dog images, so when you show it a new image, it will be able to output either this is a cat or this is a dog.

Host: Benjamin Thompson

William has written a news feature for Nature this week about how some of these deep learning systems can be quite brittle. Reporter Geoff Marsh caught up with him and started by asking him for some examples of where these systems have gone awry.

Interviewee: William Douglas Heaven

So, it was only four or five years ago that a paper made a splash by showing that if you just tweak a few pixels in an image of a sloth, it will tell you that it’s a Ferrari. And often with these examples, you find that it is more certain of its wrong answer than it was about its right answer when it was just an undoctored image of a sloth.

Interviewer: Geoff Marsh

And that’s what the researchers call these kinds of rogue images – adversarial examples.

Interviewee: William Douglas Heaven

Adversarial examples, yes. People have also printed patterns on glasses and hats to make facial recognition systems not work.

Interviewer: Geoff Marsh

That’s a good one if you’re on the run.

Interviewee: William Douglas Heaven

That is a good one if you’re on the run. Some people have even shown that you don’t need to doctor an image at all, that some images look just strange enough to the AI that it completely gets it wrong. So, one nice example is a dragonfly sitting on some sort of textured pattern and it says it’s a manhole cover. It’s focusing on the textured pattern in the background rather than the dragonfly.

Interviewer: Geoff Marsh

Silly robots.

Interviewee: William Douglas Heaven

Silly robots.

Interviewer: Geoff Marsh

It’s silly to us because what we pick out as the salient features of an image may be completely irrelevant to how it’s learnt to recognise a person.

Interviewee: William Douglas Heaven

Right, we instantly look at an image of a thing like a dragonfly and think that’s a dragonfly – we don’t even sort of focus on the background pattern – but the neural network doesn’t have a sense of which object in the image is the important thing. One research group just put little rectangular stickers on a stop sign. They showed that a neural network could be fooled into completely misreading it as ‘speed limit 45’.

Interviewer: Geoff Marsh

There is something enjoyable, a sort of sniffy superiority that we have of laughing at silly robots and AI doing stuff wrong, but when you put it like that, actually there’s nothing funny about this. These fallibilities, if you like, are wide open to sabotage by kind of malicious entities.

Interviewee: William Douglas Heaven

Yes, so it seems that the people at the heart of this, the people really working on these systems, are aware of the shortcomings. Where it becomes dangerous is given the wave of hype that we have behind AI and machine learning, and deep learning in particular, companies and businesses are picking up these systems thinking they’re some kind of magic bullet to solve whatever problem they have, and I think it’s at that level of application that perhaps people aren’t fully aware of how brittle they are, how they can work very well in 99% of cases but that once case where something suddenly throws it off could be disastrous.

Interviewer: Geoff Marsh

I mean these things are trained by just feeding them thousands and thousands of images or examples of sound or whatever it is they’re trained on. Do we need to just be doing that better or is there something just inherently kind of breakable about that fundamental system of learning?

Interviewee: William Douglas Heaven

Well, it does depend on who you ask. One way to train them is to sort of build in the adversarial attack to the training so that you not only generate it on these millions of images, you would also have an adversary generating the spoofing images and you throw those into the training set, so you sort of expose it to the spoofing images from the start. But it’s sort of like an arms race because there’s nothing stopping a new adversary coming along and saying, ‘Oh, all I need to do to fool this newly trained system is to generate new kind of examples.’
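The scheme William outlines – generating spoofing examples during training and folding them back into the training set – can be sketched with a toy model. Here a simple logistic classifier stands in for a deep network, and the adversary is the classic fast-gradient-sign method; the data, numbers and names are illustrative, not taken from the feature:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "images": 8-dimensional inputs whose label depends on the first feature.
X = rng.normal(size=(200, 8))
y = (X[:, 0] > 0).astype(float)

w = np.zeros(8)

def fgsm(x, label, eps=0.3):
    """Fast-gradient-sign attack: push each input dimension a fixed
    step eps in the direction that most increases the model's loss."""
    grad_x = (sigmoid(x @ w) - label) * w  # d(logistic loss)/dx
    return x + eps * np.sign(grad_x)

# Adversarial training: at every step, train on spoofed copies of the
# data as well as the originals, so the model sees attacks from the start.
for _ in range(500):
    X_adv = np.array([fgsm(x, t) for x, t in zip(X, y)])
    for batch, target in ((X, y), (X_adv, y)):
        pred = sigmoid(batch @ w)
        w -= 0.1 * batch.T @ (pred - target) / len(target)

clean_accuracy = ((sigmoid(X @ w) > 0.5) == y).mean()
```

Because the model is retrained on its own worst-case perturbations, the attack has to be regenerated against the updated weights on every round – which is exactly the arms race William describes: a new adversary can always craft new kinds of examples against the newly hardened model.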

Interviewer: Geoff Marsh

How can we augment deep neural networks to be a bit more human?

Interviewee: William Douglas Heaven

The early attempts to build AI systems were to sort of encode the way humans think with mathematical or logical rules. The machine learning paradigm completely undercut that by shoving that to one side and just saying we’re going to build the structure of these neural networks, and with enough data and enough computing power, which was only possible in the last 10-15 years which is why we’ve had this revolution, from a blank slate it will teach itself. So, the idea now is to bring back some of this rule-based thinking and combine it with machine learning so it’s not starting from just a blank slate. You can also do this by using things that your deep neural network has already learnt. If we can reuse some of that previous learning in an approach generally known as transfer learning then we have an advantage. Now, of course, this is something that humans clearly do somehow. If you and I had never seen a giraffe before and we suddenly see a giraffe on safari then we’re not going to need to see hundreds or thousands of examples of that giraffe before we recognise the second one. That’s probably because we have a very good idea of what animals are – we’ve seen lots and lots of previous animals – and so when we’re seeing that giraffe, we’re not just seeing something bizarre and new and random where we have to sort of collect all the information in the scene and make sense of it. It’s an animal.
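The transfer-learning idea William describes – reusing previous learning rather than starting from a blank slate – can be sketched the same way. Here a frozen, hypothetical "pretrained" layer stands in for everything the network already learnt on a big source task (the "lots and lots of previous animals"), and only a small new head is trained on a handful of new examples:

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Pretend these hidden-layer weights were already learnt on a large
# source task; in transfer learning they are kept frozen.
W_pretrained = rng.normal(size=(16, 32)) / 4.0

def features(x):
    return relu(x @ W_pretrained)  # the reused representation

# A new task (the "giraffe") with very few labelled examples: train only
# a small linear head on the frozen features, not a network from scratch.
X_new = rng.normal(size=(20, 16))
y_new = (X_new.sum(axis=1) > 0).astype(float)

head = np.zeros(32)
F = features(X_new)
for _ in range(2000):
    pred = sigmoid(F @ head)
    head -= 0.1 * F.T @ (pred - y_new) / len(y_new)

train_accuracy = ((sigmoid(F @ head) > 0.5) == y_new).mean()
```

Only the 32 head weights are fitted to the 20 new examples; the representation itself, learnt elsewhere, is reused untouched – the rough analogue of already knowing what an animal is before seeing your first giraffe.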

Interviewer: Geoff Marsh

Having said everything that we’ve said, what’s the verdict for deep learning? Are people saying it’s so vulnerable, it’s so brittle? Do the risks outweigh the benefits or are they here to stay?

Interviewee: William Douglas Heaven

Well, there are definitely some detractors who think they’ve had their time and we’re not going to get to our vision of AI and human-like reasoning with the current paradigm, but there aren’t actually any real suggestions or real alternatives that have been shown to be anywhere near as successful as deep neural networks. Detractors might say this is a short-term fix to a bigger problem, but we’re waiting for them to show us something better. But it should put the brakes on the hype. There should be more awareness of the brittleness of these systems, especially as they become more widely used for more important life-changing decision making.

Host: Benjamin Thompson

That was William Douglas Heaven. You can find out more about the issues facing AI over at nature.com, which is where you’ll find William’s feature.

Interviewer: Nick Howe

Finally on the show, it’s time for the News Chat and this week it’s Nobel week and I’m joined by Flora Graham of Nature Briefing fame. Hi, Flora.

Interviewee: Flora Graham

Hi, great to be here.

Interviewer: Nick Howe

Well, great to have you and it’s an exciting week. I thought we’d start off by talking about the Nobel Prize in Physiology or Medicine and for this prize, everybody held their breath.

Interviewee: Flora Graham

That’s right. The prize was all about oxygen, how the cells in your body respond to when oxygen levels go up and down. As you can imagine, this is relevant for all kinds of things from when you play sports to when you have a stroke and even how cancer tumours grow.

Interviewer: Nick Howe

So, I guess the first thing people will be thinking is who were the winners of this prize?

Interviewee: Flora Graham

Well, this prize was split three ways. There was a cancer researcher, William Kaelin, and a physician scientist, Peter Ratcliffe, and a geneticist, Gregg Semenza, and it’s interesting to note that the three also won another very prestigious award, the Lasker Award, back in 2016 for the same work.

Interviewer: Nick Howe

So, this prize was split almost in two. So, two of the researchers worked on something called erythropoietin.

Interviewee: Flora Graham

That’s right. It was Semenza and Ratcliffe who studied that hormone and that is a hormone that turns out to be crucial for stimulating the production of red blood cells, and that’s in response to low levels of oxygen.

Interviewer: Nick Howe

And Kaelin, what was his work on?

Interviewee: Flora Graham

Well, he’s a cancer researcher and he was looking into a genetic syndrome that tends to make people more susceptible to cancer, and what he found was the hormone that Semenza and Ratcliffe investigated was related to why these cancer tumours seem to grow more frequently in these people.

Interviewer: Nick Howe

And so, obviously, this is a really important thing, like how cells sense oxygen, but what’s sort of like the broader remit of this? How will this impact society?

Interviewee: Flora Graham

It seems like there are a lot of varied implications for the impact of this research. Just for example, people are looking into whether they can harness this effect to stop how quickly cancer tumours grow.

Interviewer: Nick Howe

Moving on then, the next prize was in physics, and this was about the evolution of the Universe.

Interviewee: Flora Graham

That’s right. This prize was a little bit interesting because it’s split between two arguably maybe quite different aspects of the field. One was about exoplanets and specifically the first discovery of an exoplanet around a Sun-like star, and the other one is for some very wide-ranging theoretical cosmology.

Interviewer: Nick Howe

So, let’s start with the exoplanet part of the prize then – who were the winners here?

Interviewee: Flora Graham

Well, that half of the prize was split between two astronomers – Michel Mayor and Didier Queloz – and what they did was they discovered an exoplanet orbiting a star called 51 Pegasi. That was the first exoplanet seen around a Sun-like star but since then, that has really kicked off a whole field of exoplanet research and now there are thousands and thousands of exoplanets that we’re aware of and it’s a very exciting, growing area of astronomy. Everything from gigantic, superhot gas giants, all the way down to now we’re starting to see more and more rocky planets that possibly are even more Earth-like.

Interviewer: Nick Howe

And the second half of the prize went to a Canadian, which I’m sure you’ll be happy to hear.

Interviewee: Flora Graham

Very exciting, yeah. We’re two for two Canadian Nobel Physics winners. So, James Peebles is a cosmologist who has done incredibly influential work all about the evolution of the Universe following the Big Bang, so things like how the cosmic microwave background evolved, how all of the kind of chunks and bubbles and blobs that became the galaxies and the clusters that we know and love evolved from this fairly uniform initial condition.

Interviewer: Nick Howe

So, from the emergence of the Universe to the modern day, the Chemistry Nobel Prize is about something that the modern world is pretty much dependent on.

Interviewee: Flora Graham

Absolutely, it’s probably something that every listener has in their pocket right now in their mobile phone. It’s a lithium-ion rechargeable battery.

Interviewer: Nick Howe

So, who were the winners for this important prize?

Interviewee: Flora Graham

Well, the prize was split three ways again here to John Goodenough, Stanley Whittingham and Akira Yoshino, and they did work over many years separately, really coming up with the idea behind lithium-ion batteries, developing the technology that made them possible and affordable in a commercial sense and finally, really implementing them so that they’re at the stage where they are today where they can power all sorts of renewable energy as well as the gadgets that we all know and love.

Interviewer: Nick Howe

Yeah, and this was sort of an incremental improvement of this technology that started with Whittingham and then the others expanded on this.

Interviewee: Flora Graham

Yeah, it’s interesting. Whittingham actually was working for the oil company Exxon back in the 1970s, and Goodenough has written in Nature about how at the time, there was an oil crisis, there was really a desire to get away from fossil fuels and there was an understanding that if we’re going to move to renewables, we need a battery to store that energy that comes from wind power and solar power. So, it’s really interesting how that work back in the 1970s really has a lot of echoes in the modern day.

Interviewer: Nick Howe

So, those are winners for the Nobel Prizes for science this year, and listeners may be aware that there’s something in common with all of these.

Interviewee: Flora Graham

Well, it is an issue that the Nobel Prize has traditionally been awarded to a lot of men, and it’s something, especially in the last couple of years, that the academy that awards the prize has been looking into ways to find a more balanced roll of nominees and that will lead to laureates who more accurately reflect the makeup of the people who are doing research.

Interviewer: Nick Howe

Yeah, there was a news story a couple of days ago about how the Nobel Committee has made sure there’s more nominations for women, but it doesn’t seem to have translated into winners.

Interviewee: Flora Graham

Well, I should say there’s no quota or anything like that for women, so they’re not ensuring there’s more nominations for women, but what they are trying to do is change the makeup of the group that nominates people and also to really just explicitly prompt those nominators to think about things like diversity of country of origin, field of research, as well as gender and national origin and all that kind of thing.

Interviewer: Nick Howe

Thanks, Flora. Listeners, you can find out more about the new laureates and their work over at nature.com/news. But that’s not all. We’ve also been able to catch up with two of the new laureates – Didier Queloz who won for Physics and the oldest ever Nobel Prize winner, 97-year-old chemistry laureate John Goodenough. You can find our chats with them as podcast extras in the same place you found this show. And that’s all for this week. If you have any thoughts about this show or any of the last 600 episodes or you just want to say hello, you can reach us on Twitter – we’re @NaturePodcast. Alternatively, you can send us an email – we’re podcast@nature.com.

Host: Benjamin Thompson

And to celebrate the 600th episode of the show, we’ve put together a little Spotify playlist of some of our favourite episodes, and we’ll tweet out a link to that later in the week. I’m Benjamin Thompson.

Host: Nick Howe

And I’m Nick Howe. Thanks for listening.