Across the globe, floods are among the most frequent and costly natural disasters in terms of human impact and economic loss. Weather forecasts have become more accurate and precise, providing reliable localized precipitation totals, storm duration estimates, and even storm movements. However, localized flood predictions are lagging behind.

We know flooding will recur, and predicting it depends on improving forecasting models and on acquiring up-to-date, high-quality, highly available data. I want to dive into why predicting floods is so difficult and look at how big data is changing flood monitoring and forecasting.

Flood Monitoring and Forecasting Are Hard

Floods are notoriously difficult to predict. These are the main issues:

Lack of Historical Data

We need historical data to understand the risks and probability of flooding for major rivers and floodplains. Unfortunately, records only go back 100 years or so, and only for particular streams and rivers. We know of earlier events of significant historical or geologic importance, but they are only estimates, and we do not use them to model future events.

Complex Data and Models

Another challenge is acquiring and modeling complex combinations of hydrological, meteorological, and topographic datasets and their interactions. Many of these require real-time availability to ensure maximum lead time for flood forecasts and warnings.

Accurate models and predictions depend on the drainage basin's response to rainfall, snowmelt, and any other inputs of water. That response depends on existing hydrologic conditions, topography, and the weather. Any human-induced or natural changes to the drainage basin alter its flood response.

Urban Environments

Cities have high densities of buildings, people, and roads, and are usually located near large bodies of water. They are also constantly changing and evolving. This is the perfect combination for various flooding scenarios like flash floods and storm surges.

During heavy rainfall events, the impervious surfaces of roads and buildings rapidly direct water into city drains and river channels. The speed and volume of this excess runoff can overload drains and streams, causing rapid flooding.

The short lag time between precipitation and flooding makes urban floods particularly difficult to predict, and flooding can happen with little or no warning.
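To give a feel for why paving matters so much, here is a rough sketch using the rational method, a classic back-of-the-envelope formula for peak runoff (Q = C × i × A, with Q in cubic feet per second when intensity is in inches per hour and area in acres). The coefficient values below are typical textbook ranges, chosen here for illustration only.

```python
# Rational method: peak runoff Q = C * i * A, where C is a unitless
# runoff coefficient (how much rain runs off rather than soaking in),
# i is rainfall intensity (in/hr), and A is drainage area (acres).
# With these units, Q comes out in cubic feet per second (cfs).
def peak_runoff_cfs(runoff_coefficient: float,
                    rainfall_intensity_in_per_hr: float,
                    area_acres: float) -> float:
    return runoff_coefficient * rainfall_intensity_in_per_hr * area_acres

# Illustrative coefficients: ~0.9 for pavement, ~0.2 for lawns/parks.
paved = peak_runoff_cfs(0.90, 2.0, 10.0)   # paved 10-acre block
grassy = peak_runoff_cfs(0.20, 2.0, 10.0)  # same block, unpaved
print(f"Paved: {paved:.1f} cfs, grassy: {grassy:.1f} cfs")
```

The same storm over the same area produces several times the peak flow once the surface is paved, which is exactly the overload scenario described above.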

Innovative Approaches to Flood Modeling and Forecasting

Implementing innovative solutions to flood modeling and forecasting brings its own challenges, thanks to the large network of existing monitoring stations and the many datasets that feed intricate models. Change takes a long time to roll out because downstream effects touch many teams and usually require buy-in and collaboration across several departments or organizations.

Advances in these complex environments may seem slow, but agencies, researchers, and companies are coming up with innovative approaches. Here are a few examples.

Internet of Things (IoT)

IoT is the next logical step in flood monitoring for our connected world. Sensors are becoming less expensive and more reliable, and can serve real-time data.

The Flood Network, out of the UK, is an IoT setup where citizens purchase their own sensors and add them to the existing network of flood monitoring devices. Sensors report water levels every 15 minutes and send alerts when levels are high. Flood Network also works with local organizations and shares data with modelers and forecasters to improve response. It has been quite popular among UK cities, and many new cities are joining the initiative.
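The report-and-alert logic such a sensor network runs can be sketched in a few lines. This is a hypothetical illustration, not Flood Network's actual implementation; the sensor IDs and the threshold are invented for the example.

```python
# Hypothetical sketch of a river-level sensor's reporting logic:
# publish a reading on a schedule (e.g. every 15 minutes) and flag
# an alert when the level crosses a per-site threshold.
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    level_cm: float

# Assumed flood threshold for this illustrative site.
ALERT_THRESHOLD_CM = 120.0

def process_reading(reading: Reading) -> str:
    """Classify a reading as a routine report or a high-water alert."""
    if reading.level_cm >= ALERT_THRESHOLD_CM:
        return f"ALERT {reading.sensor_id}: level {reading.level_cm} cm"
    return f"OK {reading.sensor_id}: level {reading.level_cm} cm"

print(process_reading(Reading("bridge-03", 45.0)))
print(process_reading(Reading("bridge-03", 130.0)))
```

In practice the threshold would be calibrated per site, and readings would be pushed to a shared backend so modelers and responders see the same data.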

The US government is also investing in IoT. Teaming up with business partners, they aim to design, develop, and test a network of inexpensive flood inundation sensors. The sensors are meant to be a part of a scalable wireless mesh network that rapidly measures and reports rising water and flood conditions to operations centers, first responders, and citizens. They want to deploy around 300 devices for testing in early 2018. Sensors are expected to cost less than $1,000.

Big Data and Machine Learning

Floods and weather go hand in hand, therefore more accurate and precise weather predictions enable better flood forecasts. Big data is a major part of this. Weather agencies like the US National Oceanic and Atmospheric Administration (NOAA) create tens of terabytes of data a day from satellites, radars, ships, weather models, and many other sources.

Big data technology has drastically improved the forecasting and lead time of storm movement, intensity, and duration.

Companies are also trying to cash in on more precise weather predictions. Hyperlocal forecasting is all the rage right now.

We are seeing hyperlocal weather apps like Dark Sky provide users with down-to-the-minute forecasts and great visualizations of temperature, cloud cover, and precipitation. Start-ups like ClimaCell are tapping into weather-dependent industries like construction and aviation, promising military-grade forecasts.

The Weather Company, owned by IBM, has invested heavily in its Deep Thunder weather technology. Deep Thunder uses advanced physics to produce 24- to 84-hour forecasts for areas as small as 0.1 square kilometers. It also uses machine learning and cognitive techniques to create weather impact models, aimed at helping businesses predict how weather affects consumer buying behavior, insurance claim validity, and even how many repair crews are required after major storms.

Hyperlocal weather forecasts are a major step towards better flood predictions. Even today, many countries and agencies can identify potential flash flooding and issue warnings in advance. Hopefully, as these technologies mature, we will see drastic improvements in flood modeling and prediction at finer scales.

Crowdsourcing

Over the years, crowdsourcing has proven successful for many applications, from traffic reporting and marketing to funding new products. Scientists and governments have also realized the potential of this non-traditional data collection technique and are applying it to flood reporting.

Models and forecasts rarely provide accurate or timely information at the street level because flooding in urban environments is so difficult to predict. Real-time information from people on the ground is critical for emergency management during a flood. The data can also be used afterward to assist damage assessments and to improve existing flood models. Here are some examples.

MIT's RiskMap.us allows residents to add information to a publicly available web map. Users direct-message a chatbot through various social media channels and receive a one-time link to submit information like location, water depth, a photo, and a description.

RiskMap tested its system during a large flood in Indonesia in early 2017. During the pilot, over 300,000 users visited the public website in 24 hours. The map was also integrated into the Uber application to help drivers avoid flood waters.

Another example of crowdsourcing is iSeeFlood.org. It provides an iOS and Android app that collects flood observations, with the goal of aiding emergency management and improving existing flood models. The app also incorporates social media data to enrich content on the public web map, using machine learning and natural language processing to identify whether tweets are relevant to live flooding events and to parse the text for location info or flood characteristics.
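To make the tweet-filtering idea concrete, here is a toy stand-in for that kind of pipeline. Real systems like the one described use trained classifiers and proper NLP; this keyword-and-regex sketch only shows the shape of the two steps (relevance filtering, then attribute extraction), and every term and example tweet in it is invented for illustration.

```python
# Toy two-stage pipeline: (1) decide whether a tweet is flood-related,
# (2) pull a crude water-depth mention out of the text if one exists.
import re

# Illustrative keyword list standing in for a trained relevance model.
FLOOD_TERMS = re.compile(r"\b(flood(?:ing|ed)?|water level|submerged)\b", re.I)
# Matches depth mentions like "30 cm", "2.5 ft", "12 inches".
DEPTH_PATTERN = re.compile(r"(\d+(?:\.\d+)?)\s*(cm|m|ft|feet|inches)\b", re.I)

def is_flood_related(tweet: str) -> bool:
    """Crude relevance check: does the tweet mention flooding at all?"""
    return bool(FLOOD_TERMS.search(tweet))

def extract_depth(tweet: str):
    """Return (value, unit) for the first depth mention, or None."""
    match = DEPTH_PATTERN.search(tweet)
    return (float(match.group(1)), match.group(2).lower()) if match else None

tweet = "Main St is flooded, water level about 30 cm near the station"
print(is_flood_related(tweet))
print(extract_depth(tweet))
```

A production system would replace the keyword filter with a trained classifier and add geocoding for location mentions, but the flow of data through the stages is the same.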