When you concentrate two years’ worth of fundraising into seven hours, every second counts. That’s the reality for Comic Relief, one of the U.K.’s most notable charities. Held every two years, Comic Relief’s Red Nose Day encourages the public to make the world a better place in the easiest way imaginable: by having a great time.

For this year’s fundraising event, Comic Relief turned to Google Cloud’s technology partner Pivotal to host its donation-processing systems on Pivotal Cloud Foundry, which also automated management of the underlying cloud infrastructure. The platform ran on services from Google Cloud Platform (GCP) during Red Nose Day. In advance of the 2017 event, the charity forecast peaks of several hundred transactions per second for its online donation system. The stakes couldn’t have been higher.

We’re happy to report that Comic Relief raised over £73 million (and counting) for its marquee event! We caught up with David Laing, director of software engineering at Pivotal, to discuss running Pivotal Cloud Foundry on GCP for the 2017 event.

What kind of scale were you expecting for Red Nose Day?

Comic Relief does most of its two-year fundraising cycle in a seven-hour window. The donation system needed to scale with 100% uptime and reliability. It’s your classic elastic, spin-up/spin-down use case for the public cloud.

More than 14,000 call center reps take donations over the phone and log the details in the system. We also expected up to 100,000 concurrent web sessions from individuals donating online. In all, we expected nearly a million donations, with peaks of up to 300 donations a second.
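To put rough numbers on that elasticity, here's a back-of-the-envelope capacity calculation in Python. Only the 300-per-second peak comes from the forecast above; the per-instance throughput, headroom factor, and app name are illustrative assumptions, not figures from Comic Relief:

```python
import math

# Back-of-the-envelope capacity planning for the donation service.
PEAK_DONATIONS_PER_SEC = 300   # forecast peak from the interview
PER_INSTANCE_THROUGHPUT = 25   # assumed requests/sec one app instance handles
HEADROOM = 2.0                 # target <= 50% utilization to absorb spikes

instances = math.ceil(PEAK_DONATIONS_PER_SEC * HEADROOM / PER_INSTANCE_THROUGHPUT)
print(f"Scale each shard's donation app to {instances} instances")

# On Cloud Foundry, that maps to a one-line scale-out, e.g.:
#   cf scale donations-api -i 24
# (the app name "donations-api" is hypothetical)
```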

What kind of apps did you run on Pivotal Cloud Foundry?

These were cloud-native applications, authored by consultancy Armakuni in conjunction with Comic Relief. The apps used horizontally scalable, stateless microservices. Capturing donor information and processing donations immediately is critical. This core availability requirement drove the architecture to have layers upon layers of redundancy. We hosted three independent shards of the full system in different data centers spread over four countries and two continents, balancing traffic between them using DNS. Each shard then load balanced donations across multiple payment providers. Choosing availability over consistency with an “eventually consistent” architecture like this meant we could keep taking donations even through multiple system failures. An asynchronous background process collected all the donation information into a central reporting shard.
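Comic Relief's actual code isn't public, but a minimal sketch of that availability-first payment routing might look like the following. The Provider class, its charge interface, and the local queue are hypothetical stand-ins:

```python
import queue
import random

class PaymentError(Exception):
    """A single payment provider rejected the charge or timed out."""

class Provider:
    """Stand-in for a payment provider client (hypothetical interface)."""
    def __init__(self, name: str, up: bool = True):
        self.name, self.up = name, up

    def charge(self, amount_pence: int) -> str:
        if not self.up:
            raise PaymentError(self.name)
        return f"{self.name}-receipt-{amount_pence}"

PROVIDERS = [Provider("psp-a"), Provider("psp-b", up=False), Provider("psp-c")]
local_queue: queue.Queue = queue.Queue()  # drained later by a background job

def take_donation(amount_pence: int, donor: str) -> None:
    """Record the donation if *any* provider is up: availability first."""
    for provider in random.sample(PROVIDERS, k=len(PROVIDERS)):  # spread load
        try:
            receipt = provider.charge(amount_pence)
            break
        except PaymentError:
            continue  # fail over to the next provider
    else:
        raise PaymentError("all providers unavailable")
    # Record locally now; an async process later reconciles every shard's
    # queue into the central reporting shard (eventual consistency).
    local_queue.put({"donor": donor, "receipt": receipt})

take_donation(500, "donor@example.com")  # a £5.00 donation
```

The key design choice is in the for/else: a donation only fails if every provider fails, and reconciliation with the reporting shard is deferred rather than blocking the donor.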

What was it like working with GCP’s services?

At Pivotal, we love the performance and rapid provisioning of Compute Engine. The automatic sustained use discounts on Google Cloud are so refreshing: you don’t need engineers to comb through consumption data to minimize your bill.

The load for Comic Relief is highly variable, with major consequences if performance suffers during traffic spikes. Unlike other clouds, GCP load balancers don't require a call to technical support to pre-warm, which saves our cloud admins' time and lets us survive unexpected load increases. It gives us peace of mind knowing that GCP load balancers are built for scale and backed by the largest network of any cloud provider. In our experience, Google Cloud handles traffic spikes that might stress other providers.

We used Stackdriver Logging in our weekly capacity tests, and we really liked its tight integration with BigQuery and Google Cloud Storage. Having the telemetry data in a massively scalable data analysis system helped us pinpoint problem areas ahead of time.
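For example, once Stackdriver Logging entries are exported into BigQuery, a capacity-test analysis can be a single query. The dataset, table, and column names below are assumptions for illustration, not the actual export schema:

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()

# Hypothetical table fed by a Stackdriver Logging export to BigQuery; the
# dataset, table, and column names are illustrative assumptions.
query = """
    SELECT
      endpoint,
      COUNT(*) AS requests,
      APPROX_QUANTILES(latency_ms, 100)[OFFSET(99)] AS p99_latency_ms
    FROM `capacity_tests.request_logs`
    WHERE timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
    GROUP BY endpoint
    ORDER BY p99_latency_ms DESC
    LIMIT 10
"""

# Print the slowest endpoints from the last week of capacity tests.
for row in client.query(query).result():
    print(row.endpoint, row.requests, row.p99_latency_ms)
```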

Identity management is another area where GCP shines. Since we already use G Suite for our corporate identity management, managing user access to all the GCP services was effortless.

How was the deployment of Pivotal Cloud Foundry on GCP?

Both Pivotal and Google have invested a lot in making Cloud Foundry and GCP work well together.