Like most other global banks, Barclays has faced challenges in adapting to modern digital banking. Most banks still operate on mainframes, which means that there is a single point of failure and any sort of glitch or outage can leave customers without access to their services for hours, if not days.

Last year Barclays made the headlines with two outages that caused problems for its customers.

However, Barclays is using NoSQL database provider MongoDB to accelerate its digital banking capabilities, with future plans to let customers carry out some of their banking transactions on Facebook, under the new EU PSD2 directive.

Speaking at MongoDB Europe in London this week, Barclays’ director of data optimisation and simplification, Bala Chandrasekaran, explained how the bank is using MongoDB as an operational data store to extract and replicate data in near-realtime away from the mainframe.

By creating a near mirror image of the transactional data inside of MongoDB, which is quicker to react and more resilient, given its distributed nature, Barclays can not only scale up its digital services, but also reduce the risks it previously faced.
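The replication mechanism itself isn't detailed in the talk, but the idea of continuously applying mainframe change events to a MongoDB copy can be sketched as follows. This is a minimal, hypothetical illustration: the event shape, field names and in-memory store are all assumptions, and a real pipeline would use proper change-data-capture tooling.

```python
# Hypothetical sketch of near-realtime replication: change events extracted
# from the mainframe are applied as idempotent upserts to the ODS copy, so
# the feed can safely be replayed after a failure. All names are illustrative.

def apply_event(ods, event):
    # Upsert keyed on transaction id: applying the same event twice
    # leaves the store unchanged (replay-safe).
    doc = ods.setdefault(event["customer_id"], {"transactions": {}})
    doc["transactions"][event["txn_id"]] = event["payload"]

ods = {}
apply_event(ods, {"customer_id": "c42", "txn_id": "t1", "payload": {"amount": -5}})
# Replaying the same event does not duplicate the transaction.
apply_event(ods, {"customer_id": "c42", "txn_id": "t1", "payload": {"amount": -5}})
```

Idempotent writes matter here because a replication feed that stalls and restarts will re-deliver events; the copy must converge on the same state regardless.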

Chandrasekaran explained to delegates that Barclays has 6.5 million customers on its mobile apps, who log in at least once every 15 days. On average, people log in 26 times a month. He said:

As the saying goes, too much of a good thing can be a problem. Most of our critical applications run on mainframes and that creates a single point of failure. If it goes down, which it does more often than you think it should, it just brings the service down. You end up with no way to help customers.

Moving away from the mainframe

However, after doing some customer journey mapping, Barclays found that 92% of traffic across all channels was triggered by just 25 transactions. And 85% of those 25 transactions were read-only. This means that most of the time, customers come to the bank simply to find out information, such as checking their balance or viewing transactions.

Knowing this, Chandrasekaran realised that Barclays could make a read-only copy of the data available when the mainframe goes down, inside an operational data store. He said:

A data cache that would sit between our channels and our mainframes. The idea was to get a subset of data that would support those 25 customer journeys and allow them to transact with us.

This data cache would be built on the following design principles:

• The need to get the data out of the mainframe as quickly as possible, so that Barclays can provide the most recent data possible. There's no point in Barclays serving data from the end of day yesterday.

• An API approach that creates a layer of abstraction, so that the channel doesn't know whether it's talking to the ODS or the mainframe; it asks a question and gets the data back.

• It has to be resilient and secure.
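The abstraction principle above can be sketched in a few lines. This is an illustrative, hypothetical example, not Barclays' actual code: the function names, journey names and in-memory stores are all assumptions. The point it shows is that the channel calls one API and never knows which backend answered; read-only journeys fall back to the ODS copy when the mainframe is down.

```python
# Hypothetical sketch of the routing layer between channels and backends.
# The channel calls handle_request(); the layer decides whether the answer
# comes from the mainframe or the operational data store (ODS).

READ_ONLY_JOURNEYS = {"check_balance", "view_transactions"}

class MainframeDown(Exception):
    pass

def query_mainframe(journey, customer_id):
    # Stand-in for a real mainframe call; here it simulates an outage.
    raise MainframeDown

ODS = {("view_transactions", "c42"): [{"amount": -9.99, "desc": "coffee"}]}

def query_ods(journey, customer_id):
    # Stand-in for a MongoDB read against the replicated data cache.
    return ODS.get((journey, customer_id))

def handle_request(journey, customer_id):
    """Channels call this; they never know which backend answered."""
    try:
        return query_mainframe(journey, customer_id)
    except MainframeDown:
        if journey in READ_ONLY_JOURNEYS:
            # Serve the read-only copy so the customer still gets an answer.
            return query_ods(journey, customer_id)
        raise  # writes still need the system of record
```

Note the asymmetry: reads degrade gracefully to the cache, while writes still fail loudly, since only the mainframe is the system of record at this stage.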

Chandrasekaran explained that after the company carried out its procurement process, it concluded that MongoDB was best suited to the project. He said:

A lot of technical conversations are really religious conversations. A lot of opinions. But something like this suits a NoSQL architecture. You're not going to get it with a relational database. In-memory caches could have done it, but they were relatively expensive at the time. [We] decided to go with MongoDB and since then we have been able to launch a number of different use cases.

Making it useful

The best way to think about this is an operational data store that is mirroring, or in sync with, the old-school mainframe. A request comes to Barclays for some customer data, and the time taken to pull that data to the user, via MongoDB, is a maximum of two seconds, but more likely around 800 milliseconds.

This has meant that Barclays can now provide better services, such as giving its users their full transaction history, as opposed to just the last 300 transactions that were available before. This is because the MongoDB architecture is scalable and the mainframe isn't under as much pressure. The operational data store now holds over 13 billion transactions in 114 million documents, covering the past 35 months.

The progression of the data store means that Barclays can now begin to move MongoDB beyond being just a resilience layer that reduces pressure on the mainframe, towards being a core operational data store. Chandrasekaran said:

If I have the balance of the mainframe, the transactions of the mainframe replicated on the ODS, why can’t I use Mongo or the ODS as the first port of call for transactions? Why not flip this around? Rather than having the ODS as the resilience layer for the mainframe, why not make the mainframe the resilience layer for the ODS? Use the ODS to serve up the landing page as well, because this data is actually near realtime. [Instead of] almost 15 or 16 different lookups across multiple systems, it is now a single lookup on a single document in the ODS. By Q2 next year this should be rolled out. [We are] offloading processes out of the mainframe.
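The "single lookup on a single document" point rests on MongoDB's document model: everything the landing page needs can be embedded in one per-customer document, rather than joined together from many systems. The sketch below is a hypothetical illustration of that idea; the document shape and field names are assumptions, not Barclays' schema.

```python
# Illustrative: one denormalised document per customer holds balance and
# recent transactions together, so a landing page is a single document read
# instead of 15-16 cross-system lookups.

customer_doc = {
    "_id": "c42",
    "balance": 1250.75,
    "recent_transactions": [
        {"amount": -9.99, "desc": "coffee"},
        {"amount": 2000.00, "desc": "salary"},
    ],
}

def landing_page(ods, customer_id):
    # One read on one document; in MongoDB this would be a find_one()
    # on the customer collection.
    doc = ods[customer_id]
    return {
        "balance": doc["balance"],
        "latest": doc["recent_transactions"][:5],
    }

ods = {"c42": customer_doc}
page = landing_page(ods, "c42")
```

Embedding trades storage and write complexity for read speed, which fits the bank's own finding that the overwhelming majority of its traffic is read-only.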

Chandrasekaran also touched on the bank’s future plans, specifically about how this system could provide the scalability to allow Barclays to expose the data held in MongoDB to social networks, such as Facebook. This would mean that customers could carry out some of their banking activities on the social network.

This has been made possible by an EU directive called PSD2, which enables banks to let third-party providers manage their customers’ finances. However, little has been given away about how banks could allow their customers to use the likes of Facebook for their finance management. Chandrasekaran said: