Following my twelve or so years at MSC/FINEOS, I returned to Irish Life & Permanent in 2006 as Group Enterprise Architect. Reporting to the Group CIO, in a business running a highly federated IT organisation with a Shared Services infrastructure unit, the role was akin to being a senior diplomat or ambassador: working closely with the federation to encourage standardisation and co-operation without drawing too many hard lines or creating unnecessary diplomatic disputes. There were many things I liked about that role. One was the chance to learn about everything that was going on in each business unit. For me, good ideas aren’t centralised at the top of a hierarchy; they are everywhere. Where I found someone doing something clever I told others about it, encouraging them to learn (as I was at the same time) and emulate or improve on the idea. Another was the chance to offer help and even roll my sleeves up if the work interested me.

Shortly after I joined, my boss Brendan Healy asked me to meet with leaders in IPSI, our Third-Party Life Assurance administration business, to discuss some ideas they had about migrating their core Life Administration package, CLOAS, from an IBM mainframe to a Windows environment. I met with the CEO Denis McLoughlin and Administration director Bobby Scannell as they outlined their idea.

Meet CLOAS

CLOAS is a COBOL-based life insurance package developed by the Australian software company Computations in the late 1970s/early 1980s. Beneath the application COBOL lay a set of COBOL and Assembler ‘Control Programs’ which provided a range of business and technical smarts including database I/O, screen handling and General Ledger integrity. The ‘application’ code effectively made API calls to the Control Programs to do the heavy lifting, which in effect made the applications both database- and operating system-independent. All very clever for the time: the complexity of dealing with data and operating system activity sat beneath the waterline. By the time I got involved the software was owned by Computer Sciences Corporation (CSC).

I was a little familiar with CLOAS already: back in 2004 while with FINEOS I had done a couple of weeks consulting for the main Irish Life Retail business, on an independent assessment of batch performance issues they were having after a series of significant migrations to their instance of CLOAS (which they had selected and implemented with encouragement from IPSI and its alumni). In this process I got to understand the basic structure of the application where the granular functional application units are known as PEXes (or Programme Executables) and the main runtime facilities are known as REXes (or Runtime Executables).

At our meeting, Denis and Bobby told me that a genius called Mike Burgun, an Aussie who had worked on the original CLOAS development, had successfully developed a PC development environment version of the CLOAS control programs which he dubbed COBRA. Insurance IT being the (remarkably) small world that it is, Mike and I had a mutual friend in the UK – we were both godparents to that friend’s kids – so I had met Mike socially and knew him by professional reputation from my short stint on CLOAS performance in ’04.

Running a life administration system on a mainframe was an expensive endeavour. IBM hardware and software licensing were expensive relative to more open systems, and third-party add-on software was priced accordingly – so mile-for-mile you paid a lot for the compute platform.

What they wanted to do was to lift CLOAS and run it in a Windows / SQLServer environment, leveraging Mike’s learnings from COBRA to do this. There was also a vague notion that this had been done somewhere else, and in fact we later confirmed that ANZ in Sydney had successfully ported their version of CLOAS to an Oracle / AIX / MQSeries architecture.

For me this conversation was simply kicking the tyres. I’d got burned on one project in FINEOS where I’d over-committed without really considering the scale of the ask and the full scope of the delivery, so I was cautious. I also knew that batch performance was one of the mainframe’s clear strengths and that a relational database like SQLServer or Oracle didn’t have the raw throughput of the IMS database that mainframe CLOAS used. That database had been developed for the Apollo space programme in the 1960s when efficiency was paramount! Batch performance had also been a concern for us at FINEOS where relational database efficiency and considerations like tuning Java virtual machine performance had occupied us on many benchmarks of our own Life system.

I outlined my concerns: if they decided to do this (and I thought it was risky and more likely to fail than not) then batch performance would likely be our undoing, so we’d need to test that carefully upfront. They pointed to the success of Marlborough Stirling’s Lambda package, which was successfully running decent-sized policy books and was written in FoxPro.

IPSI was a relatively small company within the Irish Life & Permanent Group, so what they were really hoping for was that “Group” would be interested enough to provide some R&D funding to make this CLOAS migration happen, as it might benefit the aforementioned Retail business that ran a much larger-scale operation. There was little interest in providing Group funding for a high-risk, speculative investment and the conversation petered out.

More on mainframes and Irish Life

According to the excellent TechArchives, Irish Life got itself into the IBM mainframe business in 1969. When I started there for my first stint in 1988, we were still hearing stories of the major operating system upgrades from DOS (not the Bill Gates version) to MVS in the mid-80s and the courageous decision shortly after that to buy a plug-compatible mainframe from Amdahl rather than IBM. In line with the adage ‘nobody ever got fired for buying IBM’, there was allegedly (or apocryphally) much fear, uncertainty and doubt sown in the minds of Irish Life directors around the risks of buying a knock-off at a reduced price, but those warnings were ignored and ultimately proved unfounded.

If you haven’t worked with an IBM mainframe then let me give you a sense of one: the older IBM 3081 onsite when I joined was water-cooled to stop the chipset melting down while in use. Back in the day these machines were large and heavy – another story told of a neighbouring business on our Abbey Street campus installing a mainframe on an unreinforced upper floor, whereupon it crashed through the floor into the car park beneath – and they were impressive feats of engineering, using the very latest in IBM research and technology to deliver lightning performance relative to anything else at the time. All interactions with the mainframe felt ‘solid’. The systems software was steeped in history, with elements funded by NASA for the Gemini and Apollo programmes, and carried acronyms like CICS (Customer Information Control System), MVS/XA (Multiple Virtual Storage/Extended Architecture) and VTAM (Virtual Telecommunications Access Method). For the geek I was in the late 1980s (as opposed to the one I am now!) there was something reassuringly reliable about the mainframe.

And then client/server started to happen and I was the kid that specialised in getting PCs talking to the mainframe: playing with screen-scraping, writing ‘gateways’ to CICS (the mainframe’s online transaction server), thinking about conversions between mainframe EBCDIC character encoding and PC ASCII, file transfers and so on.

Back in the early 90s the Head of Information Services, Brendan Tolan, together with my then boss Mick Reidy and the wider IS leadership team (Yvonne Sheerin, Maurice Devitt and Martin O’Malley), published a strategy to get Irish Life off the mainframe by 1995. While that didn’t happen, it did result in the investment in a PC on every desk, the rollout of Windows 3.x, a token ring LAN, LAN Manager and the early versions of Microsoft Office, a significant Oracle database investment, the offloading of a number of significant workloads to applications written in Visual Basic and C++, and the installation of Imaging/Workflow and a variety of packages such as Oracle Financials.

The riskiest piece – the replacement of the core in-house policy administration systems with a packaged application via “Project 97” – didn’t pan out for a variety of reasons (in a twist of fate, many of the excellent team who worked on that ill-fated project later de-camped to FINEOS and became the core of the expanding Lifewise team over there).

In the late 90s, after the merger between Irish Life and Irish Permanent to form Irish Life & Permanent, Irish Permanent’s own life company Irish Progressive merged its business into Irish Life. IPSI (Irish Progressive Services International) was a spin-off Third-Party Administration (TPA) business created earlier by Irish Progressive, which continued to manage third-party life books. Irish Progressive and IPSI both used CLOAS as the underlying package. After the merger, Irish Life ‘Retail’ (the arm of the organisation that services individual customers) began to look at driving efficiencies throughout the organisation, and the case for replacing its good-but-not-great in-house systems with a flexible package was re-opened. Even in the late 90s there were very few (if any) proven client/server packages operating at the scale of Irish Life’s business. After a rigorous selection process, and perhaps influenced by the internal knowledge of and advocacy for CLOAS, Irish Life Retail chose to implement CLOAS for future New Business and then proceeded to migrate its legacy book to CLOAS, all of which was completed between 2001 and 2005.

In its implementation Irish Life developed a Java browser-based frontend, and virtually all of the satellite systems (General Ledger, Sales support/CRM, Quotes etc.) other than the core policy administration were situated off-mainframe.

The key business advantage was that the improved flexibility, configurability and tailored browser-based frontend allowed Irish Life to run a more efficient business at scale. The costs of the mainframe environment remained a substantial part of the IT cost base (seen as a necessary evil) and the choice of the underlying datastore, IBM IMS, caused ongoing operational headaches.

The IMS/DB ‘database’ was designed for the Apollo space programme in the sixties and had several structural limitations. High volumes of updates could cause the underlying performance to degrade heavily, requiring frequent re-organisations and significant, ongoing housekeeping. Large batch jobs could slow to unacceptable levels if this tuning hadn’t taken place, and unexpected overruns would sometimes occur, impacting customer service the following day. It wasn’t that IMS was flawed per se, just that the data volumes (hundreds of millions of records) with extremely high update and insert frequency were stretching it beyond its original design parameters.

In summary, Irish Life was a business largely happy with its core business functionality but running on a high-cost platform with a sometimes ill-tempered database.

Changing faces

In late 2008 there were some personnel changes, including a new General Manager of IT and Finance for Irish Life Retail: Denis McLoughlin, latterly CEO of IPSI. I had a courtesy welcome meeting with Denis in late November and asked him how I could help him in his new role.

“Do you remember that conversation we had with Bobby about CLOAS on Windows a couple of years ago? Could you have a look at that for me?”

Now: that was a little bit different. Here was the Head of IT for our largest division saying he’d like to have a look at this, and under the right circumstances fund a business case to do it.

At that time in our Group we ran two independent IBM mainframe instances and one Unisys mainframe. The other IBM mainframe instance ran the HOGAN mortgage lending platform for our bank PermanentTSB and the Unisys mainframe ran the core transactional banking.

Cost-reduction was a core focus of the Group as the global downturn began to take hold, so, as Group Enterprise Architects do, I decided to run a three-track investigation process to look at cost-reduction options and the technical feasibility of re-platforming the three core mainframe instances. This included an RFP process with re-platforming service providers and the incumbent application and infrastructure vendors.

I could bore you with all the details of that first-phase evaluation, technical and economic analysis and benchmarking, but the net outcome was:

The CLOAS IBM mainframe and the Unisys core banking showed good outline business cases to make something happen.

When Unisys understood that we were crazy and determined enough to go through with a re-platforming plan, they came up with creative alternative pricing that delivered the required saving without the risk and hassle of re-platforming (but with the benefit of knowing that if the technical risks around the Unisys platform grew at any stage we had a route out). Asysco, the vendor we would have done this re-platforming with had it gone ahead, were very impressive.

The HOGAN case was poor for a couple of reasons: it was a relatively light user of the mainframe (roughly half the MIPS, or metered units, of CLOAS) and its “control program” layer, Umbrella, had 750,000 lines of IBM Assembler versus the 30,000 or so in CLOAS, so the economics didn’t stack up.

In general, the economics of IBM mainframe replatforming come down to the number of MIPS versus lines of code. Little code, lots of MIPS: big saving potential. Lots of code, few MIPS: low saving potential.

Having achieved a good result in Unisys mainframe cost reduction with little risk, we got to turn our attention to moving from a high-level business case for a CLOAS re-platforming to rolling up the sleeves to dig deeper.

The high-level business case for that initial CLOAS decision was put together by Shay Browne, Denis’ Assistant General Manager for IT, with an estimate that we could invest €5m to save €1.5m per annum on our operating costs. We then set our minds to carefully assess the more detailed risks and create a safe path towards implementation.





Starting the Odyssey





Before I start, you might wonder why we didn’t simply consider moving to a new, more modern Life system. Simple really: a decent life system replacement and migration will set you back €50–€100 million. We had already “been there, done that!” less than ten years earlier, were happy with the results, and had a modern, service-oriented web system sitting in front of this mainframe application – so it was a classic case where re-platforming for €5 million was the more attractive option.

Early in 2009 we started to examine what would be involved in getting Irish Life out of the mainframe business. Denis assigned Shay to oversee the project; my role was to provide a level of independent technical Due Diligence and general consultancy in line with my Group role.

We had just signed a four-year enterprise deal with IBM, so we allowed ourselves until the end of that agreement to complete the replatforming. Not being under massive schedule pressure gave us the luxury of ramping up slowly and managing both risk and spend.

Getting rid of a mainframe isn’t all that easy. It wasn’t just a question of getting rid of CLOAS and its IMS database. Everything else that sat on the platform would need to get replaced or de-commissioned in order to make the required savings. Our first step was to do an inventory which resulted in a scope including:

CLOAS and its IMS database and related batch jobs

Hundreds of batch JCL jobs

Some other legacy CICS COBOL applications that had been converted from the Cincom MANTIS 4GL at an earlier stage

Hundreds of old PL/1 programmes ranging from utilities to business application code

Our core document production facility Pitney Bowes DOC/1 and hundreds of related jobs

Core utilities like our online Report Management System for dealing with job output, backup utilities etc

We also had to consider how we’d recover old mainframe backup tapes and other artefacts when we no longer had a mainframe.

The big red flag in my mind was the Cloas batch: “File Maintenance” or “FM”. As mentioned earlier, this was problematic on the mainframe, and some specialised monthly versions of the batch took 11-13 hours to run, meaning that if anything went wrong the system – which was “read only” during the batch cycle – might not be available for regular customer service activities the following day. My sense was that if we couldn’t crack that problem then we weren’t going to do any of it.

Phase 1 : Cracking CLOAS Batch





We resolved to spend a small fraction of our budget over a 6-9 month period (in the region of €250k) to address the issue of whether or not CLOAS batch could run fast enough on a Windows / SQLServer platform. This would allow us to recommend next steps in time for the 2010 budget cycle.

Our first steps were to license Microfocus COBOL Enterprise Edition and gain a Control Program source code licence from CSC. In both cases we negotiated an approach that would allow us to pay a little for development and more later if we got to production. This would allow us to fail without throwing away large sums of money, given that this was (at least in hindsight) a risky venture.

Three wise men – Dave Cooper, Mick Lynch and I (otherwise known as the Perfectionist/Realist, the Pessimist and the Optimist) – were assigned to work together on getting a working proof of concept of our longest-running CLOAS batch job (the monthly Expenses run). We were ably assisted by Mike Burgun, who provided consultancy and education around how to approach the Control Program re-write and some internal ‘gotchas’. We also had some old CLOAS Control Program documentation which outlined (or hinted at) some of the nuances in the underlying Assembler.

Given that I was providing high-level guidance and consultancy, it’s probably surprising (to me) how much code I ended up writing on this project. I had loved writing code ever since my first program at age ten and my first computer at thirteen. After a short while in FINEOS, I had stopped writing code other than for fun: there were lots of more professional developers with elegant coding practices, and I decided to stick to the conceptual, working through Powerpoint and Visio. It was probably a bit like an amateur golfer being a bit embarrassed playing with Tiger Woods and preferring to carry his clubs instead. I was still willing to dabble in proofs of concept and was a productive coder, particularly good at getting something technically tricky to work rather than honing code to perfection. This was a perfect opportunity to indulge that side of myself again.

Mick was the technical owner for mainframe CLOAS so he understood how it worked under the covers and could extract data and other artefacts easily. His role was to explain how it worked today and why it would be too hard to get it to work in Windows (the pessimism), my job turned out to be to rapidly hack together the code that would show how the problem could be solved (the optimist) and Dave’s job was to shake his head in dismay at my poor workmanship and take my 80% solution and turn it into a well-crafted production-ready solution that solved the remaining hard problems (the perfectionist). I still had a day-job to do, so much of my coding happened at the weekends where I’d get long stretches of uninterrupted time to code. This led to some frosty exchanges with my wife who, while delighted that I was enjoying this creative outlet, felt I was a little obsessed at times!

The basic idea was to take 3 million lines of unchanged COBOL application code and re-write the underlying control programmes in C# as needed to emulate the 30-50,000 lines of original Assembler code. It was a chunky enough piece of work and the trick would be to approach it in bite-size chunks. Those initial ‘bootstrap’ chunks ended up including:

A COBOL copybook (data layout) parser that populated a metadata structure from which we generated:

first Oracle and then later SQLServer DDL;

C# class wrappers for any of the COBOL data structures;

That same metadata was used within a flatfile EBCDIC-to-SQL database loader, plus tooling to compare and confirm the integrity of data taken from mainframe file extracts and loaded to databases.

A C# version of the DataBase Input-Output Control (DBIOC) which converted IMS-style calls initially to Oracle PL/SQL and later to Microsoft T-SQL.

EBCDIC to ASCII (or codepage of choice) conversion utilities

Packed decimal C# class
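To give a flavour of the last of those, a packed decimal (COMP-3) field stores two digits per byte with the sign held in the final nibble. The project’s implementation was a C# class; the sketch below shows the same decoding idea in Python (the function and parameter names are my own, purely for illustration):

```python
from decimal import Decimal

def unpack_comp3(data: bytes, scale: int = 0) -> Decimal:
    """Decode IBM packed-decimal (COMP-3) bytes into a Decimal.

    Each byte holds two 4-bit digits; the final nibble is the sign
    (0xD = negative, anything else treated as positive here). `scale`
    is the number of implied decimal places from the COBOL PIC clause.
    """
    nibbles = []
    for byte in data:
        nibbles.append(byte >> 4)     # high nibble
        nibbles.append(byte & 0x0F)   # low nibble
    sign = nibbles.pop()              # last nibble carries the sign
    if any(d > 9 for d in nibbles):
        raise ValueError("invalid packed-decimal digit")
    value = Decimal("".join(str(d) for d in nibbles) or "0").scaleb(-scale)
    return -value if sign == 0xD else value
```

So a field declared `PIC S9(3)V9 COMP-3` holding bytes `12 3C` decodes to 12.3, while `45 6D` decodes to -456.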

Scott Hattingh and Mark Rodgers also supported us initially in getting the mainframe COBOL compiling in the Microfocus COBOL suite and working on wider technical issues. In hindsight, a little external consultancy would have made life easier in terms of picking appropriate compiler options to produce .NET bytecode while accepting all of the IBM mainframe COBOL syntax. Where changes were required, we came up with a nifty way of using comments to toggle the Windows- or mainframe-only version of the code, but that was only rarely needed.
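That comment-toggle trick can be sketched as a tiny preprocessor. Our actual marker convention isn’t described above, so the `*> WIN-ONLY` / `*> MF-ONLY` tags below are invented purely for illustration: tagged lines for the non-target platform get commented out with COBOL’s `*>` inline-comment marker, and target-platform lines get uncommented.

```python
def retarget(lines, target):
    """Toggle platform-specific COBOL lines for a 'WIN' or 'MF' build.
    Lines ending in '*> WIN-ONLY' or '*> MF-ONLY' are (un)commented;
    everything else passes through untouched."""
    out = []
    for line in lines:
        stripped = line.strip()
        if stripped.endswith('*> WIN-ONLY'):
            tag = 'WIN'
        elif stripped.endswith('*> MF-ONLY'):
            tag = 'MF'
        else:
            out.append(line)
            continue
        commented = stripped.startswith('*> ')
        if tag == target and commented:
            line = line.replace('*> ', '', 1)   # activate for this build
        elif tag != target and not commented:
            line = '*> ' + line                 # disable for this build
        out.append(line)
    return out
```

Run once per target at build time, the same source file yields either variant with no hand edits.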

Taking this “agile” approach of hacking together the 80% solution meant that within days and a small number of weeks we had a full copy of the mainframe data in our development database, a workable version of the DBIOC (the CLOAS DataBase Input-Output Controller) now working against a relational database, and application code running. We’d hit missing functionality and fill in the blanks, and after a couple of months the batch job was running – with errors, and not yet producing exactly the same results as the production mainframe version – but it was encouraging all the same.
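The heart of the DBIOC idea is a mapping from IMS DL/I call verbs (GU, ISRT, REPL, DLET) onto SQL statements against equivalent relational tables. A toy Python sketch of that mapping follows; the verb names are standard DL/I, but everything else (the table and column names, the helper itself) is illustrative, and the real DBIOC also handled hierarchy, positioning and status codes that this ignores:

```python
# Map DL/I call verbs to parameterised SQL templates.
VERB_TO_SQL = {
    "GU":   "SELECT * FROM {table} WHERE {keys}",            # Get Unique
    "ISRT": "INSERT INTO {table} ({cols}) VALUES ({vals})",  # Insert
    "REPL": "UPDATE {table} SET {sets} WHERE {keys}",        # Replace
    "DLET": "DELETE FROM {table} WHERE {keys}",              # Delete
}

def translate(verb, table, key_fields, data_fields=()):
    """Render the SQL for one DL/I-style call against a relational table."""
    keys = " AND ".join(f"{k} = :{k}" for k in key_fields)
    sets = ", ".join(f"{c} = :{c}" for c in data_fields)
    cols = ", ".join(data_fields)
    vals = ", ".join(f":{c}" for c in data_fields)
    return VERB_TO_SQL[verb].format(
        table=table, keys=keys, sets=sets, cols=cols, vals=vals)
```

The unchanged COBOL keeps making what look like IMS calls; only the layer beneath the waterline changes.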

With ongoing tuning, the performance of the code improved and the accuracy of the results got closer to the correct numbers on the mainframe.

By the end of the proof of concept cycle we had single-threaded batch performance that was roughly 25% slower than the mainframe. We felt it was close enough to say this was likely to be viable as we thought it would be possible to multi-thread the batch to reduce the runtimes (which would have been much more difficult to engineer in IBM Assembler on the mainframe, and we had CPU cores to burn, at relatively low cost, in the Wintel world).
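The multi-threading idea itself is simple once independent units of work are identified: partition the policy book and process the partitions concurrently. A minimal Python sketch of that pattern (the real FM split and its locking rules aren’t shown here; `process_one` is a stand-in for the per-policy processing):

```python
from concurrent.futures import ThreadPoolExecutor

def run_batch(policies, process_one, workers=8):
    """Partition the policy book into independent chunks and process
    them in parallel -- the essence of multi-threading a batch run,
    and easy to scale when you have CPU cores to burn."""
    chunks = [policies[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(
            lambda chunk: [process_one(p) for p in chunk], chunks))
    # Flatten the per-worker result lists back into one batch result.
    return [r for chunk in results for r in chunk]
```

The prerequisite, of course, is proving that policies really can be processed independently (or grouping the ones that can’t).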

We also did some basic testing with online transactions, comparing a range of CICS online transactions to those running in an IIS web server under .NET (in the process writing some useful transaction replay and result-comparison tools).

At the end of our nine months we had many lessons learned but were confident and proud that the basic theory of replatforming CLOAS was viable and, with a fair wind and ingenuity, performance could match and maybe even improve on the mainframe. That statement itself was a major relief and achievement.





Phase 2 : Filling in the gaps and sorting out everything else





Our learning and success in phase 1 allowed Shay and Denis to ask for approval to spend a further million to flesh out the remainder of the Control Programs, fully test the online functionality and consider the wider issues like how the overall mainframe batch job and JCL would work in this new world.

Shane Tallant came onboard to provide full time project management capability and he did an excellent job of ensuring no small detail was missed in terms of scope and driving the growing team forward to deliver the goods, instituting daily standups and borrowing other techniques from Scrum to drive a decent cadence of delivery. He slowly added support from his wider mainframe CLOAS team including Glenn Higgins, Ciara Costelloe, Adrian Tierney and others.

Dave C focussed on the hard work of productionising the CLOAS control programs and multi-threading CLOAS FM, and I took some time out to look at the mainframe Job Control Language (JCL), writing tools to:

Parse and analyse JCL file and input dependencies

Automatically download and create the required pre-requisites for batch jobs and files in the Microsoft mainframe JCL emulation environment

Download and convert files of various types (Sequential, VSAM indexed etc) into their Windows equivalents.

Write first-cut Powershell utility wrappers to replace some mainframe utilities, e.g. FTP and DOC1 (probably overusing Powershell in hindsight!), with the trickier stuff in C#

Download the daily mainframe batch schedule and automatically schedule parallel test versions

Run Windows versions of jobs and compare their output to the most recent mainframe run of those jobs

That was great fun with lots of automation. It is great to write something once that can process thousands of different artefacts quickly.

In parallel with this, other Shared Services colleagues Jim Dalton and Stephen Reynolds were ensuring we would have a fit-for-purpose Windows infrastructure to land our full-scale test and production systems on. Jim did an excellent job writing the utilities that allowed us to start sending Report Management output to a Windows version of the Mantissa RMS product well in advance of the actual migration, so that we would have a full year’s history available on the Windows platform as we went live.

They also directed significant performance testing on our target platform alongside Dave Cooper. As well as application performance tuning, there was a massive focus on testing and optimising performance at the compute and storage layers. Dave gave the infrastructure team a primer on Apache JMeter, and they quickly became self-sufficient in repeatedly running extensive performance tests. Lots of tuning was done as we iterated through the tests, all of it again driven by the perfectionists on the Dev side (Dave) and the Ops side (Steve!).

And last but not least, there was effort invested in further developing and testing the Cloas online interface, allowing the pre-existing Cloas Browser (then J2EE) and a variety of MQSeries and other interfaces to communicate with the new COBOL.NET Cloas backend. After some high-level tuning we were good to go.

At the end of that year we had high-quality CLOAS control programs, confidence that we could out-perform the mainframe for CLOAS batch performance, and a credible solution for general mainframe JCL processing and some other outliers. In summary, we were feeling good.

Phase 3 : The final mile





In early 2011 we committed to the final mile, the last slog, which would last just over two years. This would involve:

Completing the re-platforming or rewrite of every remaining outlier, such as legacy PL/1 programmes, Easytrieve reports and some old CICS COBOL resulting from an earlier Cincom Mantis-to-COBOL conversion

More significantly, a large-scale end-to-end test programme led by Sally Fagan, assisted by Dave Crowley

The test programme relied heavily on using parallel runs alongside production to meet our goal of “functional equivalence”. The idea here was that we were re-platforming like-for-like functionality so if every input was identical then every output should also be identical.

We used a mix of inhouse-written and off-the-shelf tools (e.g. Redgate SQL Data Compare) to provide a wide range of ‘before’ and ‘after’ comparisons with discrepancies being investigated in detail to identify issues such as:

Unexpected sorting (or collation) differences between the EBCDIC and ASCII character sets. These were rarely an issue, and Microfocus utilities allowed the EBCDIC collation sequence to be applied to ASCII data. We did have issues in the CICS COBOL mentioned above, where the converted code used hardcoded EBCDIC hexadecimal values to represent keyboard keys pressed; these needed to be identified and fixed.

Reports or other output containing run dates (or, more typically, timestamps) that wouldn’t match. Here we used masking utilities to change dates and times in both input and output to something generic like DD/MM/YYYY or XX/XX/XXXX so that they would match.
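The masking idea can be sketched in a few lines: neutralise dates and timestamps on both the mainframe and Windows outputs before diffing, so that only genuine differences surface. The patterns below are illustrative; the project’s actual utilities targeted the specific report formats in use.

```python
import re

# Volatile fields that legitimately differ between parallel runs.
DATE_RE = re.compile(r"\b\d{2}/\d{2}/\d{4}\b")
TIME_RE = re.compile(r"\b\d{2}:\d{2}(:\d{2})?\b")

def mask_volatile(text: str) -> str:
    """Replace run dates and timestamps with fixed placeholders
    so two otherwise-identical reports compare equal."""
    text = DATE_RE.sub("XX/XX/XXXX", text)
    text = TIME_RE.sub("XX:XX", text)
    return text
```

Two reports that differ only in run date and time then compare equal once both sides are masked.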

The key to the testing effort was ensuring enough test coverage. For example, the team would pick a suite of monthly mainframe jobs and perhaps spend 2-3 months re-running them in the early phases to identify and eradicate anomalies, so that over time they got closer to processing them in real time and, in turn, closer to perfection.

We migrated our batch production schedule from the mainframe scheduler called OPCA to the distributed platform equivalent called Tivoli Workload Scheduler or TWS. It was critical to get this right. There is a very real financial impact to the company associated with batch processing failures. In true DevOps spirit the Ops guys had learnt a thing or two from their Dev counterparts and the same sort of rigour that was applied to the application testing was also applied here with utilities developed to automate generation of the required schedule components in TWS, execution of dummy runs and comparison with the actual outputs.

Ultimately while there was a lot of satisfaction in writing clever technical solutions to emulate mainframe capability and improve performance, it was this testing effort that determined whether our creative works would ever make it into production.

Move to Production





On Friday 15th February 2013 the last Cloas batch ran on the Irish Life mainframe and Cloas Nua was born and has continued to run on Windows ever since.

The legacy of the original decision brought several significant benefits:

The decision to multi-thread the Cloas FM batch reduced nightly batch runtimes from extremes of 11 hours to mainframe-busting times of two hours or less, changing the support overhead for the entire application suite. Achieving such runtimes might have been possible on the mainframe had we invested in a much more expensive range with higher throughput, but the replatformed Cloas made this possible on commodity hardware with an 80%+ infrastructure cost reduction.

During 2012 we entered into agreements to migrate significant additional Life policy volumes onto this Cloas instance, bringing the volumes from 650k to over 1 million policies. We were able to do so without upgrading the target-state hardware platform at all, and indeed this platform has supported the entire business since go-live. These additional policy volumes would have required a significant (and expensive) mainframe upgrade.

The skills profile of our Application Development staff changed considerably, from mainframe COBOL/IMS to COBOL/C#/Microsoft SQL Server, making it easier to train and retain new team members and support multiple platforms. The shift out of IMS into SQL Server alone made a massive difference to productivity and problem-solving.

We derived significant reuse from the investment in tools and utilities developed during replatforming, and the automation culture encouraged during the migration project has resulted in CloasNua being one of the best examples of Continuous Integration/Continuous Delivery and automated testing platforms we operate in our business: a significant anomaly in a world where ‘systems of record’ rarely enjoy that flexibility.

Our ex-mainframe developers now operate in a modern, interactive debugging environment allowing for quicker break-fix and integrated debugging between COBOL and C#, enhancing productivity.

Our sister company IPSI was able to re-use our investment in CloasNua to replatform one of its CLOAS clients from mainframe to Windows in less than twelve months and at a fraction of our spend, demonstrating massive and effective re-use of our R&D efforts.

This journey was, up to that point at least, by far the most rewarding of my career, given the range of involvement: from high-level concept and feasibility, to hands-on creativity in cutting code and problem-solving, to watching our beautiful baby being delivered into production. The team morale and drive to get this great collaboration into production was second to none, and it is still a feeling and a time I hark back to. It really doesn’t get much better in a working life.

Lessons Learned

Great executive sponsorship is essential: without that initial vision for the potential upsides of what was essentially a technical project, this would never have happened.

Be willing to fail: The organisation was entirely willing to throw away the first €250k investment, and open to the 1-in-4 chance of the subsequent million being thrown away. This created great psychological safety for the team. The stakes were high but not terminal.

Success breeds Success: Delivering so well on a task that was complex, large and full of unknowns did wonders for the confidence of all involved. It left them better able to handle the uncertainty that comes with large IT projects (like subsequent migrations) – we moved from a ‘there will be lots of problems’ belief to a kind of ‘when there’s a problem, we’ll fix it’ belief.

Be willing to think big: This was the first time we used ‘industrial’ testing – parallel running huge volumes of transactions and automatically comparing the outputs. The success of the approach heavily influenced our approach to subsequent migrations (and was key to the quality of the migration outcomes).
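As a purely illustrative sketch of the idea (not the actual tooling we used – the function names, record layout and file paths here are hypothetical), an automated comparison of parallel-run batch outputs might key each output record by policy number and report records that are missing, extra or different between the mainframe and replatformed runs:

```python
# Hypothetical sketch: compare mainframe vs. replatformed batch outputs
# record-by-record. Assumes fixed-format output files where the first
# 10 characters of each line are a policy key and the rest is payload.

def load_records(path):
    """Load fixed-format output records keyed by policy number."""
    records = {}
    with open(path) as f:
        for line in f:
            key = line[:10].strip()
            records[key] = line[10:].rstrip("\n")
    return records

def compare(mainframe_path, windows_path):
    """Return sorted keys that are missing, extra, or different
    in the replatformed output relative to the mainframe output."""
    old = load_records(mainframe_path)
    new = load_records(windows_path)
    missing = sorted(k for k in old if k not in new)
    extra = sorted(k for k in new if k not in old)
    different = sorted(k for k in old if k in new and old[k] != new[k])
    return missing, extra, different
```

At industrial scale the real comparison would need to tolerate known, acceptable differences (timestamps, run identifiers) and stream rather than load whole files, but the core idea is the same: exact, automated reconciliation of every record rather than sampled manual checks.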

Have a clear goal: Sounds ridiculous but this helped enormously. Our goal was simple: to implement or develop solutions for every mainframe component that were as good as or better than their mainframe counterparts. We didn’t even have a finalised list of the components when we started out, but as we picked them off and discovered new ones, that was always the focus. It resulted in loads of small improvements on the infrastructure side, such as better database maintenance procedures; more automated application copylive; better performance and capacity measurement and management; improved monitoring; and many more.

Get some early wins: we replatformed the Mantissa Report Management System (RMS) and our DOC1 jobs a year before Cloas moved to Windows. That gave the team confidence in the work we were doing and gave our operations people some experience of the new Microfocus Enterprise environment well in advance of the more complex Cloas go-live.

Give it time: The approach of hitting some key milestones and slowly increasing the burn rate reduced risk significantly but also gave us the elapsed time to work out good solutions to a variety of problems, and not need to solve too many tricky problems in parallel. Our committed contract timeframes were a constraint that provided us with this luxury and a significant upside.

Chunk it down: At the outset all of the problems that needed to be solved might have felt insurmountable had we not focussed on the one directly in front of our nose (e.g. Cloas batch times). Biting off and solving a problem at a time turned out to be a good way of eating the elephant. It also helped that we had a variety of mindsets and skillsets available to tackle different types of problems.

Flexibility and Collaboration: Having a mix of people who were all willing to work outside of their traditional siloes was invaluable. On top of that many Ops people got involved in development for the first time in a long time (or in some cases for the first time ever) and thoroughly enjoyed it. The outcome would not have been as good had siloes not been broken down early. The level of collaboration between Dev and Ops at this time was unprecedented and definitely critical to the success of the overall project.

Anything is possible … Dream it, and you can do it – with the right team, a variety of talents and risk appetites, and, not least, essential executive sponsorship. The scale and ambition of the technical effort enlisted and motivated a wide variety of participants and bonded us in a way that remains special even now, years later.

Microfocus case study with useful data and code volume statistics here