“There is always tension between the possibilities we aspire to and our wounded memories and past mistakes.” — Sean Brady

I talk to a lot of executives who are debating different migration approaches for the applications in their IT portfolio. While there’s no one-size-fits-all answer to this question, we spend a good deal of time building migration plans with enterprises using a rubric that takes into account their objectives, the age/architecture of their applications, and their constraints. The goal is to help them bucket the applications in their portfolio into one of 6 Migration Strategies.
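The bucketing idea can be sketched as a toy rubric. The attribute names, decision order, and scoring here are illustrative assumptions of mine, not the actual rubric we use with enterprises; only the six strategy names come from the post.

```python
# Hypothetical portfolio rubric. The attribute names and decision
# order are illustrative assumptions; only the strategy names
# (re-purchase, retire, revisit, re-architect, re-platform, rehost)
# come from the post.

def bucket(app):
    """Assign an application (a dict of simple attributes) to one of
    the six migration strategies."""
    if not app.get("in_use", True):
        return "retire"
    if app.get("saas_equivalent"):          # e.g. back-office -> Salesforce/Workday
        return "re-purchase"
    if app.get("no_appetite_yet"):          # e.g. mainframe workloads
        return "revisit"
    if app.get("needs_new_capabilities"):   # scalability, DevOps model, etc.
        return "re-architect"
    if app.get("easy_optimizations"):       # right-sizing, open-source swaps
        return "re-platform"
    return "rehost"                         # steady-state default

print(bucket({"in_use": False}))                 # retire
print(bucket({"needs_new_capabilities": True}))  # re-architect
```

In practice the real inputs are the ones named above: objectives, age/architecture of the application, and constraints; the point of the sketch is only that each application lands in exactly one bucket.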

In some cases, the choices are obvious. We see a lot of organizations migrating their back-office technology and end-user computing applications to an as-a-service model (“re-purchasing” toward vendors like Salesforce and Workday); a number of organizations will look for opportunities to retire systems that are no longer in use; and some organizations will choose to later revisit systems that they don’t feel they have the appetite or capabilities to migrate yet (e.g., the mainframe, though You Can Migrate Your Mainframe to the Cloud).

In other cases, the approach isn’t so obvious. In my previous post, I touched on the tension between re-architecting and rehosting (a.k.a. “lift-and-shift”). I’ve heard a lot of executives (including myself, before I learned better) suggest that they’re only moving to the cloud if they “do it right,” which usually means migrating to a cloud-native architecture. Other executives are biased toward a rehosting strategy because they have a compelling reason to migrate quickly (for example, a data center lease expiry), want to avoid a costly refresh cycle, or simply need a quick budget win; the savings from rehosting tend to be in the neighborhood of 30% when you’re honest about your on-premises TCO.

Somewhere in the middle of rehosting and re-architecting is what we call re-platforming, where you don’t spend the time on a complete re-architecture but instead make some adjustments to take advantage of cloud-native features or otherwise optimize the application. This common middle ground includes right-sizing instances to realistic capacity scenarios (you can easily scale up later, so there’s no need to overbuy up front), or moving from a licensed product like WebLogic to an open-source alternative like Tomcat.
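The right-sizing part of re-platforming is just arithmetic: pick the smallest instance that covers observed peak utilization plus some headroom, instead of carrying over the worst-case capacity you bought on-premises. A minimal sketch, where the instance catalog, prices, and headroom margin are made-up assumptions for illustration:

```python
# Minimal right-sizing sketch. The catalog sizes, hourly prices, and
# 25% headroom margin are made-up assumptions, not real instance pricing.

CATALOG = [  # (vCPUs, hypothetical hourly cost)
    (2, 0.10), (4, 0.20), (8, 0.40), (16, 0.80),
]

def right_size(peak_cpu_pct, current_vcpus, headroom=0.25):
    """Pick the smallest instance covering observed peak CPU plus a
    headroom margin, rather than provisioning for the worst case."""
    needed = current_vcpus * (peak_cpu_pct / 100) * (1 + headroom)
    for vcpus, cost in CATALOG:
        if vcpus >= needed:
            return vcpus, cost
    return CATALOG[-1]  # nothing bigger available; take the largest

# A 16-vCPU server peaking at 20% CPU fits comfortably on 4 vCPUs.
print(right_size(peak_cpu_pct=20, current_vcpus=16))  # (4, 0.2)
```

Because cloud capacity can be scaled up in minutes, the headroom margin can be far smaller than the multi-year buffer you’d build into a hardware purchase.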

So which approach is more often right for your organization?

Without talking to you about your specific opportunities and constraints (which I’m happy to do; just drop me a note), it’s hard to give a definitive answer, but I can highlight a few anecdotes that should help shape your perspective.

The first is a quote from Yury Izrailevsky’s blog. Yury is the Vice President of Cloud and Platform Engineering at Netflix, and is a well-respected thought leader in our industry.

“Our journey to the cloud at Netflix began in August of 2008, when we experienced a major database corruption and for three days could not ship DVDs to our members. That is when we realized that we had to move away from vertically scaled single points of failure, like relational databases in our datacenter, towards highly reliable, horizontally scalable, distributed systems in the cloud. We chose Amazon Web Services (AWS) as our cloud provider because it provided us with the greatest scale and the broadest set of services and features. The majority of our systems, including all customer-facing services, had been migrated to the cloud prior to 2015. Since then, we’ve been taking the time necessary to figure out a secure and durable cloud path for our billing infrastructure as well as all aspects of our customer and employee data management. We are happy to report that in early January, 2016, after seven years of diligent effort, we have finally completed our cloud migration and shut down the last remaining data center bits used by our streaming service! Given the obvious benefits of the cloud, why did it take us a full seven years to complete the migration? The truth is, moving to the cloud was a lot of hard work, and we had to make a number of difficult choices along the way. Arguably, the easiest way to move to the cloud is to forklift all of the systems, unchanged, out of the data center and drop them in AWS. But in doing so, you end up moving all the problems and limitations of the data center along with it. Instead, we chose the cloud-native approach, rebuilding virtually all of our technology and fundamentally changing the way we operate the company … Many new systems had to be built, and new skills learned. It took time and effort to transform Netflix into a cloud-native company, but it put us in a much better position to continue to grow and become a global TV network.”

Yury’s experience is both instructive and inspirational, and I’m certain that Netflix’s re-architecting approach was right for them.

But most enterprises aren’t Netflix, and many will have different drivers for their migration.

When I was the CIO at Dow Jones several years ago, we initially subscribed to the ivory tower attitude that everything we migrated needed to be re-architected, and we had a relentless focus on automation and cloud-native features. That worked fine until we had to vacate one of our data centers in less than 2 months. We re-hosted most of what was in that data center into AWS, and sprinkled in a little re-platforming where we could, making some small optimizations while still meeting our time constraint. One could argue that we would not have been able to do this migration that quickly if we didn’t already have the experience leading up to it, but no one could argue with the results. We reduced our costs by more than 25%. This experience led to a business case to save or reallocate more than $100 million in costs across all of News Corp (our parent company) by migrating 75% of our applications to the cloud as we consolidated 56 data centers into 6.

GE Oil & Gas rehosted hundreds of applications to the cloud as part of a major digital overhaul. In the process, they reduced their TCO by 52%. Ben Cabanas, one of GE’s most forward-thinking technology executives, told me a story that was similar to mine — they initially thought they’d re-architect everything, but soon realized that would take too long, and that they could learn and save a lot by rehosting first.

One of my favorite pun-intended quotes comes from Nike’s Global CIO, Jim Scholefield, who told us that “Sometimes, I tell the team to just move it.”

Cynics might say that rehosting is simply “your mess for less,” but I think there’s more to it than that. I’d boil the advantage of rehosting down to 2 key points (I’m sure there are others; please write about them and we’ll post your story):

First, rehosting takes a lot less time, particularly when automated, and typically yields a TCO savings in the neighborhood of 30%. As you learn from experience, you’ll be able to increase that savings through simple replatforming techniques, like instance right-sizing and open source alternatives. Your mileage on the savings may vary, depending on your internal IT costs and how honest you are about them.

Second, it becomes easier to re-architect and constantly reinvent your applications once they’re running in the cloud. This is partly because of the obvious toolchain integration, and partly because your people will learn an awful lot about what cloud-native architectures should look like through rehosting. One customer we worked with rehosted one of its primary customer-facing applications in a few months to achieve a 30% TCO reduction, then re-architected to a serverless architecture to gain another 80% TCO reduction!

Re-architecting takes longer, but it can be a very effective way for an enterprise to reboot its culture and, if your application is a good product-market fit, can lead to a healthy ROI. Most importantly, however, re-architecting can set the stage for years and years of continual reinvention that boosts business performance in even the most competitive markets.

While I still believe there’s no one-size-fits-all answer, I’d summarize by suggesting that you look to re-architect the applications where you know you need to add business capabilities that a cloud-native architecture can help you achieve (performance, scalability, global reach, moving to a DevOps or agile model), and that you look to rehost or re-platform the steady-state applications that you aren’t otherwise going to repurchase, retire, or revisit. Either migration path paves the way for constant reinvention.

What’s your experience been?

Keep building,

- Stephen

orbans@amazon.com

@stephenorban

Read My Book: Ahead in the Cloud: Best Practices for Navigating the Future of Enterprise IT

Note: “Reinvention” is the fourth (and never-ending) stage of adoption I’m writing about in the Journey to Cloud-First series. The first stage is “Project,” the second stage is “Foundation,” and the third is “Migration.” This series follows the best practices I’ve outlined in An E-Book of Cloud Best Practices for Your Enterprise. Stay tuned for more posts in this series.

Both of these series are now available in my book Ahead in the Cloud: Best Practices for Navigating the Future of Enterprise IT.