At RedMonk we generally try to avoid coming up with new terms for technologies and trends – after all, where there is an available term in common use, why not just adopt it? That pragmatic approach means we often end up using terms that seem kind of silly (Ajax, NoSQL, Serverless, even Cloud), but it also means we avoid coming up with catchy little numbers like “high performance application platform as a service” (hpaPaaS).

The business of naming has changed a lot since we launched the firm, as has the shape of the industry. Tech today is a lot more playful than it was when it was driven by Enterprise Technology vendors. The Internet has changed everything, and it has certainly changed how names for things grow and spread. Developers are the new kingmakers. As one community complains vociferously that a new term is dumb, another is adopting and propagating it with glee.

Which brings me to Progressive Delivery. I have been waiting for a term to emerge to describe a new basket of skills and technologies concerned with modern software development, testing and deployment. I am thinking of canarying, feature flags, A/B testing at scale, and advances in approaches to application and service observability. On the technology side, Kubernetes and Istio bring new management challenges, but also opportunities – service mesh approaches can enable a lot more sophistication in routing new application functions to particular user communities.
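To make the feature-flag and canary idea concrete, here is a minimal sketch of percentage-based flag gating – the names and the bucketing scheme are illustrative assumptions, not any particular vendor's implementation:

```python
import hashlib

# Hypothetical sketch: deterministic percentage rollout of a flagged feature.
# Each user hashes to a stable bucket 0-99; users below the rollout
# percentage see the new code path, everyone else sees the old one.
def bucket(user_id: str) -> int:
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100

def flag_enabled(user_id: str, rollout_percent: int) -> bool:
    return bucket(user_id) < rollout_percent

# Widening the canary is just raising the percentage; the same users
# stay in the cohort because bucketing is deterministic.
users = ("alice", "bob", "carol")
early = {u for u in users if flag_enabled(u, 10)}
wider = {u for u in users if flag_enabled(u, 50)}
assert early <= wider  # a cohort only ever grows as the rollout widens
```

The point of hashing rather than random sampling is that a user's experience stays stable across requests as the rollout percentage is dialed up or down.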

What is Istio for? “Istio lets you connect, secure, control, and observe services”. But what’s it for? At Google Next recently, Istio was definitely falling into dessert topping and floor wax territory. But routing services and deployments, observing them, and rolling out changes to particular populations sounds like a good basis for (another) revolution in software delivery. GitOps – management by pull request, as described by Weaveworks – is a related concept. Here is Stefan Prodan writing about GitOps Workflows for Istio Canary Deployments.

A couple of months ago I was talking to Sam Guckenheimer, product owner on the Visual Studio Team Services team. He was describing the Microsoft approach to testing, which considers the “blast radius” when delivering application changes to different communities – how many users would be affected, and in what kinds of ways. Canaries were therefore policy-based – with a rollout to a particular subset of users, for example internal users in a particular geography first, before gradually widening the blast radius, eventually cutting all users over to the new version. It struck me that Sam was describing something useful, and when he used the term “progressive experimentation” I had a bit of an epiphany.
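The blast-radius idea can be sketched as ring-based rollout – deploy to the smallest ring, check health, then widen. The ring names and the health-check function below are illustrative assumptions, not Microsoft's actual tooling:

```python
# Hypothetical sketch of ring-based (blast-radius) rollout: deploy to the
# smallest ring first, check health, then widen to the next ring.
RINGS = ["internal-canary", "internal-all", "external-small-geo", "everyone"]

def progressive_rollout(rings, deploy, healthy):
    """Deploy ring by ring, stopping (and reporting where we stopped)
    at the first ring that fails its health check."""
    completed = []
    for ring in rings:
        deploy(ring)
        if not healthy(ring):
            return completed, ring  # stop widening the blast radius
        completed.append(ring)
    return completed, None

# Toy run: pretend the third ring surfaces a regression.
done, failed = progressive_rollout(
    RINGS,
    deploy=lambda ring: None,  # stand-in for a real deployment step
    healthy=lambda ring: ring != "external-small-geo",
)
# done == ["internal-canary", "internal-all"], failed == "external-small-geo"
```

The design choice worth noting is that the policy (ring order, health criteria) is data, separate from the mechanics of deploying – which is what makes the rollout "policy-based".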

Later that day, I casually slipped the term Progressive Delivery into a conversation with Adam Zimman, vp of product at LaunchDarkly, the feature flagging service, and it clicked. He really liked the idea, which is hardly surprising given LaunchDarkly is about the management of new features, where organisations can deploy subsets of functionality to some users, before gathering feedback and delivering them more widely. Adam has been working on a post about the idea as well – Progressive Delivery, a History…. Condensed.

Continuous Integration/Continuous Delivery has been extremely useful in taking us forward as an industry. CI/CD is the basis of everything good in modern software development. But I do feel there are some new aspects and practices that we don’t currently have a name for.

Folks like Jez Humble have done an incredible job of pushing the state of the art forward, codifying Continuous Delivery. Arguably some of the ideas I’m considering when I think about Progressive Delivery have been around in CD thinking for a long time – here’s Martin Fowler on blue-green deployment, writing in 2010. One of the interesting points in that post is that the two target environments should be “different but as identical as possible”.

A great deal of our thinking in application delivery has been about consistency between development and deployment targets – see the promise of Docker and then Kubernetes. A core aspect of cattle vs pets as an analogy is that the cattle are all the same. The fleet is homogeneous.

But we’re moving into a multicloud, multiplatform, hybrid world, where deployment targets vary and we may want to route deployments and make them more, well, progressive.

A recent post by the Target engineering team is very interesting in this respect. Target runs Kubernetes in the cloud, but also in every retail store. It built a tool called Unimatrix to handle deployments.

“The various store development teams have different requirements in terms of what stores they deploy to first and what capabilities they are targeting. So, the problem of enabling continuous delivery to the stores wasn’t quite as simple as “give it to Unimatrix and a minute or two later you’re deployed everywhere”. We needed a way to differentiate stores with distinct capabilities so teams could build their pipelines specific to their needs.

Within Unimatrix, we implemented a facade for Kubernetes namespaces as a way of grouping stores by their unique facets. Some of these groupings include things like the types of registers that are in the stores, the regional markets these stores are in, whether the store has a certain IoT capabilities, and so forth. To date, Unimatrix exposes 27 different “namespaces” for the entire fleet, and teams can choose which group of stores they deploy to first, depending on what they’re doing.”
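The grouping Target describes can be sketched as a set of named predicates over store facets – the store records, facet names and "namespace" definitions below are illustrative assumptions, not Target's actual schema:

```python
# Hypothetical sketch of Unimatrix-style store grouping: stores carry
# facets (register type, market, IoT capability), and a "namespace" is
# simply a predicate selecting the stores that share a facet.
stores = [
    {"id": "T-0001", "market": "midwest", "register": "modern", "iot": True},
    {"id": "T-0002", "market": "midwest", "register": "legacy", "iot": False},
    {"id": "T-0003", "market": "west",    "register": "modern", "iot": True},
]

namespaces = {
    "modern-registers": lambda s: s["register"] == "modern",
    "midwest":          lambda s: s["market"] == "midwest",
    "iot-enabled":      lambda s: s["iot"],
}

def select(namespace: str) -> list[str]:
    """Return the store ids a team would deploy to first for this namespace."""
    return [s["id"] for s in stores if namespaces[namespace](s)]
```

A team targeting a register capability would deploy first to `select("modern-registers")`, while a team doing a regional pilot would start from `select("midwest")` – the same fleet, sliced along whichever facet matters for that pipeline.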

I was talking to IBM Distinguished Engineer Jason McGee recently and he agreed that progressive delivery made sense. IBM runs tens of thousands of Kubernetes clusters for clients, geographically distributed, with quite different target configurations based on, for example, whether the cluster is running on prem or off.

This detailed post explains more about how IBM relies on feature flagging technology to manage its Kubernetes deployments.

“So LaunchDarkly is actually what does the magic for us, which allows us to manage these thousands of deployments across many, many clusters. So for every service that we have, we have a feature flag. So in this case, I’m going to pick on armada-billing because they actually do things the right way. And so armada-billing has a set of rules. And we’ll scroll in here. So for every rule that you’ll see, for example, we have a rule that says if the cluster name starts with dev, roll out this version. Or if the cluster name is stage of south roll out this version.

And the cool thing about this is that since everything is decentralized and we don’t have like a single central chain server, central deployment server pushing code out, we could theoretically push out new code to every single one of our clusters within 60 seconds because each of these Cluster Updaters is running independently. And every 60 seconds they’re checking for updates. So if we wanted to we could actually push out new code within 60 seconds to every single one of our clusters. So we could update 1,500 deployments all at once. Should we? No. Could we? Yes. But what we’ve chosen instead is kind of, and this is kind of what we settled on, is that we’ll roll out by region.

So we’ll actually pick on the Australians first. We’ll roll out stuff to Sydney. Make sure it doesn’t break there first. That’s kind of our canary test. And so we’ll roll out stuff to Sydney. Make sure nothing breaks in the carts. This is after we’ve already gone to our existing stage environment. But what we do find is that, you know, there’s no place like production home to test stuff in. Because we’ll push it to stage. It works great. We’ll put it into pre-prod. It works great. And we’ll push it out to production. Bam. “Oh, we forgot about this or did this.” So what this allows us to do is to just easily … We test stuff in Sydney. The cool thing about the process here is … We can go either way. So we can go forward or backward releases.”
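The rule evaluation described above – "if the cluster name starts with dev, roll out this version" – can be sketched as an ordered rule list with a first-match-wins lookup. The rules, version strings and defaults here are illustrative assumptions, not IBM's actual LaunchDarkly configuration:

```python
# Hypothetical sketch of per-service flag rules: each rule maps a
# cluster-name predicate to the version that cluster's independently
# polling Cluster Updater should run.
RULES = [
    (lambda name: name.startswith("dev"),   "1.4.0-rc1"),  # dev clusters first
    (lambda name: name.startswith("stage"), "1.4.0-rc1"),  # then staging
    (lambda name: name.endswith("-syd"),    "1.4.0"),      # Sydney as the canary region
]
DEFAULT_VERSION = "1.3.2"  # everyone else stays on the current release

def version_for(cluster_name: str) -> str:
    """First matching rule wins; unmatched clusters get the default."""
    for matches, version in RULES:
        if matches(cluster_name):
            return version
    return DEFAULT_VERSION
```

Because every updater polls the same rules independently, widening the rollout is just editing the rule list – no central deployment server pushes anything, which is what makes the 60-second fleet-wide update possible (if rarely advisable).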

This architecture allows for more social autonomy across the distributed teams.

“We kind of talked about the squad autonomy and letting them do their own thing. It’s really up to the squads how they actually want to roll to code out. We, again, we’re kind of setting best practices. “Let’s do it by region. Let’s automatically push out the dev when the stuff is built.”

Other reasons people might want progressive delivery could be business policy or compliance. We might want to roll out functionality only to a particular geography because of regulations, for example – not just for testing reasons, but driven by business requirements. Progressive delivery potentially allows for a different discussion with the customer, taking us beyond simple alpha, beta and GA thinking.

Some folks working on Progressive Delivery related technologies: IBM, LaunchDarkly, Microsoft, and Target. Turbine Labs is also notable, building a platform for “shaving the monolith”, using Envoy as a service router: “Stand up new infrastructure, split traffic, and compare performance in one place.” Observability is crucial in progressive delivery, and Honeycomb is doing really interesting work there.

RedMonk has some history with the idea of progressive change. Back in 2008 we had our friends at Crowd Favorite build us a plugin for WordPress which allows for progressive licensing, where the license under which a piece of work is published becomes more permissive over time.

In conclusion, it may be that the last thing the industry needs is a new term for software delivery. It’s probably a bit mad to publish this the day before I go on holiday. My mentions may blow up with people telling me I am an idiot and have just described Continuous Delivery. But it does seem we’re unearthing a new set of problems and a new set of opportunities, and I don’t feel like anyone has given it a name yet. So for now I will be using it, and seeing what kinds of reactions I get.

disclosure: IBM and Microsoft are both clients.