By Simon Caulkin

Google ‘change management’ and you get half a billion hits. ‘Change management models’ gets 17m. Yet perhaps never in management has so much been sought by so many to so little effect. Almost all of the models referenced have one unwanted trait in common. They don’t work. Seventy per cent of all large-scale change initiatives fail, according to the Harvard Business Review. When they involve IT, the failure rate, in whole or in part, is 90 per cent.

Why? Well, not coincidentally, there’s something else conventional models have in common: a starting assumption that when you launch change, or more fashionably ‘transformation’, you know where you’re going. Of course you do: what leader would admit she didn’t? So change is a matter of planning how to get to the appointed destination, with a schedule of carefully orchestrated quick wins, deliverables, milestones and communication campaigns to keep programme and people on track.

But there’s a snag. While life may be understood backward, as the philosopher Søren Kierkegaard put it, it is lived forward. This deceptively simple truth means that managing by foreordained result is both epistemologically and practically nonsense. In any body composed of interdependent moving parts, change happens not mechanically but through a series of interactions and feedback loops between the parts, which ripple out and alter the whole. The behaviour of the ensemble can’t be predicted in advance from that of the components, and vice versa. In other words, change is emergent – a result, not a cause.

This changes everything. The result is not just a different ‘change model’. It is a different way of thinking. Conventional change models come straight out of the command-and-control (aka central planning) playbook, decreed from above and driven down (‘cascaded’) through the organisation. In a systems view, change is better seen as discovery, proceeding not by way of an abstract plan, plotted to an arbitrarily fixed destination, but by open-ended investigation and iterative experiment that delivers ever-improving outcomes.

In this version of the process, change starts by establishing not where you’re going but where you are now. Like it or not, you start from here, facing forward. And the only way to start the process of discovery is to go and see for yourself.

There’s a Japanese phrase for this, ‘Genchi genbutsu’, much used at Toyota, and it turns out to be quite profound, as we’ll see. But in the 1980s, when a fledgling Vanguard was learning how systems principles applied to services somewhat on the fly, a more immediate priority, as John Seddon admits, was to keep one step ahead of the client. He recounts how a brilliant and mercurial mentor at the time noticed on one assignment how little front-line service agents could actually do for clients calling in with a problem – ‘what if we equipped them to deal with the calls that they are likely to get?’

It was a pivotal moment. To work out how to do that, the first step was to listen to customers’ calls live – a revelation in itself, since the most striking thing about them was how many were complaints about something not done, or not done properly, on the first contact (which is of course the definition of what came to be known as failure demand). Next, they had to turn that thought round and ask themselves what should have been done that would have made the follow-up call unnecessary – that is, what was the purpose of the service, from the customer’s point of view? Finally, they realised they needed to know what kind of customer needs were predictable and which only arose from time to time. Only then could they proceed to train operators in a way that would reliably improve performance.

‘Go and see for yourself’ turned out to be critical in two other ways. The first is that when approached in this manner, the root problem to be addressed (and hence the nature of the subsequent change) was never the one managers thought it was. The functional measures they were using – number of calls per shift, speed of response of the different functions – told them nothing about the experience of the customers, who naturally took an end-to-end view. As a result they were always surprised, and often dismayed, to discover that service that was excellent according to their (or regulators’) measures got a vigorous thumbs down from recipients. Conversely, the eventual benefits often went far beyond the incremental gains required by any plan: huge increases in capacity from cutting unnecessary work and failure demand, and steadily shrinking costs as customer service improved.

The second reason ‘see for yourself’ was essential was that the truth about the operational reality was so unpalatable to managers brought up on conventional methods, and who had so much invested in them, that unless they saw it with their own eyes they refused to believe it. It’s not that a systems view of work or organisation is harder to grasp than a conventional one; it’s that the two are so different that there’s no intellectual route map between them. They are parallel tracks with no connection. In other words, it’s impossible to convince a conventional manager to cross from one track to the other by rational explanation. They have to see it with their own eyes – the corollary being, once they have ‘got’ it, they have crossed a Rubicon: there is no going back.

There’s a rigorous discipline to ‘study’, but broadly speaking once customers have put them right about where they are, managers and front-line workers can jointly start to figure out what to do to meet the purpose of the service without recipients having to make follow-up calls to remind them. It’s only when the hypothesis has been tested in action and adjusted accordingly that it is possible to envisage what the redesigned process will actually look like.

As we’ve suggested, this modest, empirical approach to change brings two enormous benefits, one negative, the other positive. The negative advantage is that it prevents managers wasting large amounts of money and effort on top-down change programmes that are doomed to fail. The positive is that it can eventually lead to the kind of gains that no one would have dared to put in a plan.

Both of these are well illustrated by the case of IT. IT is usually presented as the ‘driver’ or, less assertively, the ‘enabler’ of large-scale change – as in the ill-fated NPfIT or Universal Credit in the public sector, and countless ‘digital transformations’ in the private. The assumption is that the IT system comes first and operations will automatically be more efficient if digitised (reflecting this, IT departments are now the custodians of major change budgets in many or most large organisations). But this is diametrically the wrong way round. When managers manage forwards, starting by learning how their system works, they usually find, again to their surprise, that a giant, all-singing, all-dancing IT system not only does nothing to solve the real problems but, by locking in the old system, acts as a constraint rather than an enabler.

This is not to denigrate or downplay the importance of technology – provided it is kept in its proper place, which is last, and always as an aid to rather than replacement for human intelligence. As for any change, the order is: first, study the system (get knowledge); second, improve the service to the customer (redesign); third, ‘pull’ the IT that you need (so you use it all and don’t buy bells and whistles you don’t need).

This goes for heavily IT-dependent services such as banking and insurance just as much as for customer helplines or emergency services. If that sounds unlikely, consider the stories told by senior financial executives at a recent ‘Better Digital from Better Method’ event hosted by Vanguard. Ironically, all involved transformation – but it was a transformation away from the industrialised, tech-dominated products of the past to a focus on customer needs.

The changing rules of the game meant an urgent need to experiment with the customer journey without a full plan – ‘a profoundly new world, mindset and model for banking,’ said one bank CIO, who emphasised de-automation, optimising flow and unlearning over technology in the new process. ‘If you think of the solution as a technology thing or opportunity, you’ll solve the wrong thing or make matters worse.’

‘We forgot that banking is not about current accounts, it’s about accessing money and buying a home,’ said another. ‘It was a cost-related, industrialised approach. We had a lot to unlearn.’ Now, he says, no one can touch anything unless they can show they understand how the system works and have experienced how the service is consumed. ‘Don’t digitise what you don’t need to. Our problems weren’t caused by technology, so how can it solve them?’

Another leader in banking confessed that having joined the bandwagon to ‘go digital’ and invested heavily in new digital services, managers discovered through studying that it led to increases in failure demand in its service centres. Calling a halt to the costly dysfunction, they set about doing what should have been the starting-place: studying customer demand, studying how well the bank serviced those demands (not very well), improving the way the demands were serviced and, finally, on the basis of thorough knowledge, ‘pulling’ IT into the designs.

‘Innovation isn’t about technology. It’s about solving customer problems, and using tech to do it where necessary,’ said a South African insurance CEO who after much heart-searching had cancelled a big IT systems investment because she could see it was simply a modernisation of the old architecture that would do nothing to attract new customers. The breakthrough moment was a ‘what if’ question that emerged from studying the system: ‘What if we thought of our business not as picking up the pieces when things get broken but stopping bad things happening in the first place?’ Out of that came a clever initiative to use advanced technology to monitor customers’ heating boilers, triggering an instant alert and repair in case of failure. ‘Insurance at the touch of a button! But it’s critical that the IT architecture supports the right measures.’

Change of this kind, as all the participants emphasised, isn’t a one-off event but a never-ending journey – which frequently ends in counter-intuitive places.

That counterintuitive perspective is developed through study – getting knowledge of the ‘what and why’ of performance as a system. This is ‘understanding by looking backward’, seeing the reality from a different perspective – after all, we can’t expect different results if the thinking hasn’t changed – leading to the cognitive conviction that giving customers what they need is not, as convention would have it, a recipe for higher costs, but a more effective and lower-cost option. We might describe it as ‘living forward’: adopting a design based in knowledge, able to predict success without knowing its scale – and often being surprised by it. It is jumping from one (command-and-control) track to another (beyond command and control), never to return. What emerges is a service design that absorbs the variety of customer demand using new and fundamentally different controls, which facilitate a constant focus on perfection.

Effective change starts with ‘study’, not ‘plan’. The consequence of gaining knowledge is that change is guaranteed to work, and to deliver results far beyond what might have been considered possible in a plan.