I had a discussion a little while ago with someone about what Agile is fundamentally about. The true, inner core of it. I gave some rambling and clumsy explanation.

He gave a much better one: “Embracing change, and efficient collaboration”. This seemed fine at the time, but something started bugging me about it. It seemed to be missing something very important. I’ve realised what it is: it’s the secret Lean wisdom that Agile is built on.

Agile (sort of) comes from Lean Manufacturing

The Agile movement borrows a bunch of ideas from Lean Manufacturing. These ideas came from the Toyota Production System, which was influenced by the thinking of American management theorist W Edwards Deming.

Lean Manufacturing is a vast and complex body of work. But a super quick overview of what I see are the core ideas:

Production should be based on pull rather than push. That is, each part of the process pulls in a request or order or materials when it is ready to do work, rather than sitting there having work regularly pushed to it.

Quality trumps efficiency (or anything else, really). In fact, high quality means greater efficiency, not less. Build quality into the entire process, don’t try to stick it on to the end.

The people doing the work own the process and own the continual improvement of that process.

Small batch size is preferable to large batch size.

You can see some of these ideas in the Agile Manifesto. You can see some of them in the seven principles of Lean Software Development. But my favourite one of all, and the one that I think is the most crucial part of both Lean and Agile is the last one: Small Batch Size.

Small batch size is what it’s all about

The idea of small batch size is simultaneously the simplest and most obvious of these ideas, yet also the most confusing and counterintuitive. This is especially so in manufacturing rather than software development.

Deming’s (and then Toyota’s) insistence on small batch size was seen as complete madness by most industrialists at the time. Traditional microeconomics and engineering theory (and most common sense) said that large batch size is better, due to economies of scale. One hundred units of X cost less per unit than ten units of X do.

In its early days, Toyota didn’t have the luxury of large batch sizes. It was a small company operating from a small industrial base within a country still reeling from the complete devastation of World War Two.

So they did small batch sizes, and they noticed something interesting. The cost per unit of X (cars, tyres, whatever) might have been higher, but that didn’t mean the overall cost was higher.

Focusing on the cost per unit meant disregarding a lot of other factors in the process, and many of those factors improved with smaller batch sizes. (The overemphasis on reducing the marginal cost of production is the biggest problem with Cost Accounting, according to Eli Goldratt’s alternative, Throughput Accounting.)

Smaller batches are faster to set up and faster to wind down in an industrial process. They take up less space on a factory floor or in a warehouse, even accounting for total volume: 10 lots of 10 units of X take up less space than one lot of 100 units, because they can be arranged more flexibly. Think of it like a Tetris puzzle: smaller pieces leave fewer gaps and build smaller structures.

But more importantly, small batch size means smaller, faster, later decisions. And decisions are more efficient when made late rather than early, which is another counterintuitive idea.

Small batch sizes are more flexible. They mean less risk, less waste, and enable greater flexibility and customization. Toyota started becoming a much more efficient car producer than American firms, despite the fact that their batch sizes were much smaller.

Small batch size applies to software too

You might be thinking “that’s fine for making cars, but what does it have to do with software?”. Some ideas are not transferable from (Lean) Manufacturing to Software Development. But some are. And Small Batch Size definitely is.

Software has dis-economies of scale

Allan Kelly made a good point in this Slideshare: software development has dis-economies of scale. If you buy two litres of milk, each litre costs less than if you buy one litre. Building software is the opposite: building one big chunk of software costs more than building two chunks of half the size. Allan Kelly goes into more detail in this additional material.
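The dis-economies argument can be sketched with a toy cost model. The superlinear exponent below (1.5) is a made-up illustrative assumption, not a measured value; the point is only that when cost grows faster than linearly with size, one big chunk costs more than two halves combined.

```python
# Toy cost model (illustrative only): assume the cost of building a
# chunk of software grows superlinearly with its size, say size**1.5.
# The exponent 1.5 is an arbitrary assumption, not a measured figure.

def build_cost(size: float, exponent: float = 1.5) -> float:
    """Hypothetical cost of building one chunk of the given size."""
    return size ** exponent

one_big_chunk = build_cost(100)       # build it all at once
two_half_chunks = 2 * build_cost(50)  # build two halves separately

print(one_big_chunk)     # 1000.0
print(two_half_chunks)   # ~707.1
print(one_big_chunk > two_half_chunks)  # True: dis-economies of scale
```

Any exponent above 1 produces the same qualitative result; the steeper the curve, the bigger the penalty for large batches.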

It’s true and it makes sense

If that sounds ridiculous and impossible, it’s not. Software development is fraught with challenges and risks unique to the field, and most of them come from the size and complexity of the thing you are building and of the other things it connects to.

Increasing the size of a software application doesn’t just linearly increase the overall size of the work. It adds extra complexity around regression testing, integration, data structure and setup, and technical debt.

The bigger your chunk of work, the more likely you are to make bad decisions, cause a mess and break other people’s things. Moving in small steps is always better.

That’s why people are doing Continuous X

Smart people in Agile these days are all about Continuous X. Continuous Integration means many small, frequent code merges. Continuous Delivery means many small, frequent deployments to production. Continuous Testing means running many small tests on a regular basis, instead of one big ugly batch of testing at the end.

This is small batch size at work. It’s not just about the actual software coding, it’s about the whole process.
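One way to see why shipping in small batches beats one big release is a deterministic lead-time sketch. The numbers here are illustrative assumptions (100 equal work items, one time unit each, shipped batch by batch); an item only counts as delivered when its whole batch ships.

```python
# Illustrative sketch: average lead time as a function of batch size.
# 100 equal work items, one time unit each, processed sequentially.
# An item is "delivered" only when the batch containing it ships.

def average_lead_time(n_items: int, batch_size: int) -> float:
    """Average delivery time of an item when work ships batch-by-batch."""
    assert n_items % batch_size == 0, "keep the example evenly divisible"
    total = 0
    for i in range(n_items):
        batch_index = i // batch_size            # which batch the item is in
        total += (batch_index + 1) * batch_size  # time when that batch ships
    return total / n_items

print(average_lead_time(100, 100))  # 100.0 (one big-bang release)
print(average_lead_time(100, 10))   # 55.0
print(average_lead_time(100, 1))    # 50.5  (single piece flow)
```

Total effort is identical in every case; only the batch size changes, and the average wait for delivered work nearly halves as batches shrink toward single piece flow.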

Small batch size is not just about coding, it’s about everything

The concept of small batch size should permeate the whole software development lifecycle.

You don’t just code in small batches; you test in small batches. You design in small batches. You experiment in small batches. You write stories in small batches. You build backlogs in small batches. You release in small batches. You make decisions in small batches. You scale up infrastructure in small batches. You gather analytics in small batches. You fund work in small batches. You talk to your customers in small batches. You hire people in small batches. You plan your roadmap in small batches.

There are no activities in software development that are not improved by decreasing the batch size. Small batches are always more efficient than large ones, not less.

I would take Small Batch Size over anything else

It is not explicitly mentioned in the Agile Manifesto, and it is not mentioned at all in the Lean Software Development Principles (strangely), but I think Small Batch Size is really the most powerful and effective idea in Agile. I would take it over the others any day of the week.

If I had to choose between a project that could make changes all the time at any time, but only did one big bang release after 12 months, or a project that couldn’t change designs or requirements within an iteration, but had 52 one week iterations, I would take the latter any day.

Similarly for efficient collaboration. Small batch size doesn’t just beat these things; it makes most of the problems they solve redundant.

Agile is about breaking work into small pieces

That’s because, despite what some people think, Agile is not fundamentally about being able to change. It is about taking big projects, loaded up with uncertainties and risks and uninformed decisions, and breaking them into many tiny pieces, each with a tiny fraction of the cost, uncertainty, and risk of the bigger project (smaller even than a proportionally sized part of it).

The ultimate goal is what the Kanban and Lean purists call “Single Piece Flow” or “One Piece Flow”, where the batch size becomes as small as is logically possible.

At this point, for software, we are not even really talking about batches and discrete units at all. It is a more continuous (in the strict mathematical sense) state of affairs.

Last I checked, Amazon did 14,000 software releases a day (true story), and that was a couple of years ago. It’s probably well over 20,000 per day now. Is anyone really even noticing or counting? Each release is probably minuscule (changing a number in a config file, restarting a machine image in a farm of 10,000 virtual machines).

Can you imagine writing a requirements document for these changes? Putting together a release runsheet or a project plan? The human administration of these changes would be orders of magnitude larger than the actual changes themselves. This is true small batch size. The smart kids on the block are doing it, and you should too.