The Prime Directive of Agile Development

This article starts by stating, "Underlying all the agile development practices like TDD, Pair programming, continuous integration, and refactoring, there is a single unifying concept. Never be blocked."



I fail to see how TDD, pair programming, or refactoring can help prevent blocks. TDD does help you get to a solution, but it is not guaranteed to come up with solutions for really tough problems. You could still be "blocked" or stuck even when using TDD.



Similar arguments hold for Pair programming and refactoring. I fail to see how these practices help to prevent blockage.



The examples shown in this article illustrate problems that can be encountered using any methodology; and the solutions, too, do not depend upon the methodology or style of development. I could be doing a "waterfall" approach and specify that all interfaces between major systems should be stubbed first and then tested; only after successful testing should development of the individual modules begin. This rule, or discipline, has nothing to do with Agile development per se.
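To make the point concrete, here is a minimal sketch of the stub-first discipline being discussed. All names (`PaymentGateway`, `StubGateway`, `checkout`) are hypothetical; the point is only that a caller can be tested against a stubbed interface before the real module exists, regardless of methodology:

```python
# Hypothetical sketch: agree on an interface between modules first,
# test against a stub, and swap in the real implementation later.
from abc import ABC, abstractmethod

class PaymentGateway(ABC):          # the agreed interface (illustrative name)
    @abstractmethod
    def charge(self, amount_cents: int) -> bool: ...

class StubGateway(PaymentGateway):  # stands in until the real module is ready
    def charge(self, amount_cents: int) -> bool:
        return True                 # always "succeeds", so callers stay unblocked

def checkout(gateway: PaymentGateway, amount_cents: int) -> str:
    return "paid" if gateway.charge(amount_cents) else "declined"

print(checkout(StubGateway(), 500))  # → paid
```

The real gateway implementation can be dropped in later without touching `checkout`, since it depends only on the interface.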



Refactoring does not change the functionality of code, so I do not see how refactoring can lead to the removal of blocks. Oftentimes refactoring has to be done because someone did not think through the problem in any depth. In other words, too-frequent refactorings are a symptom of insufficient planning, leading to time wasted developing code that meets narrow ends without looking at the problem in more generality.



As some wise person said (Einstein, though these are not his exact words), "Keep it simple, but no simpler than necessary." Excessive simplicity for its own sake often ends up requiring excessive refactoring.



While I do believe that agile practices are good, the examples given do not demonstrate the superiority of the Agile philosophy over any other philosophy. Merely stating that it is so does not make it so.



The car analogy is flawed because that is not what anybody would suggest. A better analogy would be someone wanting to rip open an electronic toy because it doesn't work anymore, replacing the battery hasn't helped, and making the toy do something different is practically impossible given the way it is constructed. One then must rip it apart to make it do something it was not designed to do. If the software design does not permit the required adaptations, then the design must be re-evaluated. In other words, it is then legitimate to rip apart the software.



By the way, isn't software supposed to be designed in layers with loose coupling between layers? If so, what is the harm in ripping apart software and re-moulding it to meet new requirements?



The example of the DBA (Debeay) making changes that jeopardize the whole project is also flawed. Good database architects always provide an abstraction layer between the application and the database. Most commonly it takes the form of database views, with the strict rule that applications may only access data through views (or stored procedures), and must never directly access the underlying database tables. As such, structural changes to the database tables can be made transparently to the application, unless the changed requirements affect the domain model, too. The example merely sets up the straw man of an incompetent DBA in order to knock it down and suggest that Agile development would not have led to this problem, and that, therefore, Agile development is the one true way.



Note that the concept of views has nothing to do with any development methodology. It just happens to be what the experienced programmer would use anyway. The example will not convince anyone that they should move over to Agile development.
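The view-as-abstraction-layer idea can be sketched in a few lines. This is a minimal illustration using SQLite from Python; the table and column names are made up for the example:

```python
# Minimal sketch: the application reads only from a view, so the
# underlying table can be restructured without breaking callers.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer_tbl (id INTEGER, full_name TEXT)")
conn.execute("INSERT INTO customer_tbl VALUES (1, 'Ada Lovelace')")

# The abstraction layer: applications query the view, never the table.
conn.execute(
    "CREATE VIEW customers AS SELECT id, full_name AS name FROM customer_tbl"
)

rows = conn.execute("SELECT name FROM customers").fetchall()
print(rows)  # → [('Ada Lovelace',)]
```

If `customer_tbl` were later split or its columns renamed, only the view definition would change; every application querying `customers` would keep working.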



While Agile development has its merits, this article fails to bring them to light.



A very common example of where TDD fails to get us to a good solution is that of finding a sorting algorithm. Typically, TDD will lead to the Bubble Sort algorithm. Rarely will it lead to other sorting algorithms.



TDD will often fail to give good solutions whenever we can have several ways of solving a problem. In such cases, TDD will tend to produce a simple solution that may not be very efficient. Also, for really complicated problems, such as inverting a matrix in linear algebra, at best, TDD will end up producing a huge block of spaghetti code.



This leads me to think that TDD is not good for the development of algorithms, since developing efficient algorithms often requires knowledge of the problem domain. "Algorithms" here includes not just scientific and mathematical algorithms, but anything involving complex processes.

Ravi, I take your point. Sure, you could make the above mistakes on a TDD system. And it took me a second to see why the "not blocking" really is related to "agile". But try it this way:



TDD does not *cause* an absence of blocks.

TDD does *demonstrate* an absence of blocks.



The key phrase above is "progress can only be made on an executing system that is passing its tests". The whole argument is subtly based on the presence of TDD.



Similarly, continuous integration verifies that the whole team's integrated code is executing and passing its tests.



(I'm not sure I can tie it to refactoring, etc., other than as useful tools enabled by TDD. But you get the point.)



Re: your second note. I guess I'll tackle a couple of those, too.

TDD is not a magic bullet. It does not replace other forms of knowledge, such as patterns and algorithms. It's just a way of specifying the goals of the software in development.



If you have a timing goal that isn't being met, make it part of your spec. Code the test. Then produce the *simplest* code that passes the test. If the Bubble Sort passes your timing test, heck, use it. If you don't have a test coded for it, how can you tell? You can test it once by hand, although then you won't know when something slows it down as your code evolves.
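Chaz's point about coding the timing goal as a test might look something like this sketch. The function names, data size, and time budget are all illustrative, not from the article:

```python
# Sketch: make a performance goal part of the spec by coding a timing
# test. If bubble sort passes the budget, keep it; if not, the red bar
# drives you to a faster algorithm.
import random
import time

def bubble_sort(xs):
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def meets_timing_goal(sort, n=2000, budget_seconds=0.5):
    data = [random.random() for _ in range(n)]
    start = time.perf_counter()
    result = sort(data)
    elapsed = time.perf_counter() - start
    assert result == sorted(data), "sort must be correct first"
    return elapsed <= budget_seconds
```

Whether `bubble_sort` passes depends on `n`, the budget, and the machine, which is exactly the point: the test, not the programmer's intuition, decides when a simple algorithm stops being good enough.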



As for spaghetti code produced by TDD, well, TDD without refactoring will produce ugly code. It sounds like it's time for some refactoring.

Ravi,

I disagree with your statement about sorting algorithms; if at the end of a TDD cycle (i.e. a green bar) you have ended up with bubble sort, and bubble sort doesn't satisfy your requirements for whatever reason (performance, say), you have obviously not written the test case to support this requirement.



I also do not understand what point you are trying to make with your matrix example. Firstly, for complex algorithms there is generally well-developed and mature mathematics or CS theory, which forms your (perfect, as it is generally provable) specification for the implementation. Secondly, if you have ended up with spaghetti code that you are unhappy with, then you have to refactor it, which is all part of the TDD process.

D'oh, sorry Chaz for the apparent duplication of your points; I guess we started writing at the same time (great minds think alike, eh).

My point is that these features are not unique to Agile development. They can be done with other types of development. If the benefits claimed for Agile development are valid, one must show that either (a) such things can seldom occur with Agile; or (b) that such things are not possible using other techniques.



I have shown that TDD does not guarantee that you'll come up with the required algorithms unless you already know enough about the subject matter. In which case, whether you use TDD or something else is irrelevant.



The other statement I have made is that such benefits are possible using other methodologies, too. So why should anyone choose Agile?



Please note that I am playing devil's advocate here. If you want to convert others, you must have a compelling argument. Uncle Bob's argument is not compelling enough for me.

Jake,

No worries... guess I just hit the button first? :)



Ravi,

Got to run, so I'll be brief: I'm troubled by the phrase "required algorithm" in a TDD context. If it's a requirement, then you're not doing TDD until you have found a way to test it. I'm not sure how to test a required algorithm, though. It hasn't come up; I'm generally given rules rather than algorithms. I'm not sure TDD has a standard answer for that type of requirement.



I've found that business-type assignments where the results are more significant than the algorithms fare rather better with TDD.



And methodologies *never* replace knowledge. Without TDD's feedback, you can't prove the absence of blocks. With, you can. There's no magic bullet, but that's still a significant difference.

Ravi, what you have demonstrated is a tiresome, rhetorical argument. Nobody uses TDD or any other methodology to develop sort algorithms - they call the damn framework's sort method. The complexity that we face every day is not doing matrix inversions, but coping with evolving requirements that not even the customer fully understands yet. So we look for ways to keep our system supple and changeable, and to make continuous progress in the absence of perfect information.

Let me explain a bit more. One can have tests for all the code one writes without doing TDD. These test suites can be very extensive, covering all known requirements. But it is not "write tests first, code later."



The important thing is to have well-tested software. My question is: What advantage does TDD give you over a test-centred approach? Why is it better to use TDD rather than the alternative I suggested above? After all, the aim is to have thoroughly tested software, with repeatable tests.



In that sense, I do not believe that the article convincingly states the benefits of TDD. Also, the premise that refactoring, TDD, and pair programming result in no blocks is given as an a priori statement, an axiom. The examples given in the original article are straw-man arguments that do not stand up to scrutiny.



The only thing that actually helps identify blocks quickly is the concept of continuous integration. But once again, this is not dependent on following an Agile methodology. Any style of development can stipulate that software must be "working" at all times. Making it a requirement and enforcing it are things that are part of Agile Development. (Does this mean that all other development methodologies are non-agile and somehow clunky? :-) )



In short, writing tests and integrating software continuously seem to be the keys to good software development. Must the tests be written following a TDD approach? Can not continuous integration occur with other approaches?



Please note that I do not believe refactoring or pair programming has anything to do with the title topic of being blocked. They seem to have been lumped in so that they can bask in the reflected glory of TDD and continuous integration.



By "required algorithms" I meant any way of implementing something that meets the requirements. It could be mathematical algorithms, processes, rules, etc.

Ravi, you make some good points, but I don't think Bob's original blog is trying to defend TDD or Agile as the only worthy methodologies. I think the thesis of his article is "Never be blocked". Tests and tiny steps can help with that, as do interfaces and not pulling a whole system apart. The point is to think and develop in a way that avoids blocks, not that one won't get blocked by using TDD or Agile. One certainly could, and many do.

The article isn't trying to prove anything. In a context of TDD and incrementalism, it makes a comment about blocking being reduced, and that being a good thing. Nobody would call it a sales pitch or a logical proof. It's just presenting an idea in a community with an implicit understanding of the context. Your disagreement with the context seems, well, unrelated. But I'm not trying to turn this into a "let's beat up on Ravi" session.



Back to the original article, I, for one, found it an interesting unification of several seemingly unrelated ideas: TDD, incremental development, continuous integration, sure, we see how those fit together. Fitting in non-blocking source control, reuse vs build, architecture vs simplicity, and stubs for incomplete features into a single more unified idea just makes sense. Sure, it's stuff we already like. But a more unified way to discuss it is still useful. I would state it as "constant incremental improvements", with today's emphasis being on the "constant".

Ravi, I read your last comments more carefully and I think I see your point a little better, so I'll answer it a little more clearly. Personally, I think TDD is an effective form of test-centered development simply because, with writing tests after the fact, a) they offer less help in designing clean interfaces, and b) the tests don't always get written. And the focus on fast unit tests produces tests that provide feedback even before the continuous integration - in fact, it enables safer continuous integration.

And I do see refactoring as an integral part of any test-centered development, simply because duplication is inevitable as software evolves, and the presence of tests offers an opportunity to remove that blockage safely. We can make the short-term choices Bob describes, and then, when the long-term problem of duplication surfaces, we can clean it up and know that the functionality is maintained.

As for pair programming, I haven't tried it, though I'd like to. But I've been the go-to guy often enough that when an issue comes up I've wished that somebody else had the knowledge to work on the problem without interrupting me. That's a form of blocking.



I think the confusion is that these are all *implicit* in Bob's argument. I'm fine with that. I think he's spelled them out clearly enough in other places.



Does that make more sense?

It is strange that the example you give in the last paragraph is very similar to a problem we are facing right now. We have a core library that is used by many applications, and currently there is no dedicated team for it - in an agile spirit we allow all developers to add code to that library. However, as the codebase grows, I also see a growing problem: the overall design is getting out of control. I think it is caused by the fact that whenever someone adds something to the common codebase, they only keep their own project's interests in mind and do not think about the overall design of the common code.



Refactoring should come to the rescue here, but as this is common code, any changes to the interfaces mean changes in the other applications using that library as well. As the number of applications is coming close to 10 in our case, it takes a lot of effort to track down all possible usages of a certain method. Because this is quite time-consuming, it is often left undone (maybe with a TODO comment), as current projects' schedules are tight. The further this is pushed, the harder it becomes to change anything, and soon someone will come up with a bright idea like "we don't need that crap, we can write our own library faster".



One solution we are thinking of is creating a separate team for this, but as you say, this could lead to situations where they are viewed as an obstacle in the way, and people would start going around them anyway.



How would you solve this kind of situation?

Sorry, I forgot that this was a wiki. The strikeout in my previous comment is a mistake caused by using two hyphens for a dash.

Chaz, implicit in your comments is the assumption that TDD will lead to better interfaces. (I use "interface" in the more commonly understood meaning of the term, before Java came along and co-opted it with the interface keyword.) I am not convinced that is so. Thinking about the problem will lead to better interfaces.



Thinking leads to tests naturally. Hence, with or without TDD, a thinking programmer will produce better interfaces that are loosely coupled.







I do think you won't get much sympathy for it here, though. Let me be clear that when I say "interface", I mean something very similar to how Java uses it, but slightly more general: how one piece of code calls another.



I can counter that quite easily with: how can you tell the programmer thought about it? Read their mind to find out? Or look at the actual tests they produced?



Also with: thinking about the problem will produce a good interface based on what we know now. What happens when we know more? Will the "thinking" help the evolution towards a new goal as much as the tests will? Or perhaps you believe that it is possible to get the design "right the first time"? Trust me, if you do get it right, if everything was communicated and understood and analyzed and designed and built correctly, then the business will change its mind anyway when they use it for a few months.



You're welcome to disagree. Tell me what offers as useful a tool to support evolution and I'll listen. Tell me that evolution is unnecessary and I'll say: you're lucky to have that type of market.

Hi Uncle Bob!

I am a Chinese translator of your blog; maybe you know the website 'www.csdn.net', which is very famous in China.

But while I was translating this article, I found a phrase that confused me:

"all progress stops because the feedback loop that tells us that our code is correct, is broken."

You know, from the context of your article, I don't think the word "correct" is right. It should be "incorrect" instead, shouldn't it?



Thanks!

Aaron

Aaron,

You can read it either way. The point is that the feedback loop is broken. That feedback loop tells us that our code is either correct or incorrect.

Hi Uncle Bob!

Thank you for the clarification. Sorry for my rude comment; I hope you don't mind.

I think maybe you just wanted to clarify the role of the feedback loop at that point, and that's what I misunderstood.

You know, I want Chinese technicians to be able to read your original thoughts seamlessly, and I hope my translation of your blog can do that, so that eventually more and more Chinese developers can benefit from agile methodology and become more and more professional.

So I may have to bother you whenever I have trouble understanding your articles; sorry for that.



Aaron

Aaron, No trouble at all. I did not think you were being rude. I am pleased to answer any questions.

Underlying all the agile development practices like TDD, pair programming, continuous integration, and refactoring, there is a single unifying concept: never be blocked. Like a good pool player who always makes sure that the shot he is taking sets up the next shot he expects to take, every step a good agile developer takes enables the next step. A good agile developer never takes a step that stops his progress, or the progress of others.

How do you know if progress has been stopped? Progress is stopped if you cannot execute the system. Progress is stopped if you cannot run the tests. In the agile world, progress can only be made on an executing system that is passing its tests. If the system does not execute, or if its tests fail, all progress stops, because the feedback loop that tells us that our code is correct is broken. Writing code when you can't execute it is like driving a car when you can't see out the windscreen. The wheels may be turning, but you have no idea what direction you are going, or whether you are about to drive off a cliff. Keeping the system executing at all times is like keeping the windscreen clean. We cannot make progress unless we can see.

Consider, for example, two developers working on a project. Jack says he will write the J module, and Bob says he will write the B module. As Bob writes his code he eventually gets to a point where he must call J. But Jack is not ready, and so Bob must wait. If Bob were a good agile developer, he never would have taken that step. He would have created an interface for J, and implemented it with stub code, so that he could continue to keep his tests running, and therefore continue to make progress.

Consider a team of four working on a three-tier management information system. They have split up the current work by tier. Gerry and George are working on the GUI, Marvin is working on the middleware, and Debeay is working on the database. The very first thing Debeay does is change the schema of the development database according to the new features being added. But this breaks the system. The middleware can't run until it is changed to use the new schema. The GUI can't run until the middleware can run. Debeay has sprayed black paint over the windscreen, and the whole team is driving blind. If Debeay had been a good agile developer, she would have found a way to change the schema incrementally, adding new columns and tables without changing the old ones. Gradually, as the team changed the middleware and GUI to use the new schema elements, Debeay could eliminate the old ones. By the end of the iteration, the schema would be in its final form, and the system would never have stopped executing.

Keeping the system executing at all times is the prime directive. You never do anything to the system that will break it for more than a few seconds. If you have a huge architectural change to make, you find a way to make that change in tiny little steps that keep the system executing. You find a way to keep all the tests running. You never do anything that breaks the system for more than a few seconds or minutes.

This is not an easy notion to grasp. Developers are used to making changes by tearing the system to shreds and then trying to reassemble the pieces. Many get a certain "high" from the experience. Indeed, I talked with one developer who felt that tearing the system to bits and then reassembling it was the essential quality of a programmer. It made him feel good that he could do it. It became central to his self-worth. I had to gently persuade him that his own feelings of accomplishment were tied to the risks he was taking, as opposed to the benefits he was providing his employer. In effect he was bungee jumping and feeling good about his ability to survive. He was an adrenaline junkie, taking risks with his employer's assets. This is not professional behavior.

Agile development is a careful game of chess, not a reckless game of code-chicken. Good agile developers plot a careful path towards their goal that keeps the system executing and passing its tests at each tiny step. Some developers believe this is inefficient and slow. They believe that it is faster to just rip the system apart and then reassemble it into its new form. There are times when it might be faster to do this. However, doing this is like driving to the store by pointing the car in the direction of the store, painting the windscreen black, and then driving in as straight a line as you can while ignoring things like stop signs and roads. When it works, you get there faster. But a lot of things can go wrong along the way.

The prime directive extends to all levels of the team and organization. Never being blocked means that you set up your development environment such that blockages don't happen. Good agile teams use non-blocking source code control. If Bill has a module checked out, Bob can check it out too. The first one to check in wins. If the core group is building a reusable framework for us, and we need some function that isn't ready yet, we'll write it ourselves and use the core team's function later, when (and if) it arrives. If the enterprise architecture supports our needs in such a convoluted and obtuse way that we'll have to spend days just understanding it, then we'll avoid the enterprise architecture and get the features working in a simpler way. We will not be blocked.