Software developers, especially experienced ones, are currently in short supply.

The lack of software developers acts as a bottleneck for producing software solutions.

To address this issue, organizations often introduce mitigation strategies and processes to ensure “high utilization” and “high efficiency” of what is perceived as the software developers’ primary expertise: “writing code.”

Although many of these strategies appear, on the surface, to increase throughput and thereby value creation, I contend that the initiatives, processes and rules aiming to ensure “high utilization and efficiency” more often than not result in much lower value creation. In other words: they reduce people’s effectiveness.

I have seen the same phenomena occur over 10+ years, in different organizations, domains and roles. I have seen the same types of damage incurred repeatedly by the same actions.

By talking with other experienced developers (some much more experienced than me), I have grown confident that I am not the only one experiencing these phenomena.

The consequences of these “utilization and efficiency” actions and initiatives (traps, really) are very often:

Increased inertia and friction

Higher costs and longer development cycles

Inability to correct course

Greater risks

Collaboration issues

Reduced motivation

Increased negative impact of dependencies

and many other more subtle negative effects.

Utilization vs. Efficiency

Ensuring that a resource is highly utilized, i.e. constantly “kept busy”, does not mean that the resource is used efficiently.

High utilization does not equal high efficiency.

On the contrary: one way to achieve high utilization of a scarce resource is to ensure “it” works inefficiently. Then we don’t actually need to feed the resource much work to keep it busy and utilized. We can even increase utilization by having the resource report its current utilization and workload frequently. The measurement ensuring high utilization thereby becomes a driver of high utilization itself (if you measure idle time, and time spent reporting is not classified as idling).
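There is also a well-known queueing effect lurking behind “high utilization does not equal high efficiency”: the busier a single worker (or team) is kept, the longer every incoming task waits in the queue. The following toy simulation (all numbers are hypothetical, and real work is of course messier than random arrivals) sketches the effect:

```python
import random

random.seed(1)

def avg_wait(utilization, n_jobs=200_000):
    """One worker; jobs arrive at random, service time averages 1 unit."""
    t = free_at = total_wait = 0.0
    for _ in range(n_jobs):
        t += random.expovariate(utilization)       # next job arrives
        start = max(t, free_at)                    # job waits while worker is busy
        total_wait += start - t
        free_at = start + random.expovariate(1.0)  # time to complete the job
    return total_wait / n_jobs

for u in (0.5, 0.8, 0.95):
    # Average wait grows nonlinearly as utilization approaches 100%.
    print(f"{u:.0%} utilized -> average wait is about {avg_wait(u):.1f}x an average task")
```

Pushing utilization from 50% toward 95% multiplies queue wait many times over, which is one mechanism behind the increased inertia and longer development cycles listed above.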

An example of this is spending a lot of time estimating, and then using these estimates to ensure at least the next X weeks are fully booked. By doing the estimates, people often settle on a solution prematurely, instead of letting discoveries made during the work guide which solution to choose. Of course, clarification tasks, spikes and similar are possible and can be very beneficial. But this can add the “sprint + 1” issue, where we do work in sprint 0 just to be able to estimate it, so we can plan it into sprint 1.

Basically, estimates do not provide any actual value. They only provide the ability to plan, so “the developers have enough to do”. Or, more accurately, they set the developers up to fail, because estimates and plans are most likely to fail. Unless the people doing the estimation introduce enough padding…

When we use estimation “top-down”, we end up nudging people to game the system and to pad their estimates, so they are less likely to fail. (I can recommend this talk on the subject: NoEstimates)

There is a saying about software projects: they have two outcomes. Either they are on time, or they are late. It is more or less unthinkable for a software project to finish ahead of schedule. In other words, estimates in software development often act as a lower bound for the work to be done and the time to be spent. Not as an average.

We also have Parkinson’s Law:

“Work expands so as to fill the time available for its completion”

This has the corollary that when we estimate a task, we typically use the entire estimate, even when the task could actually have been done in less time. At the same time, we are forced to spend more time on the tasks that genuinely take longer. I think the story point concept is an attempt to mitigate this. But what can these estimates actually be used for?

Is the estimation merely a driver of conversation?

Does estimation force us to select a solution needlessly early or prematurely?

Is the estimation used for projecting workloads and schedules?

If the answer to any of these is yes, then why do we bother estimating in story points (or hours, or whatever)?

Are we still actually using the estimates to make ourselves feel sure that the “teams are fully utilized” (and thus we do not risk them just slacking off)?

(* Note that there is an exception here: when you have an external paying customer who needs some price indication up front. That is not the type of estimation I am talking about here; that is an entirely different beast and a subject of its own.)
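The combination of Parkinson’s Law and “estimates act as a lower bound, not an average” can be sketched in a toy Monte Carlo simulation (all numbers are hypothetical): if work that could finish early expands to fill its estimate, while overruns still overrun, the average realized time necessarily ends up above both the estimate and the average true effort.

```python
import random

random.seed(42)
ESTIMATE = 10.0  # a hypothetical estimate, in days

# True effort varies around the estimate: sometimes less, sometimes much more.
actuals = [random.lognormvariate(2.2, 0.5) for _ in range(100_000)]

# Parkinson's Law: work expands to fill the estimate, but overruns still overrun,
# so the estimate clips the lower tail and leaves the upper tail intact.
realized = [max(actual, ESTIMATE) for actual in actuals]

mean_actual = sum(actuals) / len(actuals)
mean_realized = sum(realized) / len(realized)
print(f"mean true effort:     {mean_actual:.1f} days")
print(f"mean realized effort: {mean_realized:.1f} days  (estimate was {ESTIMATE})")
```

The realized mean sits above the estimate and above the true mean effort: the estimate acted as a floor, never as a ceiling.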

Efficiency vs. Effectiveness

Doing work efficiently is not the same as doing effective work.

Efficient work is not equal to effective work.

We have a tendency to introduce a horde of business analysts, UXers, architects, POs and others, in an attempt to ensure that the software developers at all times have a list of tasks, defined at a level of granularity and precision such that the developers’ coding skills are efficiently utilized.

We keep them busy doing what they (are perceived to) do best. We “offload” the developers from work that “can be handled by others”. The developers are thus continuously, actively, maybe even furiously, typing away on their keyboards or discussing software designs at the whiteboard: their perceived primary (and unique) skill.

However, efficiently creating code for one solution does not mean that the effort could not have been better spent creating another solution, or coming up with an alternative approach.

Working super efficiently, e.g. implementing tasks straight off the FIFO queue using a ton of shortcuts, macros, etc., does not necessarily result in effective output. For example, the specs being implemented may not be a good match for the technology, the codebase, or software solutions in general. Or the specs may simply not be a good fit for bridging the gap between the specific problem being solved and the specific software or tech being applied.

If a business analyst, UX designer or architect has a set of solutions that are all “good enough”, they still need to choose one of them to pass along to implementation (because that is often the defined setup/process). Whatever criteria they use for making this choice, and however they evaluate the different solutions, ease of implementation (identifying which solution matches or even complements the code, tech or software best) is unlikely to be at the top of the list. It may not even be a criterion they are able to apply.

It is often only the technical people “on the ground”, or perhaps tech leads or architects with a finger firmly placed on the pulse, who are able to identify which solutions or approaches can yield huge reductions in development effort or gains in simplicity.

It is extremely rare that non-technical people, or even technical people without a finger firmly on the pulse, are able to identify “the good fit-for-purpose-and-foundation solution”. (Though technical people without a finger on the pulse still have a good chance of asking the right questions.)

But for the developer to identify this, the developer needs to be aware of the X possible solutions. Or, preferably, be part of the solution discovery itself, which means understanding the root problem and the main purpose. It is seldom enough to just throw three possible solutions at the developer and ask: “Which one is fastest/cheapest?”

However, solution discovery is often viewed as a “common capability” (i.e. understanding the domain or business) not requiring technical competencies. And often the attitude is: “those nerds are unable to understand business or talk to customers or users, and even if they could, they wouldn’t want to”.

So this work is often picked up by non-technical people (or technical people without a firm finger on the pulse), and they “shield” the developers from the real-world people (and vice versa). But in doing so, they also deny the business or customer huge gains in value produced, or various cost reductions.

Amount of work vs. size of work

It does not help to go very fast if it is in the wrong direction or you overshoot your target.

Nor does it help to reduce the percentage of work done by a scarce resource if you make the total work 10 times as big in the process.

The saying about making the pie bigger, so that everyone gets more even with an unchanged percentage distribution, kind of applies here, but in reverse. It does not actually matter that we increase the percentage done by the scarce resource if we are reducing the absolute pie size at the same time. So letting the developers’ responsibility encompass more of the development lifecycle can still result in more value creation and effectiveness. And it is not only the size of the pie that matters, but how well crafted and how good-tasting it is…

I admit the metaphor is being stretched a bit here, but the point is that often much more value creation can be gained by reducing the size of the work that needs to be done to solve a problem. Not by increasing the speed of the manual work being performed. And not by reducing the involvement of a scarce resource (the software developer) to an absolute minimum. Less-effort, more effective and more reliable solutions can be found, enabling better delivery performance over time.

Resource vs. people

Software developers are often viewed as “coding resources” instead of individuals. (This also applies to other professions.) Doing this causes various issues.

It creates the image that individuals can only do one type of task. It cements the perimeters of responsibility, influence, decisions and who is allowed to question what. You end up pretending that a hammer cannot be used to bang in a screw, or that a bottle opener can only be used to open a beer and nothing else. In my view, the bottle opener is a redundant specialization: if you have a beer, everything is a bottle opener.

Despite stretching yet another metaphor, the point is that there is a lot of overlap of responsibility, insight and competency in software development. It is not always obvious whether to hammer it in or screw it in, and thus it is difficult to know whether to use a hammer, a screwdriver, or nothing at all.

When you make people single-purpose resources based on their professions or roles, it will cause friction, it will likely reduce psychological safety, and time will be spent establishing and maintaining hierarchies and boundaries instead of actually producing value or avoiding risk and overhead.

This timesink is best reproduced and highlighted by the marshmallow challenge. Consider these reasons why kids are better at the marshmallow challenge than adults:

They (the kids) get to work — while the adults tend to spend most of the time fighting for the leadership position and trying to establish dominance, the children start building right away.

and

(the kids) Stay focused on the goal — the kindergarteners stick to the main goal to build the tallest structure, while the adults tend to think about the way the structure will look, seeking some kind of perfection.

The latter phenomenon shows up as the different resources with specific “roles” or “areas of expertise/responsibility” fighting for a perfection related to their own domain or area, instead of everyone taking a more holistic approach. For example, developers optimizing for perfect code, or UXers insisting on designs that introduce unwarranted technical complexity.

People vs. Organization

In the text above, I have to a great extent focused on the impact on value creation as seen from the perspective of the “grunt” software developer.

However, there is an entirely different angle on this: the organizational cost of keeping the software developers utilized, ensuring they work efficiently and with high velocity and precision.

The overhead of offloading the scarce resource can be huge. It can impact delivery pace and feedback-cycle time, and basically work against the metrics we know improve software delivery performance (as fleshed out in a bit more detail in a previous article, “Value driven technical decisions in software development”):

1. Deliver soon, often and in small batches

The “role-driven pipeline” of tasks nudges people to work in bigger batches and increases lead time. The back-and-forth of error correction and feedback exacerbates this even more. Team dependencies and sacred sprints (to ensure utilization and efficiency) also play in here. (Here is a relevant video about the latter.)

2. Remove bottlenecks, blockages and stuff that delays

While software developers are often seen as bottlenecks, I have much more often experienced other people or roles in a process being the actual bottlenecks: e.g. missing specs (that are nonetheless still required), missing clarification, a pending meeting with the customer, etc. That is on top of the various stumbling blocks and speed killers software developers create for themselves. (Looking at you, Pull Requests… more on that in a later article.)

3. Psychological safety

One of the primary reasons psychological safety is extremely important for performance is the friction associated with asking questions or calling out “wrong behavior”.

In The Culture Code there is an example of a plane crash that happened because the co-pilot, due to hierarchy, did not challenge the captain. It was not psychologically safe, or allowed, to do so.

Very strict role definitions and processes create the same kind of implicit hierarchies that make people not question and double-check. “Better safe than sorry” is replaced with the assumption that something is someone else’s responsibility. It becomes more important to follow the organizational rules and processes than to produce progress.

4. Be close to the value delivered

To understand why we do what we do, we need to be close to the value we actually deliver. That means being close to the end user or customer. We need to see the actual fruits of our labour being consumed. I.e. Gemba.

The best experiences I have had as a software developer are related to Gembas. Seeing with my own eyes actual users using what I have been part of creating and building, in their day-to-day work. With all the great features apparent. And all the crappy ones, equally so…

At the same time, I have also seen many bad outcomes and much value-creation friction caused by software developers being “shielded” from the end user or customer (and, often, the end user being shielded from the developers). Often this is framed as “not wasting developers’ time with unnecessary meetings and excursions”, or other, vaguer reasons.

This counterproductive separation of people and restriction of contact creates, through the relay-and-handover approach, miscommunication, lost opportunities, organizational trench warfare and other problems. It also fails to motivate the software developers as it could (and should). And the lack of contact makes the software developers create what I term value substitutes, which create friction and work against many of the other best practices we know.

What should we do?

I have no beef with teams of mixed backgrounds, professions, etc. We need cross-functional, cross-profession, cross-competency teams. But we need to understand the central role of the software developer as more than “dotting the i’s and crossing the t’s” with code. Software developers solve problems. And to do that, they need to understand the problems being solved.

Ensure that at least some of the software developers have access to, and are expected to understand, the deep context of the problems they solve. Ensure they are included early in the process, to figure out how to approach the problems being solved and the capabilities being delivered.

Challenge the software development team to deliver specific value and solve problems, instead of implementing solutions and features.

Not all (technical) members of a team need in-depth knowledge of the domain, problems, etc., and not all need to be part of every step of the process or lifecycle of the work. But every step in the software development lifecycle should have some developers with dirt under their fingernails participating.

There are multiple gains to be had from having software developers participate in all steps of the process, from start to finish, including but not limited to:

Identifying low-hanging fruit

Building in the right direction and correcting course

Better situational awareness for solution evaluation

Deep context

Reducing risk

Quicker feedback and quicker iterations

Ability to reduce batch size

So, to increase value delivery and reduce risk, ensure that at least some of your developers are full-process developers. Ensure all developers are able to participate in discussions across the full lifecycle of the process, even when they do not do all the tasks.

Avoid bunkers and silos, and avoid equating responsibility with influence and a place at the table. Everyone should be allowed to try to influence and provide input.

Allow anyone to chip in. Even when they sound stupid. Remember that there are no stupid questions, only rude responses.

And finally, encourage people to take a holistic approach to software development, and an equally holistic approach to decision making.

That is the list I have for now, which I hope to add to over time. Let me know if you think something is missing or needs to be fleshed out.