
Over the last couple of years, I’ve had the luxury of working with and mentoring quite a few beginners. While I’ve obviously witnessed my fair share of programming no-no’s, things are not as black and white as they may seem. There are a handful of patterns and behaviors I’ve seen consistently across beginners. And while some of these behaviors are misguided and detrimental, many present a learning opportunity for senior developers. Contrary to popular belief, there are quite a few lessons to be learned from those with less experience, as they bring an unbiased perspective (beginner’s mind).

Conditional inversion



One of the most common anti-patterns I’ve seen with beginners is something I like to call “conditional inversion” (it might already exist with a better name). Conditional inversion can happen when using branching statements to control or limit the flow of code. Below is an example of the inverted code:

```javascript
function atLeastOneOdd(potentialOdds) {
  if (potentialOdds !== undefined) {
    if (potentialOdds.length) {
      // num & 1 is 1 for odd numbers, 0 for even
      return potentialOdds.some((num) => num & 1);
    }
  }
  return false;
}
```

exaggerated for effect (but I’ve definitely seen worse)



If you think the above code looks far too complex for such a simple task, you’re not alone. Nesting strongly contributes to the perceived complexity of code. Here’s a functionally equivalent but far more readable implementation that returns early instead of nesting:



```javascript
function atLeastOneOdd(potentialOdds) {
  if (potentialOdds === undefined) {
    return false;
  }
  if (!potentialOdds.length) {
    return false;
  }
  return potentialOdds.some((num) => num & 1);
}
```

The lesson:

In general, less nesting makes code easier to read and maintain. As with all rules, there are exceptions, so make sure to understand the context before making a decision. Here’s a great thread that covers this specific scenario in more depth:



https://softwareengineering.stackexchange.com/questions/18454/should-i-return-from-a-function-early-or-use-an-if-statement

Knowing when to look

It’s important to remember that it’s not just that beginners know “less”; it’s also that they haven’t yet developed an instinct for what they should expect. After enough hours spent coding, you develop a keen sense for what granularity of logic you can expect to be available from standard libraries and packages. A great example of this came up recently with my girlfriend, a computer science student, when I was giving her feedback on code she wrote for a C++ assignment. As part of the assignment she needed to check whether a given input char was a letter. This was roughly her implementation:



```cpp
bool isLetter(const char someChar) {
  const char letters[] = { 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i',
                           'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r',
                           's', 't', 'u', 'v', 'w', 'x', 'y', 'z' };
  for (int i = 0; i < 26; ++i) {
    if (letters[i] == someChar) {
      return true;
    }
  }
  return false;
}
```

I couldn’t even be mad. It’s such a direct and logical implementation. But as an experienced developer, instinct tells you that there has to be an existing solution for a problem this common. In fact, my immediate instinct was to take advantage of the numerical nature of C chars and simply check whether an input char falls within a specific ASCII a-z range. But my “programming intuition” suggested that was still too much work for such a common problem. I told her to Google whether there’s a way to check if a char is a letter. It turns out there is: isalpha, which I’ve probably used a hundred times but don’t remember (something that happens with enough programming; you can’t remember everything).



The lesson:

Developing an intuition for whether something already exists that solves your problem is critical for succeeding as a developer. For better and for worse, the only way to strengthen that intuition is to write more code in a diverse set of languages and paradigms.

Finding the most direct solution to a problem

Beginners are absolutely amazing at finding direct or literal solutions to a problem. Just take the isLetter example from the previous section. While my girlfriend’s solution is definitely not the most efficient (memory- or compute-wise), it’s very clean and gets the job done. While the tendency to gravitate toward the most brute-force solution can be costly, most of the time it doesn’t actually matter. I’m not saying you should throw performance to the wind and rely only on brute-force solutions, but there is a lesson to be learned here. To understand that lesson, it’s important to look at one of the key differences between beginners and experts: what they’re trying to achieve.



A beginner is usually programming to learn how to program, while an experienced developer is usually programming as a means to an end (work, a personal project, etc.). In college, a computer science teacher will rarely give an assignment that has a performance requirement. This results in students learning programming without performance in mind. At first this may seem like a huge deficit in the way we teach people programming, but I actually think it’s one of the few things that higher education does correctly. Worrying about performance is generally a symptom of a deadly disease known as “premature optimization.” From my observations, once a programmer starts worrying about performance, it’s very hard for them to ever stop.



This wouldn’t be a problem if optimizing for performance didn’t take extra time, but it does. Combine that with the fact that performance almost never matters (this is coming from a low-level C/x86 programmer), and it’s clear why always optimizing for performance can be problematic. This effect is so potent that I’ve actually seen beginners arrive at a working solution faster than a very capable, experienced developer, simply because the beginner only cared about getting it working while the senior developer assumed it had to perform well. The irony is that even after the senior finished the “performant” solution, you couldn’t tell the difference between theirs and the beginner’s.



The lesson:



The direct, naive solution is usually enough (Occam’s razor, ish).



/** I’m assuming a lot of readers will take issue with this section because it may seem like I’m encouraging people to write bad code. That’s absolutely not the intention, and if anything, I think more time should be spent making code readable and maintainable and less time optimizing the performance. */



Everything has to be a class

One of the more frustrating behaviors I’ve seen consistently in beginners is the inability to code outside of the class paradigm. I blame this one on every college that only introduces students to Java in computer science programs. I’m generally language agnostic, but I always found the “everything has to be a class” requirement to be a particularly silly aspect of Java. We get it, somebody at Sun really liked OOP. The price for that obsession is that 20 years later, Java is still trying to reassemble itself into a language people actually want to use.



To understand why this is a real issue and not just a complaint about Java, we have to think about how beginners learn. Beginners tend to internalize concepts at the granularity they are presented. So when they use languages such as Java, which enforce the class paradigm, they tend to develop the understanding that the minimal unit of code is a class and the expectation that every variable acts like an object. Using another object-oriented language such as C++ quickly proves that the minimal unit of code does not have to be a class; it’s often overkill and cumbersome to write an entire class for primitive, stateless logic. Don’t get me wrong, there are tons of real-life situations where these semantics and guarantees are desirable; in fact, it’s the reason Java is one of the most ubiquitous languages.



The problem is that these semantics paint a misleading picture for first-time programmers. When you zoom out of the Java world, you’ll notice that very few programming languages are as object-oriented. This is why it’s so dangerous to learn programming with only one language. You don’t end up learning programming, you end up learning a language.



When I’ve discussed this belief with other developers, they often argue that these downsides are actually upsides because Java keeps beginners from shooting themselves in the foot. While that might be a great rationale for a Fortune 100 company that needs to hire a lot of untrusted programmers, it is not how we should introduce programming to beginners.



The lesson:

The ability to generalize things is one of the most important tools you have as a developer. Starting out by learning a single language is completely understandable, but it’s absolutely crucial that you quickly branch out and diversify your understanding. Doing so will transform you from a pretty-good <Insert language here> programmer to a great developer.

Strong syntax and structure

Beginners tend to rely on consistent syntax and structure far more than experienced developers. This actually makes a lot of sense once you think about it, considering that humans are pattern-driven and code syntax and semantics are among the most explicit examples of patterns. For an experienced developer, understanding what a function does (and how it operates) is usually a matter of going to its definition and reading the source code. It may seem impossible to imagine not operating this way, but I can almost guarantee that even the best developers didn’t start out like this. Pattern-matching is so fundamental to the way humans learn that it extends far outside of programming. For example, English is notoriously one of the hardest languages to learn. When would-be English learners are asked what makes learning English so difficult compared to other languages, they usually point to the inconsistency and lack of reliable language rules.



/** FTR: One of the best arguments about Java for beginners is its strong and consistent semantics */



A practical programming example of this happened when I was recently helping a beginner write a C++ class that had a relatively complex internal state. Based on the goals and intended usage of the class, it made sense to have some of the methods directly mutate the state while others returned a copy. After writing quite a few of the methods together, I left the beginner to write the rest by themselves. When I returned later on, my student had not made much progress and communicated that they were stuck. Specifically, they had become confused about which methods were mutating the state and which weren’t. This is a situation an experienced developer rarely encounters, as they will simply look at the internal logic of the method and determine whether state is being mutated. But a beginner is not yet fluent enough to quickly parse that same logic (even if they previously wrote it), and instead relies on the syntax and structure of the code. After reviewing the code, I realized that their struggle was partially my fault.



When we implemented the initial methods together, I made sure there was a good mix of mutating and non-mutating methods. What I hadn’t realized is that those methods presented a misleading pattern to a beginner. Specifically, every mutating method was a void function, and every non-mutating method had a return type.

```cpp
class MyType {
  void addElem(int elem);
  MyType createCopy();
  ...
};
```

I had unintentionally taught my student a pattern that does not hold in practice. So the moment they needed to implement the mutating bool removeElem(int elem) or the non-mutating void printElems(), things fell apart. I say this was my fault because, at the end of the day, I was lazy. As an experienced developer, I relied on the logic I was implementing rather than on syntax and structure. That’s silly, because the code I wrote could have been improved to leave zero ambiguity for beginners and remove the potential for bugs, at nearly zero cost to me. How? Through the const keyword, which allows me to explicitly indicate whether a method has the potential to mutate state:

```cpp
class MyType {
  void addElem(int elem);
  bool removeElem(int elem);
  void printElems() const;
  MyType createCopy() const;
};
```

The lesson:



It’s really easy to forget how other people might interpret the code you write. Stay consistent and utilize language capabilities that enable you to write digestible and explicit code.

Conclusion

Many of the initial patterns you embrace and adopt as a beginner will fall by the wayside as you’re introduced to more efficient and maintainable solutions. That being said, I hope this article illustrated that sometimes even the most experienced developers can benefit from a bit of “unlearning.” While beginners may not be as comfortable with advanced features and language idiosyncrasies, sometimes that’s a gift when the job just needs to get done.

Tags: career advice