Say you run a new team. You have carte blanche to implement any policies you want to make the people more productive and the code less buggy. What do you do? Careers have been built selling the answer. Take up pair programming! Switch to Haskell! Use UML for everything! These techniques get their own books and conferences. But are they worth the effort? How long till they take effect? Do they even work at all? ¯\_(ツ)_/¯

These questions are important, if not unique to software engineering. How do we tell whether something will solve our problems? We could talk to experts, but experts disagree. We could rely on our own experiences, but those are limited. (Nobody has tried and compared everything.) We could survey people, but “popular” is not the same as “correct.” (Almost half of Americans consider astrology a science.) So how do any of us really know what we know?

Hopefully, as a field matures, scientific study and empirical research eventually replace folklore. But we’re still in the early days of software engineering (compared to, say, mechanical engineering), and of the technical solutions studied so far, few impact software quality in a meaningful way. Static typing? One study, presented at FSE 2014, found no evidence that static typing is helpful—or harmful. Code standards and linters? Another paper, shared at ICSM 2008, found these can make things worse. Code review? Okay, now that, according to a 2016 article published in Empirical Software Engineering, actually works. But we can’t stake our team’s success on just “more code reviews.”

I strongly believe that technical solutions do help. But we often frame our choices of technical tools and processes as critical decisions, ones that make or break a team.
Prominent industry voices may claim that Lisp is “a secret weapon,” that Python users are “unethical,” or that you must use TDD to be “professional.” But if these technical solutions mattered so much, wouldn’t we see that reflected in the research? Instead, we see minor or tentative effects in some situations, and major effects in none. Technical solutions are not a hill worth dying on.

It’s not all bad news. Empirical evidence consistently shows that some factors do make a difference, factors that dramatically affect not just code quality but us. We only have to broaden our idea of what really matters in making software. Instead of technical factors, we need to talk about human ones.

How much sleep do you get per night? When was the last time you worked more than 40 hours a week? Are you happy at your job? These are the questions that most impact software quality. Studies across disciplines consistently show that the difference between technical and human solutions is the difference between results that effectively state “We speculate there is a small impact” and “We are confident there’s a dramatic difference.” These findings make sense: Programming is an extension of our minds, and anything that compromises our minds will hurt our programming skills. So what is this evidence? Glad you asked. Some of the most clearly documented factors are sleep, hours worked, and stress. Here are just a few of the many, many studies out there.

Sleep

There are two kinds of sleep deprivation. We often think of sleeplessness as acute sleep deprivation (ASD): being awake for 24 hours or more. Most people already know that’s bad. Novice programmers, according to a 2018 study from IEEE Transactions on Software Engineering, lose most of their skills while experiencing ASD, and we can reasonably assume senior developers—a.k.a. other human beings—aren’t immune either. ASD also affects decision-making ability as well as long-term health, according to a 2007 Neuropsychiatric Disease and Treatment article.

More subtle is chronic sleep deprivation (CSD): getting too little sleep several nights in a row. The 2007 article demonstrates that CSD reduces mental performance across the board. Worse, it can take several recovery nights, sleeping more than eight hours a night, to completely reverse CSD. Of course, degraded performance on tests doesn’t necessarily translate to degraded performance at work. That’s why studies have also observed sleep-deprived people in the workplace. The 2008 book Patient Safety and Quality: An Evidence-Based Handbook for Nurses states that sleep-deprived nurses make more serious mistakes, full stop.
To add insult to injury, most sleep-deprived people don’t know they’re performing worse, according to the same 2007 Neuropsychiatric Disease and Treatment article. Individual developers can’t necessarily tell that they’re making more programming mistakes, which makes it harder to self-regulate. That means the dangers are all the more subtle, long-term, and easy to miss.

Hours worked

On a daily basis, we’ve got about eight hours of work in us. Possibly less. According to a 2017 report from the Institute of Labor Economics, call-center employees find their quality of service plummets after about four hours. There are indications that, when we work long hours, our productivity also nose-dives: For example, a 1980 Business Roundtable report found that construction crews working 50 hours a week declined to less than 80 percent productivity after 8 to 10 weeks. In other words, they accomplished only as much in each 50-hour week as a well-rested, well-paced team would in 38 hours. Productivity drops even faster on 60-hour weeks, with researchers estimating that, after two months of 60-hour weeks, construction crews will have cumulatively accomplished less than they would have with two months of 40-hour weeks. And, according to a 2004 publication from the Centers for Disease Control and Prevention (CDC), all that extra work wrecks your body.

Stress

Finally, consider stress. It’s harder to focus when you’re anxious or angry or distracted. It also stands to reason that you aren’t as productive or as meticulous when you’re stressed out. The National Institute for Occupational Safety and Health, a part of the CDC, has shown that stressed nurses are both significantly less productive and significantly more likely to make serious mistakes. Closer to home, Gamasutra did a 2015 study on crunch mode—which they define as extended overtime—in video game development.
They found that games produced in crunch mode not only burned out their development teams but also performed worse on every other aspect measured—critical scores, overall sales, everything—compared to games whose teams cut scope in order to meet deadlines, or opted to extend the production schedule. Meanwhile, a 2014 study published in PeerJ found that happy developers just straight-up solve problems faster.
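The overtime arithmetic from the Business Roundtable numbers above can be sketched in a few lines. This is purely illustrative: the 76 percent figure is an assumption chosen so the example reproduces the comparison in the text (a 50-hour week yielding about a 38-hour week’s output); the report itself only says productivity fell below 80 percent.

```python
def effective_hours(scheduled_hours, productivity_pct):
    """Well-rested-equivalent hours of output from one scheduled week."""
    return scheduled_hours * productivity_pct / 100

# Assumed productivity figures, for illustration only (not from the report):
# a crew on sustained 50-hour weeks at 76% productivity produces roughly
# what a well-paced crew produces in 38 hours...
overtime_week = effective_hours(50, 76)    # 38.0
# ...which is less output than an ordinary 40-hour week at full productivity.
normal_week = effective_hours(40, 100)     # 40.0

print(overtime_week, normal_week)
```

The point of the toy model is the same as the report’s: past a certain level of sustained overtime, the extra scheduled hours are more than canceled out by the productivity decline.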