Constraints such as those offered by let and const are a powerful way of making code easier to understand. Try to accrue as many of these constraints as possible in the code you write. The more declarative constraints limiting what a piece of code could mean, the easier and faster it is for humans to read, parse, and understand that code in the future.

A let statement indicates that a variable can’t be used before its declaration, due to the Temporal Dead Zone rule. This isn’t a convention, it is a fact: if we tried accessing the variable before its declaration statement was reached, the program would fail. These statements are block-scoped rather than function-scoped, which means we need to read less code in order to fully grasp how a let variable is used.
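As a quick illustration, a sketch along these lines would crash as soon as the first statement inside the block is reached; the exact error message varies across JavaScript engines.

{
  counter += 1 // <- ReferenceError, `counter` is still in the Temporal Dead Zone
  let counter = 0
}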

The const statement is block-scoped as well, and it follows TDZ semantics too. The upside is that const bindings can only be assigned during declaration.

Note that this means that the variable binding can’t change, but it doesn’t mean that the value itself is immutable or constant in any way. A const binding that references an object can’t later reference a different value, but the underlying object can indeed mutate.
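To make that distinction concrete, consider the following sketch, where the person object is made up for illustration.

const person = { name: `Irene` }
person.name = `Ruth` // ok, the underlying object can mutate
person = { name: `Meg` } // <- TypeError, the binding itself can't be reassigned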

In addition to the signals offered by let, the const keyword indicates that a variable binding can’t be reassigned. This is a strong signal. You know what the value is going to be; you know that the binding can’t be accessed outside of its immediately containing block, due to block scoping; and you know that the binding is never accessed before declaration, because of TDZ semantics.

You know all of this just by reading the const declaration statement, and without scanning for other references to that variable.

Granted, there are more rules attached to a const declaration than to a var declaration: block scoping, TDZ semantics, assignment during declaration, and no reassignment, whereas var statements only signal function scoping. Rule-counting, however, doesn’t offer a lot of insight. It is better to weigh these rules in terms of complexity: does each rule add or subtract complexity? In the case of const, block scoping means a narrower scope than function scoping, TDZ means that we don’t need to scan the scope backwards from the declaration in order to spot usage before declaration, and the assignment rules mean that the binding will always preserve the same reference.

The more constrained statements are, the simpler a piece of code becomes. As we add constraints to what a statement might mean, code becomes less unpredictable. This is one of the biggest reasons why statically typed programs are generally easier to read than dynamically typed ones. Static typing places a big constraint on the program writer, but it also places a big constraint on how the program can be interpreted, making its code easier to understand.

With these arguments in mind, it is recommended that you use const where possible, as it’s the statement that gives us the fewest possibilities to think about.

if (condition) {
  // `isReady` is only accessible within this block
  const isReady = true
}
// accessing `isReady` here would throw a ReferenceError

When const isn’t an option, because the variable needs to be reassigned later, we may resort to a let statement. Using let carries all the benefits of const, except that the variable can be reassigned. This may be necessary in order to increment a counter, flip a boolean flag, or defer initialization.
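As a sketch of the deferred initialization case, consider the following, where the environment flag and configuration values are made up for illustration.

const environment = `production` // made-up flag for illustration
let config
if (environment === `production`) {
  config = { verbose: false }
} else {
  config = { verbose: true }
}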

Consider the following example, where we take a number of megabytes and return a string such as 1.2 GB. We’re using let, as the values need to change if a condition is met.

function prettySize (input) {
  let value = input
  let unit = `MB`
  if (value >= 1024) {
    value /= 1024
    unit = `GB`
  }
  if (value >= 1024) {
    value /= 1024
    unit = `TB`
  }
  return `${ value.toFixed(1) } ${ unit }`
}

Adding support for petabytes would involve a new if branch before the return statement.

if (value >= 1024) {
  value /= 1024
  unit = `PB`
}

If we were looking to make prettySize easier to extend with new units, we could consider implementing a toLargestUnit function that computes the unit and value for any given input and its current unit. We could then consume toLargestUnit in prettySize to return the formatted string.

The following code snippet implements such a function. It relies on a list of supported units instead of using a new branch for each unit. When the input value is at least 1024 and there are larger units available, we divide the input by 1024 and move to the next unit. Then we call toLargestUnit again with the updated values, recursively reducing the value until it’s small enough or we reach the largest unit.

function toLargestUnit (value, unit = `MB`) {
  const units = [`MB`, `GB`, `TB`]
  const i = units.indexOf(unit)
  const nextUnit = units[i + 1]
  if (value >= 1024 && nextUnit) {
    return toLargestUnit(value / 1024, nextUnit)
  }
  return { value, unit }
}
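For instance, here’s how a couple of calls to toLargestUnit would play out.

toLargestUnit(32) // <- { value: 32, unit: `MB` }
toLargestUnit(2048) // <- { value: 2, unit: `GB` }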

Introducing petabyte support used to involve a new if branch and repeating logic, but now it’s only a matter of adding the PB string at the end of the units array, as shown next.
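The updated list of units would read as follows.

const units = [`MB`, `GB`, `TB`, `PB`]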

The prettySize function becomes concerned only with how to display the string, as it can offload its calculations to the toLargestUnit function. This separation of concerns is also instrumental in producing more readable code.

function prettySize (input) {
  const { value, unit } = toLargestUnit(input)
  return `${ value.toFixed(1) } ${ unit }`
}
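Assuming inputs expressed in megabytes, usage would look as follows.

prettySize(1229) // <- `1.2 GB`
prettySize(1024 * 1024 * 2) // <- `2.0 TB`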

Whenever a piece of code has variables that need to be reassigned, we should spend a few minutes thinking about whether there’s a better pattern that could resolve the same problem without reassignment. This is not always possible, but it can be accomplished most of the time.

Once you’ve arrived at a different solution, compare it to what you used to have. Make sure that code readability has actually improved and that the implementation is still correct. Unit tests can be instrumental in this regard, as they’ll ensure you don’t run into the same shortcomings twice. If the refactored piece of code seems worse in terms of readability or extensibility, carefully consider going back to the previous solution.
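As a sketch of that kind of safety net, we could assert a couple of expected outputs for prettySize using Node’s built-in assert module; the expected strings follow from the implementation above.

const assert = require(`assert`)

assert.strictEqual(prettySize(1229), `1.2 GB`)
assert.strictEqual(prettySize(1024 * 1024), `1.0 TB`)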

Consider the following contrived example, where we use array concatenation to generate the result array. Here, too, we could change from let to const by making a simple adjustment.

function makeCollection (size) {
  let result = []
  if (size > 0) {
    result = result.concat([1, 2])
  }
  if (size > 1) {
    result = result.concat([3, 4])
  }
  if (size > 2) {
    result = result.concat([5, 6])
  }
  return result
}
makeCollection(0) // <- []
makeCollection(1) // <- [1, 2]
makeCollection(2) // <- [1, 2, 3, 4]
makeCollection(3) // <- [1, 2, 3, 4, 5, 6]

We can replace the reassignment operations with Array#push, which accepts multiple values. If we had a dynamic list, we could use the spread operator to push as many ...items as necessary, as sketched after the next snippet.

function makeCollection (size) {
  const result = []
  if (size > 0) {
    result.push(1, 2)
  }
  if (size > 1) {
    result.push(3, 4)
  }
  if (size > 2) {
    result.push(5, 6)
  }
  return result
}
makeCollection(0) // <- []
makeCollection(1) // <- [1, 2]
makeCollection(2) // <- [1, 2, 3, 4]
makeCollection(3) // <- [1, 2, 3, 4, 5, 6]
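For the dynamic case, a minimal sketch might look like this; the items array is made up for illustration.

const result = []
const items = [3, 4, 5]
result.push(...items) // pushes 3, 4, and 5 as individual elements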

When you do need to concatenate arrays, you should probably use the spread operator instead of Array#concat, to keep things simpler.
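That is, something along these lines.

let result = []
result = [...result, 1, 2] // instead of result = result.concat([1, 2])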

The last case we’ll cover is one of refactoring. Sometimes, we write code like the next snippet, usually in the context of a larger function.

let completionText = `in progress`
if (completionPercent >= 85) {
  completionText = `almost done`
} else if (completionPercent >= 70) {
  completionText = `reticulating splines`
}

In these cases, it makes sense to extract the logic into a pure function. This way we avoid the initialization complexity near the top of the larger function, while clustering all the logic about computing the completion text in one place.

The following piece of code shows how we could extract the completion text logic into its own function. We can then move getCompletionText out of the way, making the code more linear in terms of readability.

const completionText = getCompletionText(completionPercent)

function getCompletionText (progress) {
  if (progress >= 85) {
    return `almost done`
  }
  if (progress >= 70) {
    return `reticulating splines`
  }
  return `in progress`
}

What’s your stance on const vs. let vs. var?