In my first few days as a freshman, I met a classmate who claimed that he could code in any programming language I could name. Astonished, I challenged “what about that unreadable esoteric language where the handful of commands merely simulate a Turing machine!?” He dryly replied “yes it’s called brainfuck; I know brainfuck”.

I was brainfucked. It was no trick: he could legitimately code in any language after a trivially brief refresher. How could an 18-year-old kid know every language?

A brainfuck interpreter, written in brainfuck by Daniel B Cristofani (not my classmate)

While I’m still impressed by his feat, I’m not so astonished. After learning a few more languages myself, I came to realize that they were all less diverse than I had anticipated. After some more study, I started appreciating some of the underlying models of computation and programming language theory that make most languages look like different versions of the same basic ideas. These days my standard advice for students is “aim to learn every language”.

Given that we’re approaching the new year and folk will have new year’s resolutions to learn Go or Rust or something, I want to encourage the alternative resolution: aim to learn every language! Hopefully this article will help you on your way!

Disclaimers :)

This isn’t actually about becoming competent with 500+ languages. It’s about understanding the common paradigms and implementation patterns so as to be confidently language agnostic

This is a long journey. You can make a lot of progress in a year, but depending on your current level it may take another ten

Some concepts may not seem relevant for a while; roll with it

Depending on your job and/or goals, this may not actually help you with your job and/or goals!

Why do it?

If you see yourself as reading and writing software for the bulk of your career, you owe it to yourself to be generally familiar with languages:

Even without picking your languages yourself, you will likely end up using a large number of them

Given the choice, being able to select the right tool for the job will make you more effective

As the popularity of languages ebbs and flows, you will have a wider choice of jobs, companies and projects if you’re not limited by language choice

Many high impact projects require fundamental understanding of compilers and languages, from general purpose language implementations and libraries to DSLs, databases, browsers, IDEs, static analysis tools and more

To me, the last point is the most important. Ras Bodik would emphasize this when convincing his students at Berkeley of the importance of his compilers course:

Don’t be a boilerplate programmer. Instead, build tools for users and other programmers. Take historical note of textile and steel industries: do you want to build machines and tools, or do you want to operate those machines?

Step zero: stop calling yourself a Rails (etc) dev

This step is easy, but important. While you should be proud of yourself for mastering a language or technology, self-identifying as a specialist in any language creates a mental barrier to embracing another. Call yourself a software engineer and strive to live up to that title whatever the context.

Alex Gaynor loves Python enough to have served on the board of its foundation and contributed large portions of Django and PyPy, but that didn’t stop him from tolerating Classic ASP for a couple of years to help the USDS. He calls himself a software engineer, and so should you.

Step one: go meta

There is an old joke about an applied physicist who finds himself at a string theory conference. Turning to a theoretical physicist he asks “gee, how do you all manage to think about things in 11 dimensions?” The theoretical physicist replies “that’s easy, we just imagine N dimensions and substitute 11 for N.”

Strong programmers use the same trick. You may see Go as a new and challenging language; strong programmers see it as a compiled statically typed language with garbage collection and CSP style concurrency. Swift is new and carefully designed, but a strong programmer can pick it up easily: it is just a general purpose compiled language with object-oriented features like protocols, implemented with LLVM.
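Go is the archetype for CSP-style concurrency, but the core idea — independent workers that communicate over channels rather than by sharing memory — can be sketched even in Python’s standard library, with a queue standing in for a channel. This is a toy illustration of the pattern, not how Go implements it:

```python
import threading
import queue

def worker(jobs, results):
    """Read from one channel-like queue, write to another (CSP style)."""
    while True:
        n = jobs.get()
        if n is None:          # sentinel: the "channel" is closed
            break
        results.put(n * n)

jobs, results = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(jobs, results))
t.start()

for n in range(5):
    jobs.put(n)
jobs.put(None)                  # signal no more work
t.join()

squares = [results.get() for _ in range(5)]
print(squares)                  # [0, 1, 4, 9, 16]
```

Once you see the pattern this way, Go’s goroutines and channels read as first-class language support for the same idea.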

The meta level of languages is the undergraduate compilers class. Unfortunately, the name is misleading for two reasons: firstly, the class isn’t strictly about the mechanics of compilers; its main purpose for most students is to deeply understand languages. Secondly, some people see “dynamic” and “compiled” as opposites, so assume that a compilers class will not teach them about the implementations of their favorite dynamic languages… this is not true, there is a tremendous amount of overlap, and most compilers courses include a section on bytecodes and virtual machines. But the name “compilers” has generally stuck.

For those who never had the opportunity to take a compilers course, there are some great books and online courseware available. In particular I would suggest Alex Aiken’s course which was previously available on Coursera and now on Stanford’s own MOOC platform Lagunita. Berkeley’s CS164 is also a good option… unfortunately Berkeley has stopped publishing video from newer sessions, but Ras Bodik’s 2012 session is still available.

The canonical text in compilers is Compilers: Principles, Techniques & Tools, commonly called “the Dragon Book”. Like all canonical texts, it has both rabid fans and thoughtful detractors; my overall view is that it is the best single book, but expect to cover it in multiple passes through your career. Myles particularly likes Language Implementation Patterns by Terence Parr. It is written more directly for the practicing software engineer who intends to work on small language projects like DSLs, so you may find it more approachable than the Dragon Book as a first stop.

The “Dragon book”, still the best single book on compilers if you had to pick one

For those located in San Francisco who prefer more hands on instruction, you may be interested in Bradfield’s languages, compilers and interpreters course.

Step two: select archetypal languages

With a good theoretical foundation it will be easier to pick up new languages, but not easy enough to go off and learn 500+ of them. The trick now will be to identify and learn languages that are archetypal of the powerful ideas and common paradigms across all others. With a good selection, it should be trivial to then triangulate toward new languages.

Peter Norvig makes his own suggestion of what are the important paradigms, as well as their archetypal languages:

Learn at least a half dozen programming languages. Include one language that emphasizes class abstractions (like Java or C++), one that emphasizes functional abstraction (like Lisp or ML or Haskell), one that supports syntactic abstraction (like Lisp), one that supports declarative specifications (like Prolog or C++ templates), and one that emphasizes parallelism (like Clojure or Go).

This is a great starting point in my opinion, but you may want to go a little further to cover a broader field.

Firstly, I would suggest learning C as early as possible. It is so ubiquitous and influential, for better or worse, that it will make it much easier to learn other languages on your list (particularly C++, on Norvig’s). More rationale here, advice for self-teaching here.

I would also recommend learning an assembly language, MIPS for the easiest path or x86 for the most practical. This will probably teach you more about computer architecture than languages, but it provides a lowest common denominator, too, when reasoning about language implementation. Maybe one day I will recommend LLVM intermediate representation instead.

Norvig recommends learning a declarative language, but I would be more specific and say learn a logic programming language. This could be Prolog as he suggests, or miniKanren via the book The Reasoned Schemer.
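What makes logic programming feel alien is that you state relations and let the engine solve for unknowns, and the machinery underneath is unification. As a rough sketch of that one idea — where the `unify` helper and the “?”-prefixed variable convention are invented for illustration, not Prolog’s or miniKanren’s actual API:

```python
def walk(term, subst):
    """Follow variable bindings until we hit a non-variable or an unbound one."""
    while isinstance(term, str) and term.startswith("?") and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst=None):
    """Return a substitution making a and b equal, or None if impossible.
    Variables are strings starting with '?'; compound terms are tuples."""
    subst = dict(subst or {})
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str) and a.startswith("?"):
        subst[a] = b
        return subst
    if isinstance(b, str) and b.startswith("?"):
        subst[b] = a
        return subst
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# Solving for unknowns on both sides, the way a Prolog query would:
print(unify(("parent", "?x", "bob"), ("parent", "alice", "?y")))
```

A real logic language adds search, backtracking and a database of facts on top, but unification is the kernel that makes “run the program backwards” possible.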

In my opinion, a great choice for Norvig’s “parallelism” requirement is CUDA. This is parallelism at a much more dramatic scale than your 4-core CPU, and it exposes you to the new and interesting architecture of your GPU, increasingly relevant given its use in machine learning. It teaches different lessons than languages that emphasize concurrency, though, so Go, Clojure or Erlang may still be a good bet.

Array based programming is another very powerful paradigm. Norvig may have omitted it given that it has found most of its application in highly quantitative fields, but I feel that it is interesting to non-quantitative programmers too, as an example of a dramatically different set of starting primitives. APL/J/K/Q are mind-bending archetypal languages in this category, although MATLAB/Octave may be more accessible.
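The “dramatically different starting primitives” of the array languages boil down to this: whole arrays are the atoms, and arithmetic applies elementwise, so most explicit loops disappear. A toy sketch in Python — the `Arr` class here is invented for illustration, not any real library’s API:

```python
class Arr:
    """Toy array type: arithmetic applies elementwise, APL-style."""
    def __init__(self, data):
        self.data = list(data)

    def __add__(self, other):
        return Arr(x + y for x, y in zip(self.data, other.data))

    def __mul__(self, other):
        return Arr(x * y for x, y in zip(self.data, other.data))

    def sum(self):
        return sum(self.data)

prices = Arr([10.0, 20.0, 30.0])
qty    = Arr([1, 2, 3])
total  = (prices * qty).sum()   # one whole-array expression, no explicit loop
print(total)                    # 140.0
```

Real array languages take this much further — rank polymorphism, implicit iteration over any number of dimensions — but even this sketch shows how the paradigm changes what a “program” looks like.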

Not quite a paradigm, but it’s worth becoming familiar with a handful of narrow purpose languages, if nothing else to realize that writing a narrow purpose language may be a good solution to a problem one day. Frink is a personal favorite, AWK is another exemplar.

It is hard to stop there! Some would insist that Forth is essential so as to understand stack-based languages; I personally feel like we have enough exposure to stack-based languages through stack-based virtual machines in dynamic languages. Others will tell me I missed something else. This is not a definitive list! Hopefully it will get you started.

Step three: practice

It is easy to make a list of target languages, but it will take more work to build up your familiarity with them. If you are lucky, you will be able to use some of these for your work or projects. For the others, in my opinion you must combine both study and deliberate practice. Without study, progress will be slow; without practice, your understanding is less likely to stick.

A good way to start approaching a new language is to read relevant entries in Hyperpolyglot and Learn X in Y Minutes. These will introduce some of the key ideas of the language as well as start to remove the syntax barrier. If you already have a language in the same family under your belt, Hyperpolyglot’s side-by-side comparison can take you surprisingly far.

Another worthwhile exercise is to seek out the design rationale for the language. This will make it easier to see the intended purpose of the language and give you some extra motivation to learn it. For instance, if you are wary of learning C++ or skeptical of Bjarne’s decisions, you should definitely read his history book The Design and Evolution of C++. There are similar motivating resources for most languages.

Understanding a language designer’s rationale before attempting to learn their language may instill the respect that you’ll need to open-mindedly explore its novel aspects. This book is a good example for C++, others exist for most languages.

After that it may make sense to either read a reference book or jump straight into solving some small problems.

It is hard to make general recommendations for reference books, but I would say seek out the oldest, most canonical books aimed at experienced programmers new to that particular language. These books will have detractors, but older books tend to be better at conveying the important ideas and design decisions behind a language, and are more often written by its designers and key implementers, whereas newer books tend to focus on applications or optimize for approachability.

The most rapid way to ramp up in a new language, in my opinion, is to find a set of small problems and progressively work through them. Exercism.io is a good source, and it may even have a test suite for problems in the language you’re targeting. Small problems are easy to fit in around work and other projects, can be calibrated to your level of difficulty, and on aggregate can cover most of the surface area of the language, removing the syntax familiarity barrier.

Once you feel familiar with the key ideas and comfortable with the syntax, I would suggest finding a larger project, but specifically one for which the language was intended. If it is a systems language like C or Go you may want to write a command line tool that makes a lot of syscalls, if C++ then perhaps a raytracer, if a scripting language like Python or Ruby then a non-performance-critical algorithmic problem like a tic-tac-toe AI, and so on.

Keep looking

Given the number of languages in the world, and the continued use of older languages like C, it’s easy to conclude that we have already invented all of the languages we need. This is a highly constricting conclusion.

There is a huge gap between what we know to be computable and what we have managed thus far to instruct computers to do. This is not due to a lack of resources: between Moore’s law and the increasing accessibility of cloud resources, we have enough compute to do much more sophisticated and impactful work than we’re doing currently. The problem must lie in our interfaces: our languages and the tools we use to work with them.

Gerald Jay Sussman makes this point in his tremendous talk We Really Don’t Know How To Compute. He uses an existing (and old!) language in his examples, so one might argue that our tools are not the problem. But for whatever reason, our current tools haven’t enabled us to address the opportunity he highlights of dramatically more effective computing.

Kanizsa’s triangle illusion. Sussman points out that it is trivially easy for the computers in our heads to see the triangle, but still exceedingly difficult to program our silicon computers to do so.

One of many interesting frontiersmen right now is Chris Granger, working on Eve. He did not set out to write a language so much as a tool for thought on the scale of the spreadsheet. The language has ended up being a critical and tightly coupled component of the platform; languages are so important to our ability to harness computers that this is no surprise.

Whether or not Eve becomes the next big programming platform, language-centric tools for thought will be part of our future, at least if we hope to harness all the compute now at our disposal. Like Ras Bodik, I would encourage you to be part of this movement, learn languages rather than a language, and use your foundational understanding to do high impact work.