As the Web platform matures, the code that powers our websites is built using increasingly complex processes.

A wide palette of languages is now at our disposal, with varying degrees of expressiveness, though under the hood they all end up transpiled to the same holy trinity of HTML/CSS/JavaScript: Sass, LESS et al. to CSS; CoffeeScript, ClojureScript, Scala or C/C++ (via Emscripten) to JavaScript. We even compile as-yet-unreleased versions of JavaScript (ES6) to today’s JavaScript using traceur or es6-module-transpiler.

In parallel, we have learned to optimise the assets we distribute for performance: we minimise them to reduce payloads, concatenate them and inline dependencies to save on HTTP requests, add hashes to their filenames to cache-bust URLs.

However, the more transformations there are between the source we write and the code that gets served to our users, the harder it is to inspect, reason about and debug.



The most common workaround is to serve as much of the original sources as possible during development. For instance, that means letting RequireJS load all the JavaScript files individually, rather than pre-assembling them into a single minimised asset as you would in production. The additional benefit is that you can edit your code, reload your browser and see (or debug) your changes immediately, without any intermediate build step.

Unfortunately, this approach falls short if any of the code needs transpiling to run in the browser, such as Sass or CoffeeScript. It also introduces a greater gap between the dev and prod environments, which requires more complex build and runtime setups, and increases the risk that these environments diverge (e.g. a bug only found in one and not the other). Crucially, it also implies that you have no way to debug the production environment, besides inspecting the generated source code, often obfuscated beyond recognition.

This isn’t a new problem. Historically, executing code involved compiling it to machine code. To aid debugging, the compiler would generate debug symbols mapping machine code instructions it produces to the corresponding higher-level source code (e.g. C, assembly language, etc.). This allowed tools like gdb or full-fledged IDEs to let you pause execution, insert breakpoints and generally inspect the program as it’s running using the source code you wrote as reference.

The equivalent on the Web is of course source maps.

Source maps map the positions (line and column) and names in transformed sources back to the original files. Both Chrome and Firefox developer tools support source maps for CSS and JavaScript files. (It will likely come to HTML too as HTMLImports gain support in browsers.)
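To make this concrete, a source map is just a small JSON file served alongside the generated asset. The filenames and identifiers below are illustrative, but the fields shown are the ones defined by the source map v3 format:

```json
{
  "version": 3,
  "file": "main.min.js",
  "sources": ["src/main.js", "src/utils.js"],
  "names": ["renderPage", "formatDate"],
  "mappings": "AAAA,SAASA,..."
}
```

The `mappings` string encodes, as Base64 VLQ segments, which line and column of which original source each position in the generated file corresponds to. The browser discovers the map via a `sourceMappingURL` comment appended to the generated file.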

With source maps enabled, you can inspect variables, add breakpoints, associate CSS rules with lines in the original files, etc, much like you would do if you had loaded all your pristine sources into the browser — regardless of the language they were written in!

But where do you get a source map from?

The key is to record the mapping for every transformation that is performed on a source file. Luckily, most of the common transformations offer the option to generate a source map (e.g. LESS, RequireJS, UglifyJS, etc.).
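For instance, both the LESS and UglifyJS command-line tools can emit a map next to their output. The flags below match the versions of these tools at the time of writing; check each tool’s documentation for your version:

```
$ lessc --source-map styles.less styles.css
$ uglifyjs app.js -o app.min.js --source-map app.min.js.map
```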

However, build processes often involve passing sources through a series of transformations. For instance, you may compile CoffeeScript sources to JavaScript, pass them through RequireJS to inline dependencies, concatenate extra libraries and minimise the result. A source map for this needs to represent the transitive mapping between the original files and the output files at the end of the chain.
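The composition itself is mechanical once you can decode the mappings. The sketch below illustrates the principle with plain position objects rather than real VLQ-encoded mappings, and all filenames are made up: to resolve a position in the final file, follow the chain backwards through each intermediate map.

```javascript
// Sketch: composing two source maps transitively (simplified).
// Real source maps encode positions as Base64 VLQ segments; here each
// mapping is a plain object so the principle stays visible.

// Map A: CoffeeScript -> JavaScript (original -> intermediate)
var coffeeToJs = [
  { generated: { line: 1, column: 0 }, original: { line: 1, column: 0 }, source: 'app.coffee' },
  { generated: { line: 5, column: 2 }, original: { line: 3, column: 4 }, source: 'app.coffee' }
];

// Map B: JavaScript -> minified JavaScript (intermediate -> final)
var jsToMin = [
  { generated: { line: 1, column: 0 },  original: { line: 1, column: 0 } },
  { generated: { line: 1, column: 42 }, original: { line: 5, column: 2 } }
];

// final -> intermediate (map B), then intermediate -> original (map A)
function compose(mapA, mapB) {
  return mapB.map(function (b) {
    var a = mapA.filter(function (m) {
      return m.generated.line === b.original.line &&
             m.generated.column === b.original.column;
    })[0];
    return a && {
      generated: b.generated,
      original: a.original,
      source: a.source
    };
  }).filter(Boolean);
}

var composed = compose(coffeeToJs, jsToMin);
// Position (1, 42) in the minified file now maps back to (3, 4) in app.coffee.
```

A real implementation also has to merge the `sources` and `names` tables and re-encode the result, which is exactly the bookkeeping that gets dropped when a tool in the middle of the chain doesn’t accept an input source map.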

As we saw in the last post about Plumber, sequencing transformations in Grunt isn’t particularly elegant as you have to manually coordinate the serialisation of intermediate files and their sourcing into the next step. Source maps are no different; you will have to pass them through the chain explicitly.

To make things worse, certain key plugins such as concat or hash do not support source maps at all. If you use any such transformation, you will lose the ability to produce a source map.

Gulp has vastly improved the ability to pipe files through multiple operations. Unfortunately, this currently applies only to the source code, not the source maps. In practice, support is generally limited to single operations, as plugins tend not to support input source maps (e.g. gulp-uglify currently doesn’t).
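As an illustration, consider a gulpfile along these lines (the plugin names are real, the configuration is a simplified sketch). Each step may be able to emit its own map, but there is no standard way to feed the map produced by one plugin into the next, so at best the final map points at the intermediate JavaScript rather than the original .coffee sources:

```javascript
var gulp = require('gulp');
var coffee = require('gulp-coffee');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');

gulp.task('scripts', function () {
    return gulp.src('src/**/*.coffee')
        .pipe(coffee())        // CoffeeScript -> JavaScript
        .pipe(concat('app.js')) // concatenation: positions shift again
        .pipe(uglify())         // minification: only this step's map survives, if any
        .pipe(gulp.dest('dist'));
});
```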

As a result, source maps are often discarded as soon as the build process reaches a certain level of complexity — which is paradoxical, since complex projects are precisely where they would be most useful.

It shouldn’t have to be this hard.

In Plumber, source maps are supported by default by all operations. This means that regardless of what transformations you apply, or in what order, the generated sources should always be accompanied by a working source map pointing to your original files. In fact, unless you opt out, a source map will be written out for you alongside each transformed file.

Consider the following Plumbing file, which combines LESS compilation, concatenation and minimisation:

```javascript
var styleMain = [glob('src/stylesheets/main.less'), less()];

var styleLibraries = all(
    composerBower('pikaday', 'css/pikaday.css'),
    [composerBower('pasteup', 'less/module/comment.less'), less()]
);

pipelines['css'] = [
    all(styleMain, styleLibraries),
    concat('composer'),
    mincss,
    write('dist/stylesheets')
];
```

Running the pipeline above with Plumber generates the correct source map by default:

```
$ plumber css
Run pipeline: css
  written to dist/stylesheets/composer.min.css
  written to dist/stylesheets/composer.min.css.map
```

In our work on editorial tools at the Guardian, we’ve been using source maps generated by Plumber to inspect and debug production CSS and JavaScript code compiled from a long chain of transformations. In the long term, we will explore the feasibility of unifying our development and production setups to both use the same compiled assets, and rely entirely on source maps for debugging.

As we compile more and more languages to the Web platform and pass them through increasingly complex transformations, we need native source map support more than ever.

This is why it is a key feature of Plumber, as part of the core principle of doing the right thing by default. I firmly believe that build tools should provide developers with all the instruments they need to work, without any extra effort, regardless of the complexity of their build pipeline.

Luckily, other, smarter people are also looking into this.

The Gulp folks have also started exploring this issue using a similar approach. Following talks on the es-discuss mailing list about standardising source maps in ECMAScript 7, Nick Fitzgerald started a discussion on his blog around the future of source maps (AKA SourceMap.next), with a focus on improving the support for transpiled languages that don’t map well to JavaScript semantics.

In the meantime, if you want to have a play with automatic source map support, feel free to give Plumber a try!

Once again, thanks to Oliver Ash for proof-reading this blog post! All mistakes still mine.