One of the exciting new features in TypeScript is the ability to write your own plugins, or “custom transformers”. This was added in version 2.3 with little fanfare, but this extensibility is a powerful capability that expands the possibilities of the TypeScript engine. JavaScript developers have typically had to choose between Babel’s extensive plugin-based compiler with full modern EcmaScript feature transpilation, or TypeScript’s excellent type checking functionality. However, TypeScript has been rapidly improving and maturing, and is now nearly on-par in terms of modern EcmaScript transpilation, and with the new custom transform plugin system, all of these features are now available in one system.

Why use a custom transform? Transforms allow us to add new functionality or perform new tasks at the language or compiler level. This kind of extensibility lets developers go beyond the scope and ideas of the base compiler. Let’s consider some possible uses:

Add polyfills or normalization for more subtle EcmaScript behavior (such as improved for-of iteration or accessor handling).

Add runtime type-checking to verify expectations and contracts. This could leverage existing type information to verify type casts, or to implement additional contractual constraints.

Hook into the compilation process - not to do any code transformations - but to perform tasks like documentation generation (TypeScript even provides an AST representation of JSDoc comments), or to run linting or other static analysis on parsed code to find potential bugs or security vulnerabilities.

Create or experiment with new language features. In this tutorial we will use a couple language extensions as examples.

We have actually written a few transformers ourselves, and we will use a couple as examples that you can reference and build from. One is a transform for safe/existential property access through a function called safely, and the other uses decorators to define reactive properties and expressions that are compiled to reactive variables (using alkali).

Getting Started with a Compiler

Building a custom transform requires some basic understanding of compilers. Compilers, like TypeScript, parse source code (we don’t need to delve into tokenizing, lexing, and parsing), and produce an abstract syntax tree (AST). An AST is a typed tree data structure that provides meaningful representation of all the different parts of the source code, statements, functions, classes, expressions, etc. This AST is what the custom transform can effectively modify or transform to customize the generated code. With transpiling compilers like TypeScript, once that AST has been created and transformed, it can then be written out to the target files (or “emitted”, using TypeScript terminology), as valid EcmaScript (conforming to the specified target ES version).
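To make the AST concrete, we can parse a small snippet with the TypeScript API and print the node kinds it contains. Here collectKinds is just an illustrative helper, not part of the compiler API:

```typescript
import * as ts from 'typescript'

// parse a snippet into a SourceFile (the root AST node)
const source = ts.createSourceFile(
  'example.ts', 'const x = a && a.b', ts.ScriptTarget.ES2015, /*setParentNodes*/ true)

// walk the tree and record each node's SyntaxKind name, indented by depth
function collectKinds(node: ts.Node, depth = 0, out: string[] = []): string[] {
  out.push(' '.repeat(depth) + ts.SyntaxKind[node.kind])
  node.forEachChild(child => { collectKinds(child, depth + 1, out) })
  return out
}

console.log(collectKinds(source).join('\n'))
```

The output includes nodes like BinaryExpression and PropertyAccessExpression, which are exactly the kinds of nodes a transform’s visitor will test for.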

The custom transform is given the AST, and then typically a transform will use a visitor to traverse through the AST. The visitor function has the opportunity to examine each node in the AST and perform actions on those nodes, like code verification, analysis, and/or determining if and how the code should be modified.

Custom Transform Boiler Plate

It can help to have a basic template to start with for your custom transform. This is your basic transform module:

import * as ts from 'typescript'

export default function(/*opts?: Opts*/) {
  function visitor(ctx: ts.TransformationContext, sf: ts.SourceFile) {
    const visitor: ts.Visitor = (node: ts.Node): ts.VisitResult<ts.Node> => {
      // here we can check each node and potentially return
      // new nodes in its place; to leave the node as is and
      // continue searching through child nodes:
      return ts.visitEachChild(node, visitor, ctx)
    }
    return visitor
  }
  return (ctx: ts.TransformationContext): ts.Transformer<ts.SourceFile> => {
    return (sf: ts.SourceFile) => ts.visitNode(sf, visitor(ctx, sf))
  }
}

Now we can start applying our logic in the custom transform. But before proceeding, let’s make sure we know how to use the transform.

Using the Transform

Right now, one of the rough edges of TypeScript custom transforms is how they are used (hopefully to be resolved soon). At least at the time of writing (October 2017), there are no compiler options (for the command line or tsconfig.json) for specifying transforms, so typically the compiler API must be used to run them. The key part of the compiler API is that the program’s emit method can be given the set of transforms you wish to apply in the transpiling process:

// create compiler host, program, and then emit the results
// using our transform
const compilerHost = ts.createCompilerHost(compilerOptions)
const program = ts.createProgram([entryModule], compilerOptions, compilerHost)
const emitResult = program.emit(undefined, undefined, undefined, undefined, {
  before: [
    myTransform
  ]
})
// now we typically need to log the diagnostic results
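That diagnostic logging can be sketched along these lines (reportDiagnostics is a hypothetical helper; the flattening and position APIs are part of the compiler API):

```typescript
import * as ts from 'typescript'

// print each diagnostic with file, line, and column;
// ts.flattenDiagnosticMessageText unwraps nested message chains
function reportDiagnostics(diagnostics: readonly ts.Diagnostic[]) {
  for (const diagnostic of diagnostics) {
    const message = ts.flattenDiagnosticMessageText(diagnostic.messageText, '\n')
    if (diagnostic.file && diagnostic.start !== undefined) {
      const { line, character } =
        diagnostic.file.getLineAndCharacterOfPosition(diagnostic.start)
      console.log(`${diagnostic.file.fileName} (${line + 1},${character + 1}): ${message}`)
    } else {
      console.log(message)
    }
  }
}
```

For a full program, you would typically pass ts.getPreEmitDiagnostics(program).concat(emitResult.diagnostics) to a helper like this.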

If you are using ts-loader with webpack, this is actually much easier: the loader supports transforms as arguments, so transforms can be supplied as a simple configuration of the loader.

// webpack.config.js:
const alkaliTransformer = require('ts-transform-alkali').default
// ...
// exports.module.rules[]
{
  test: /\.ts$/,
  loader: 'ts-loader',
  options: {
    getCustomTransformers: () => ({
      before: [alkaliTransformer]
    }),
    transpileOnly: true
  }
}

Building the Transformer

Now that we have a starting point in place, we can start writing our own logic for our transformer. The normal strategy is to implement the visitor function to look for particular AST nodes of interest. We can write simple if statements or switch statements to test each node and see if it matches what we want to transform. The TypeScript API includes ts.isXxx functions to test for every different node type. For example, in our safely transform, we check each node to see if it is a function call, and if so, we can replace or update the nodes as desired:

const visitor: ts.Visitor = (node: ts.Node): ts.VisitResult<ts.Node> => {
  if (ts.isCallExpression(node) &&
      node.expression.getText(sf) === 'safely') {
    // we have found an expression of the form safely(...)
    // we can now replace or update it
    ...
  }
  // otherwise continue visiting all the nodes
  return ts.visitEachChild(node, visitor, ctx)
}

Once we have set up our visitor pattern, we are able to recurse into the AST and find the nodes we want to transform. In this case, we are looking for a call expression where the function being called is the identifier safely. Our if statement will match these nodes, and we can then perform our transformation.

The easiest way to perform a transformation is simply to return a new (replacement) AST node. The TypeScript module (imported here as ts) gives us a full API for creating nodes. Imagine in this case that we want to convert safely(a.b) to a && a.b. Rather than returning the provided node, which is a call expression, we are going to return a “binary” expression with the && operator. First, we will check to make sure the argument to safely is a property access expression; then we will create an expression with the && operator, using the object part of the property access as the left operand and the full expression as the right operand (evaluated at run-time only if the first part is truthy):

const visitor: ts.Visitor = (node: ts.Node): ts.VisitResult<ts.Node> => {
  if (ts.isCallExpression(node) &&
      node.expression.getText(sf) === 'safely') {
    // get the argument to safely
    const target = node.arguments[0]
    // check to make sure it is a property access, like "a.b"
    if (ts.isPropertyAccessExpression(target)) {
      // return a binary expression with a && a.b
      return ts.createBinary(
        target.expression, // the left-hand operand is the object
        ts.SyntaxKind.AmpersandAmpersandToken, // the && operator
        target) // the right-hand operand is the full expression
    }
  }
  // otherwise continue visiting all the nodes
  return ts.visitEachChild(node, visitor, ctx)
}
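The visitor above can be exercised standalone with ts.transform and a printer, with no compiler host required. A minimal sketch follows; note that in recent TypeScript versions the node factory lives on ts.factory (ts.factory.createBinaryExpression), where the 2.x-era API used ts.createBinary:

```typescript
import * as ts from 'typescript'

// the safely transform as a standalone transformer factory
const safelyTransform = (ctx: ts.TransformationContext): ts.Transformer<ts.SourceFile> => {
  const visitor: ts.Visitor = (node: ts.Node): ts.VisitResult<ts.Node> => {
    if (ts.isCallExpression(node) && node.expression.getText() === 'safely') {
      const target = node.arguments[0]
      if (ts.isPropertyAccessExpression(target)) {
        // replace safely(a.b) with a && a.b
        return ts.factory.createBinaryExpression(
          target.expression, ts.SyntaxKind.AmpersandAmpersandToken, target)
      }
    }
    return ts.visitEachChild(node, visitor, ctx)
  }
  return sf => ts.visitNode(sf, visitor) as ts.SourceFile
}

const sf = ts.createSourceFile(
  'test.ts', 'const v = safely(a.b)', ts.ScriptTarget.ES2015, true)
const result = ts.transform(sf, [safelyTransform])
const output = ts.createPrinter().printFile(result.transformed[0])
console.log(output) // const v = a && a.b;
```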

TypeScript provides ts.createXxx APIs for generating every type of node that you might need. And the great thing about developing your transforms in TypeScript is that no additional documentation is needed: the module is typed such that your IDE should give you auto-completion hints showing all the create methods you could ever need.

While looking for a target type of node and simply returning a new node is the most straightforward way to perform transformations, if you are modifying an existing node you can “update” it instead of replacing it with an entirely new node. TypeScript includes an entire set of update functions as well. By using an update function, additional source mapping information will be preserved. It is important to note that update doesn’t change the original node, but returns a new cloned node that retains information about the original code.

For example, if we wanted to leave the original safely function call in place, and only transform the arguments, we could write something like:

const visitor: ts.Visitor = (node: ts.Node): ts.VisitResult<ts.Node> => {
  if (ts.isCallExpression(node) &&
      node.expression.getText(sf) === 'safely') {
    // update the call, transforming only the arguments
    return ts.updateCall(node, // the original call node
      node.expression, // the function reference 'safely'
      undefined, // type arguments
      newParameters) // whatever transformation we want to do to the args
  }
  // otherwise continue visiting all the nodes
  return ts.visitEachChild(node, visitor, ctx)
}

It is worth remembering that in most situations, you will want to make sure you continue to recurse into any nodes that will be preserved in the transformation, so they can be processed in case they contain code that should be transformed as well (if the transformed syntax can be nested). For example, rather than just using node.expression to get the call expression, we may want to use ts.visitEachChild(node.expression, visitor, ctx) to process any nodes that may have been used in the source expression.

Adding Variables

It is not uncommon to create transformations that need to introduce additional variables into the output code. In the example we have been looking at, where we are creating “safe” access to a property, if we encounter an expression of the form expression.property, converting this to expression && expression.property is actually not correct. If the expression calls a non-idempotent function, the behavior will change because the function is now called twice. Ideally, we want to evaluate the expression once, storing it in a variable, so we can first use the result to check that it is truthy (or at least not null/undefined), and then use it a second time to access the property:

var temp // declared somewhere in our scope

...

((temp = expression) && temp.property) // converted expression

In order to do this we need to use TypeScript’s variable API to generate a new variable, giving it a unique name and properly hoisting the declaration to the current scope. Generally, it is advisable to provide a name prefix that will improve the clarity of the output code. Here we could write:

let varName = ts.createUniqueName('safe')

ctx.hoistVariableDeclaration(varName)

And varName is now an identifier node that can be used in our generated output expressions. We can create an assignment expression to assign a value to it:

ts.createBinary(varName, ts.SyntaxKind.EqualsToken, expression)
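Putting the unique name and the assignment together, the full replacement expression can be sketched with the factory API. Here buildSafeAccess is a hypothetical helper, shown with the modern ts.factory names; in a real transform you would also hoist the temporary with ctx.hoistVariableDeclaration as described above:

```typescript
import * as ts from 'typescript'

// build (temp = obj) && temp.property from a property access expression,
// so the object expression is evaluated only once
function buildSafeAccess(target: ts.PropertyAccessExpression): ts.Expression {
  const temp = ts.factory.createUniqueName('safe')
  // note: a real transform would call ctx.hoistVariableDeclaration(temp) here
  return ts.factory.createBinaryExpression(
    ts.factory.createParenthesizedExpression(
      ts.factory.createBinaryExpression(
        temp, ts.SyntaxKind.EqualsToken, target.expression)),
    ts.SyntaxKind.AmpersandAmpersandToken,
    ts.factory.createPropertyAccessExpression(temp, target.name))
}

// demonstrate on the property access "a.b"
const sf = ts.createSourceFile('t.ts', 'a.b', ts.ScriptTarget.ES2015, true)
const access = (sf.statements[0] as ts.ExpressionStatement).expression as ts.PropertyAccessExpression
const printed = ts.createPrinter().printNode(
  ts.EmitHint.Expression, buildSafeAccess(access), sf)
console.log(printed)
```

The printed expression has the shape (safe_N = a) && safe_N.b, with the printer generating a collision-free name for the temporary.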

Decorators

Decorators are an esoteric JavaScript syntax, still a stage 2 EcmaScript proposal at the time of writing, so why would we focus on this construct? Decorators are a great syntax for writing new or experimental language features precisely because they represent an extension point for the language. Other languages have taken a similar approach: decorators are frequently used for compile-time extensions just as easily as for run-time semantics. This is probably exactly why a standard with precise run-time semantics has been slow to emerge. But the semantics are clearly designed to allow decorators to be used in an extensible way, preserving existing semantics where applied and adding new “features”. Decorators are an unobtrusive syntax; they can be “inserted” such that the existing semantics of the code are preserved while additional information is added. Consequently, decorators are naturally suited to language extensions, and compile-time transforms in TypeScript are the perfect place to use them. Let’s take a look at how.

First, by default, TypeScript will give us warnings about decorators being “experimental”, so we need to declare our intention to use them in our compiler options. We can set that in our tsconfig.json:

{
  "compilerOptions": {
    "experimentalDecorators": true,
    ...

Next, we can look for the presence of a decorator on a node by simply checking for a decorators property and iterating through the array for a decorator of interest. In our reactive transform, we used a @reactive decorator to mark properties, classes, and expressions to be transformed into “reactive” expressions or properties that can be monitored and react to input changes. Syntactically, decorators can be applied to classes, properties, methods, parameters, and statements (although TypeScript will give an error on statements when emitting code, if they are not eliminated during transforms). Finding nodes with a specific decorator is straightforward: we add a check in our visitor function:

const visitor: ts.Visitor = (node: ts.Node): ts.VisitResult<ts.Node> => {
  if (node.decorators && node.decorators.some(decorator =>
      decorator.expression.getText(sf) === 'reactive')) {
    // found a node with a @reactive decorator
    // may want to remove the decorator now, so it is not called
    node.decorators = undefined
    // do transformation of node
    ...
  }
  return ts.visitEachChild(node, visitor, ctx)
}
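A caveat for newer compilers: TypeScript 4.8 moved decorators into the modifiers array, so current code should use ts.canHaveDecorators and ts.getDecorators rather than reading node.decorators directly. A sketch of the same check with that API:

```typescript
import * as ts from 'typescript'

// parse a class with a @reactive decorator
const sf = ts.createSourceFile(
  't.ts', '@reactive class Foo {}', ts.ScriptTarget.ES2015, true)
const cls = sf.statements[0] as ts.ClassDeclaration

// ts.getDecorators (4.8+) abstracts over where decorators are stored;
// older versions exposed node.decorators directly
const decorators = ts.canHaveDecorators(cls) ? ts.getDecorators(cls) : undefined
const hasReactive = !!decorators &&
  decorators.some(decorator => decorator.expression.getText(sf) === 'reactive')
console.log(hasReactive) // true
```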

Try it out!

Custom transformers open an enormous field of opportunities for expanding and improving the capabilities, aesthetics, safety, and performance of TypeScript applications.