Towards Faster Dart Analysis

Or, novel uses of Dart’s analysis server

A few days ago I was busy working on a Dart utility I maintain, tuneup — github.com/google/tuneup.dart. Tuneup is a Swiss-army knife of Dart tools, including project creation, statistics, and error checking. I was working on the static analysis portion of the tool, the ‘check’ command. Running ‘tuneup check’ will analyze the Dart code in your project and print out any warnings or errors it finds:

Checking project dartdoc...

[warning] Undefined class 'Futures' at bin/dartdoc.dart, line 17...

[warning] The getter 'foo' isn't defined for the class 'Function'...

[info] The value of the local variable 'referenceCount' isn't...

[info] Unused import at test/model_test.dart, line 7...

4 issues found; analyzed 36 source files in 12.2s.

This feature is powered by Dart’s package:analyzer. I need to periodically upgrade tuneup to bring in the latest version of the analyzer package, to keep it up to date with the latest analysis fixes and Dart language changes (lately, all the latest updates to Dart’s strong mode).

While planning to do this maintenance recently, I thought of an idea: what if I leverage Dart’s analysis server to do the analysis for tuneup? If tuneup delegates to the analysis server, then tuneup would always be up-to-date with the latest Dart SDK. Plus, the analysis server has recently had a re-write of its internals — a newly re-written analysis driver — with some significant performance improvements and reduced memory usage. It would be great for tuneup to be able to take advantage of this speed bump as well.

What’s the analysis server?

For a brief bit of background, the analysis server is a tool shipped with the Dart SDK that provides static analysis services to IDEs. It powers our tier 1 offering, the Dart plugin for IntelliJ, as well as community-supported IDE efforts like VS Code and Atom. It has a well-defined API, the analysis server protocol, which boils down to essentially starting the process and exchanging JSON messages with it over stdin and stdout.

Here’s a sample session in which we start the analysis server, ask it for its version, and send a shutdown command:

dart <sdk-path>/bin/snapshots/analysis_server.dart.snapshot

<-- {"event":"server.connected","params":{"version":"1.18.0","pid":7551,"sessionId":""}}
--> {"id":"1","method":"server.getVersion"}
<-- {"id":"1","result":{"version":"1.18.0"}}
--> {"id":"2","method":"server.shutdown"}
<-- {"id":"2"}

The whole sequence from start to finish takes 470ms on my machine. The analysis server is generally used to power IDEs, so using it to implement functionality in a command-line tool is a bit novel, but thanks to its fast start-up time and performant analysis, very feasible.

A light-weight wrapper library

In order to make working with the analysis server’s API a little easier, I decided to borrow a library I wrote for the Atom integration. That library was created by parsing the API specification and generating a Dart library, a lightweight wrapper around the analysis server’s API. It lifts communication with the analysis server from hand-writing JSON over stdio to programming against a set of classes and asynchronous method calls.

The code to start the analysis server, query it for its version, and shut it down, looks something like this:

Server client = await Server.createFromDefaults();

await client.server.onConnected.first;

VersionResult result = await client.server.getVersion();
print('version: ${result.version}');

await client.server.shutdown();

Making tuneup faster

The work to retrofit tuneup went something like this:

- remove about 200 lines of code dealing with setting up an analysis context, determining which files to analyze, locating and parsing .analysis_options files, sdk extension files, embedder.yaml files, and various bits of accumulated configuration metadata

- add about 20 lines of code to start the analysis server and listen for analysis error results (plus some additional code to collect, sort, filter, and pretty-print the results)
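The new server-based flow is not reproduced in full here, but a sketch of it might look like the following. The method names (`createFromDefaults`, `setSubscriptions`, `setAnalysisRoots`, `onErrors`, `onStatus`) follow the generated wrapper library described above, as published in the analysis_server_lib package; treat this as an approximation rather than the exact tuneup source:

```dart
import 'dart:io';

import 'package:analysis_server_lib/analysis_server_lib.dart';

main() async {
  // Start the analysis server from the SDK on the path and wait for it
  // to report that it's ready.
  Server client = await Server.createFromDefaults();
  await client.server.onConnected.first;

  // Subscribe to server status so we can tell when analysis completes.
  await client.server.setSubscriptions(['STATUS']);

  // Collect errors per file as the server reports them.
  Map<String, List<AnalysisError>> errorsPerFile = {};
  client.analysis.onErrors.listen((AnalysisErrors e) {
    errorsPerFile[e.file] = e.errors;
  });

  // Point the server at the current project.
  await client.analysis.setAnalysisRoots([Directory.current.path], []);

  // Wait until the server says it's done analyzing, then shut down.
  await client.server.onStatus.firstWhere(
      (ServerStatus status) =>
          status.analysis != null && !status.analysis.isAnalyzing);
  await client.server.shutdown();

  // Summarize (the real tool also sorts, filters, and pretty-prints).
  int count =
      errorsPerFile.values.fold(0, (sum, errors) => sum + errors.length);
  print('$count issues found.');
}
```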

For a sample project (dartdoc), the time for analysis dropped from 12.2s to 7.7s for the first analysis, and to 1.3s for subsequent analyses (once the new analysis driver had cached summaries of unchanged files). That’s a 9.4x speedup!

The results (with a few artificial analysis issues introduced into dartdoc for demonstration purposes):

# activate the older version of tuneup
pub global activate tuneup 0.2.6
tuneup check
> 4 issues found; analyzed 36 source files in 12.2s.

# activate the latest version
pub global activate tuneup
tuneup check
> 4 issues found; analyzed 36 source files in 7.7s.

tuneup check
> 4 issues found; analyzed 36 source files in 1.3s.

Making it pretty

Now that we have an easy-to-maintain, highly performant tuneup utility, I spent some time making the output a bit more aesthetic. I was able to use some new APIs from dart:io, in particular stdout.supportsAnsiEscapes. I used color codes in the output to highlight the issue text and to call out errors and warnings, and used some console re-writing to add a progress spinner during analysis.
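As a rough illustration (not the actual tuneup source), here is how stdout.supportsAnsiEscapes can gate the use of ANSI color codes and a carriage-return based spinner; the particular escape sequences and frame timing are my own choices:

```dart
import 'dart:async';
import 'dart:io';

// ANSI escape codes: yellow for warnings, red for errors, reset to clear.
const String yellow = '\u001b[33m';
const String red = '\u001b[31m';
const String reset = '\u001b[0m';

main() async {
  bool ansi = stdout.supportsAnsiEscapes;

  // A simple spinner: '\r' returns the cursor to the start of the line,
  // so each frame overwrites the previous one.
  const List<String> frames = ['-', '\\', '|', '/'];
  for (int i = 0; i < 12; i++) {
    if (ansi) stdout.write('\r${frames[i % frames.length]} analyzing...');
    await new Future.delayed(const Duration(milliseconds: 100));
  }
  if (ansi) stdout.write('\r'); // clear the spinner line

  // Only emit color codes when the terminal supports them.
  String tag(String severity, String color) =>
      ansi ? '$color[$severity]$reset' : '[$severity]';

  print("${tag('warning', yellow)} Undefined class 'Futures'...");
  print("${tag('info', red)} Unused import...");
}
```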

Addendum: another potential optimization…

With analysis times as short as 1.3 seconds, the majority of the time spent in ‘tuneup check’ is in just starting the analysis server. For a large Dart app, that time mostly goes towards the initial unoptimized JIT of Dart code.

We now have the ability to create trained JIT snapshots — snapshots which contain not just pre-parsed representations of Dart code, but actual cached versions of the JITed code. We use these pre-trained JIT snapshots in many of the Dart SDK tools in order to speed up their startup time. If we were to train the analysis server snapshot, it’s likely we’d see the startup time of the analysis server drop significantly.
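For reference, the Dart VM’s --snapshot and --snapshot-kind=app-jit flags are what produce such a trained snapshot: the VM runs the script once as a training run and saves the JIT-compiled code it generates alongside the usual pre-parsed representation. The invocation below is a sketch only; the server entry point path and the training workload are assumptions on my part, not a documented recipe:

```shell
# Training run: execute the analysis server once against a representative
# workload, then save the JIT-compiled code into the snapshot.
dart --snapshot=analysis_server.app-jit.snapshot \
     --snapshot-kind=app-jit \
     pkg/analysis_server/bin/server.dart   # assumed entry point in the SDK sources
```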