By Shawn McKay

In this post, Shawn McKay, a web developer & educator, examines web performance across the various front-end frameworks available for Meteor.

Meteor is increasingly becoming a less opinionated framework. The community has created more database choices, as well as different front-end integrations. But more choices can mean more confusion. In this article, we’ll help sort out that confusion by comparing the four most popular front-end options: Blaze, React, Angular 1 & Angular 2.

There’s a lot to cover, so I’ll break it down into a three-part series: first we’ll look at performance, then coding style & finally look back at performance after making further optimizations. It should help you understand more about performance testing, as well as the strengths and weaknesses of these four popular front-end frameworks.

Part 1: Performance

Performance can mean a lot of things. Let’s examine three key measurements:

1. Initial Script Loading Times
2. Rendering Script Times
3. Re-rendering Script Times on Changes

To do this, I’ve created an app, Waldo Finder; well, actually 4 apps, using four different frameworks. Each app uses the same Meteor.js backend, as the target here is only front-end choices. Waldo Finder renders a selected number of items & loops over them on a change. See how it works below:

Disclaimer: Performance is never exact; there are too many variables in play. Running & re-running a test may yield very different results, and any background processes on your computer may also affect them. Tests must be run multiple times with an average or median taken. No data can be exact. I strongly recommend you look at the code & try running the apps & tests for yourself. I hope you also share your results.

1. Initial Script Loading Times

Loading times refers to the time it takes to load the scripts when the page is opened. Addy Osmani has created an easy-to-use script for this called Timing.js.

To set up Timing.js: click on Sources, then Snippets, make a new snippet, paste in the code, save, then run your snippet and view the results in the console. firstPaintTime is a good measure: it refers to how long your app takes to start before the first items can be inserted into the DOM.
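As a rough illustration of what those console numbers mean, here is the kind of arithmetic Timing.js does over the standard window.performance.timing fields. The summarize helper and the mocked values are mine for illustration, not part of Timing.js itself:

```javascript
// window.performance.timing exposes Unix-epoch millisecond marks
// (navigationStart, responseStart, domInteractive, loadEventEnd, ...).
// Subtracting navigationStart from each gives the durations a user felt.
function summarize(timing) {
  var start = timing.navigationStart;
  return {
    // time until the server's first byte arrived
    firstByte: timing.responseStart - start,
    // time until the DOM was parsed and scripts could start running
    domInteractive: timing.domInteractive - start,
    // time until the load event finished
    loadTime: timing.loadEventEnd - start
  };
}

// Mocked values, as if captured from a real page load:
var result = summarize({
  navigationStart: 1000,
  responseStart: 1120,
  domInteractive: 1800,
  loadEventEnd: 2500
});
console.log(result); // { firstByte: 120, domInteractive: 800, loadTime: 1500 }
```

In the browser you would pass the live `window.performance.timing` object instead of the mock.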

I’ve posted my median results below.

However, these results aren’t necessarily fair or important for a number of reasons:

First of all, React & Angular 2 can both utilize server-side rendering. In other words, a pre-rendered HTML version can be delivered to the client, avoiding the few-second delay while the app’s scripts load.

Secondly, I’ve used an unminified development build of Angular 2 that loads a lot of unnecessary additional dependencies, such as the Angular 2 forms library. The final release version of Angular 2 is likely to be considerably lighter.

All of these examples are loading Blaze, as it is included in the Meteor package. If Blaze were removed, the playing field would be levelled considerably.

Rendering & Script Performance Method

First let’s look at how we can measure script times for painting items in the DOM.

CPU Profile

Triggered events can be easily viewed using the CPU profile tool in Chrome Dev Tools.

Look for the corresponding activity, then subtract the time the event started from the time it finished. If you control your mouse clicks, the script activity can be pretty easy to pinpoint.

If only there were a way to automate this kind of tedious work. And believe me, I spent hours recording and measuring activity in the profiler. Luckily, the Angular team has put together a performance testing automation tool called Benchpress.

Benchpress

Benchpress is built on top of an E2E tool called Protractor, which in turn drives the browser through Selenium. Basically, Protractor acts like a little bot that automatically opens a selected browser, clicks on your specified targets and waits for the results. Benchpress then gathers the event performance statistics.

Here are some instructions on how to set up Protractor.

I’ve used this config file, which runs this set of tests. The test loops over row counts (10, 100, 1000, 2000, 3000, 4000, 5000) and records & averages the rendering & script times for a particular action over an adjustable sample size (20 runs here).
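For orientation, a minimal Protractor config looks something like the sketch below. The field names follow Protractor’s documented config format and seleniumAddress is Protractor’s default local Selenium endpoint; the spec filename and timeout value are placeholders, not the article’s actual config:

```javascript
// protractor.config.js (sketch)
exports.config = {
  seleniumAddress: 'http://localhost:4444/wd/hub', // local Selenium server
  specs: ['perf.spec.js'],                         // the benchmark spec file
  capabilities: { browserName: 'chrome' },
  jasmineNodeOpts: {
    defaultTimeoutInterval: 120000                 // benchmarks can run long
  }
};
```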

Have a look at a commented example from the first test in the spec file.

```javascript
it('should load 10 rows', function () {
  // open the page
  browser.get('http://localhost:3000/');

  // run benchpress
  runner.sample({
    id: 'load-rows',
    prepare: function () {
      // clear out existing data by clicking "<button id='reset'>"
      return $('#reset').click();
    },
    execute: function () {
      // click on "<button id='count-100'>"
      $('#count-100').click();
      // click on "<button id='run'>" and record the resulting event
      return $('#run').click();
    }
  // success, error callbacks
  }).then(done, done.fail);
});
```

Protractor does the following:

1. opens the page in a new browser window
2. presses ‘reset’, clearing out the table, and waits for it to finish
3. clicks on a number of items (for example, ‘100’)
4. presses ‘run’
5. Benchpress collects the script running data and displays it in the terminal console

Try it for yourself. To run Benchpress, open three terminals:

1. run `webdriver-manager start` in one terminal
2. run your app in another terminal (`meteor`)
3. run `protractor path/to/config.js` in the third, which runs the tests

Results will be listed in a table, adjusted to fit your terminal window. These statistics include garbage collection numbers, rendering times & script run times. Of these, I’ve chosen script run times as the best measure, as it represents the time a user must wait for the data to be loaded on the page. Render times, on the other hand, are consistently low and negligible.

Note: You may want to expand the window or adjust the font to make your tables look cleaner. Otherwise the table rows overlap inconsistently.

Opportunity: Build a tool that takes Benchpress output and exports in spreadsheet & graph form
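In the meantime, here is a toy sketch of what such a tool could start as: turning hand-copied (items, mean script time) pairs from Benchpress’s terminal table into CSV you can paste into a spreadsheet. The helper and the sample numbers are illustrative placeholders, not real Benchpress output or measured results:

```javascript
// Convert benchmark samples into CSV, one row per measured item count.
function toCsv(samples) {
  var lines = ['items,scriptTimeMs'];
  samples.forEach(function (s) {
    lines.push(s.items + ',' + s.ms);
  });
  return lines.join('\n');
}

console.log(toCsv([
  { items: 10, ms: 12.4 },
  { items: 100, ms: 48.1 },
  { items: 1000, ms: 310.7 }
]));
```

A real version would parse Benchpress’s console table directly instead of taking hand-copied pairs.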

Again, performance testing isn’t exact; numbers can range wildly. It’s almost a necessity to use a tool like Benchpress that can run & average multiple samples with a margin of error.
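To see why, here is the kind of mean and margin-of-error calculation a tool like Benchpress performs across repeated samples. The stats helper and the sample numbers are illustrative, not Benchpress’s actual implementation:

```javascript
// Mean and standard error across repeated timing samples: individual
// readings bounce around, but the mean settles and the standard error
// tells you how much to trust it.
function stats(samples) {
  var n = samples.length;
  var mean = samples.reduce(function (a, b) { return a + b; }, 0) / n;
  var variance = samples.reduce(function (acc, x) {
    return acc + Math.pow(x - mean, 2);
  }, 0) / (n - 1);                       // sample variance
  var stderr = Math.sqrt(variance / n);  // standard error of the mean
  return { mean: mean, stderr: stderr };
}

// Ten noisy "script time" readings settle around their true value:
var result = stats([98, 105, 110, 95, 102, 99, 107, 101, 96, 104]);
console.log(result.mean.toFixed(1) + 'ms ± ' + result.stderr.toFixed(1));
```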

Before we look at the results, think about your expectations. Think about what you based expectations on. Finally, try to remember why they call it “computer-science”, and not “computer-gut-feeling”.

Disclaimer: I should also note that I’m most familiar with Angular. I run a site for sharing Angular 2 resources, and I recently made the tutorial for Angular Meteor 2. I hope this hasn’t skewed the data, but I am human. If you can find a way to make any of the frameworks run faster, I will re-run the tests. Leave a comment, make a pull request, or post an issue in the repo, which I’ll link to later in the article.

2. Rendering Script Times

I’ve timed how long it takes to display different lists of items. That is, the script running time from when you press “run” to when the number of rows are added to the DOM. See the results below:

[Chart: Generate Rows — script times in ms vs. number of items, for Blaze 2.1.2, React 0.13.0, Angular 1.4.2 and Angular 2.0.0-alpha.32]

Keep in mind, if your app is rendering 50,000 items in the browser window at one time, you probably have bigger worries than which framework you’re using. On the other hand, consider how these numbers might play out on a machine with less processing power, such as the device in your pocket.

3. Re-rendering Script Times on Changes

Re-rendering script times refers to the time it takes scripts to run in order to repaint changes over a data set; in this case, the time between pressing the ‘Find Waldos’ button and when all the Waldos have turned red.

[Chart: Changes — script times in ms vs. number of items, for Blaze 2.1.2, React 0.13.0, Angular 1.4.2 and Angular 2.0.0-alpha.32]

There are no clear winners here among the production-ready frameworks. Angular 1 performs well, but drops off drastically at around 20,000 items, while Blaze & React degrade more consistently.

Call To Action

Don’t take my word for it. Run the tests for yourself: `git clone` this repo and follow the instructions.

Find a way to improve a framework’s performance, post an issue in the repo, make a pull request, or simply leave a comment below. I’d like the tests to be as fair a measure as possible.

In Part 3, I’ll review these recommended performance improvements and see their impact. If the data stands, great. If anything, I hope I’m wrong about a lot of these things. I’d consider it an opportunity to learn something and share the findings with others.

Conclusion

Performance is important, but it’s not the most important thing about the framework you choose. A slow site can lose customers, but each of these front-end frameworks is likely performant enough to fit your needs. The real underlying choice of framework is a matter of the community around it & the coding style you’re comfortable with. It should be easy to learn & reason about, with plenty of great tooling.

But keep in mind, expectations on the web are growing. Performance limits are the canvas an artist must paint within. The framework you choose should also be advancing toward a strong future.

To learn more about framework integration with Meteor, see some side-by-side comparative tutorials:

I’m currently working on the Meteor Angular 2 tutorial listed above. Try it out and give some feedback.

In Part 2 we’ll look at the differences in coding style.

Find me on twitter @Sh_McK or visit my blog.