Yesterday's patch releases contain some exciting performance improvements! In today's post, Core Team Member Jeremy Danyow shares a bit about how he's worked to improve repaint scenarios.

There's been a huge focus on performance in the months leading up to the beta-1 release. Back in June, Rob put together an optimization plan and a benchmarking plan. These plans laid out the strategy for optimizing core framework components like the dependency-injection container, the binding engine, and the templating engine.

The primary focus of these optimizations has been reducing object allocations, array allocations, and closures to decrease memory pressure and improve garbage-collection efficiency. In the binding system, this meant getting rid of things like the array of callbacks a property observer used to notify its subscribers when properties changed. In fact, the binding system no longer uses callbacks, closures, or arrays in any of its critical paths. If you want to learn more about the binding engine and the techniques used to optimize it, tune in to readthesource.io on December 10th.
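To make the idea concrete, here's a simplified sketch of how an observer can store subscribers in a few fixed slots instead of allocating an array of callbacks. This is an illustration of the allocation-avoidance technique only; the names and slot count are hypothetical, not Aurelia's actual implementation.

```javascript
// Illustrative sketch (hypothetical API, not Aurelia's internals): a property
// observer with fixed subscriber slots. Most properties have only a handful
// of subscribers, so a few slots cover the common case with zero array
// allocations and no closures on the notify path.
class PropertyObserver {
  constructor() {
    this.subscriber0 = null;
    this.subscriber1 = null;
    this.subscriber2 = null;
  }

  subscribe(subscriber) {
    if (this.subscriber0 === null) { this.subscriber0 = subscriber; return; }
    if (this.subscriber1 === null) { this.subscriber1 = subscriber; return; }
    if (this.subscriber2 === null) { this.subscriber2 = subscriber; return; }
    // A real implementation would fall back to an array for the rare
    // property with many subscribers; omitted here for brevity.
    throw new Error('subscriber slots exhausted');
  }

  // Notify without allocating: no array iteration, no closures created.
  notify(newValue, oldValue) {
    if (this.subscriber0 !== null) this.subscriber0.call(newValue, oldValue);
    if (this.subscriber1 !== null) this.subscriber1.call(newValue, oldValue);
    if (this.subscriber2 !== null) this.subscriber2.call(newValue, oldValue);
  }
}
```

Note that subscribers here are objects with a `call` method rather than bare functions, so the observer never needs to allocate a bound function or closure per subscription.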

## repeat

With the core binding optimizations in place, we've been able to add powerful new features without impacting performance. We've also started performance-tuning higher-level parts of the framework, such as the repeat template controller. In case you're not familiar with repeat, it's a custom attribute shipped with Aurelia that enables "repeating" a template over a collection, similar to Knockout's foreach and Angular's ng-repeat.

This round of performance tuning focused on optimizing the repeat's handling of collection changes. We looked at what the repeat does when the array it's bound to is replaced with a new array, as well as what happens when the array is mutated via push/pop/splice/etc. At a high level, handling new items involves creating new view instances for the items, invoking the created, bind and attached composition lifecycle callbacks, inserting the new DOM nodes, and running the animation if necessary. Removing items from a collection triggers the reverse: the detached and unbind lifecycle hooks are called, and the view is animated out and removed from the DOM.

Usually these steps aren't a huge performance bottleneck, especially if you've enabled view-caching, which allows Aurelia to save removed DOM nodes and reuse them when new items are added. The logic can become a bottleneck, however, when you rapidly replace the array with a new instance over and over. One could argue that if you're doing something like this you have bigger problems in terms of memory use and UI design, but it's still an interesting use-case to optimize because it's exactly what the de-facto standard for testing a framework's rendering performance exercises: dbmonster.
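The lifecycle sequence described above can be sketched as follows. This is an illustrative model with hypothetical names (`FakeView`, `addItem`, `removeItem`), not Aurelia's internals; it just shows the order in which the composition lifecycle hooks fire when items enter and leave the collection.

```javascript
// Illustrative sketch of the repeat's high-level steps for collection
// changes. Views log each lifecycle hook so the ordering is visible.
class FakeView {
  constructor(item) { this.item = item; this.log = []; }
  created()  { this.log.push('created'); }
  bind()     { this.log.push('bind'); }
  attached() { this.log.push('attached'); }
  detached() { this.log.push('detached'); }
  unbind()   { this.log.push('unbind'); }
}

function addItem(views, item) {
  const view = new FakeView(item); // create a new view instance for the item
  view.created();                  // composition lifecycle: created...
  view.bind();                     // ...then bind
  views.push(view);                // insert the new DOM nodes (simulated)
  view.attached();                 // ...then attached (animation would run here)
  return view;
}

function removeItem(views, index) {
  const [view] = views.splice(index, 1); // animate out / remove DOM (simulated)
  view.detached();                       // the reverse: detached...
  view.unbind();                         // ...then unbind
  return view;
}
```

With view-caching enabled, the remove step would park the view's DOM nodes for reuse instead of discarding them, which is why the replace-the-whole-array scenario is where most of the cost shows up.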

## dbmonster

dbmonster is a rendering benchmark that was popularized by Ryan Florence's 2015 react.js conf talk, in which he demos three dbmonster implementations using Ember, Angular and React. dbmonster involves rendering a two-dimensional array of fake database monitoring data and continually replacing that array to demonstrate a framework's "repaint performance". Mathieu Ancelin has put together a handy site that aggregates the dbmonster implementations of popular frameworks. There you can compare the dbmonster performance of react, angular 1, angular 2 and many others.

Here's what the dbmonster demos look like:

When looking at these demos, here are some things to keep an eye on:

- Smooth Scrolling: you should be able to scroll the page up and down without jankiness.
- Popup Tracking: when moving the mouse over the grid, the popup should follow and update without delay.
- Repaint Rate: at the bottom there's an indicator for repaint rate and memory usage. Repaint rate represents how often a new set of dbmonster data is being rendered. The higher the number, the better.
- Memory: look for a sawtooth pattern that doesn't continue to climb. The code that generates the dbmonster data contributes to memory usage and GC activity, so expect to see elevated memory usage at higher repaint rates.
- Mutations Slider: at the top of each demo there's a slider that controls the variability of the data. More variability equates to more DOM updates, and vice versa. When the variability is at 1% (very low) you should see an extremely high repaint rate because there aren't many DOM updates to do. If the repaint rate doesn't climb as the mutation rate is decreased, the framework isn't efficient at tracking changes or identifying when to update the DOM.

Note: these demos are not a precise measure of repaint rate. Many factors can impact performance, such as other open browser tabs.
For best results, run these demos in Chrome with the following command:

```
"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --user-data-dir="C:\chrome\dev-sessions\perf" --enable-precise-memory-info --enable-benchmarking --js-flags="--expose-gc"
```