This free book is what I wanted when I started working with single page apps. It's not an API reference on a particular framework; rather, the focus is on discussing patterns, implementation choices and decent practices.

I'm taking a "code and concepts" approach to the topic - the best way to learn how to use something is to understand how it is implemented. My ambition here is to decompose the problem of writing a web app, take a fresh look at it, and hopefully help you make better decisions the next time you build one.

Update: the book is now also on Github.

Introduction

Writing maintainable code

Implementation alternatives: a look at the options

Meditations on Models & Collections

Views - templating, behavior and event consumption

Why do we want to write single page apps? The main reason is that they allow us to offer a more-native-app-like experience to the user.

This is hard to do with other approaches. Supporting rich interactions with multiple components on a page means that those components have many more intermediate states (e.g. menu open, menu item X selected, menu item Y selected, menu item clicked). Server-side rendering is hard to implement for all the intermediate states - small view states do not map well to URLs.

Single page apps are distinguished by their ability to redraw any part of the UI without requiring a server roundtrip to retrieve HTML. This is achieved by separating the data from the presentation of data by having a model layer that handles data and a view layer that reads from the models.

Most projects start with high ambitions, and an imperfect understanding of the problem at hand. Our implementations tend to outpace our understanding. It is possible to write code without understanding the problem fully; that code is just more complex than it needs to be because of our lack of understanding.

Good code comes from solving the same problem multiple times, or refactoring. Usually, this proceeds by noticing recurring patterns and replacing them with a mechanism that does the same thing in a consistent way - replacing a lot of "case-specific" code, which in fact was just there because we didn't see that a simpler mechanism could achieve the same thing.

The architectures used in single page apps represent the result of this process: where you would do things in an ad-hoc way using jQuery, you now write code that takes advantage of standard mechanisms (e.g. for UI updates etc.).

Programmers are obsessed with ease rather than simplicity (thank you Rich Hickey for making this point); or, what the experience of programming is instead of what the resulting program is like. This leads to useless conversations about semicolons and whether we need a preprocessor that eliminates curly braces. We still talk about programming as if typing in the code was the hard part. It's not - the hard part is maintaining the code.

To write maintainable code, we need to keep things simple. This is a constant struggle; it is easy to add complexity (intertwinedness/dependencies) in order to solve a worthless problem; and it is easy to solve a problem in a way that doesn't reduce complexity. Namespaces are an example of the latter.

With that in mind, let's look at how a modern web app is structured from three different perspectives:

Architecture: what (conceptual) parts does our app consist of? How do the different parts communicate with each other? How do they depend on each other?

Asset packaging: how is our app structured into files and files into logical modules? How are these modules built and loaded into the browser? How can the modules be loaded for unit testing?

Run-time state: when loaded into the browser, what parts of the app are in memory? How do we perform transitions between states and gain visibility into the current state for troubleshooting?

A modern web application architecture

Modern single page apps are generally structured as follows:

More specifically:

Write-only DOM. No state / data is read from the DOM. The application outputs HTML and operations on elements, but nothing is ever read from the DOM. Storing state in the DOM gets hard to manage very quickly: it is much better to have one place where the data lives and to render the UI from the data, particularly when the same data has to be shown in multiple places in the UI.

Models as the single source of truth. Instead of storing data in the DOM or in random objects, there is a set of in-memory models which represent all of the state/data in the application.

Views observe model changes. We want the views to reflect the content of the models. When multiple views depend on a single model (e.g. when a model changes, redraw these views), we don't want to manually keep track of each dependent view. Instead of manually tracking things, there is a change event system through which views receive change notifications from models and handle redrawing themselves.
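
As a sketch of that idea (the names Model, on and set are illustrative, not the API of any particular framework), a minimal change-event system might look like this:

```javascript
// Minimal observable model: views subscribe to change events instead of
// being tracked manually by whoever mutates the model.
function Model(attributes) {
  this.attributes = attributes || {};
  this.listeners = [];
}

// Subscribe a callback to change notifications.
Model.prototype.on = function(callback) {
  this.listeners.push(callback);
};

// Mutate the model and notify every subscriber; the model does not
// know or care what the subscribers do (e.g. redraw themselves).
Model.prototype.set = function(key, value) {
  this.attributes[key] = value;
  this.listeners.forEach(function(listener) {
    listener(key, value);
  });
};

// Two hypothetical "views" observe the same model.
var todo = new Model({ title: 'Write book' });
var rendered = [];
todo.on(function(key, value) { rendered.push('list view: ' + key + '=' + value); });
todo.on(function(key, value) { rendered.push('detail view: ' + key + '=' + value); });

todo.set('done', true);
console.log(rendered);
```

Both views redraw in response to a single set() call, without the calling code having to track either of them.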

Decoupled modules that expose small external surfaces. Instead of making things global, we should try to create small subsystems that are not interdependent. Dependencies make code hard to set up for testing. Small external surfaces make refactoring internals easy, since most things can change as long as the external interface remains the same.

Minimizing DOM-dependent code. Why? Any code that depends on the DOM needs to be tested for cross-browser compatibility. By writing code in a way that isolates those nasty parts, a much more limited surface area needs to be tested for cross-browser compatibility. Cross-browser incompatibilities are a lot more manageable this way. Incompatibilities are in the DOM implementations, not in the Javascript implementations, so it makes sense to minimize and isolate DOM-dependent code.

Controllers must die

There is a reason why I didn't use the word "Controller" in the diagram further above. I don't like that word, so you won't see it used much in this book. My reason is simple: it is just a placeholder that we've carried into the single page app world from having written too many "MVC" server-side apps.

Most current single page application frameworks still use the term "Controller", but I find that it has no meaning beyond "put glue code here". As seen in a presentation:

"Controllers deal with adding and responding to DOM events, rendering templates and keeping views and models in sync".

WAT? Maybe we should look at those problems separately?

Single page apps need a better word, because they have more complex state transitions than a server-side app:

there are DOM events that cause small state changes in views

there are model events when model values are changed

there are application state changes that cause views to be swapped

there are global state changes, like going offline in a real time app

there are delayed results from AJAX that get returned at some point from backend operations

These are all things that need to be glued together somehow, and the word "Controller" is sadly deficient in describing the coordinator for all these things.

We clearly need a model to hold data and a view to deal with UI changes, but the glue layer consists of several independent problems. Knowing that a framework has a controller tells you nothing about how it solves those problems, so I hope to encourage people to use more specific terms.

That's why this book doesn't have a chapter on controllers; however, I do tackle each of those problems as I go through the view layer and the model layer. The solutions used each have their own terms, such as event bindings, change events, initializers and so on.

Asset packaging (or more descriptively, packaging code for the browser)

Asset packaging is where you take your JS application code and create one or more files (packages) that can be loaded by the browser via script tags.

Nobody seems to emphasize how crucial it is to get this right! Asset packaging is not about speeding up your loading time - it is about making your application modular and making sure that it does not devolve into an untestable mess. Yet people think it is about performance and hence optional.

If there is one part that influences how testable and how refactorable your code is, it is how well you split your code into modules and enforce a modular structure. And that's what "asset packaging" is about: dividing things into modules and making sure that the run-time state does not devolve into a mess. Compare the approaches below:

Messy and random (no modules):

Every piece of code is made global by default

Names are global

Fully traversable namespaces

Load order matters, because anything can overwrite or change anything else

Implicit dependencies on anything global

Files and modules do not have any meaningful connection

Only runnable in a browser because dependencies are not isolated

Packages and modules (modular):

Packages expose a single public interface

Names are local to the package

Implementation details are inaccessible outside the package

Load order does not matter thanks to packaging

Explicitly declared dependencies

Each file exposes one module

Runnable from the command line with a headless browser

The default ("throw each JS file into the global namespace and hope that the result works") is terrible, because it makes unit testing - and by extension, refactoring - hard. This is because bad modularization leads to dependencies on global state and global names which make setting up tests hard.

In addition, implicit dependencies make it very hard to know which modules depend on whatever code you are refactoring; you basically rely on other people following good practices (don't depend on things I consider internal details) consistently. Explicit dependencies enforce a public interface, which means that refactoring becomes much less of a pain since others can only depend on what you expose. It also encourages thinking about the public interface more. The details of how to do this are in the chapters on maintainability and modularity.

Run-time state

The third way to look at a modern single page application is to look at its run-time state. Run-time state refers to what the app looks like when it is running in your browser - things like which variables contain what information and what steps are involved in moving from one activity (e.g. page) to another.

There are three interesting relationships here:

URL <-> state

Single page applications have a schizophrenic relationship with URLs. On the one hand, single page apps exist so that the users can have richer interactions with the application. Richer activities mean that there is more view state than can reasonably fit inside a URL. On the other hand, we'd also like to be able to bookmark a URL and jump back to the same activity.

In order to support bookmarks, we probably need to reduce the level of detail that we support in URLs somewhat. If each page has one primary activity (which is represented in some level of detail in the URL), then each page can be restored from a bookmark to a sufficient degree. The secondary activities (like say, a chat within a webmail application) get reset to the default state on reload, since storing them in a bookmarkable URL is pointless.
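
To make this concrete, here is a sketch of restoring state from a URL fragment; parseState, activity and chatOpen are names invented for this example, not part of any framework:

```javascript
// Restore the primary activity from a URL fragment; secondary state
// (like an open chat panel) is not encoded in the URL, so it always
// resets to its default on reload.
function parseState(fragment) {
  var parts = fragment.replace(/^#\//, '').split('/');
  return {
    activity: parts[0] || 'inbox',  // primary activity, bookmarkable
    id: parts[1] || null,           // detail of the primary activity
    chatOpen: false                 // secondary state: default on reload
  };
}

console.log(parseState('#/mail/123'));
```

The URL only needs to carry enough detail to restore the primary activity; everything else starts from its default.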

Definition <-> initialization

Some people still mix these two together, which is a bad idea. Reusable components should be defined without actually being instantiated/activated, because that allows for reuse and for testing. But once we do that, how do we actually perform the initialization/instantiation of various app states?

I think there are three general approaches: the first is to have a small function for each module that takes some inputs (e.g. IDs) and instantiates the appropriate views and objects. The second is to have a global bootstrap file followed by a router that loads the correct state from among the global states. The third is to wrap everything in sugar that makes instantiation order invisible.

I like the first one; the second one is mostly seen in apps that have organically grown to a point where things start being entangled; the third one is seen in some frameworks, particularly with regards to the view layer.

The reason I like the first one is that I consider state (e.g. instances of objects and variables) to be disgusting and worth isolating in one file (per subsystem - state should be local, not global, but more on that later). Pure data is simple, and so are definitions. It is when we have a lot of interdependent and/or hard-to-see state that things become complicated: hard to reason about and generally unpleasant.

The other benefit of the first approach is that it doesn't require loading the full application on each page reload. Since each activity is initializable on its own, you can test a single part of the app without loading the full app. Similarly, you have more flexibility in preloading the rest of the app after the initial view is active (vs. at the beginning); this also means that the initial loading time won't increase proportionately to the number of modules your app has.

HTML elements <-> view objects and HTML events <-> view changes

Finally, there is the question of how much visibility we can gain into the run time state of the framework we are using. I haven't seen frameworks address this explicitly (though of course there are tricks): when I am running my application, how can I tell what's going on by selecting a particular HTML element? And when I look at a particular HTML element, how can I tell what will happen when I click it or perform some other action?

Simpler implementations generally fare better, since the distance from an HTML element/event to your view object / event handler is much shorter. I am hoping that frameworks will pay more attention to surfacing this information.

This is just the beginning

So, here we have it: three perspectives - one from the point of view of the architect, one from the view of the filesystem, and finally one from the perspective of the browser.

2. Maintainability depends on modularity: Stop using namespaces!

Modularity is at the core of everything. Initially I had approached this very differently, but it turned out after ~20 drafts that nothing else is as important as getting modularization right.

Good modularization makes building and packaging for the browser easy, it makes testing easier and it defines how maintainable the code is. It is the linchpin that makes it possible to write testable, packageable and maintainable code.

What is maintainable code?

it is easy to understand and troubleshoot

it is easy to test

it is easy to refactor

What is hard-to-maintain code?

it has many dependencies, making it hard to understand and hard to test independently of the whole

it accesses data from and writes data to the global scope, which makes it hard to consistently set up the same state for testing

it has side-effects, which means that it cannot be instantiated easily/repeatably in a test

it exposes a large external surface and doesn't hide its implementation details, which makes it hard to refactor without breaking many other components that depend on that public interface

If you think about it, these statements are either directly about modularizing code properly, or are influenced by the way in which code is divided into distinct modules.

What is modular code?

Modular code is code which is separated into independent modules. The idea is that internal details of individual modules should be hidden behind a public interface, making each module easier to understand, test and refactor independently of others.

Modularity is not just about code organization. You can have code that looks modular, but isn't. You can arrange your code in multiple modules and have namespaces, but that code can still expose its private details and have complex interdependencies through expectations about other parts of the code.

Compare the two cases above. In the case on the left, the blue module knows specifically about the orange module. It might refer to the other module directly via a global name; it might use the internal functions of the other module that are carelessly exposed. In any case, if that specific module is not there, it will break.

In the case on the right, each module just knows about a public interface and nothing else about the other module. The blue module can use any other module that implements the same interface; more importantly, as long as the public interface remains consistent the orange module can change internally and can be replaced with a different implementation, such as a mock object for testing.

The problem with namespaces

The browser does not have a module system other than that it is capable of loading files containing Javascript. Everything in the root scope of those files is injected directly into the global scope under the window variable in the same order the script tags were specified.

When people talk about "modular Javascript", what they often refer to is using namespaces. This is basically the approach where you pick a prefix like "window.MyApp" and assign everything underneath it, with the idea that when every object has its own global name, we have achieved modularity. Namespaces do create hierarchies, but they suffer from two problems:

Choices about privacy have to be made on a global basis. In a namespace-only system, you can have private variables and functions, but choices about privacy have to be made on a global basis within a single source file. Either you expose something in the global namespace, or you don't.

This does not provide enough control; with namespaces you cannot expose some detail to "related"/"friendly" users (e.g. within the same subsystem) without making that code globally accessible via the namespace.

This leads to coupling through the globally accessible names. If you expose a detail, you have no control over whether some other piece of code can access and start depending on something you meant to make visible only to a limited subsystem.

We should be able to expose details to related code without exposing that code globally. Hiding details from unrelated modules is useful because it makes it possible to modify the implementation details without breaking dependent code.

Modules become dependent on global state. The other problem with namespaces is that they do not provide any protection from global state. Global namespaces tend to lead to sloppy thinking: since you only have blunt control over visibility, it's easy to fall into the mode where you just add or modify things in the global scope (or a namespace under it).

One of the major causes of complexity is writing code that has remote inputs (e.g. things referred to by global name that are defined and set up elsewhere) or global effects (e.g. where the order in which a module was included affects other modules because it alters global variables). Code written using globals can have a different result depending on what is in the global scope (e.g. window.*).

Modules shouldn't add things to the global scope. Locally scoped data is easier to understand, change and test than globally scoped data. If things need to be put in the global scope, that code should be isolated and become a part of an initialization step. Namespaces don't provide a way to do this; in fact, they actively encourage you to change the global state and inject things into it whenever you want.

Examples of bad practices

The examples below illustrate some bad practices.

Do not leak global variables

Avoid adding variables to the global scope if you don't need to. The snippet below will implicitly add a global variable.

var foo = 'bar';

To prevent variables from becoming global, always write your code in a closure/anonymous function - or have a build system that does this for you:

;(function() {
  var foo = 'bar';
}());

If you actually want to register a global variable, then you should make it a big thing and only do it in one specific place in your code. This isolates instantiation from definition, and forces you to look at your ugly state initialization instead of hiding it in multiple places (where it can have surprising impacts):

function initialize(win) {
  win.foo = 'bar';
}

In the function above, the variable is explicitly assigned to the win object passed to it. The reason this is a function is that modules should not have side-effects when loaded. We can defer calling the initialize function until we really want to inject things into the global scope.

Do not expose implementation details

Details that are not relevant to the users of the module should be hidden. Don't just blindly assign everything into a namespace. Otherwise anyone refactoring your code will have to treat the full set of functions as the public interface until proven differently (the "change and pray" method of refactoring).

Don't define two things (or, oh, horror, more than two things!) in the same file, no matter how convenient it is for you right now. Each file should define and export just one thing.

;(function() {
  // BAD: defines two things in one file and exposes an internal helper
  window.FooMachine = {};

  FooMachine.processBar = function() { /* ... */ };
  FooMachine.doFoo = function(bar) {
    FooMachine.processBar(bar);
  };

  window.BarMachine = { /* ... */ };
})();

The code below does it properly: the internal " processBar " function is local to the scope, so it cannot be accessed outside. It also only exports one thing, the current module.

;(function() {
  var FooMachine = {};

  // processBar is local to this closure and cannot be accessed outside
  function processBar() { /* ... */ }

  FooMachine.doFoo = function(bar) {
    processBar(bar);
  };

  return FooMachine;
})();

A common pattern for classes (e.g. objects instantiated from a prototype) is to simply mark class methods as private by starting them with an underscore. You can properly hide class methods by using .call/.apply to set "this", but I won't show it here; it's a minor detail.

Do not mix definition and instantiation/initialization

Your code should differentiate between definition and instantiation/initialization. Combining these two together often leads to problems for testing and reusing components.

Don't do this:

function FooObserver() { }

var f = new FooObserver();
f.observe('window.Foo.Bar');

module.exports = FooObserver;

While this is a proper module (I'm excluding the wrapper here), it mixes initialization with definition. What you should do instead is have two parts, one responsible for definition, and the other performing the initialization for this particular use case. E.g. foo_observer.js

function FooObserver() { }

module.exports = FooObserver;

and bootstrap.js :

module.exports = {
  initialize: function(win) {
    win.Foo.Bar = new Baz();
    var f = new FooObserver();
    f.observe('window.Foo.Bar');
  }
};

Now, FooObserver can be instantiated/initialized separately since we are not forced to initialize it immediately. Even if the only production use case for FooObserver is that it is attached to window.Foo.Bar, this is still useful because setting up tests can be done with a different configuration.
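
For instance, a test can wire the observer to a stub instead of the production global; the FooObserver body below is a made-up stand-in for whatever the real definition would do:

```javascript
// Definition only: no side effects at load time (as in foo_observer.js).
function FooObserver() {
  this.observed = [];
}
FooObserver.prototype.observe = function(target) {
  this.observed.push(target);
};

// In a test, skip bootstrap.js entirely and pass in a stub target
// instead of the real window.Foo.Bar.
var stub = { name: 'fake Foo.Bar for testing' };
var f = new FooObserver();
f.observe(stub);

console.log(f.observed[0].name);
```

Because definition and initialization are separate files, the test never has to set up (or clean up) any global state.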

Do not modify objects you don't own

While the other examples are about preventing other code from causing problems with your code, this one is about preventing your code from causing problems for other code.

Many frameworks offer a reopen function that allows you to modify the definition of a previously defined object prototype (e.g. class). Don't do this in your modules, unless the same code defined that object (and then, you should just put it in the definition).

If you think class inheritance is a solution to your problem, think harder. In most cases, you can find a better solution by preferring composition over inheritance: expose an interface that someone can use, or emit events that can have custom handlers rather than forcing people to extend a type. There are limited cases where inheritance is useful, but those are mostly limited to frameworks.

;(function() {
  // modifying an object defined elsewhere
  window.Bar.reopen({
    // ...
  });
  // extending a built-in
  String.prototype.dasherize = function() {
    // ...
  };
})();

If you write a framework, for f*ck's sake do not modify built-in objects like String by adding new functions to them. Yes, you can save a few characters (e.g. _(str).dasherize() vs. str.dasherize() ), but this is basically the same thing as making your special snowflake framework a global dependency. Play nice with everyone else and be respectful: put those special functions in a utility library instead.

Building modules and packages using CommonJS

Now that we've covered a few common bad practices, let's look at the positive side: how can we implement modules and packages for our single page application?

We want to solve three problems:

Privacy: we want more granular privacy than just global or local to the current closure.

Avoid putting things in the global namespace just so they can be accessed.

We should be able to create packages that encompass multiple files and directories and be able to wrap full subsystems into a single closure.

CommonJS modules

CommonJS is the module format that Node.js uses natively. A CommonJS module is simply a piece of JS code that does two things:

it uses require() statements to include dependencies

it assigns to the exports variable to export a single public interface

Here is a simple example foo.js :

var Model = require('./lib/model.js');

function Foo() { }

module.exports = Foo;

What about that var Model statement there? Isn't that in the global scope? No, there is no global scope here. Each module has its own scope. This is like having each module implicitly wrapped in an anonymous function (which means that variables defined in the module are local to it).
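
To see why nothing leaks, here is a rough simulation of that implicit wrapping (runModule is a made-up name; Node's real loader does more, but the scoping idea is the same):

```javascript
// Simulate how a module loader wraps each file: the module body runs
// inside a function, so its var declarations are local to that function.
function runModule(source) {
  var module = { exports: {} };
  var wrapped = new Function('module', 'exports', source);
  wrapped(module, module.exports);
  return module.exports;
}

var exported = runModule(
  "var Model = 'local to the module'; " +
  "module.exports = function() { return Model; };"
);

console.log(typeof Model); // nothing leaked into this scope
console.log(exported());   // but the export still sees its own Model
```

The var inside the module body lives in the wrapper function's scope, which is exactly why two modules can both declare var Model without colliding.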

OK, what about requiring jQuery or some other library? There are basically two ways to require a file: either by specifying a file path (like ./lib/model.js) or by requiring it by name: var $ = require('jquery');. Items required by file path are located directly by their name in the file system. Things required by name are "packages" and are searched by the require mechanism. In the case of Node, it uses a simple directory search; in the browser, well, we can define bindings as you will see later.

What are the benefits?

Isn't this the same thing as just wrapping everything in a closure, which you might already be doing? No, not by a long shot.

It does not accidentally modify global state, and it only exports one thing. Each CommonJS module executes in its own execution context. Variables are local to the module, not global. You can only export one object per module.

Dependencies are easy to locate, without being modifiable or accessible in the global scope. Ever been confused about where a particular function comes from, or what the dependencies of a particular piece of code are? Not anymore: dependencies have to be explicitly declared, and locating a piece of code just means looking at the file path in the require statement. There are no implied global variables.

But isn't declaring dependencies redundant and not DRY? Yes, it's not as easy as using global variables implicitly by referring to variables defined under window . But the easiest way isn't always the best choice architecturally; typing is easy, maintenance is hard.

The module does not give itself a name. Each module is anonymous. A module exports a class or a set of functions, but it does not specify what the export should be called. This means that whoever uses the module can give it a local name and does not need to depend on it existing in a particular namespace.

You know those maddening version conflicts that occur when include()ing a module registers it in the environment under its inherent name, so that each name may exist only once and you can't have two versions of the same module in different parts of your system? CommonJS doesn't suffer from those, because require() just returns the module and you give it a local name by assigning it to a variable.
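
A toy illustration of that property (the registry object simulates require()'s lookup table, and the module names are invented):

```javascript
// Simulated lookup table, standing in for require()'s search mechanism.
var registry = {
  'renderer-v1': { render: function() { return 'v1 output'; } },
  'renderer-v2': { render: function() { return 'v2 output'; } }
};

function fakeRequire(name) {
  return registry[name];
}

// Each consumer assigns its own local name; the module never claims a
// global name, so both versions coexist in the same environment.
var oldRenderer = fakeRequire('renderer-v1');
var newRenderer = fakeRequire('renderer-v2');

console.log(oldRenderer.render(), newRenderer.render());
```

Since the return value is just assigned to a variable, nothing forces the two versions to fight over a single name.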

It comes with a distribution system. CommonJS modules can be distributed using Node's npm package manager. I'll talk about this more in the next chapter.

There are thousands of compatible modules. Well, I exaggerate, but all modules in npm are CommonJS-based; and while not all of those are meant for the browser, there is a lot of good stuff out there.

Last, but not least: CommonJS modules can be nested to create packages. The semantics of require() may be simple, but it provides the ability to create packages which can expose implementation details internally (across files) while still hiding them from the outside world. This makes hiding implementation details easy, because you can share things locally without exposing them globally.

Creating a CommonJS package

Let's look at how we can create a package from modules following the CommonJS pattern. Creating a package starts with the build system. Let's just assume that we have a build system, which can take any set of .js files we specify and combine them into a single file.

[./model/todo.js] [./view/todo_list.js] [./index.js]
                        |
                 [ Build process ]
                        |
                 [ todo_package.js ]

The build process wraps all the files in closures with metadata, concatenates the output into a single file and adds a package-local require() implementation with the semantics described earlier (including files within the package by path and external libraries by their name).

Basically, we are taking a wrapping closure generated by the build system and extending it across all the modules in the package. This makes it possible to use require() inside the package to access other modules, while preventing external code from accessing those packages.

Here is how this would look like as code:

;(function() {
  function require(name) { /* package-local module lookup */ }
  var modules = { 'jquery': window.jQuery };

  modules['./model/todo.js'] = function(module, exports, require) {
    var Dependency = require('dependency');
    function Todo() { /* ... */ }
    module.exports = Todo;
  };

  modules['index.js'] = function(module, exports, require) {
    module.exports = {
      Todo: require('./model/todo.js')
    };
  };

  window.Todo = require('index.js');
}());

There is a local require() that can look up files. Each module exports an external interface following the CommonJS pattern. Finally, the package we have built here itself has a single file index.js that defines what is exported from the module. This is usually a public API, or a subset of the classes in the module (things that are part of the public interface).

Each package exports a single named variable, for example: window.Todo = require('index.js');. This way, only relevant parts of the module are exposed and the exposed parts are obvious. Other packages/code cannot access the modules in another package in any way unless they are exported from index.js. This prevents modules from developing hidden dependencies.

Building an application out of packages

The overall directory structure might look something like this:

assets
  - css
  - layouts
common
  - collections
  - models
  index.js
modules
  - todo
    - public
    - templates
    - views
    index.js
node_modules
package.json
server.js

Here, we have a place for shared assets (./assets/); there is a shared library containing reusable parts, such as collections and models (./common).

The ./modules/ directory contains subdirectories, each of which represents an individually initializable part of the application. Each subdirectory is its own package, which can be loaded independently of others (as long as the common libraries are loaded).

The index.js file in each package exports an initialize() function that allows that particular package to be initialized when it is activated, given parameters such as the current URL and app configuration.

Using the glue build system

So, now we have a somewhat detailed spec for how we'd like the build to work. Node has native support for require(), but what about the browser? Do we need an elaborate library for this?

Nope. This isn't hard: the build system itself is about a hundred fifty lines of code plus another ninety or so for the require() implementation. When I say build, I mean something that is super-lightweight: wrapping code into closures, and providing a local, in-browser require() implementation. I'm not going to put the code here since it adds little to the discussion, but have a look.

I've used onejs and browserbuild before. I wanted something a bit more scriptable, so (after contributing some code to those projects) I wrote gluejs, which is tailored to the system I described above (mostly by having a more flexible API).

With gluejs, you write your build scripts as small blocks of code. This is nice for hooking your build system into the rest of your tools - for example, by building a package on demand when an HTTP request arrives, or by creating custom build scripts that allow you to include or exclude features (such as debug builds) from the code.

Let's start by installing gluejs from npm:

$ npm install gluejs

Now let's build something.

Including files and building a package

Let's start with the basics. You use include(path) to add files. The path can be a single file, or a directory (which is included with all subdirectories). If you want to include a directory but exclude some files, use exclude(regexp) to filter files from the build.

You define the name of the main file using main(name) ; in the code below, it's "index.js". This is the file that gets exported from the package.

```js
var Glue = require('gluejs');

new Glue()
  .include('./todo')
  .main('index.js')
  .export('Todo')
  .render(function (err, txt) {
    console.log(txt);
  });
```

Each package exports a single variable, and that variable needs a name. In the example above, it's "Todo" (e.g. the package is assigned to window.Todo ).

Finally, we have a render(callback) function. It takes a function(err, txt) as a parameter, and returns the rendered text as the second parameter of that function (the first parameter is used for returning errors, a Node convention). In the example, we just log the text out to console. If you put the code above in a file (and some .js files in "./todo"), you'll get your first package output to your console.

If you prefer rebuilding the file automatically, use .watch() instead of .render() . The callback function you pass to watch() will be called when the files in the build change.

Binding to global functions

We often want to bind a particular name, like require('jquery') , to an external library. You can do this with replace(moduleName, string) .

Here is an example call that builds a package in response to an HTTP GET:

```js
var fs = require('fs'),
    http = require('http'),
    Glue = require('gluejs');

var server = http.createServer();

server.on('request', function(req, res) {
  if (req.url == '/minilog.js') {
    new Glue()
      .include('./todo')
      .basepath('./todo')
      .replace('jquery', 'window.$')
      .replace('core', 'window.Core')
      .export('Module')
      .render(function (err, txt) {
        res.setHeader('content-type', 'application/javascript');
        res.end(txt);
      });
  } else {
    console.log('Unknown', req.url);
    res.end();
  }
}).listen(8080, 'localhost');
```

To concatenate multiple packages into a single file, use concat([packageA, packageB], function(err, txt)) :

```js
var packageA = new Glue().export('Foo').include('./fixtures/lib/foo.js');
var packageB = new Glue().export('Bar').include('./fixtures/lib/bar.js');

Glue.concat([packageA, packageB], function(err, txt) {
  fs.writeFile('./build.js', txt);
});
```

Note that concatenated packages are just defined in the same file - they do not gain access to the internal modules of each other.

[1] The modularity illustration was adapted from Rich Hickey's presentation Simple Made Easy

http://www.infoq.com/presentations/Simple-Made-Easy

http://blog.markwshead.com/1069/simple-made-easy-rich-hickey/

http://code.mumak.net/2012/02/simple-made-easy.html

http://pyvideo.org/video/880/stop-writing-classes

http://substack.net/posts/b96642


Dependencies are declared in package.json and installed with npm install ; Git dependencies use the git+ssh://user@host:project.git#tag-sha-or-branch format, and npm version patch bumps the package's patch version. An example:

```js
{
  "name": "modulename",
  "description": "Foo for bar",
  "version": "0.0.1",
  "dependencies": {
    "underscore": "1.1.x",
    "foo": "git+ssh://git@github.com:mixu/foo.git#0.4.1"
  }
}
```

Version ranges can also be specified loosely:

```js
{
  "dependencies": {
    "foo": ">1.x.x"
  }
}
```


Mocha supports several test interfaces: the same test can be written in BDD style ( describe(foo) .. before() .. it() ), TDD style ( suite(foo) .. setup() .. test(bar) ) or exports style ( exports['suite'] = { before: f() .. 'foo should': f() } ), with assertions made via assert() .

```
[~] mkdir example
[~] cd example
[example] npm init
Package name: (example)
Description: Example system
Package version: (0.0.0)
Project homepage: (none)
Project git repository: (none)
...
[example] npm install --save-dev mocha
```

```js
var assert = require('assert'),
    Model = require('../lib/model.js');

exports['can check whether a key is set'] = function(done) {
  var model = new Model();
  assert.ok(!model.has('foo'));
  model.set('foo', 'bar');
  assert.ok(model.has('foo'));
  done();
};
```

The test is asynchronous from mocha's point of view: it is complete only when done() is called.

Shared setup and teardown go into before and after blocks:

```js
exports['given a foo'] = {
  before: function(done) {
    this.foo = new Foo().connect();
    done();
  },
  after: function(done) {
    this.foo.disconnect();
    done();
  },
  'can check whether a key is set': function() {
    // ...
  }
};
```

Tests can also be nested; beforeEach runs before each test within its scope:

```js
exports['given a foo'] = {
  beforeEach: function(done) {
    // ...
  },
  'when bar is set': {
    beforeEach: function(done) {
      // ...
    },
    'can execute baz': function(done) {
      // ...
    }
  }
};
```

A Makefile makes running the tests easy:

```make
TESTS += test/model.test.js

test:
	@./node_modules/.bin/mocha \
		--ui exports \
		--reporter list \
		--slow 2000ms \
		--bail \
		$(TESTS)

.PHONY: test
```

To make a test file runnable directly via node ./path/to/test.js , have it spawn mocha on itself:

```js
// if this module is the script being run, then run the tests:
if (module == require.main) {
  var mocha = require('child_process').spawn('mocha', [
    '--colors', '--ui', 'exports', '--reporter', 'spec', __filename
  ]);
  mocha.stdout.pipe(process.stdout);
  mocha.stderr.pipe(process.stderr);
}
```

Stubbing a function to verify that it gets called:

```js
exports['it should be called'] = function(done) {
  var called = false,
      old = Foo.doIt;
  Foo.doIt = function(callback) {
    called = true;
    callback('hello world');
  };
  // Assume Bar calls Foo.doIt
  Bar.baz(function(result) {
    console.log(result);
    assert.ok(called);
    done();
  });
};
```

To make a dependency replaceable, accept it as a constructor option:

```js
function Channel(options) {
  this.backend = options.backend || require('persistence');
};

Channel.prototype.publish = function(message) {
  this.backend.send(message);
};

module.exports = Channel;
```

A test can then pass in a mock:

```js
var MockPersistence = require('mock_persistence'),
    Channel = require('./channel');

var c = new Channel({ backend: MockPersistence });
```
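Here is a runnable sketch of this injection pattern, with the require('persistence') fallback omitted and a mock backend that simply records messages (names follow the snippet above; the mock is an assumption of this sketch):

```javascript
// accept the backend as a constructor option so tests can substitute it
function Channel(options) {
  // production code would fall back to a real backend here
  this.backend = options && options.backend;
}
Channel.prototype.publish = function(message) {
  this.backend.send(message);
};

// a mock backend that records messages instead of sending them
var sent = [];
var MockPersistence = { send: function(message) { sent.push(message); } };

var c = new Channel({ backend: MockPersistence });
c.publish('hello');
console.log(sent); // [ 'hello' ]
```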

Note the difference: here the code calls this.backend.send , an injected reference, rather than calling Persistence.send on a module-scoped variable directly. When the dependency is module-scoped, you need a separate hook for substituting it:

```js
var Persistence = require('persistence');

function Channel() { };

Channel.prototype.publish = function(message) {
  Persistence.send(message);
};

Channel._setBackend = function(backend) {
  Persistence = backend;
};

module.exports = Channel;
```

Here, _setBackend replaces the module-scoped Persistence reference, so a test can swap in a mock:

```js
// using in test
var MockPersistence = require('mock_persistence'),
    Channel = require('./channel');

exports['given foo'] = {
  before: function(done) {
    // inject dependency
    Channel._setBackend(MockPersistence);
    done();
  },
  after: function(done) {
    Channel._setBackend(require('persistence'));
    done();
  },
  // ...
};

var c = new Channel();
```
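The substitution can be shown end-to-end in a few self-contained lines; in this sketch the real persistence module is replaced by a throwing placeholder so any un-stubbed call is caught (an assumption of the sketch, not the book's code):

```javascript
// module-scoped backend reference, normally require('persistence')
var Persistence = {
  send: function() { throw new Error('real backend should not be called in tests'); }
};

function Channel() {}
Channel.prototype.publish = function(message) {
  Persistence.send(message);
};
// test hook: swap the module-scoped reference
Channel._setBackend = function(backend) { Persistence = backend; };

// in a test: inject a mock that records messages
var sent = [];
Channel._setBackend({ send: function(message) { sent.push(message); } });

new Channel().publish('hi');
console.log(sent); // [ 'hi' ]
```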


Asynchronous tests chain callbacks and finish by calling done() :

```js
exports['can read a status'] = function(done) {
  var client = this.client;
  client.status('item/21').get(function(value) {
    assert.deepEqual(value, []);
    client.status('item/21').set('bar', function() {
      client.status('item/21').get(function(message) {
        assert.deepEqual(message.value, [ 'bar' ]);
        done();
      });
    });
  });
};
```


| | Node.js EventEmitter | jQuery |
|---|---|---|
| Attach a callback to an event | .on(event, callback) / .addListener(event, callback) | .bind(eventType, handler) (1.0) / .on(event, callback) (1.7) |
| Trigger an event | .emit(event, data, ...) | .trigger(event, data, ...) |
| Remove a callback | .removeListener(event, callback) | .unbind(event, callback) / .off(event, callback) |
| Add a callback that is triggered once, then removed | .once(event, callback) | .one(event, callback) |

A useful extension is a when() helper: it stays subscribed until the callback returns true, then removes itself:

```js
EventEmitter.when = function(event, callback) {
  var self = this;
  function check() {
    if(callback.apply(this, arguments)) {
      self.removeListener(event, check);
    }
  }
  check.listener = callback;
  self.on(event, check);
  return this;
};
```

This makes it possible to wait for a particular message in a test:

```js
exports['can subscribe'] = function(done) {
  var client = this.client;
  this.backend.when('subscribe', function(client, msg) {
    var match = (msg.op == 'subscribe' && msg.to == 'foo');
    if (match) {
      assert.equal('subscribe', msg.op);
      assert.equal('foo', msg.to);
      done();
    }
    return match;
  });
  client.connect();
  client.subscribe('foo');
};
```

Another approach is to record emitted events and assert on them afterwards:

```js
exports['doIt sends a b c'] = function(done) {
  var received = [];
  client.on('foo', function(msg) {
    received.push(msg);
  });
  client.doIt();
  assert.ok(received.some(function(result) { return result == 'a'; }));
  assert.ok(received.some(function(result) { return result == 'b'; }));
  assert.ok(received.some(function(result) { return result == 'c'; }));
  done();
};
```

The same recording trick works by shimming a function:

```js
exports['doIt sends a b c'] = function(done) {
  var received = [],
      old = jQuery.foo;
  jQuery.foo = function() {
    received.push(arguments);
    old.apply(this, Array.prototype.slice.call(arguments));
  };
  jQuery.doIt();
  assert.ok(received.some(function(result) { return result[1] == 'a'; }));
  assert.ok(received.some(function(result) { return result[1] == 'b'; }));
  done();
};
```

Binding behavior to HTML via event handlers. When the user interacts with the view HTML, we need a way to trigger behavior (code); the view layer implementation is expected to provide a standard mechanism or convention to perform these tasks, coordinating between models and the HTML.

| | Low-end interactivity (example: Github) | High-end interactivity (example: Gmail) |
|---|---|---|
| Pages | Mostly static. | Mostly dynamic. |
| Approach | You take a document that represents a mostly static piece of information, already processed, and add a bit of interactivity via Javascript. | You have a set of data which you want the user to interact with in various ways; changes to the data should be reflected on the page immediately. |
| Changing data | Usually causes a full page refresh. | Should update the views, but not cause a page refresh - because views have many small intermediate states which are not stored in the database. |
| State | State and data can be stored in HTML, because if data is altered, the page is refreshed. | Storing state and data in HTML is a bad idea, because it makes it hard to keep multiple views that represent the same data in sync. |
| Interactions | Because a majority of the state is in the HTML, parts of the UI do not generally interact with each other. If complex interactions are needed, they are resolved on the server. | Complex interactions are more feasible; data is separate from presentation. Interactions don't need to map to backend actions, e.g. you can paginate and select items in a view without writing a new server-side endpoint. |

[Diagram: in both approaches the data lives in JS models; they differ in whether model-backed views render the markup, or the markup accesses the models directly.]

With model-backed views, you instantiate a view with its model:

```js
var model = new Todo({ title: 'foo', done: false }),
    view = new TodoView(model);
```

With markup-driven views, the template refers to models by name:

```
{{view TodoView}}
{{=window.model.title}}
{{/view}}
```

There are several ways to express "when X changes, do Y". With events, you subscribe on the model or collection directly:

```js
Todos.on('change', function() { /* ... */ });
```

With an observer-registration API:

```js
Framework.registerObserver(window.App.Todos, 'change', function() { /* ... */ });
```

With name-based observation, the target is a string resolved by the framework (which is why these frameworks need a Framework. global namespace):

```js
Framework.observe('App.Todos', function() { /* ... */ });
```

Some frameworks even extend Function.prototype so the observer is declared on the function itself:

```js
function() { /* ... */ }.observe('App.Todos');
```


Consider a template:

```html
<p>Hello {{name}}</p>
```

To update {{name}} when the value changes, the view layer needs a way to refer to that part of the DOM. One approach is to wrap the value in a marker element:

```html
<p>Hello <span id="$0">foo</span></p>
```

Metamorph (used by Ember) instead marks the boundaries of the value with placeholder script tags:

```html
<p>
  Hello
  <script id="metamorph-0-start" type="text/x-placeholder"></script>
  foo
  <script id="metamorph-0-end" type="text/x-placeholder"></script>.
</p>
```

Wrapping values in marker elements does not work everywhere, however. Consider a table:

```html
<table>
  <tr>
    <th>Names</th>
    {{#people}}
    <td>{{name}}</td>
    {{/people}}
  </tr>
</table>
```

. If you place an invalid wrapper element like a <script> tags and <!-- comment --> tags stay in all DOM locations, even invalid DOM locations, so they can be used to implement a string-range-oriented rather than element-oriented way to access data, making string-granular updates possible. Script tags can be selected by id (likely faster) but influence CSS selectors that are based on adjacent siblings and can be invalid in certain locations. Comment tags, on the other hand, require (slow) DOM iteration in old browsers that don't have certain APIs, but are invisible to CSS and valid anywhere in the page. Performance-wise, the added machinery vs. view-granular approaches The model layer looks fairly similar across different single page app frameworks because there just aren't that many different ways to solve this problem. You need the ability to represent data items and sets of data items; you need a way to load data; and you probably want to have some caching in place to avoid naively reloading data that you already have. Whether these exist as separate mechanisms or as a part of single large model is mostly an implementation detail. The major difference is how collections are handled, and this is a result of choices made in the view layer - with observables, you want observable arrays, with events, you want collections. ## Data source Common way of instantiating models from existing data Fetching models by id Fetching models by search A data source (or backend proxy / API) is responsible for reading from the backend using a simplified and more powerful API. It accepts JSON data, and returns JSON objects that are converted into Models. Note how the data source reads from the data store/cache, but queries the backend as well. Lookups by ID can be fetched directly from the cache, but more complicated queries need to ask the backend in order to search the full set of data. 
## Model A place to store data Emits events when data changes Can be serialized and persisted The model contains the actual data (attributes) and can be transformed into JSON in order to restore from or save to the backend. A model may have associations, it may have validation rules and it may have subscribers to changes on its data. ## Collection Contains items Emits events when items are added/removed Has a defined item order Collections exist to make it easy to work with sets of data items. A collection might represent a subset of models, for example, a list of users. Collections are ordered: they represent a particular selection of models for some purpose, usually for drawing a view. You can implement a collection either: As a model collection that emits events As an observable array of items The approach you pick is dependent mostly on what kind of view layer you have in mind. If you think that views should contain their own behavior / logic, then you probably want collections that are aware of models. This is because collections contain models for the purpose of rendering; it makes sense to be able to access models (e.g. via their ID) and tailor some of the functionality for this purpose. If you think that views should mostly be markup - in other words, that views should not be "components" but rather be "thin bindings" that refer to other things by their name in the global scope - then you will probably prefer observable arrays. In this case, since views don't contain behavior, you will also probably have controllers for storing all the glue code that coordinates multiple views (by referring to them by name). ## Data cache Caches models by id, allowing for faster retrieval Handles saving data to the backend Prevents duplicate instances of the same model from being instantiated A data store or data cache is used in managing the lifecycle of models, and in saving, updating and deleting the data represented in models. 
Models may become outdated, they may become unused and they may be preloaded in order to make subsequent data access faster. The difference between a collection and a cache is that the cache is not in any particular order, and the cache represents all the models that the client-side code has loaded and retained. # 7. Implementing a data source In this chapter, I will look at implementing a data source. ## Defining a REST-based, chainable API for the data source Let's start off by writing some tests in order to specify what we want from the data source we will build. It's much easier to understand the code once we have an idea of what the end result should look like. Given the following fixture: var fixture = [ { name: 'a', id: 1, role: 2 }, { name: 'b', id: 2, role: 4, organization: 1 }, { name: 'c', id: 3, role: 4, organization: 2 } ]; var db.user = new DataSource(); ... here are tests describing how I'd like the data source to work: ## Can load a single item by ID 'can load a single item by ID': function(done) { db.user(1, function(user) { assert.equal(fixture[0], user); done(); }); }, ## Can load multiple items by ID 'can load multiple items by ID': function(done) { db.user([1, 2, 3], function(users) { assert.deepEqual(fixture, users); done(); }); }, ## Can load items by search query The data source should support retrieving items by conditions other than IDs. Since the details depend on the backend used, we'll just allow the user to add search terms via an object. The parameters are passed to the backend, which can then implement whatever is appropriate (e.g. SQL query by name) to return the result JSON. 
'can load items by search query': function(done) { db.user({ name: 'c'}, function(user) { assert.deepEqual(fixture[2], user); done(); }); }, ## Can add more search conditions using and() We'll also support incrementally defining search parameters: 'should allow for adding more conditions using and()': function(done) { db.user({ role: 4 }) .and({ organization: 1 }, function(users) { assert.deepEqual(fixture[1], users); done(); }); }, ## Implementing the chainable data source API The full implementation for a chainable data source API is below. It almost fits on one screen. function Search(options) { this.uri = options.uri; this.model = options.model; this.conditions = []; } Search.prototype.and = function(arg, callback) { if(!arg) return this; this.conditions.push(arg); return this.end(callback); }; The data source accepts two parameters: uri , which is a function that returns a URL for a particular id

, which is a function that returns a URL for a particular id model , an optional parameter; if given, the results will be instances of that model instead of plain Javacript objects (e.g. JSON parsed as a JS object). The idea behind chainable APIs is that the actual action is delayed until a callback is passed to the API. conditions is a simple array of all the parameters (model ID's and search parameters) passed to the current data source search. Also note how all the functions return this . That allows function calls to be written one after another. Search.prototype.end = function(callback) { if(!callback) return this; var self = this, params = {}, urls = []; function process(arg) { if(typeof arg == 'number') { urls.push(self.uri(arg)); } else if (Array.isArray(arg)) { urls = urls.concat(arg.map(function(id) { return self.uri(id); })); } else if(arg === Object(arg)) { Object.keys(arg).forEach(function(key) { params[key] = arg[key]; }); } } this.conditions.forEach(process); (urls.length == 0) && (urls = [ this.uri() ]); this._execute(urls, params, callback); }; The end() function is where the conditions are processed and stored into url and params . We call process() on each condition in order to extract the information. process(arg) looks at the type of each argument. If the argument is a number, we assume it's a model ID. If it is an array, then it is considered an array of IDs. Objects are assumed to be search parameters (key: value pairs). For numbers, we map them to a url by calling this.uri() on them. That parameter is part of the resource definition. Search.prototype._execute = function(urls, params, callback) { var self = this, results = []; urls.forEach(function(url) { Client .get(url).data(params) .end(Client.parse(function(err, data) { if(err) throw err; results.push((self.model ? new self.model(data) : data)); if(results.length == urls.length) { callback((urls.length == 1 ? 
results[0] : results)); } })); }); }; This is where the magic happens (not really). We call the HTTP client, passing each URL and set of parameters. Once we get each result, we store it in the results array. When the results array is full, we call the original callback with the results. If there was only one result, then we just take the first item in the array. Search.prototype.each = function(callback) { return this.end(function(results) { results.forEach(callback); }); }; module.exports = function(options) { return function(arg, callback) { return new Search(options).and(arg, callback); } }; If .each(function() { ...}) is called, then we take the callback, and wrap it in a function that iterates over the results array and calls the callback for each result. This requires ES5 (e.g. not IE; since we rely on Array.forEach to exist). For IE compatibility, use underscore or some other shim. Finally, how do we define a datasource? We return a function that accepts (arg, callback) and itself returns a new instance of Search. This allows us to define a particular data source and store the configuration in another variable. Every search is a new instance of Search . See the full usage example at the end of the chapter for details. Making ajax a bit nicer: Client where the {{#people}} strings are, the browser will relocate it outside the table as a way recover from invalid markup. But without an element, you cannot refer to that particular part of the DOM in a manner that works in IE. So you need some other way to make that part of the DOM accessible and hence replaceable in a more granular manner. There are two known techniques for this:tags andtags stay in all DOM locations, even invalid DOM locations, so they can be used to implement a string-range-oriented rather than element-oriented way to access data, making string-granular updates possible. 
Script tags can be selected by id (likely faster) but influence CSS selectors that are based on adjacent siblings and can be invalid in certain locations. Comment tags, on the other hand, require (slow) DOM iteration in old browsers that don't have certain APIs, but are invisible to CSS and valid anywhere in the page. Performance-wise, the added machinery vs. view-granular approaches does incur a cost . There are also still some special cases, like select elements on old IE version, where this approach doesn't work. ## Conclusion The single page app world is fairly confusing right now. Frameworks define themselves more in terms of what they do rather than how they accomplish it. Part of the reason is that the internals are unfamiliar to most people, since -- let's face it -- these are still the early days of single page apps. I hope this chapter has developed a vocabulary for describing different single page app frameworks. Frameworks encourage different kinds of patterns, some good, some bad. Starting from a few key ideas about what is important and what should define a single page app, frameworks have reached different conclusions. Some approaches are more complex, and the choice about what to make easy influences the kind of code you write. String-granular bindings lead to heavier models. Since model properties are directly observable in views, you tend to add properties to models that don't represent backend data, but rather view state. Computed properties mean that model properties can actually represent pieces of logic. This makes your model properties into an API. In extreme cases, this leads to very specific and view-related model properties like "humanizedName" or "dataWithCommentInReverse" that you then observe from your view bindings. There is a tradeoff between DRY and simplicity. When your templating system is less sophisticated, you tend to need to write more code, but that code will be simpler to troubleshoot. 
Basically, you can expect to understand the code you wrote, but fewer people are well versed in what might go wrong in your framework code. But of course, if nothing breaks, everything is fine either way. Personally, I believe that both approaches can be made to work. # 6. The model layer: an overview Let's examine the model layer in more detail. In the introduction chapter, a model was shown as something that simply queries and writes to storage. The diagram below shows more details of the model layer:The model layer looks fairly similar across different single page app frameworks because there just aren't that many different ways to solve this problem. You need the ability to represent data items and sets of data items; you need a way to load data; and you probably want to have some caching in place to avoid naively reloading data that you already have. Whether these exist as separate mechanisms or as a part of single large model is mostly an implementation detail. The major difference is how collections are handled, and this is a result of choices made in the view layer - with observables, you want observable arrays, with events, you want collections. ## Data source Common way of instantiating models from existing data Fetching models by id Fetching models by search A data source (or backend proxy / API) is responsible for reading from the backend using a simplified and more powerful API. It accepts JSON data, and returns JSON objects that are converted into Models. Note how the data source reads from the data store/cache, but queries the backend as well. Lookups by ID can be fetched directly from the cache, but more complicated queries need to ask the backend in order to search the full set of data. ## Model A place to store data Emits events when data changes Can be serialized and persisted The model contains the actual data (attributes) and can be transformed into JSON in order to restore from or save to the backend. 
A model may have associations, it may have validation rules and it may have subscribers to changes on its data. ## Collection Contains items Emits events when items are added/removed Has a defined item order Collections exist to make it easy to work with sets of data items. A collection might represent a subset of models, for example, a list of users. Collections are ordered: they represent a particular selection of models for some purpose, usually for drawing a view. You can implement a collection either: As a model collection that emits events As an observable array of items The approach you pick is dependent mostly on what kind of view layer you have in mind. If you think that views should contain their own behavior / logic, then you probably want collections that are aware of models. This is because collections contain models for the purpose of rendering; it makes sense to be able to access models (e.g. via their ID) and tailor some of the functionality for this purpose. If you think that views should mostly be markup - in other words, that views should not be "components" but rather be "thin bindings" that refer to other things by their name in the global scope - then you will probably prefer observable arrays. In this case, since views don't contain behavior, you will also probably have controllers for storing all the glue code that coordinates multiple views (by referring to them by name). ## Data cache Caches models by id, allowing for faster retrieval Handles saving data to the backend Prevents duplicate instances of the same model from being instantiated A data store or data cache is used in managing the lifecycle of models, and in saving, updating and deleting the data represented in models. Models may become outdated, they may become unused and they may be preloaded in order to make subsequent data access faster. 
The difference between a collection and a cache is that the cache is not in any particular order, and the cache represents all the models that the client-side code has loaded and retained. # 7. Implementing a data source In this chapter, I will look at implementing a data source. ## Defining a REST-based, chainable API for the data source Let's start off by writing some tests in order to specify what we want from the data source we will build. It's much easier to understand the code once we have an idea of what the end result should look like. Given the following fixture:... here are tests describing how I'd like the data source to work: ## Can load a single item by ID## Can load multiple items by ID## Can load items by search query The data source should support retrieving items by conditions other than IDs. Since the details depend on the backend used, we'll just allow the user to add search terms via an object. The parameters are passed to the backend, which can then implement whatever is appropriate (e.g. SQL query by name) to return the result JSON.## Can add more search conditions using and() We'll also support incrementally defining search parameters:## Implementing the chainable data source API The full implementation for a chainable data source API is below. It almost fits on one screen. Since I wanted the same code to work in Node and in the browser, I added a (chainable) HTTP interface that works both with jQuery and Node.js. Here is a usage example: Client .get( 'http://www.google.com/' ) . data ({q: 'hello world' }) . end ( function (err, data) { console. log ( data ); }); And the full source code: for jQuery (~40 lines; below) and for Node (~70 lines; w/JSON parsing). 
var $ = require ( 'jquery' ); function Client ( opts ) { this .opts = opts || {}; this .opts.dataType || ( this .opts.dataType = 'json' ); this .opts.cache = false ; }; Client.prototype.data = function ( data ) { if (!data || Object .keys(data).length == 0 ) return this ; if ( this .opts.type == 'GET' ) { this .opts.url += '?' +jQuery.param(data); } else { this .opts.contentType = 'application/json' ; this .opts.data = JSON .stringify(data); } return this ; }; Client.prototype.end = function ( callback ) { this .opts.error = function ( j, t, err ) { callback && callback(err); }; this .opts.success = function ( data, t, j ) { callback && callback( undefined , data); }; $.ajax( this .opts); }; module .exports.parse = Client.parse = function ( callback ) { return function ( err, response ) { callback && callback( undefined , response); }; }; [ 'get' , 'post' , 'put' , 'delete' ].forEach( function ( method ) { module .exports[method] = function ( urlStr ) { return new Client({ type: method.toUpperCase(), url: urlStr }); }; }); Putting it all together Now, that's a fairly useful data source implementation; minimal yet useful. You can certainly reuse it with your framework, since there are no framework dependencies; it's all (ES5) standard Javascript. Defining a data source Now, let's create a page that allows us to use the datasource to retrieve data. For example, you might want to use the datasource with a model. You may have noticed that I slipped in support for instantiating models from the result (see the this.model parameter in implementation). This means that we can ask the data source to instantiate objects from a given model constructor by passing the model option: Todo.find = new DataSource({ uri: function ( id ) { return 'http://localhost:8080/api/todo/' + (id ? encodeURIComponent (id) : 'search' ); }, model: Todo }); As you can see, the uri function simply returns the right URL depending on whether the search is about a specific ID or just a search. 
The code also demostrates composition over inheritance. The inheritance-based way of setting up this same functionality would be to inherit from another object that has the data source functionality. With composition, we can simply assign the DataSource to any plain old JS object to add the ability to retrieve JSON data by calling a function. Building a backend. The server-side for the datasource can be fairly simple: there are two cases - reading a model by ID, and searching for a model by property. var http = require ( 'http' ), url = require ( 'url' ); var todos = [ { id: 1 , title: 'aa' , done: false }, { id: 2 , title: 'bb' , done: true }, { id: 3 , title: 'cc' , done: false } ], server = http.createServer(); var idRe = new RegExp ( '^/api/todo/([0-9]+)[^0-9]*$' ), searchRe = new RegExp ( '^/api/todo/search.*$' ); server.on( 'request' , function ( req, res ) { res.setHeader( 'content-type' , 'application/json' ); if (idRe.test(req.url)) { var parts = idRe.exec(req.url); if (todos[parts[ 1 ]]) { res.end( JSON .stringify(todos[parts[ 1 ]])); } } else if (searchRe.test(req.url)) { var data = '' ; req.on( 'data' , function ( part ) { data += part; }); req.on( 'end' , function ( ) { var search = undefined ; try { search = JSON .parse(data); } catch (error) {} res.end( typeof (search) === 'undefined' ? undefined : JSON .stringify( todos.filter( function ( item ) { return Object .keys(search).every( function ( key ) { return item[key] && item[key] == search[key]; }); }) )); }); } else { console .log( 'Unknown' , req.url); res.end(); } }); 8. Implementing a model What's a model? Roughly, a model does a couple of things: Data . A model contains data.

Events. A model emits change events when data is altered.

Persistence. A model can be stored persistently, identified uniquely and loaded from storage.

That's about it; there might be some additional niceties, like default values for the data.

Defining a more useful data storage object (Model)

function Model(attr) {
  this.reset();
  attr && this.set(attr);
}

Model.prototype.reset = function() {
  this._data = {};
  this.length = 0;
  this.emit('reset');
};

Model.reset()

_data: The underlying data structure is an object. To keep the values stored in the object from conflicting with property names, let's store the data in the _data property.

length: We'll also keep a simple length property for quick access to the number of elements stored in the Model.

Model.prototype.get = function(key) {
  return this._data[key];
};

Model.get(key)

This space intentionally left blank.

Model.prototype.set = function(key, value) {
  var self = this;
  // setting multiple values: set({ foo: 'bar' })
  if (arguments.length == 1 && key === Object(key)) {
    Object.keys(key).forEach(function(name) {
      self.set(name, key[name]);
    });
    return;
  }
  if (!this._data.hasOwnProperty(key)) {
    this.length++;
  }
  this._data[key] = (typeof value == 'undefined' ? true : value);
};

Model.set(key, value)

Setting multiple values: if only a single argument Model.set({ foo: 'bar' }) is passed, then call Model.set() for each pair in the first argument. This makes it easier to initialize the object by passing a hash. Note that calling Model.set(key) is the same thing as calling Model.set(key, true). What about ES5 getters and setters? Meh, I say.

Setting a single value: if the value is undefined, set it to true. This is needed to be able to store null and false.

Model.prototype.has = function(key) {
  return this._data.hasOwnProperty(key);
};

Model.prototype.remove = function(key) {
  this._data.hasOwnProperty(key) && this.length--;
  delete this._data[key];
};

module.exports = Model;

Model.has(key), Model.remove(key)

Model.has(key): we need to use hasOwnProperty to support false and null.
Model.remove(key): If the key was set and removed, then decrement .length. That's it! Export the module. Change events Model accessors (get/set) exist because we want to be able to intercept changes to the model data, and emit change events. Other parts of the app -- mainly views -- can then listen for those events and get an idea of what changed and what the previous value was. For example, we can respond to these: a set() for a value that is used elsewhere (to notify others of an update / to mark model as changed)

a remove() for a value that is used elsewhere We will want to allow people to write model.on('change', function() { .. }) to add listeners that are called to notify about changes. We'll use an EventEmitter for that. If you're not familiar with EventEmitters, they are just a standard interface for emitting (triggering) and binding callbacks to events (I've written more about them in my other book.) var util = require('util'), events = require('events'); function Model(attr) { // ... }; util.inherits(Model, events.EventEmitter); Model.prototype.set = function(key, value) { var self = this, oldValue; // ... oldValue = this.get(key); this.emit('change', key, value, oldValue, this); // ... }; Model.prototype.remove = function(key) { this.emit('change', key, undefined, this.get(key), this); // ... }; The model extends events.EventEmitter using Node's util.inherits() in order to support the following API: on(event, listener)

once(event, listener)

emit(event, [arg1], [...])

removeListener(event, listener)

removeAllListeners(event)

For in-browser compatibility, we can use one of the many API-compatible implementations of Node's EventEmitter. For instance, I wrote one a while back (mixu/miniee). When a value is set(), emit('change', key, newValue, oldValue). This causes any listeners added via on()/once() to be triggered. When a value is removed, emit('change', key, null, oldValue).

Using the Model class

So, how can we use this model class? Here is a simple example of how to define a model:

function Photo(attr) {
  Model.call(this, attr);
}
Photo.prototype = new Model();
module.exports = Photo;

Creating a new instance and attaching a change event callback:

var badger = new Photo({ src: 'badger.jpg' });
badger.on('change', function(key, value, oldValue) {
  console.log(key + ' changed from', oldValue, 'to', value);
});

Defining default values:

function Photo(attr) {
  attr || (attr = {});
  attr.src || (attr.src = 'default.jpg');
  Model.call(this, attr);
}

Since the constructor is just a normal ES3 constructor, the model code doesn't depend on any particular framework. You could use it in any other code without having to worry about compatibility. For example, I am planning on reusing the model code when I do a rewrite of my window manager.

Differences with Backbone.js

I recommend that you read through Backbone's model implementation next. It is an example of a more production-ready model, and has several additional features:

Each instance has a unique cid (client id) assigned to it.

You can choose to silence change events by passing an additional parameter.

Changed values are accessible as the changed property of the model, in addition to being accessible as events; there are also many other convenient methods such as changedAttributes and previousAttributes.

There is support for HTML-escaping values and for a validate() function.

What I call .reset() is called .clear() in Backbone, and .remove() is .unset().

Data source and data store methods (Model.save() and Model.destroy()) are implemented on the model, whereas I implement them in separate objects (first and last chapter of this section). 9. Collections What's in a collection? A collection: contains items (or models)

emits events when items are added/removed

is ordered; can be accessed by index via at() and by model ID via get()

In this chapter, we'll write an observable array, and then add some additional niceties on top of it to make it a collection (e.g. something that is specific to storing models).

Storing Models and emitting events

Let's start with the constructor. We want to mix in EventEmitter to add support for events for the collection.

function Collection(models) {
  this.reset();
  models && this.add(models);
}
util.inherits(Collection, events.EventEmitter);

To support passing a set of initial models, we call this.add() in the constructor.

Resetting the collection. Self-explanatory, really. We will use an array to store the models, because collections are ordered rather than indexed; and we will maintain a length property directly for convenience.

Collection.prototype.reset = function() {
  this._items = [];
  this.length = 0;
  this.emit('reset');
};

Adding items. We should be able to call add(model) and emit/listen for an "add" event when the model is added.

Collection.prototype.add = function(model, at) {
  var self = this;
  // passing an array adds each model individually
  if (Array.isArray(model)) {
    return model.forEach(function(m) { self.add(m, at); });
  }
  this._items.splice(at || this._items.length, 0, model);
  this.length = this._items.length;
  this.emit('add', model, this);
};

To support calling add([model1, model2]), we'll check if the first parameter is an array and make multiple calls in that case. Other than that, we just use Array.splice to insert the model. The optional at param allows us to specify a particular index to add at. Finally, after each add, we emit the "add" event.

Removing items. We should be able to call remove(model) to remove a model, and receive events when the item is removed. Again, the code is rather trivial.
Collection.prototype.remove = function(model) {
  var index = this._items.indexOf(model);
  if (index > -1) {
    this._items.splice(index, 1);
    this.length = this._items.length;
    this.emit('remove', model, this);
  }
};

Retrieving items by index and retrieving all items. Since we are using an array, this is trivial:

Collection.prototype.at = function(index) {
  return this._items[index];
};

Collection.prototype.all = function() {
  return this._items;
};

Iteration

We also want to make working with the collection easy by supporting a few iteration functions. Since these are already implemented in ES5, we can just call the native function, setting the parameter appropriately using .apply(). I'll add support for the big 5 - forEach (each), filter, map, every and some:

['filter', 'forEach', 'every', 'map', 'some'].forEach(function(name) {
  Collection.prototype[name] = function() {
    return Array.prototype[name].apply(this._items, arguments);
  };
});

Sorting

Implementing sorting is easy; all we need is a comparator function.

Collection.prototype.sort = function(comparator) {
  this._items.sort(comparator || this.orderBy);
};

Array.sort is already implemented in ES3 and does what we want: you can pass a custom comparator, or set collection.orderBy to set a default sort function.

Using our observable array

The code above covers the essence of an observable array. Let's look at a few usage examples before moving on to making it a collection.

var items = new Collection();
items.on('add', function(item) {
  console.log('Added', item);
});
setInterval(function() {
  items.add(Math.floor(Math.random() * 100));
  console.log(items.all());
}, 1000);

Creating a collection

A collection is a more specialized form of an observable array. Collections add the ability to hook into the events of the models they contain, and the ability to retrieve/check for item presence by model id in addition to the position in the array.
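Concretely, the id-based lookup we are after behaves like this. This is a runnable sketch with models stubbed as plain objects exposing a get() method, showing only the index bookkeeping in isolation:

```javascript
// Sketch of an id -> model index for O(1) get() lookups. Models are
// stubbed as plain objects with a get() method; event emission and
// ordering concerns are omitted here.
function Collection() {
  this._items = [];
  this._byId = {};
}
Collection.prototype.add = function(model) {
  this._items.push(model);
  var modelId = model.get('id');
  if (typeof modelId != 'undefined') {
    this._byId[modelId] = model; // index by id when one exists
  }
};
Collection.prototype.remove = function(model) {
  var index = this._items.indexOf(model);
  if (index > -1) this._items.splice(index, 1);
  var modelId = model.get('id');
  if (typeof modelId != 'undefined') {
    delete this._byId[modelId];
  }
};
Collection.prototype.get = function(id) { return this._byId[id]; };

// stand-in for a real model instance
function stubModel(data) {
  return { get: function(key) { return data[key]; } };
}

var c = new Collection(),
    todo = stubModel({ id: 7, title: 'aa' });
c.add(todo);
console.log(c.get(7).get('title')); // 'aa'
c.remove(todo);
console.log(c.get(7)); // undefined
```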
Let's implement get(modelId) first. In order to make get() fast, we need a supplementary index. To do this, we need to capture the add() and remove() calls (the snippets below show only the additions; the _byId index itself should be initialized to an empty object in reset()):

Collection.prototype.add = function(model, at) {
  var self = this, modelId;
  // ...
  modelId = model.get('id');
  if (typeof modelId != 'undefined') {
    this._byId[modelId] = model;
  }
};

Collection.prototype.remove = function(model) {
  var index = this._items.indexOf(model), modelId;
  // ...
  modelId = model.get('id');
  if (typeof modelId != 'undefined') {
    delete this._byId[modelId];
  }
};

Now get() can make a simple lookup:

Collection.prototype.get = function(id) {
  return this._byId[id];
};

Hooking into model events. We need to bind to the model change event (at least), so that we can trigger a "change" event for the collection. Note that for this to refer to the collection inside the handler, the handler needs to be bound to the collection (e.g. this._modelChange = this._modelChange.bind(this); in the constructor); this also ensures that the same function reference can later be passed to removeListener:

Collection.prototype._modelChange = function(key, value, oldValue, model) {
  this.emit(key, value, oldValue, model);
};

Collection.prototype.add = function(model, at) {
  // ...
  model.on('change', this._modelChange);
};

And we need to unbind when a model is removed, or the collection is reset:

Collection.prototype.remove = function(model) {
  // ...
  model.removeListener('change', this._modelChange);
};

Collection.prototype.reset = function() {
  var self = this;
  if (this._items) {
    this._items.forEach(function(model) {
      model.removeListener('change', self._modelChange);
    });
  }
  // ...
};

10. Implementing a data cache

There are three reasons why we want a data store:

To have a central mechanism for saving data.

To retrieve cached models quickly.

To prevent duplicate instances of the same model being created.

The first two are obvious: we need to handle saving, and when possible, use caching to make unambiguous retrievals fast. The only clearly unambiguous type of retrieval is fetching a model by id.

The last reason is less obvious. Why is it bad to have duplicate instances of the same model? Well, first, it is inefficient to have the same data twice; but more importantly, it is very confusing if you can have two instances that represent the same object but are separate objects. For example, if you have a data cache that always returns a new object rather than reusing an existing one, then you can have situations where you change the model data, or add a model data listener, but this change does not actually work as expected because the object you used is a different instance. We'll tackle this after looking at saving and caching.

Implementing save()

Serializing models into JSON. In order to send the model data, we need the ability to transform a model into a string. JSON is the obvious choice for serializing data. We need to add an additional method to the model:

Model.prototype.json = function() {
  return JSON.stringify(this._data);
};

Mapping to the right backend URL. We also need to know where to save the model:

Model.prototype.url = function(method) {
  return this.urlRoot +
    (method == 'create' ? '' : encodeURIComponent(this.get('id')));
};

There are three kinds of persistence operations (since reads are handled by the data source):

"create": PUT /user

"update": POST /user/id

"delete": DELETE /user/id When the model doesn't have a id, we will use the "create" endpoint, and when the model does have id, we'll use the "update"/"delete" endpoint. If you set Model.prototype.urlRoot to "http://localhost/user", then you'll get the urls above, or if your URLs are different, you can replace Model.prototype.url with your own function. Connecting Model.save() with the DataStore. Reading is done via the data source, but create, update and delete are done via the data store. For the sake of convenience, let's redirect Model.save() to the DataStore: Model.prototype.save = function ( callback ) { DataStore.save( this , callback); }; And do the same thing for Model.destroy : Model.prototype.destroy = function ( callback ) { DataStore.delete( this , callback); }; Note that we allow the user to pass a callback, which will be called when the backend operation completes. Managing the model lifecycle Since the data store is responsible for caching the model and making sure that duplicate instances do not exist, we need to have a more detailed look at the lifecycle of the model. Instantiation. There are two ways to instantiate a model: new Model(); The cache should do nothing in this case, models that are not saved are not cached. DataSource.find(conditions, function ( model ) { ... }); Here, the models are fetched from the backend using some conditions. If the conditions are just model IDs, then the data source should check the cache first. When models are instantiated from data with an ID, they should be registered with the cache. Persistence operations: create, update, delete. Model.save(); Once the backend returns the model id, add the model to the data cache, so that it can be found by id. Model.save(); Add the model to the data cache, so that it can be found by id. Model.delete(); Remove the model from the data cache, and from any collections it may be in. Data changes. When the model ID changes, the cache should be updated to reflect this. 
Reference counting. If you want an accurate count of the number of models, you must hook into Collection events (e.g. add / remove / reset). I'm not going to do that, because a simpler mechanism -- for example, limiting model instances by age or by number -- achieves the essential benefits without the overhead of counting. When ES6 WeakMaps are more common, it'll be much easier to do something like this.

Implementing the data store / cache

DataStore.add(), DataStore.has(), DataStore.save(), DataStore.delete(), DataStore.reference().

The implementation section is still a work in progress, my apologies.

11. Implementing associations: hasOne, hasMany

Defining associations. Associations / relationships are sugar on top of the basic data source implementation. The idea is that you can predefine the associations between models; for example, that a post hasMany comments. This might be described as:

function Post(args) {
  Model.apply(this, args);
  this.definition = {
    tags: Tags,
    comments: Comments
  };
}

We can fetch stuff manually without association support. For example, assume that post.tag_ids is an array of ids:

db.tag(post.tag_ids, function(tags) {
  tags.forEach(function(tag) {
    // ...
  });
});

But given several levels of nesting (post has comment has author), this gets old pretty fast. It's the age-old problem of dealing with callbacks - which turns out to be pretty trivial once you add a couple of control flow patterns to your repertoire. The fundamental ones are "series", "parallel" and "parallel but with limited concurrency". If you are unfamiliar with those, go read Chapter 7 - Control Flow of my previous book.

Don't pretend to have a blocking API. Some frameworks have taken the approach that they pretend to provide a blocking API by returning a placeholder object. For example:

var comments = post.get('comments');

This is a very, very leaky abstraction.
It just introduces complexity without really solving the issue, which is that you have to wait for the database to return results. I'd much rather allow the user to set a callback that gets called when the data has arrived; with a little bit of control flow you can easily ensure that the data is loaded - or build a higher level mechanism like we will be doing. APIs that appear not to incur the cost of IO but actually do are the leakiest of abstractions (Mikeal Rogers). I'd much rather opt for the simple callback, since that allows me to explicitly say that a piece of code should run only when the required data has arrived.

Building a nicer API for fetching associated records

Now, I don't want to do this either:

post.get('tags', function(tags) {
  post.get('comments').each(function(comment) {
    comment.get('author', function(author) {
      // ...
    });
  });
});

Instead, I think the right pattern (as advocated in my previous book) is to tell the system what I want and pass a single callback that will run when the data is loaded:

post.with(['tags', 'comments.author'], function(post) {
  // ...
});

Basically, you tell the API what you want as the input, and give it a callback to run when it has done your bidding.

Implementation. How can we build this? It is basically an API that takes a bunch of paths, looks up the metadata, makes data source calls to fetch by ID, stores the data on the model, and then calls the continuation callback.

The implementation section is still a work in progress, my apologies.

12. Views - Templating

What's in a template? I would classify templating systems not based on their input, but based on their output:

as simple functions

as functions and metadata

as objects with lifecycles

The simplest systems make string interpolation and array iteration more convenient. More complicated ones generate metadata that can be used as an input for other systems.

The simplest templating system

A template is the part of the view object that is responsible for generating HTML from input data. In other words, a template is a function which takes a single argument - base (the context) - and returns a string of HTML.

function itemTemplate(base) {
  return [
    '<li>',
      '<div class="todo', (base.done ? ' done' : ''), '">',
        base.text,
      '</div>',
    '</li>'
  ].join('');
}

Of course, writing templates with this syntax is generally not preferred. Instead, templating libraries are used in order to get the best of both worlds: the nicest possible template definition syntax, and the performance of using native JS operations. Templating syntax should have no performance impact - you should always precompile your templates into their optimal JS equivalents.

The optimal output for simple templates

In theory, unless a templating library does something extremely unusual, all of the templating libraries should have similar performance: after all, they only perform string interpolation on an input and ought to compile to similar compiled JS output. Sadly, in the real world very few templating languages actually compile to the optimal markup. Have a look at the results from this benchmark:

Resig Micro-templating: 3,813,204 (3813 templates per ms; 61,008 in 16ms)
Underscore.js template: 76,012 (76 templates per ms; 1216 in 16ms)
Handlebars.js: 45,953 (46 templates per ms; 736 in 16ms)
ejs: 14,927 (15 templates per ms; 240 in 16ms)

I'm not discussing the causes here, because even with the slowest templating engine, the rendering itself doesn't have a significant impact in terms of total time (since even the slowest engines can cope with hundreds of template renders per 16 ms).
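To make "precompile into the optimal JS equivalent" concrete, here is a toy compiler that turns a {{name}}-style template string into a plain string-concatenating function. It is an illustration of the idea, not the output of any particular library (real engines add escaping, expressions and so on):

```javascript
// Toy template precompiler: turns '{{name}}'-style tokens into a
// plain JS function that concatenates strings. Illustrative only.
function compile(template) {
  // JSON.stringify gives us a safely quoted string literal; each
  // token is then spliced out of the literal as a property access.
  var body = 'return ' + JSON.stringify(template)
    .replace(/\{\{\s*(\w+)\s*\}\}/g, function(match, name) {
      return '" + base.' + name + ' + "';
    }) + ';';
  return new Function('base', body);
}

var itemTemplate = compile('<li><div class="todo">{{text}}</div></li>');
console.log(itemTemplate({ text: 'Hello' }));
// '<li><div class="todo">Hello</div></li>'
```

The compiled result is just string concatenation, which is why well-compiled templates approach the speed of hand-written JS.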
In other words - despite large differences (up to two orders of magnitude) in microbenchmarks - generating HTML from a compiled template is unlikely to be a bottleneck no matter how slow it is, except on mobile browsers.

Outputting metadata / objects with lifecycles

As I noted in the overview chapter for the view layer, the key difference between view layer implementations is their update granularity: whether views are redrawn as a whole (view-granular) or can be rendered at element-granularity or string-granularity. View-granular systems can just use the simple output where a compiled template is represented as a function that takes a set of data and returns a string. Element-granular and string-granular view layers need more metadata, because they need to convert the bindings into code that keeps track of and updates the right parts of the view. Hence, element-granular and string-granular rendering requires a templating system that outputs objects / metadata in addition to strings. Notice that this doesn't generally affect what features are supported in the templating language: it just affects how granular the updates are and the syntax for defining things like event handlers.

Templating language features

Let's have a look at some common templating language features. Sadly, I don't have the time right now to write a templating system - as cool and fun as that would be, I'm pretty sure it would be a low payoff in terms of writing a book.

String interpolation allows us to insert values into HTML. Depending on the update granularity, the tokens can be updated either only by re-rendering the whole view, or a single element, or by updating the content of the element with string-granular updates.

<div>Hello {{name}}!</div>

Escaping HTML.
It is generally a bad practice not to escape the values inserted into HTML, since this might allow malicious users to inject Javascript into your application that would then run with the privileges of whomever is using the application. Most templating libraries default to escaping HTML. For example, mustache uses {{name}} for escaped HTML and {{{name}}} ("triple mustache") for unescaped strings.

Simple expressions. Expressions are code within a template. Many templating libraries support either a few fixed expressions / conditions, or allow for almost any JS code to be used as an expression.

<li><div class="todo {{done?}}">{{text}}</div></li>

I don't have a strong opinion about logic-in-views vs. logicless views + helpers. In the end, if you need logic in your views, you will need to write it somewhere. Intricate logic in views is a bad idea, but so is having a gazillion helpers. Finding the right balance depends on the use case. Generally, templating engines support {{if expr}} and {{else}} for checking whether a value is set to a truthy value. If the templating library doesn't support logic in views, then it usually supports helpers, which are external functions that can be called from the template and contain the logic that would otherwise be in the template.

Displaying a list of items. There are basically two ways, and they correspond to how sets of items are represented in the model layer. The first option corresponds to observable arrays: you use an expression like each to iterate over the items in the observable array:

{{view App.TodoList}}
<ul>
  {{each todos}}
    {{view App.TodoView}}
    <li><div class="todo {{done?}}">{{text}}</div></li>
    {{/view}}
  {{/each}}
</ul>
{{/view}}

The second option corresponds with collections of models, where the view is bound to a collection and has additional logic for rendering the items.
This might look something like this:

{{collectionview App.TodoList tag=ul collection=Todos}}
<li><div class="todo {{done?}}">{{text}}</div></li>
{{/collectionview}}

Observable arrays lead to less sophisticated list rendering behavior. This is because each is not really aware of the context in which it is operating. Collection views are aware of the use case (since they are components written for that specific view) and can hence optimize better for the specific use case and markup. For example, imagine a chat message list of 1000 items that is only updated by appending new messages to it. An observable array representing a list of messages that contains a thousand items that are rendered using an each iterator will render every item into the DOM. A collection view might add restrictions about the number of items rendered (e.g. only showing the most recent, or implementing incremental rendering by only rendering the visible messages in the DOM). The observable array also needs to keep track of every message, since there is no way of telling it that the messages, once rendered, will never be updated. A collection view can have custom rendering logic that optimizes the renderer based on this knowledge.

If we choose the "each" route for collections, then optimizing rendering performance becomes harder, because the mechanism most frameworks provide is based on rendering every item and tracking every item. Collection views can be optimized more, at the cost of manually writing code.

Nested view definition

Templating libraries usually only support defining one template at a time, since they do not have an opinion about how templates are used in the view layer. However, if the output from your templating system is a set of views (objects / metadata) rather than a set of templates (functions that take data arguments), then you can add support for nested view definition.
For example, defining a UserInfo view that contains a UserContact and UserPermissions view, both of which are defined inside the App.UserInfo view:

{{view App.UserInfo}}
<ul>
  <li>User information</li>
  {{view App.UserContact}}
    ...
  {{/view}}
  {{view App.UserPermissions}}
    ...
  {{/view}}
</ul>
{{/view}}

This means that the output from compiling the above markup to object/metadata info should yield three views: UserInfo, UserContact and UserPermissions. Nested view definition is linked directly with the ability to instantiate and render a hierarchy of views from the resulting object; in the case above, the UserInfo view needs to know how to instantiate and render UserContact and UserPermissions in order to draw itself. In order to implement this, we need several things:

A template parser that outputs objects/metadata

A view layer that is capable of rendering child views from templates

Optionally, the ability to only render the updated views in the hierarchy The first two are obvious: given markup like the one in the example, we want to return objects for each view. Additionally, views that contain other views have to store a reference to those views so that they can instantiate them when they are drawing themselves. What about the ability to only render the updated views in the hierarchy? Well, imagine a scenario where you need to re-render a top-level view that contains other views. If you want to avoid re-rendering all of the HTML, then you have two choices: Write the render() function yourself, so that it calls the nested render() functions only when relevant

After the initial render, only perform direct updates (e.g. via element-granular or string-granular bindings)

The first option is simpler from a framework perspective, but requires that you handle calls to render() yourself. This is just coordination, so not much to discuss here. The second option relies on adding metadata about which pieces of data are used in the views, so that when a model data change occurs, the right views/bound elements can be updated. Let's have a look at how this might be done next.

Adding metadata to enable granular (re)-rendering

The basic idea here is to take one set of strings (the names/paths to the model data in the global scope), and translate them into subscriptions on model changes (e.g. callbacks that do the right thing). For example, given this templating input:

{{view}}
  Hello {{window.App.currentUser.name}}!
{{/view}}

... the output should be a view object, a template and an event subscription that updates the piece of the DOM represented by the {{window.App.currentUser.name}} token. References to items can be considered to be dependencies: when an observed value changes, then the element related to it should change. They might result in a subscription being established like this:

Framework
  .observe('window.App.currentUser.name')
  .on('change', function(model) {
    $('#$1').update(model);
  });

Where $('#$1') is an expression which selects the part to update. I am glossing over the implementation of the DOM selection for the piece of DOM. One way that might be done - in the case of an element-granular view layer - would be to create a templating function that wraps those updateable tokens with a span tag and assigns sequential ID numbers to them:

<div id="$0">Hello <span id="$1">Foo</span>!</div>

The id attributes would need to be generated on demand when the view is rendered, so that the code that subscribes to the change can then refer to the updateable part of the string by its ID.
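The span-wrapping step described above can be sketched as a small helper. This is a hypothetical illustration: it only generates the wrapped HTML plus an id-to-path table, leaving the actual DOM updates to the view layer:

```javascript
// Sketch of wrapping updateable tokens in <span> tags with sequential
// ids, so that change subscriptions can target them later.
// Hypothetical helper, not a full view layer.
var uid = 0;
function renderBound(template, data) {
  var bindings = {};
  var html = template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, function(m, path) {
    var id = '$' + (++uid);
    bindings[id] = path; // remember which model path this span shows
    return '<span id="' + id + '">' + data[path] + '</span>';
  });
  return { html: html, bindings: bindings };
}

var result = renderBound('<div>Hello {{name}}!</div>', { name: 'Foo' });
console.log(result.html);
// '<div>Hello <span id="$1">Foo</span>!</div>'
console.log(result.bindings); // { '$1': 'name' }
```

The bindings table is exactly the metadata a framework needs: on a change to name, look up which span ids display it and update only those elements.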
For string-granular updates, the same would be achieved by using <script> tags, as discussed in the overview chapter for the view layer. To avoid having to type the fully qualified name of the model data that we want to bind to, views can add a default scope in the context of their bindings:

{{view scope="window.App.currentUser"}}
  Hello {{name}}!
{{/view}}

This addition makes the subscription strings less verbose. This is the gist of granular re-rendering. There are additional things to consider, such as registering and unregistering the listeners during the view life cycle (e.g. when the view is active, it should be subscribed; when it is removed, it should be unsubscribed). Additionally, in some cases there is an expression that needs to be evaluated when the observed value changes. These are left as an exercise to the reader, at least until I have more time to think about them.

13. Views - Behavior: binding DOM events to HTML and responding to events

In this chapter, I will discuss the things that need to happen in order to respond to user events:

attaching listeners to DOM nodes in the HTML in order to react to user events

handling cross-view communication

abstracting common behavior in views

Different kinds of UI interactions

Adding interactivity is about taking a DOM event and responding to it by doing something in the model layer and the view layer. Let's look at a few different kinds of interactions using Gmail as an example, and see how they might affect the state of the application (e.g. the model/view layers).

Model data change. Here, the user interaction results in a model property being set. For example, in Gmail, clicking a message to star it might result in message.starred being set to true. Assuming that the view layer receives change events from the model, any views showing that message can then update themselves.

Single view state change. Here, it is less clear which model is associated with the change. For example, in Gmail, clicking a collapsible section to show/hide it is naturally expressed as a property of the view instance.

Multiple view state change. In this case, we want a single action to influence multiple views. For example, in Gmail, changing the display density of the app will cause all views to adjust their display density, making them visually more compact. There are two ways this might be implemented: by sending a transient message to which all views react, or by having a setting in the global scope that all views poll/subscribe to.

Page state transition. What makes page state transitions different from the others is that they involve a wholesale change in the page. Views might be destroyed or hidden, and new views swapped in place of them. For example, in Gmail, clicking "Compose" to start writing a new message loads up the message editor.

Binding DOM events to the View

What the examples above try to show is that in order to respond to user actions, we need to do two things:

Listen to DOM events

Given the event, figure out what action makes sense

Listening to DOM events is all about the lifecycle of our view. We need to make sure that we attach the DOM listeners when the element containing the view is inserted into the DOM, and that they are removed when the element is removed. In essence, this requires that we delay event registration and make sure each handler is attached (but only once), even if the view is updated and some elements within the view are discarded (along with their event handlers).

Figuring out what action makes sense is part app programming, part framework capabilities. Whether we are using model-backed views or markup-driven views, we still want to make the most common operations simple to do by providing access to the related information. The rest is app-specific.

Options for specifying the event-to-handler relations

Since the DOM only has an element-based API for attaching events, there are only two choices:

DOM-based event bindings.

Framework-generated event bindings.

DOM-based event bindings basically rely on DOM properties, like the element ID or element class, to locate the element and bind events to it. This is fairly similar to the old-fashioned $('#foo').on('click', ...) approach, except done in a standardized way as part of view instantiation. Here is an example:

View.template = '<div>\
  <input type="checkbox" class="select">\
  <img class="toggleStar">\
  <a class="hide">Hide</a>\
</div>';

View.events = {
  'click .select': function() {
    Emitter.emit('intent:message:select', this.model);
  },
  'click .toggleStar': function() { this.model.toggleStar(); },
  'click .hide': 'hide'
};

Framework-generated event bindings allow you to bind event handlers to HTML without explicitly providing an element ID or selector for the view. Here is an example:

{{#view Foo}}
<div>
  <input type="checkbox" {{onclick="Emitter.emit('intent:message:select', this.model);"}}>
  <img {{onclick="this.model.to