L20n beta 4 introduces robust language negotiation, friendlier error reporting, and major performance improvements.

I'm happy to announce today the beta 4 release of L20n. For this release, we spent a lot of time thinking about our API and gathering feedback from early adopters. The theme of the release was forward-compatibility. You can install L20n beta 4 from GitHub (source, dist) or from the npm registry.

Language negotiation

Based on the previous beta feedback, this version introduces a few important API changes intended to make working with language negotiation easier. You can now register all available locales in a Context instance and then use the requestLocales method to freeze the context and start the language negotiation process.

- ctx.registerLocales now takes two arguments: the default locale for the context and a list of all available locales,
- ctx.requestLocales is a new method which triggers language negotiation between the available locales, including the default locale (as defined via ctx.registerLocales), and the locales passed as arguments,
- ctx.freeze has been removed; use ctx.requestLocales instead,
- ctx.supportedLocales is a new read-only property which holds the result of the language negotiation (i.e. the current fallback chain of locales).

With these changes in place, we effectively moved the whole language negotiation process into the asynchronous requestLocales method, which opens the way for experiments with language packs or dynamically fetched translations for languages not originally registered by the developer.
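To give an intuition for what the negotiation produces, here is an illustrative sketch of computing a fallback chain. Note that negotiate is a hypothetical helper written for this post, not L20n's actual negotiation algorithm; real matching (e.g. per RFC 4647 lookup) is more involved.

```javascript
// Hypothetical sketch of fallback-chain computation -- not L20n's
// actual algorithm, just the general idea.
function negotiate(defaultLocale, available, requested) {
  var chain = [];
  requested.forEach(function (req) {
    // try an exact match first, then a language-only match
    // (e.g. 'fr-CA' falls back to 'fr')
    var lang = req.split('-')[0];
    [req, lang].forEach(function (candidate) {
      if (available.indexOf(candidate) !== -1 &&
          chain.indexOf(candidate) === -1) {
        chain.push(candidate);
      }
    });
  });
  // the default locale always terminates the chain
  if (chain.indexOf(defaultLocale) === -1) {
    chain.push(defaultLocale);
  }
  return chain;
}

negotiate('en-US', ['de', 'en-US', 'fr', 'pl'], ['fr-CA', 'fr']);
// -> ['fr', 'en-US']
```

This is the shape of the result you'd expect to see in ctx.supportedLocales after requestLocales resolves.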

The simplest scenario hasn't changed; here's all the code you need to create a working context:

var ctx = L20n.getContext();
ctx.addResource('<hello "Hello, world!">');
ctx.requestLocales();

But you can now also register all available locales and trigger language negotiation by passing the user-preferred locales to requestLocales, like so:

var ctx = L20n.getContext();
// register the default locale and all available locales
ctx.registerLocales('en-US', ['de', 'en-US', 'fr', 'pl']);
ctx.linkResource(function (locale) {
  return './path/to/' + locale + '/translations.l20n';
});
// ask for the user's preferred locales, e.g. navigator.language
ctx.requestLocales('fr-CA', 'fr');
// L20n will fetch './path/to/fr/translations.l20n' as the result of the
// language negotiation

Other noteworthy changes

In order to prepare for reacting to changes in context data (much like we do right now for globals like @screen.width.px), we removed the data property from Context instances and introduced the updateData method.
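The general idea behind an update method like this, merging a partial update into the existing context data rather than replacing it wholesale, can be sketched as follows. mergeData is a hypothetical helper written for illustration; the actual semantics of updateData are defined by L20n itself.

```javascript
// Hypothetical sketch: merge a partial update into existing context
// data, roughly what a call like ctx.updateData({unread: 7}) implies.
function mergeData(target, update) {
  Object.keys(update).forEach(function (key) {
    var value = update[key];
    if (value && typeof value === 'object' && !Array.isArray(value) &&
        target[key] && typeof target[key] === 'object') {
      mergeData(target[key], value);   // merge nested objects in place
    } else {
      target[key] = value;             // overwrite primitives and arrays
    }
  });
  return target;
}

var data = { user: { name: 'Jan', gender: 'male' }, unread: 3 };
mergeData(data, { unread: 7 });
// data.unread is now 7; data.user is untouched
```

Keeping updates incremental like this is what makes it possible for the library to later react to changes, re-translating only the entities that depend on the keys that changed.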

Context instances emit error and warning events to help with the debugging of L20n resources (bug 802850). By introducing two types of events we allow developers to filter them efficiently in the JavaScript console. Find out more about the specific errors emitted in the API documentation.

Under the hood, we went back from using promises to regular callbacks (bug 869016). We started using promises early on, when our async logic was much more complex; since then we have simplified how the context works, and switching to callbacks didn't force us into the infamous pyramid-of-doom-style nesting. In fact, because the context can work both asynchronously and synchronously, promises required additional code to work around the always-async then method (as per the spec).

Removing promises also made debugging easier: the stack is cleaner (although, with features like black boxing, it could be made cleaner still in the debugger) and uncaught errors are reported in the console without jumping through hoops. Last but not least, creating and initializing contexts is much faster now (up to 35% faster!), which translates into significant performance gains, especially on mobile:

- desktop: 3-4 ms faster,
- Keon (512 MB RAM): 5-10 ms faster,
- Unagi (256 MB RAM): 15-20 ms faster.
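The sync-vs-async mismatch is easy to demonstrate: a callback can fire synchronously when the value is already available, while a promise's then callbacks always run on a later tick, as the spec requires. getEntitySync below is a hypothetical getter written for this post, not the L20n API.

```javascript
// A callback-based getter can call back synchronously when the value
// is already cached -- impossible with promises, whose .then callbacks
// always run asynchronously per the spec.
function getEntitySync(cache, id, callback) {
  callback(cache[id]);
}

var cache = { hello: 'Hello, world!' };
var result;
getEntitySync(cache, 'hello', function (value) {
  result = value;
});
// result === 'Hello, world!' right here, on the same tick;
// a promise-based API would still have result === undefined
```

Working around this with promises meant maintaining a separate synchronous code path, which is exactly the extra code the switch to callbacks eliminated.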



Early adopters make L20n great

It is thanks to the questions and feedback from many people that we are able to keep improving L20n and bring it closer to 1.0. There have been a number of great discussions on the tools-l10n mailing list (go read this one if you're a localizer), as well as a few questions on StackOverflow that indicate interest in L20n and help us hone the message of our documentation.

We've also seen community projects show up which extend L20n. In particular, I'd like to highlight these two projects:

Thank you!

What's next?

We're focusing now on the documentation, the first-time experience for developers (bug 897034 and the demo repository) and tutorials for localizers. Michał has a few ideas on how to improve our current build and testing infrastructure (bug 907840). Check out the list of bugs targeting 1.0 on Bugzilla for a full picture.