We’re starting to undertake a few new initiatives here at Mozilla that attempt to find ways to benefit web developers – and, by extension, JavaScript libraries. I think this is an excellent effort, so I’m doing everything I can to support it and push it forward. With that in mind, here’s an introduction to one of the first initiatives we’re undertaking.

JavaScript libraries can be fickle beasts. Generally speaking, they attempt to pave over browser bugs and interfaces, providing a consistent base-layer that users can build upon. This is a challenging task, as bugs can frequently be nonsensical – and even result in browser crashes.

There are a number of techniques that can be used to know about, and work around, bugs or missing features – but generally speaking, object detection is the safest way to determine if a specific feature is available and usable. Unfortunately, in real-world JavaScript development, object detection can only get you so far. For example, there’s no object that you can ‘detect’ to determine if browsers return inaccurate attribute values from getAttribute, if they execute inline script tags on DOM injection, or if they fail to return correct results from a getElementsByTagName query.
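In cases like these, the usual fallback is feature testing: actually probing the behavior rather than merely checking that an object exists. Here’s a minimal sketch of the idea – the “elements” are stand-in objects (since this runs outside a browser, where you’d use document.createElement and a real attribute quirk instead), and the function name is mine, not from any particular library:

```javascript
// Feature testing: probe whether getAttribute actually behaves correctly,
// rather than just checking that the method exists.
function getAttributeWorks(elem) {
  try {
    // Check both presence and correct behavior of the method.
    return typeof elem.getAttribute === "function" &&
           elem.getAttribute("href") === "#test";
  } catch (e) {
    return false; // an implementation that throws counts as unusable
  }
}

// Stand-in for an element whose getAttribute returns the raw attribute value
var goodElem = {
  getAttribute: function (name) { return name === "href" ? "#test" : null; }
};

// Stand-in for a buggy implementation that returns a resolved URL instead
var buggyElem = {
  getAttribute: function () { return "http://example.com/#test"; }
};

console.log(getAttributeWorks(goodElem));  // true
console.log(getAttributeWorks(buggyElem)); // false
```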

Additionally, object detection has the ability to completely fail. Safari currently has a super-nasty bug related to object detection. For example, say you have a variable and you need to determine whether it contains a single DOM Element or a DOM NodeList. One would think it would be as simple as:

if ( elem.nodeName ) {
    // it's an element
} else {
    // it's a nodelist
}

However, in the current version of Safari, this causes the browser to completely crash, for reasons unknown. (I’m fairly certain that this has already been fixed in the nightlies, though.)
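One way around crashes like this is to inspect a property with typeof instead of referencing it bare – typeof never evaluates the property, so nothing gets invoked. A minimal sketch, using stand-in objects in place of real DOM nodes:

```javascript
// Distinguish an element from a NodeList without touching properties in
// a way that crashes buggy engines: typeof is safe to apply, and
// nodeType is a number (1 for elements) only on actual nodes.
function isElement(obj) {
  return typeof obj.nodeType === "number" && obj.nodeType === 1;
}

// Stand-in objects mimicking a DOM element and a NodeList
var fakeElement = { nodeType: 1, nodeName: "DIV" };
var fakeNodeList = { length: 2, 0: fakeElement, 1: fakeElement };

console.log(isElement(fakeElement));  // true
console.log(isElement(fakeNodeList)); // false
```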

Side story: I was in the group of JavaScript developers who provided feature/bug-fix recommendations to Microsoft for their next version of IE. A huge issue that we were faced with was that we were knowingly asking Microsoft to both break their browser and alienate their existing userbase, in the name of standards. For example, if Microsoft adds proper DOM Events (addEventListener, etc.) – should they then remove their IE-specific event model (attachEvent, etc.)? Assuming that they do decide to remove the deprecated interfaces, this will have serious effects on JavaScript developers and libraries (although, in the case of the DOM Event model, object detection is a viable solution and is, therefore, completely future-compatible.)
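The future-compatible event-model detection mentioned above can be sketched as follows – the elements here are stand-in objects that just record which model was used, since this runs outside a browser:

```javascript
// Object detection across event models: prefer the W3C DOM standard,
// fall back to IE's legacy model. typeof keeps the checks safe.
function addEvent(elem, type, handler) {
  if (typeof elem.addEventListener !== "undefined") {
    elem.addEventListener(type, handler, false); // W3C DOM standard
  } else if (typeof elem.attachEvent !== "undefined") {
    elem.attachEvent("on" + type, handler); // legacy IE model
  }
}

// Stand-in "elements" that record how they were bound
var modernCalls = [];
var modernElem = {
  addEventListener: function (type) { modernCalls.push(type); }
};
var legacyCalls = [];
var legacyElem = {
  attachEvent: function (type) { legacyCalls.push(type); }
};

addEvent(modernElem, "click", function () {});
addEvent(legacyElem, "click", function () {});

console.log(modernCalls); // ["click"]
console.log(legacyCalls); // ["onclick"]
```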

Additionally, in Internet Explorer, object detection checks can sometimes cause actual function execution to occur. For example:

if ( elem.getAttribute ) {
    // will die in Internet Explorer
}

That line causes problems because Internet Explorer attempts to execute the getAttribute function with no arguments (which is invalid). (The obvious solution is to use “typeof elem.getAttribute == ‘undefined’” instead.)
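A minimal sketch of that typeof workaround – typeof inspects a property without evaluating it, so it is safe even for host objects (like IE’s getAttribute) that break when referenced bare. A plain object stands in for a DOM element here:

```javascript
// Stand-in for a DOM element; in IE, referencing elem.getAttribute
// bare in an if() could invoke it, but typeof never does.
var elem = {
  getAttribute: function (name) { return null; }
};

var hasGetAttribute = typeof elem.getAttribute != "undefined";
console.log(hasGetAttribute); // true
```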

The point of these examples isn’t to rag on Safari or Internet Explorer in particular, but to point out that rendering-engine checks can end up becoming very convoluted – and thus, more vulnerable to future changes within a browser. This is a very important point. A browser deciding to fix bugs can cause more problems for a JavaScript developer than simply adding new features. Every bugfix has huge ramifications, because developers expect interfaces to work and behave in very specific ways.

The recent Internet Explorer 7 release can be seen as a case study in this. They fixed numerous CSS-rendering errors in their engine, which caused an untold number of web sites to render incorrectly. By fixing bugs, shockwaves were sent throughout the entire web development industry.

All of this is just a long-winded way of saying: Browsers will introduce bugs. Either these bugs are going to be legitimate mistakes or unavoidable bug fixes – either way, they’ll be regressions that JavaScript developers will have to deal with.

At Mozilla, we’ve looked at this issue and Mike Shaver came up with an excellent solution: Simply include the test suites of popular JavaScript libraries inside the Mozilla code base.

Doing this will provide at least two huge benefits:

1. Library developers will know about unavoidable regressions and can adjust their code before the release even occurs.
2. Mozilla developers will have a massively expanded test suite that helps catch any unintended bugs.

In addition to ensuring that fewer general bugs are introduced into the system, library authors and users can rest easy knowing that their code already works in the next version of Firefox, without having to do any extra work.

What progress has already been made? MochiKit’s test suite (Mochitest) is already a part of Mozilla’s official test suite (it’s used to test UI-specific features). I’ve already touched base with Alex Russell, of Dojo, and I’ll be working to integrate their test suite once Dojo 0.9 hits. Perhaps unsurprisingly, I’ll be working to integrate jQuery’s test suite into the core, too. Additionally, I’m starting to contact other popular library developers, attempting to get at least a static copy of their test suites in place.

Note: This initiative isn’t limited to straight JavaScript libraries. If you have a large, testable, JavaScript-heavy open source project, let me know and I’ll be sure to start moving things forward. For example, some form of testing for Zimbra will probably come into play.

In all, I think this is a fantastic step forward – and a step that really shows the immediate benefits of having an open development process centered around browser implementations. I hope to see other browser manufacturers catch on too, as universally available pre-release library testing will benefit more users than I can count.