AMO Development Changes in 2010

The AMO team met in Mountain View last week to develop a 2010 plan. We've been wanting to change some key areas of our development flow for a while, but we needed to make sure time was budgeted for them in the overall AMO and Mozilla goals. As usual, the timeline will be tight, but the AMO developers do amazing work, and as these changes are implemented, development should only get faster. I'll give a brief summary of the changes we're planning; a lot of discussion went into this and I won't be able to cover everything here. If you've been on the AMO calls or reading the notes, you probably already know most of this.

Migrating from CakePHP to Django

This is a big undertaking and we've been discussing it for quite a while. We're currently the highest-trafficked site on the internet using CakePHP, and along the way we've run into a lot of frustrating issues. CakePHP has served AMO well for several years, so it's not my intention to badmouth it here, but I do want to give a fair summary of why we're moving on. Please also note that AMO is still running on CakePHP 1.1, which is, I believe, about a year out of date. Three substantial issues:

Useful Database Abstraction Layer: CakePHP has a concept of database abstraction, but we didn't find it powerful enough. When it did work, it returned enormous nested arrays of data, causing massive CPU and memory usage (out-of-memory errors plague us on AMO). When it didn't work, we'd end up writing queries directly, which rather defeats the purpose. We couldn't use prepared statements, so we had to escape variables ourselves. There was no effective built-in caching, and since the responses were just huge arrays there was no effective way to invalidate the cache we were using (see: Caching is easy; Expiration is hard). The DB layer should return objects that are easy to cache and easy to invalidate. The built-in Django database classes (combined with memcache) should work fine for us here.

Effective unit tests: I've beaten the drum about our unit tests before, but the simple fact is that it's really difficult to do them right with the tools we're using. Our test data is already very limited, and if we try to run all our tests right now they run out of memory (and take forever). The CakePHP method of mocking controllers and models was inadequate for what we needed and difficult to work with. We want our unit tests to run quickly, from the command line, and independently of each other, so intermittent failures don't waste our time. We'll be using Django's built-in testing framework.

Better debugging: Debugging in CakePHP amounts to defining a DEBUG level and seeing what gets printed on the screen (usually the giant arrays). We supplemented this with Xdebug where we needed it, but that's still not enough. A framework should have excellent logging and on-the-fly debugging that shows a full traceback (often something fails deep within CakePHP and we get the file and line where PHP gave up, but not the line in our code that started the problem), the values of variables, the page headers, server settings, the SQL that was run, which views and elements are in use, and so on. We're planning on using a combination of pdb, IPython, and django-debug-toolbar to make all of this easily accessible while developing.
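To make the caching point concrete, here's a minimal sketch of the cache-then-invalidate pattern we want from the new DB layer: small per-object cache entries with predictable keys, so expiring one is a single delete. A plain dict stands in for the memcache client here, and the Addon class and key format are illustrative assumptions, not AMO's real schema.

```python
# Sketch: cache small objects under predictable keys and invalidate
# exactly those keys on write. A dict stands in for memcache; the
# Addon class and key format are made up for illustration.

cache = {}  # stand-in for a memcache client


def cache_key(addon_id):
    return 'addon:%d' % addon_id


class Addon(object):
    def __init__(self, id, name):
        self.id = id
        self.name = name


def get_addon(addon_id, db):
    """Return a cached Addon, falling back to the 'database'."""
    key = cache_key(addon_id)
    if key in cache:
        return cache[key]          # cache hit: no query at all
    addon = db[addon_id]           # one small query, one small object
    cache[key] = addon
    return addon


def save_addon(addon, db):
    """Write through and invalidate only this object's key."""
    db[addon.id] = addon
    cache.pop(cache_key(addon.id), None)


db = {1: Addon(1, 'Adblock')}
get_addon(1, db)                          # miss: loads from db, fills cache
get_addon(1, db)                          # hit: served from cache
save_addon(Addon(1, 'Adblock Plus'), db)  # expires just 'addon:1'
assert cache_key(1) not in cache
```

Because each entry is one small object rather than a giant nested array mixing many records, invalidation stays a targeted, cheap operation instead of guesswork.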

Those are the major issues we're facing right now. If you want to dig into the comparison some more, check out our discussion wiki pages, but keep in mind that the majority of the discussion happened in person.

Moving away from SVN

We moved AMO into SVN in 2006 and it's treated us relatively well. Somewhere along the line we decided to tag our production versions at a revision of trunk instead of keeping a separate tag and merging changes into it. That's worked for us, but it means a hard cutoff on code changes: while we're in a code freeze, no one can check anything into trunk. As we begin to branch for larger projects this will become more of a hassle, so I'm planning on going back to a system where a production tag is created and changes are merged into it as they're ready to go live.

Most of the development team has been using git-svn for several months and, aside from the commands being far more verbose, we haven't had many complaints. We've found Git to be a much more powerful development tool, and we expect to start using it directly sometime next year. For now, we expect to maintain the /locales/ directory in SVN, so this change doesn't affect localizers, but we'll keep people notified if that process changes.

Continuous Integration

I mentioned excellent testing as one of the reasons we're moving to Django. Along with that testing comes the opportunity for continuous integration. We plan on using Hudson as the framework for our continuous integration. With excellent test coverage and quick feedback from Hudson, this should drastically reduce our regressions and boost our confidence when we deploy. Speaking of which...
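The tests Hudson runs on each checkin should look something like the sketch below. Django's django.test.TestCase builds on Python's standard unittest module; plain unittest is used here so the example stands alone, and average_rating is a made-up stand-in, not AMO's real code.

```python
# Sketch of a fast, command-line unit test a CI job would run on every
# checkin. average_rating is a toy stand-in for a star-rating
# calculation; the real tests will use Django's test framework.

import unittest


def average_rating(ratings):
    """Toy stand-in for a star-rating calculation."""
    if not ratings:
        return 0.0
    return sum(ratings) / float(len(ratings))


class RatingTests(unittest.TestCase):
    # Each test is independent: no shared state, no fixtures on disk.
    def test_empty_list_is_zero(self):
        self.assertEqual(average_rating([]), 0.0)

    def test_simple_average(self):
        self.assertEqual(average_rating([4, 5, 3]), 4.0)


# Run the suite programmatically; a CI job would invoke the test
# runner from the command line instead and fail the build on any error.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(RatingTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because each test is small and isolated, a failure points straight at the offending change rather than at an intermittent interaction between tests.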

Faster Deployment

For most of 2009 we've pushed on three-week cycles: two weeks of development, one week of QA and l10n. Delays and regressions being what they are, I think we averaged a little better than a push a month. That's a fairly rapid cycle for a lot of development shops, but I feel like it's holding us back. We've heard a lot of success stories about shorter cycles, and I'd like to aim for deploying (optionally, of course) a few times per week. By shortening the development cycle we reduce the stress on:

the developers: Everyone likes to see what they've done go out sooner, and smaller patches mean fewer conflicts with others.

the QA team: Right now we dump two weeks of work on them and say we need it done right away. With smaller cycles they can verify small changes as they go and not be overwhelmed.

the infrastructure team: Smaller changes mean less to go wrong, and with a continuous integration server and some automation they can have minimal involvement in the whole process.

the localizers: Every time we release we dump a bunch of changes on these fantastic people and tell them we need them back in a week. Most of the time they plow forward and get them done on time. If they don't, though, they're stuck waiting for the next three-week cycle. If we push often, it's not a big deal.

the product managers: These folks come up with crazy ideas for us to implement and then stare at graphs and numbers to see if it worked. With shorter cycles they get faster feedback about what works and what doesn't.

the users: Faster release cycles mean bugs that are fixed in the repository are fixed on the live site sooner. 'Nuff said.

Process Data Offline

Much of AMO relies on cron jobs to get things done. All the statistics, add-on download numbers, add-on popularity rankings, all the star-rating calculations, any cleanup or maintenance tasks - these are all run via cron, and they are so intensive that the database has trouble keeping up. We're planning on using Gearman to farm all this work out to other machines in incremental pieces instead of single huge queries. Any heavy calculation that can be done offline will be moved to these external processors, which should help improve the speed of the site and make all our statistics more reliable (currently the cron jobs have a tendency to fail before they complete).
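The incremental idea can be sketched as follows. In production each batch would be submitted as a job to a Gearman worker; here the "queue" is just a function call, and the batch size and log rows are made up for illustration.

```python
# Sketch: aggregate add-on download counts in small batches instead of
# one huge GROUP BY query. In production each batch would be handed to
# a Gearman worker; here process_batch is called inline.

from collections import defaultdict

BATCH_SIZE = 3  # tiny for illustration; real batches would be far larger


def iter_batches(rows, size):
    """Yield fixed-size chunks of the download log."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]


def process_batch(batch, totals):
    """The unit of work a worker would receive: fold one small batch
    of (addon_id, downloads) rows into the running totals."""
    for addon_id, downloads in batch:
        totals[addon_id] += downloads


# Fake download-log rows: (addon_id, downloads)
log = [(1, 10), (2, 5), (1, 7), (3, 2), (2, 1), (1, 3)]

totals = defaultdict(int)
for batch in iter_batches(log, BATCH_SIZE):
    process_batch(batch, totals)  # in production: submit to Gearman

print(dict(totals))  # {1: 20, 2: 6, 3: 2}
```

If any one batch fails, only that small piece needs to be retried, instead of restarting a single enormous query from scratch, which is exactly the failure mode the current cron jobs suffer from.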

Improve the Documentation

Documentation is a noble goal for many developers, but it rarely gets enough attention. We evaluated our current documentation and found it woefully out of date. Because it lives on a rarely visited wiki, it only gets updated when someone tries to use it and discovers it's wrong. We're hoping to change that by moving the developer documentation into the code repository itself. We'll be able to integrate generated API docs, style the docs however we want, and check in documentation changes right alongside our code patches. When someone checks out a copy of AMO, they'll get all the documentation along with it. We'll use Sphinx to build the docs.
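To sketch what that looks like, a minimal Sphinx conf.py checked into the repository might read as follows. Every value here is an illustrative assumption, not AMO's actual configuration.

```python
# docs/conf.py - hypothetical Sphinx configuration kept in the repo.
# All values below are illustrative assumptions.

project = u'AMO Developer Docs'
master_doc = 'index'                 # docs/index.rst is the root page
source_suffix = '.rst'               # docs are written in reStructuredText
extensions = ['sphinx.ext.autodoc']  # pull API docs from docstrings
html_theme = 'default'
```

With this in the tree, anyone with a checkout can rebuild the docs locally, and documentation fixes travel through review in the same patches as the code they describe.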

The outline above details several large, high-level changes, but we have plans for a lot of smaller improvements as well. This post got a lot longer than I was expecting, but I'm really excited about the direction AMO is headed in 2010. As these changes are implemented, the site will become more responsive and reliable, and we'll be able to adapt to the needs of Mozilla's users even faster. As always, feedback and discussion are welcome, and stay tuned for further back-end improvements.