It's also very flexible: a standard, modular interface lets you swap in virtually any model, dataset or set of hyperparameters without having to replace everything else just to change one component. And since it's open source, it's easy to imagine the community sharing its own models and datasets to help you get started.
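As a rough illustration of that modularity, Tensor2Tensor's command-line trainer picks each component by flag, so swapping the model or dataset is a one-flag change rather than a rewrite. The flag values below (a WMT English-German translation problem with the Transformer model) are one plausible combination, and the paths are placeholders:

```shell
# Hedged sketch of a t2t-trainer invocation; paths are placeholders.
t2t-trainer \
  --generate_data \
  --data_dir=~/t2t_data \
  --output_dir=~/t2t_train \
  --problem=translate_ende_wmt32k \
  --model=transformer \
  --hparams_set=transformer_base_single_gpu
```

To try a different architecture or task, you would change only `--model` or `--problem` and leave the rest of the pipeline alone.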

It's doubtful you'll use Tensor2Tensor at home, of course, since you still need to be steeped in deep learning know-how to make it work. However, it could open the door for researchers who don't have the luxury of a many-GPU setup to train their deep learning systems in a reasonable amount of time. That should help them finish projects faster, or give them time to produce higher-quality results.