I seem to recall that some years ago there was a sentiment on the mailing lists that Scala had become too stable: that as Scala came to be used more widely and in the enterprise, there was too much resistance to evolving the language. Thankfully, today that is far from the case, for a number of reasons.

One reason is that a few years ago, Dotty became a realistic project, and it has been intensely active since. The motivation for Dotty was primarily to reboot Scala on solid theoretical foundations. Since it was decided that the Dotty codebase would itself be used for Scala 3, there is another important sense of "reboot": the new codebase is better than the scalac codebase in many ways, so whatever ramifications switching to a rewritten compiler has are worthwhile for that reason as well.

The Dotty codebase is also the place where a lot of innovation on the language is happening. However, I would argue that technically this is mostly orthogonal to the reboot, since language evolution is a need in and of itself. In principle the same language changes could be written against the Scala 2 codebase (and in fact a number are being backported), but there are some practical reasons why they're being written against the Dotty codebase: (1) it's expected to be their long-term home, so in a sense that is their primary target; (2) since there is no pressure to cut production-ready releases from the Dotty codebase, it's a safer place for experiments; and (3) it's an easier codebase to work with.

So I think in discussions we need to disentangle somewhat the DOT-and-codebase reboot from the broader matter of language evolution.

Now, there are some language changes that are inherently coupled to the new DOT foundations. But many of them are not. So the question is: should we aim to work through and implement as many changes as possible in one go, or should we take more of a slow-and-steady outlook?

For instance, what is our long-term strategy? Three years after Dotty is released, will we still be making changes to the language, or will we at some point just declare it "done"? I assume we will think of things by then that we haven't thought of today, and besides, the landscape of other languages will have changed in one respect or another. So I'm sure there will be changes we want to make then, one way or another.

Speaking of the language landscape: a few years ago there was perhaps more of a sense that most languages are what they are, especially Java. But today it looks to me like every major language is evolving and innovating. ES6 was a game changer for the JavaScript community, and language updates are now a yearly thing. Even Java is evolving rapidly now. TypeScript, Rust, C#, Python, C++ all seem today to have a mindset of continuous evolution.

So I think before we discuss what language changes will be in Scala 3.0, we should have a clear strategy in place for how innovation and evolution will happen after 3.0.

The current, almost entirely (in my field of vision) implicit attitude seems to be that Scala 3.0 is our “one big chance.”

I want to make a counterargument to this approach. Here are some issues I have with it:

Right now (or a year from now, or whenever), what should I tell someone who wants to learn Scala? Why would someone learn a language whose fundamentals are going to change radically soon, making upgrading hard? Yes, there will be rewrite tools, but we will only know how successful they are (stated goals notwithstanding) once they are actually tried on the many gigantic private codebases out there, which can only happen after 3.0 is finalized. To be clear, the rewrite tools are necessary; but how far they will get us is, to me, a hope that has not yet been proven. This applies especially to changes that are not simple mechanical transformations, and I can imagine scenarios where a rewrite tool cannot really be used. People have already said it's unrealistic to make such big changes to some codebases. I fear that this mindset puts pressure on the people making decisions about features, which may influence things negatively.

The situation reminds me a bit of the old dichotomy between the Waterfall approach and the Agile/iterative approach. Not that Dotty development is Waterfall, but the distinction can be applied at different levels. The main idea of the iterative approach is to get things into real-world use as quickly as possible, ideally into production releases used by end users, even if that means smaller units of delivery.

So here is an alternative proposal.

I said before that the reboot is inherently tied to some language changes, namely those that are required by DOT. But really, we can completely decouple the codebase switch from language evolution, since Dotty has a Scala 2 mode. So:

- Release an MVP (minimum viable product) from the Dotty codebase as soon as possible, with Scala 2 mode only (at least as exposed to most users). The most radical version of this would be to implement 2.14 this way, but that isn't intrinsic to the point.
- Make the next stable release from the Dotty codebase contain only the minimum changes required to implement DOT.
- Make it the highest priority to get TASTY production-ready, so that we can put an end to the "separate universe per major Scala version" situation. Right now, that is a big restriction on innovation, since it boxes in the compiler release cycle a lot.

This would take off much of the pressure involved in designing features, since you no longer have to worry about missing your chance to get something in. Freed of that extraneous pressure, the language can evolve at its own pace, from now on, indefinitely.

Optionally:

- Adopt a timed-release model (similar to Ubuntu, Java, GitLab, and many others): stable releases are cut at a fixed interval, but what goes into each one varies, depending purely on what got done during that cycle. For instance, if the cycle were monthly (like GitLab's), it wouldn't matter whether that month produced only a tiny bug fix or seven major language changes that stabilized; you release it either way.
- Adopt semantic versioning (MAJOR.MINOR.PATCH): bump the major number whenever appropriate, and don't fear high numbers. A few years ago high version numbers may have seemed weird, but now they're completely normal; for instance, Google Chrome's version is in the 70s and Firefox's is in the 60s, C# is up to version 8, and Java is up to 12. I for one would not object to having Scala 4.0, 5.0, etc. in the future.
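As an aside, the ordering that semantic versioning implies is simple enough to sketch in a few lines of Scala. This is purely illustrative (the names here are mine, not from any proposal): versions compare component-wise, major first, so a major bump always outranks any minor or patch difference.

```scala
// Minimal illustrative sketch of MAJOR.MINOR.PATCH ordering.
case class SemVer(major: Int, minor: Int, patch: Int) extends Ordered[SemVer] {
  // Compare lexicographically: major first, then minor, then patch.
  def compare(that: SemVer): Int =
    Ordering[(Int, Int, Int)].compare(
      (major, minor, patch),
      (that.major, that.minor, that.patch))
}

object SemVer {
  // Parse a "x.y.z" string; no pre-release/build metadata handling here.
  def parse(s: String): SemVer = s.split('.') match {
    case Array(ma, mi, pa) => SemVer(ma.toInt, mi.toInt, pa.toInt)
    case _ => throw new IllegalArgumentException(s"not MAJOR.MINOR.PATCH: $s")
  }
}

object SemVerDemo {
  def main(args: Array[String]): Unit = {
    assert(SemVer.parse("3.0.0") > SemVer.parse("2.13.1")) // major outranks minor/patch
    assert(SemVer.parse("4.0.0") > SemVer.parse("3.9.9"))  // high numbers are just numbers
    println("ok")
  }
}
```

The point of the sketch is only that under this scheme a major-version bump is a routine, well-defined event, not a once-a-decade occasion.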

The relevance of these last two suggestions is that they dovetail with the idea that in a post-TASTY world, innovation can happen at an unconstrained pace.

One argument made by Martin Odersky for the "big bang" approach is that "You cannot rewrite books continually." If I understand correctly, he applied that mainly to changes where something fundamental is being replaced, as opposed to changes that involve "putting new layers of additional complexity on what is already there." Such changes don't require rewriting the book, I suppose, since newcomers can read the book without getting wrong information and learn whatever was added later separately.

Anyway, I wonder how other languages compare in that regard. In any case, I'm not sure of the answer, but it seems to me that this needs a different kind of solution somehow.