Then And Now

At Java’s inception it was considered slow. It borrowed syntax from C and C++ and was thus often compared to them. Both of those languages can be used at a remarkably low level, allowing savvy developers to squeeze the last bit of performance out of their hardware. Java, with its many abstractions, chief among them “(almost) everything is an Object”, compilation to bytecode instead of machine code, and automatic garbage collection, had a hard time competing with that.

But Moore’s law, the benefits of deferred decisions, and excellent engineering worked in favor of Java and performance got better with every new release. Nowadays, and particularly in comparison with languages like Python and Ruby, Java is considered to be a powerhouse. Java’s garbage collection and just-in-time compiler even got powerful enough to close the gap to C/C++, at least enough to require any further comparison of the three to be a detailed discussion about coding techniques, underlying hardware architecture, problem space, and other context.

Productivity

But performance must not be measured only in CPU cycles and memory consumption. Providing services by leveraging a programming language requires many more resources, not least among them developer time. Java’s safety (especially regarding types and memory) and cohesiveness (“just do something OO”) allowed more developers to produce safer code faster. It simply takes much less time to become productive in Java than in C++.

At the same time Java carved out a niche in which hardware resources are readily available. Considering the entire industry, few applications have make-or-break performance requirements and many can be sped up sufficiently by installing a new server or, more recently, requesting another EC2 instance. You need either tight requirements or a lot of users to justify spending a decent percentage of your engineering budget on performance.

Talking about Java being more productive… it is interesting to note that history rhymes here. In its first years Java was considered “slow but cool”, and while it could not rival the incumbent’s (read: C++) performance, it scored points for being less complex. But not only did Java grow more complex over time, as all languages do; new languages with very different and often simpler approaches also sprang up around it. Now these are the “slow but cool” kids, while Java is the performant incumbent everybody wants to replace.

Diminishing Returns

But will performance keep improving as it has until now? While it may seem that progress happened by itself, this is far from the truth. CPU vendors, OS developers, and the JDK team have all been hard at work and have spent billions worth of engineering time to get us to where we are. But further progress is increasingly hard to achieve. Moore’s Law ends (or already ended?) sometime this decade, and making Java faster is no easy feat either.

G1, for example, is a beast with highly complex internals and a lot of options to tweak. This is in part because it does not primarily target throughput, the most obvious and seemingly easiest performance metric, but predictable pause times. Still, the fact that achieving that while staying competitive with existing collectors required this level of complexity underlines my point.
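
To give a feel for those options, here is a small selection of G1’s tuning flags (all real HotSpot flags; MyApp is a placeholder for your own main class):

```shell
# G1 optimizes for a pause-time goal rather than raw throughput.
# -XX:MaxGCPauseMillis sets the soft pause-time goal G1 aims for;
# -XX:InitiatingHeapOccupancyPercent controls when concurrent marking starts.
java -XX:+UseG1GC \
     -XX:MaxGCPauseMillis=200 \
     -XX:InitiatingHeapOccupancyPercent=45 \
     MyApp
```

And these are just the most commonly cited ones; the full list of G1 knobs is considerably longer.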

Another example is string compaction, coming in Java 9. The idea is simple: in a typical application most strings only contain ISO-8859-1 characters, which can be represented by a single byte each, so why use two bytes per character, as String (or rather char) does? Using a different internal representation promises to cut the memory required for strings (on average between 20% and 30% of an application’s live data) in half! The implementation was not that simple, though, and it also makes future improvements less likely and less fruitful.
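
The core idea can be sketched in a few lines. This is emphatically not the JDK’s actual implementation, just a simplified illustration of the trick: store ISO-8859-1 text in one byte per character, fall back to two bytes otherwise, and remember which encoding was chosen in a “coder” flag.

```java
// Simplified sketch of the compact-strings idea (not the real JDK code):
// one byte per character when the text fits into ISO-8859-1, two otherwise.
public class CompactString {
    static final byte LATIN1 = 0, UTF16 = 1;
    final byte[] value;
    final byte coder;

    CompactString(String s) {
        boolean latin1 = s.chars().allMatch(c -> c < 256);
        if (latin1) {
            coder = LATIN1;
            value = new byte[s.length()];
            for (int i = 0; i < s.length(); i++) value[i] = (byte) s.charAt(i);
        } else {
            coder = UTF16;
            value = new byte[s.length() * 2];
            for (int i = 0; i < s.length(); i++) {
                char c = s.charAt(i);
                value[2 * i] = (byte) (c >> 8);
                value[2 * i + 1] = (byte) c;
            }
        }
    }

    char charAt(int i) {
        return coder == LATIN1
            ? (char) (value[i] & 0xFF)
            : (char) (((value[2 * i] & 0xFF) << 8) | (value[2 * i + 1] & 0xFF));
    }

    public static void main(String[] args) {
        CompactString ascii = new CompactString("hello");
        CompactString mixed = new CompactString("h\u20ACllo"); // contains '€'
        System.out.println(ascii.value.length); // 5: one byte per character
        System.out.println(mixed.value.length); // 10: two bytes per character
    }
}
```

The hard part, of course, is doing this inside String itself without breaking any of the thousands of places in the JDK that touch its internals.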

Exhibit C is Project Valhalla. Overly simplified it promises user-defined primitives and generics over primitives with all the performance gains that they have. But just listen to Brian Goetz explaining some of the ins and outs and you will see how complicated it is to introduce this feature.
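
To see the problem Valhalla attacks, look at what generics cost today (Valhalla’s own syntax is not final, so this only shows the status quo):

```java
import java.util.ArrayList;
import java.util.List;

// Today, generics only work over reference types, so primitives get boxed:
// every element of this list is a heap-allocated Integer behind a pointer,
// not a bare 4-byte int. That indirection is what Valhalla aims to remove.
public class BoxingCost {
    public static void main(String[] args) {
        List<Integer> boxed = new ArrayList<>();
        for (int i = 0; i < 3; i++) boxed.add(i * 1000); // autoboxing on every add

        // Boxed values outside the small-integer cache are distinct objects:
        Integer a = 1000, b = 1000;
        System.out.println(a == b);      // false: two separate objects
        System.out.println(a.equals(b)); // true: same numeric value

        // A primitive array stores the values themselves, contiguously:
        int[] flat = {0, 1000, 2000};
        System.out.println(flat[1] == boxed.get(1)); // true: unboxed comparison
    }
}
```

Flattening those boxed values into the backing array, while keeping generics and the rest of the language consistent, is exactly the complicated part.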

But maybe it’s just Java that is steering into a local optimum? Possible, yes. It would be interesting to know exactly how newer systems-level languages like D, Rust, or Go fare compared to Java (if you know, get in touch!), but I’d be surprised if they beat it by a landslide. So maybe this place where C/C++, Java, and others have congregated is close to a global optimum after all.

So What?

So what if we’re reaching a phase where performance improvements come more slowly? It’s not like things aren’t going pretty fast the way they are. That’s true on the desktop and on the web but, and this is where Claire’s talk opened my eyes, what about the embedded, mobile, and AR/VR experience?

Performance-wise, all of these areas suffer from much smaller and thus less powerful hardware and a strong focus on energy efficiency, while at the same time demands are steep: apparently, spectacular VR needs 120 fps, AR even more. So if these things are supposed to be our future, we had better make sure to write code that works on them and does not degenerate into a slide show.
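
To make that constraint concrete, a quick back-of-the-envelope calculation of the per-frame time budget at 120 fps:

```java
// At 120 fps, each frame must be produced in well under 10 milliseconds --
// a single garbage-collection pause of that length costs more than a frame.
public class FrameBudget {
    public static void main(String[] args) {
        int fps = 120;
        double budgetMillis = 1000.0 / fps;
        System.out.printf("Per-frame budget at %d fps: %.2f ms%n", fps, budgetMillis);
        // → Per-frame budget at 120 fps: 8.33 ms
    }
}
```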

How? It was recommended that I look into data-oriented design, particularly this talk. Will do!
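
From what I’ve gathered so far, one of its core ideas translates to Java roughly like this (a minimal sketch, with a hypothetical Particle example of my own):

```java
// Data-oriented design favors laying data out for the CPU cache rather than
// for the object model. Instead of an array of Particle objects scattered
// across the heap (each behind a pointer), keep one primitive array per field
// ("structure of arrays"); iterating a single field then walks contiguous memory.
public class Particles {
    // Object-oriented layout: an array of references to heap objects.
    static class Particle { double x, y; }

    // Data-oriented layout: one flat primitive array per field.
    final double[] xs, ys;

    Particles(int n) { xs = new double[n]; ys = new double[n]; }

    double sumX() {
        double sum = 0;
        for (double x : xs) sum += x; // sequential, cache-friendly access
        return sum;
    }

    public static void main(String[] args) {
        Particles p = new Particles(3);
        p.xs[0] = 1; p.xs[1] = 2; p.xs[2] = 3;
        System.out.println(p.sumX()); // 6.0
    }
}
```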