Tuning all of those values by hand would be painful, and the work might need to be redone every time you pulled in a new dependency. That is something that should be automated, and we weren't going to settle for doing it manually. So we set out to learn how the pros handle it and stumbled onto the Java buildpack for Cloud Foundry. That team clearly knows how to deploy JVM-based applications, since they work with the major backers of the Spring Framework.

We don’t use Cloud Foundry, great ecosystem though it is, but we traced their approach down to the buildpack’s memory calculator 🎉 for Java. Even with the calculator it is hard to know which defaults to use; fortunately, Dave Syer has published sensible defaults for Spring Boot applications. We plugged those in and haven’t seen any major anomalies in our JVM memory usage since:

./java-buildpack-memory-calculator \
  -loadedClasses (400 * appJarInMB) \
  -poolType metaspace \
  -stackThreads (15 + appJarInMB * 6 / 10) \
  -totMemory 512M  # -poolType metaspace: hope you are on Java 8

You can sensibly use that to run your application and be in pretty good shape. I wrote a simple script to do this dynamically at each Docker container boot (to ensure settings always respect the environment, which is a good practice for anything containerized — see the 12Factor App):
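As a sketch of what such a boot script could look like: the jar path, environment variable names, and helper names below are my own assumptions, while the `400 * appJarInMB` and `15 + appJarInMB * 6 / 10` heuristics come from the command above.

```shell
#!/usr/bin/env bash
# Sketch of a container entrypoint that derives the calculator's inputs
# from the app jar's size at boot, so the JVM settings always reflect the
# environment the container actually starts in.

# Whole megabytes of a file, rounded up; the jar size drives the heuristics.
jar_size_mb() {
  bytes=$(wc -c < "$1")
  echo $(( (bytes + 1048575) / 1048576 ))
}

# Heuristics from the command above (appJarInMB is the jar size in MB).
calc_loaded_classes() { echo $(( 400 * $1 )); }
calc_stack_threads()  { echo $(( 15 + $1 * 6 / 10 )); }

# Entry point: compute the flags for the current jar, then start the JVM.
# APP_JAR and TOTAL_MEM defaults are assumptions for this sketch.
boot() {
  jar="${APP_JAR:-/app/app.jar}"
  mb=$(jar_size_mb "$jar")
  opts=$(java-buildpack-memory-calculator \
    -loadedClasses "$(calc_loaded_classes "$mb")" \
    -poolType metaspace \
    -stackThreads "$(calc_stack_threads "$mb")" \
    -totMemory "${TOTAL_MEM:-512M}")
  exec java $opts -jar "$jar"
}
```

Because the flags are recomputed on every boot, resizing the container or shipping a fatter jar automatically produces matching JVM settings.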

You will get a few JVM args to pass to the java command (the inputs in this example are fairly arbitrary and only serve to produce some output):

$ java-buildpack-memory-calculator -loadedClasses 400 -poolType metaspace -stackThreads 300 -totMemory 1024M

-XX:CompressedClassSpaceSize=8085K -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=15937K -Xss1M -Xmx461352K -XX:ReservedCodeCacheSize=240M

The program will even tell you when your desired allocation is unlikely to work well for the JVM, emitting an error while still trying to give you the closest values it can.

Originally I was going to recommend you set -Xms to match -Xmx, but Glyn Normington pointed out that this causes complications from an autoscaling point of view (see the relevant GitHub issue). If you autoscale based on application memory, it’s best to use the calculator’s output as-is. If, however, you plan to run more simply with a few servers behind a load balancer, it is probably good to match -Xms and -Xmx so you don’t suffer random paging issues or memory exhaustion on the host.
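If you do go the matched route, one way to sketch it is a small helper that copies the calculator’s -Xmx value into a matching -Xms flag. The function name is hypothetical, and it only handles plain K/M/G-suffixed values like the example output above.

```shell
#!/usr/bin/env bash
# Sketch: append an -Xms flag equal to the calculator's -Xmx value
# (for a fixed-footprint deployment; skip this if you autoscale on memory).
add_matching_xms() {
  opts="$1"
  # Pull the -XmxNNNK/M/G flag out of the calculator's output...
  xmx=$(printf '%s\n' "$opts" | grep -o -- '-Xmx[0-9]*[KMG]')
  # ...and re-emit it as -Xms alongside the original flags.
  printf '%s %s\n' "$opts" "$(printf '%s' "$xmx" | sed 's/-Xmx/-Xms/')"
}
```

For example, feeding it `-Xss1M -Xmx461352K` yields `-Xss1M -Xmx461352K -Xms461352K`.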

Simply enough, your Docker container (or host) just needs bash and the java-buildpack-memory-calculator binary available on the $PATH.
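For instance, a minimal Dockerfile sketch could look like this; the base image, paths, and the entrypoint.sh name are all assumptions, not something prescribed by the calculator.

```dockerfile
# Base image and paths are assumptions; any JRE image that ships bash works.
FROM eclipse-temurin:8-jre
COPY java-buildpack-memory-calculator /usr/local/bin/
COPY entrypoint.sh /usr/local/bin/
COPY app.jar /app/app.jar
# The entrypoint computes the JVM flags at boot and then execs java.
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
```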

No container crashes due to OOM to date :).