On my current project I have an AWS Lambda function written in Java, and I am looking for ways to reduce its cold-start delays. Unfortunately, at the moment I cannot rewrite the function in another language (such as Python or Go). In addition to the standard code-level optimizations, I am investigating other options.

Empirically I found that increasing the amount of RAM leads to performance improvements. I was curious which JVM configuration is used in the Lambda environment, but this information is not readily available. So I used the following function, which iterates through the InputArguments of the RuntimeMXBean (I found a similar investigation on the Internet):

Lambda Parameter Extraction

```java
package com.amazonaws.lambda.demo;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import java.lang.management.ManagementFactory;
import java.lang.management.RuntimeMXBean;
import java.util.List;

public class LambdaFunctionHandler implements RequestHandler<Object, String> {

    @Override
    public String handleRequest(Object input, Context context) {
        context.getLogger().log("Input: " + input);
        // Log every JVM option that was explicitly passed to this runtime
        RuntimeMXBean runtimeMxBean = ManagementFactory.getRuntimeMXBean();
        List<String> arguments = runtimeMxBean.getInputArguments();
        for (String arg : arguments) {
            context.getLogger().log(arg);
        }
        return "Call succeeded.";
    }
}
```

And I got this result:

-XX:MaxHeapSize=445645k -XX:MaxMetaspaceSize=52429k -XX:ReservedCodeCacheSize=26214k -XX:+UseSerialGC -Xshare:on -XX:-TieredCompilation
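Note that `getInputArguments()` only reports flags that were passed explicitly on the command line. To double-check the effective value of any HotSpot flag, including ones left at their defaults, I believe the `HotSpotDiagnosticMXBean` can be queried as well. A sketch (the class name `VmOptionCheck` is mine; the same `vmOption` helper could be called from the handler):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class VmOptionCheck {

    // Returns the effective value of a named HotSpot VM option
    // (e.g. "UseSerialGC"), even if it was never passed explicitly.
    static String vmOption(String name) {
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        return bean.getVMOption(name).getValue();
    }

    public static void main(String[] args) {
        System.out.println("UseSerialGC = " + vmOption("UseSerialGC"));
        System.out.println("TieredCompilation = " + vmOption("TieredCompilation"));
    }
}
```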

As you can see, it is configured to use the serial garbage collector and to disable tiered compilation. I am not sure why the AWS engineers chose this configuration, but it would be interesting to try other JVM settings. Is there any way to tune the GC in AWS Lambda's JVM, or is it a fully closed black box?
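One avenue that might work, though I have not confirmed that Lambda's managed Java runtime honors it, is the standard `JAVA_TOOL_OPTIONS` environment variable, which the JVM launcher picks up at startup. Setting it on the function via the AWS CLI would look roughly like this (`my-function` is a placeholder name, and the flags are just an example):

```shell
# Sketch: pass extra JVM flags through the JAVA_TOOL_OPTIONS env var,
# assuming the Lambda runtime's JVM reads it like a normal launcher does
aws lambda update-function-configuration \
  --function-name my-function \
  --environment "Variables={JAVA_TOOL_OPTIONS='-XX:+TieredCompilation -XX:TieredStopAtLevel=1'}"
```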