As always, we can look into this by constructing an experiment: acquire an int[] array with a JNI critical function, and then deliberately ignore the requirement to release the array once we are done with it. Instead, allocate and retain lots of objects between the acquire and the release:
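The full sources are elided here, but a minimal sketch of the Java side could look like this. The class name matches the `CriticalGC` seen in the logs below; the method names, sizes, and loop bounds are assumptions, and the native `acquire`/`release` methods are expected to call `GetPrimitiveArrayCritical` and `ReleasePrimitiveArrayCritical` on the passed array:

```java
import java.util.ArrayList;
import java.util.List;

public class CriticalGC {
    static final int ARR_SIZE = 10_000;
    static final int WINDOW = 10_000_000;

    // Native side is assumed to call GetPrimitiveArrayCritical in acquire()
    // and ReleasePrimitiveArrayCritical in release() -- deliberately split
    // across two JNI calls, which is exactly the abuse under test.
    static native void acquire(int[] arr);
    static native void release(int[] arr);

    static final List<Object> SINK = new ArrayList<>();

    public static void main(String... args) {
        System.loadLibrary("CriticalGC");
        int[] arr = new int[ARR_SIZE];
        for (int i = 0; i < 10; i++) {
            acquire(arr);
            System.out.println("Acquired");
            // Allocate and retain lots of objects while the array is
            // still held "critically", to provoke a GC in between.
            for (int c = 0; c < WINDOW; c++) {
                SINK.add(new Object());
            }
            SINK.clear();
            System.out.println("Released");
            release(arr);
        }
    }
}
```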

We need to generate the appropriate headers, compile the native parts into a library, and then make sure the JVM knows where to find that library. Everything is encapsulated here.
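For illustration, the build boils down to something like this. The file names and paths are assumptions, not the actual Makefile contents; on JDK 9+ `javac -h` generates the native header directly (older JDKs used `javah`):

```shell
# Compile the Java class and emit CriticalGC.h for the native side
javac -h . CriticalGC.java

# Build the native library (Linux; adjust include paths per platform)
gcc -shared -fPIC \
    -I"$JAVA_HOME/include" -I"$JAVA_HOME/include/linux" \
    -o libCriticalGC.so CriticalGC.c

# Point the JVM at the directory holding libCriticalGC.so
java -Djava.library.path=. CriticalGC
```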

When a GC is attempted, the JVM checks whether any thread is still inside a JNI critical region. If one is, then at least for Parallel, CMS, and G1, the GC cannot proceed. When the last critical JNI operation ends with a "release", the VM checks whether there is a pending GC blocked in this way, and if there is, it triggers the collection. This shows up as a "GCLocker Initiated GC" collection.
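This bookkeeping can be modeled with a simplified sketch. This is an illustration of the protocol just described, not the actual OpenJDK code:

```java
// Simplified model of the GCLocker protocol: a critical-region counter
// plus a "GC is pending" flag, checked on the last exit.
class GCLockerModel {
    private int criticalCount = 0;   // threads currently inside a JNI critical region
    private boolean needsGC = false; // a GC was requested while the lock was held

    synchronized void enterCritical() {
        criticalCount++;
    }

    // Returns true if this "release" must trigger the deferred collection.
    synchronized boolean exitCritical() {
        criticalCount--;
        if (criticalCount == 0 && needsGC) {
            needsGC = false;
            return true; // this is the "GCLocker Initiated GC"
        }
        return false;
    }

    // Called when the VM wants to collect; returns false if GC must wait.
    synchronized boolean attemptGC() {
        if (criticalCount > 0) {
            needsGC = true; // defer until the last critical region exits
            return false;
        }
        return true;
    }
}
```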

Notice how GC does not happen between "Acquired" and "Released"; this is the implementation detail leaking out to us. The smoking gun is the "GCLocker Initiated GC" message: GCLocker is the lock that prevents GC from running while a JNI critical region is active. See the relevant block in the OpenJDK codebase:

G1

Of course, since we are playing with fire by doing weird things in a JNI critical region, it can blow up spectacularly. This is reproducible with G1:

$ make run-g1
java -Djava.library.path=. -Xms4g -Xmx4g -verbose:gc -XX:+UseG1GC CriticalGC
[0.012s][info][gc] Using G1
<HANGS>

Oops! It hangs all right. jstack will even say we are RUNNABLE, but waiting on some weird condition:

"main" #1 prio=5 os_prio=0 tid=0x00007fdeb4013800 nid=0x4fd9 waiting on condition [0x00007fdebd5e0000]
   java.lang.Thread.State: RUNNABLE
        at CriticalGC.main(CriticalGC.java:22)

The easiest way to get a clue about what is going on is to run with a "fastdebug" build, which then fails on this interesting assert:

#
# A fatal error has been detected by the Java Runtime Environment:
#
# Internal Error (/home/shade/trunks/jdk9-dev/hotspot/src/share/vm/gc/shared/gcLocker.cpp:96), pid=17842, tid=17843
# assert(!JavaThread::current()->in_critical()) failed: Would deadlock
#

Native frames: (J=compiled Java code, A=aot compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0x15b5934]  VMError::report_and_die(...)+0x4c4
V  [libjvm.so+0x15b644f]  VMError::report_and_die(...)+0x2f
V  [libjvm.so+0xa2d262]   report_vm_error(...)+0x112
V  [libjvm.so+0xc51ac5]   GCLocker::stall_until_clear()+0xa5
V  [libjvm.so+0xb8b6ee]   G1CollectedHeap::attempt_allocation_slow(...)+0x92e
V  [libjvm.so+0xba423d]   G1CollectedHeap::attempt_allocation(...)+0x27d
V  [libjvm.so+0xb93cef]   G1CollectedHeap::allocate_new_tlab(...)+0x6f
V  [libjvm.so+0x94bdba]   CollectedHeap::allocate_from_tlab_slow(...)+0x1fa
V  [libjvm.so+0xd47cd7]   InstanceKlass::allocate_instance(Thread*)+0xc77
V  [libjvm.so+0x13cfef0]  OptoRuntime::new_instance_C(Klass*, JavaThread*)+0x830
v  ~RuntimeStub::_new_instance_Java
J 87% c2 CriticalGC.main([Ljava/lang/String;)V (82 bytes) ...
v  ~StubRoutines::call_stub
V  [libjvm.so+0xd99938]  JavaCalls::call_helper(...)+0x858
V  [libjvm.so+0xdbe7ab]  jni_invoke_static(...) ...
V  [libjvm.so+0xdde621]  jni_CallStaticVoidMethod+0x241
C  [libjli.so+0x463c]    JavaMain+0xa8c
C  [libpthread.so.0+0x76ba]  start_thread+0xca

Looking closely at this stack trace, we can reconstruct what happened: we tried to allocate a new object, the current TLAB could not satisfy the allocation, so we took the slow path to get a new TLAB. That allocation failed too, and we discovered we need to wait for the GCLocker-initiated GC to free up space. Enter stall_until_clear to wait for it... but since we are the very thread that holds the GCLocker, waiting here leads to deadlock. Boom.

This is within the specification, because the test tried to allocate things within the acquire-release block. Leaving the JNI method without the paired release was the mistake that exposed us to this. Had we not left the native method, we could not have allocated between acquire and release without calling JNI functions, which would itself violate the "thou shalt not call JNI functions" rule.