In this post we’ll explore how Java / Scala debuggers are written and how they work. Native debuggers such as WinDbg for Windows or gdb for Linux/Unix get their power from hooks provided to them directly by the OS to monitor and manipulate the state of an external process. The JVM, acting as an abstraction layer on top of the OS, provides its own independent architecture for debugging bytecode.

This framework and its APIs are completely open, documented and extensible, which means you can write your own debugger fairly easily. The framework’s current design is built out of two main parts – the JDWP protocol and the JVMTI API layer. Each has its own set of benefits and use-cases for which it works best.

The JDWP protocol

The Java Debug Wire Protocol is used to pass requests and receive events (such as changes in thread states or exceptions) between the debugger and debuggee process using binary messages, usually over the network. The concept behind this architecture is to create as much separation as possible between the two. This is meant to reduce the Heisenberg effect (Werner the physicist that is, not your friendly meth-cooking Walt) of having the debugger alter the execution of the target code while it’s running.

Removing as much debugger logic as possible from the target process also helps ensure that changes in the debugged VM’s state (such as “stop the world” GC pauses, or OutOfMemoryErrors) do not affect the debugger itself. To make things easier, the JDK comes with the JDI (Java Debug Interface), which provides a complete debugger-side implementation of the protocol, complete with the ability to connect to, detach from, monitor and manipulate the state of a target VM.

This is the same protocol used by Eclipse’s debugger, for example. If you look at the command line arguments passed to your java process when it’s debugged by the IDE, you’ll notice the additional arguments (-agentlib:jdwp=transport=dt_socket,…) passed to it by Eclipse to enable JVM debugging, which also establish the port on which requests and events will be sent.

The JVMTI API

The second key component in the modern JVM debugger architecture is a set of native APIs covering a wide range of areas relating to the operation of the JVM, known as the JVM Tool Interface (i.e. JVMTI). Unlike JDWP, the JVMTI is designed as a set of C/C++ APIs, along with a mechanism for the JVM to dynamically load precompiled libraries (such as a .dll or .so) that make use of the commands provided by the API.

This approach differs from JDWP in that it actually executes the debugger inside the target process. This increases the possibility of the debugger impacting the application code both in terms of performance and stability. The key advantage however is the ability to interact directly with the JVM in near real-time.

Since the JVMTI provides a powerful set of low-level APIs, I thought it would be interesting to dive a bit deeper and explain how it works and some of the cool things you can do with it. The API headers are available through jvmti.h, which comes with the JDK.

Writing your debugger library

Writing your own debugger requires creating a native OS library in C++. Your “main” function in this case would look like –
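A minimal sketch of such an entry point, assuming the jvmti.h header that ships with the JDK, could look like this:

```cpp
// Sketch of a JVMTI agent entry point (requires jvmti.h from the JDK).
#include <jvmti.h>

JNIEXPORT jint JNICALL
Agent_OnLoad(JavaVM *vm, char *options, void *reserved) {
    jvmtiEnv *jvmti = NULL;

    // Ask the JVM for a JVMTI environment matching the version we target.
    jint rc = vm->GetEnv(reinterpret_cast<void **>(&jvmti), JVMTI_VERSION_1_2);
    if (rc != JNI_OK || jvmti == NULL) {
        return JNI_ERR;  // JVMTI isn't available - refuse to load
    }

    // Capabilities and event callbacks would be requested here.
    return JNI_OK;  // tell the JVM the agent loaded successfully
}
```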

This function will be invoked when your debugger agent is loaded by the JVM. The all-important JavaVM pointer passed to you provides everything you need to converse with the JVM. Through the JavaVM::GetEnv method it exposes the jvmtiEnv interface, which enables you to interact with the JVMTI layer through the concepts of capabilities and events.

JVMTI capabilities

One of the key aspects of writing a debugger is to be extremely mindful of the effects of your debugger code on the target process. This is especially important in the case of native debugger libraries, where your code runs in close conjunction with the app. To help you get finer-grained control over how your debugger affects the execution of code, the JVMTI specification introduces the concept of capabilities.

When writing your debugger you can tell the JVM in advance which sets of API commands or events you intend to use (i.e. set breakpoints, suspend threads,..). This enables the JVM to prepare for this in advance, and gives you more control over your debugger’s run-time overhead. This approach also enables JVMs from different vendors to programmatically tell you which API commands are currently supported out of the entire JVMTI specification.
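As an illustration, here is a sketch of how an agent might declare its intentions up front; the exact set of flags depends on what your debugger actually needs:

```cpp
#include <jvmti.h>
#include <cstring>

// Sketch: request capabilities right after obtaining the jvmtiEnv.
static bool RequestCapabilities(jvmtiEnv *jvmti) {
    // First check what this particular JVM implementation can offer at all.
    jvmtiCapabilities potential;
    std::memset(&potential, 0, sizeof(potential));
    jvmti->GetPotentialCapabilities(&potential);
    if (!potential.can_generate_exception_events) {
        return false;  // this VM can't deliver exception callbacks
    }

    // Then claim only the capabilities we actually intend to use.
    jvmtiCapabilities caps;
    std::memset(&caps, 0, sizeof(caps));
    caps.can_generate_exception_events = 1;   // exception callbacks
    caps.can_generate_breakpoint_events = 1;  // breakpoint callbacks
    caps.can_suspend = 1;                     // suspend/resume threads
    return jvmti->AddCapabilities(&caps) == JVMTI_ERROR_NONE;
}
```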

Not all capabilities are created equal. Some come at a relatively small performance overhead. Other interesting ones, such as can_generate_exception_events for receiving callbacks when exceptions are thrown in code, or can_generate_monitor_events for receiving callbacks when locks are acquired, come at a higher cost. The reason is that they prevent the JIT compiler from optimizing the code to its full extent and can force the JVM to drop into interpreted mode at run-time.

Other capabilities such as can_generate_field_modification_events used for receiving a notification whenever a target object field is set (i.e. setting watches) come at an even higher cost, slowing code execution by a significant percentage. Even though the JVM supports loading multiple native libraries concurrently, some capabilities in HotSpot such as can_suspend used for suspending and resuming threads can only be claimed by one library at a time.

One of the hardest parts we faced when we built OverOps production debugger was providing similar capabilities without incurring that kind of overhead (more on that in a future post).

Setting callbacks. Once you’ve been granted your set of capabilities, your next step is to set up callbacks that the JVM will invoke to let you know when things actually happen. Each of those callbacks provides fairly deep information about the event that has transpired. For an exception callback, for example, this includes the bytecode location at which the exception was thrown, the thread, the exception object, and if and where it will be caught.

It’s important to note that a capability’s overhead is sometimes divided into two parts. The first part comes simply by enabling it, as it will cause the JIT compiler to compile things differently just to create the potential of making calls into your code. The second part comes when you actually install a callback function, as it causes the JVM to choose less optimized execution paths at run-time – ones through which it is able to make a call into your code along with the additional overhead of parsing and passing you meaningful data.
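Continuing the sketch, registering an exception callback with JVMTI looks roughly like this (it assumes can_generate_exception_events was granted earlier):

```cpp
#include <jvmti.h>
#include <cstring>

// Sketch of an exception callback: the JVM hands us the throw site,
// the thread, the exception object, and the eventual catch site.
static void JNICALL OnException(jvmtiEnv *jvmti, JNIEnv *jni, jthread thread,
                                jmethodID method, jlocation location,
                                jobject exception, jmethodID catch_method,
                                jlocation catch_location) {
    // method/location: bytecode position where the exception was thrown.
    // catch_method/catch_location: where it will be caught (NULL if uncaught).
}

static void InstallCallbacks(jvmtiEnv *jvmti) {
    jvmtiEventCallbacks callbacks;
    std::memset(&callbacks, 0, sizeof(callbacks));
    callbacks.Exception = &OnException;

    // Register the callback table, then switch the event stream on.
    jvmti->SetEventCallbacks(&callbacks, sizeof(callbacks));
    jvmti->SetEventNotificationMode(JVMTI_ENABLE, JVMTI_EVENT_EXCEPTION, NULL);
}
```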

Breakpoints and watches. Your debugger can provide the familiar capabilities for inspecting specific state at run-time, such as SetBreakpoint to signal the JVM to suspend execution at a specific bytecode instruction, or SetFieldModificationWatch to pause execution whenever a field is modified. At that point you can use other complementary functions such as GetStackTrace and GetThreadInfo to learn more about your current position in the code and report it back.
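For instance, a sketch of arming a breakpoint and walking the stack when it fires (method here is a hypothetical jmethodID you’ve already resolved; it also assumes can_generate_breakpoint_events was granted and JVMTI_EVENT_BREAKPOINT enabled):

```cpp
#include <jvmti.h>

// Sketch: arm a breakpoint at bytecode index 0 of a method we've resolved.
static void ArmBreakpoint(jvmtiEnv *jvmti, jmethodID method) {
    jvmti->SetBreakpoint(method, 0);  // jlocation 0 = first instruction
}

// Breakpoint callback: grab the top frames of the stopped thread.
static void JNICALL OnBreakpoint(jvmtiEnv *jvmti, JNIEnv *jni, jthread thread,
                                 jmethodID method, jlocation location) {
    jvmtiFrameInfo frames[16];
    jint count = 0;
    if (jvmti->GetStackTrace(thread, 0, 16, frames, &count) ==
        JVMTI_ERROR_NONE) {
        // frames[0..count) now hold (jmethodID, jlocation) pairs
        // describing where the thread is suspended.
    }
}
```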

Most JVMTI functions refer to classes and methods using abstract handles known as jmethodID and jclass (this should ring familiar if you’ve ever written Java Native Interface code). Additional functions such as GetMethodName and GetClassSignature are provided to help you obtain the actual symbol names from the class’s constant pool. You can then use these to log data to a file in readable form, or render them in a UI like the ones we see in our IDEs every day.
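A sketch of resolving those handles into readable names; note the Deallocate calls, since JVMTI allocates the strings on the agent’s behalf:

```cpp
#include <jvmti.h>
#include <cstdio>

// Sketch: print a method as, e.g., "Lcom/example/Foo;.bar(I)V".
static void PrintMethod(jvmtiEnv *jvmti, jmethodID method) {
    char *name = NULL, *sig = NULL, *class_sig = NULL;
    jclass declaring = NULL;

    jvmti->GetMethodName(method, &name, &sig, NULL);
    jvmti->GetMethodDeclaringClass(method, &declaring);
    jvmti->GetClassSignature(declaring, &class_sig, NULL);

    std::printf("%s.%s%s\n", class_sig, name, sig);

    // JVMTI allocated these strings - the agent is responsible for freeing them.
    jvmti->Deallocate(reinterpret_cast<unsigned char *>(name));
    jvmti->Deallocate(reinterpret_cast<unsigned char *>(sig));
    jvmti->Deallocate(reinterpret_cast<unsigned char *>(class_sig));
}
```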

Attaching your debugger

Once you’ve written your debugger library the next step is to attach it to a JVM. There are a few ways of doing that –

1. Connecting over JDWP. If you’re writing a JDWP-based debugger you’ll need to add a startup argument in the form of -agentlib:jdwp=transport=dt_socket,suspend=y,address=localhost:<port> to the debuggee to enable over-the-wire debugging. These arguments detail the form of communication between debugger and target (in this case sockets) and whether or not to start the debuggee in suspended mode.

2. Attaching a JVMTI library. The JVM loads JVMTI libraries through an -agentpath command line argument passed to the debuggee process, pointing to your library’s location on disk.

An alternative way is to append your agent command line arguments to the global JAVA_TOOL_OPTIONS environment variable which gets picked up by every new JVM, and whose value is automatically appended to its list of existing arguments.

3. Remote attach. Yet another method of attaching your debugger is to use the remote attach API. This simple and powerful API enables you to attach agents to running JVM processes without them being launched with any special command line arguments. The downside here is that you will not have access to some of the capabilities you’d normally want, such as can_generate_exception_events, as these can only be requested at VM startup – sadly taking some of the punch out of your debugger.
