A new technique allows attackers to hide malicious code inside digitally signed files without breaking their signatures and then to load that code directly into the memory of another process.

The attack method, developed by Tom Nipravsky, a researcher with cybersecurity firm Deep Instinct, might prove to be a valuable tool for criminals and espionage groups in the future, allowing them to get malware past antivirus scanners and other security products.

The first part of Nipravsky's research, which was presented at the Black Hat security conference in Las Vegas this week, has to do with file steganography -- the practice of hiding data inside a legitimate file.

While malware authors have hidden malicious code or malware configuration data inside pictures in the past, Nipravsky's technique stands out because it allows them to do the same thing with digitally signed files. That's significant because the whole point of digitally signing a file is to guarantee that it comes from a particular developer and hasn't been altered en route.

If an executable file is signed, information about its signature is stored in its header, inside a field called the attribute certificate table (ACT) that's excluded when calculating the file's hash -- a cryptographic digest that serves as a compact representation of the file's contents.

This makes sense because the digital certificate information is not part of the original file at the time it is signed. It's added afterward, to certify that the file comes from its creator in its intended form and has a particular hash.

However, this also means that attackers can add data -- even a complete second file -- inside the ACT field without changing the file hash or breaking the signature. Such an addition does change the overall file size, which is recorded in the file's header fields and which Microsoft's Authenticode technology checks when validating a file signature.
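The hash-exclusion mechanism can be shown with a purely conceptual Python sketch (the layout and offsets here are toy values, not real PE or Authenticode structures): a hash computed over everything except an excluded region does not change when data is appended inside that region.

```python
import hashlib

# Conceptual sketch, not real Authenticode: the file hash is computed
# over everything EXCEPT an excluded byte range -- analogous to the
# attribute certificate table (ACT), which Authenticode skips.

def hash_excluding(data: bytes, excl_start: int, excl_end: int) -> str:
    """Hash the contents, skipping the excluded byte range."""
    h = hashlib.sha256()
    h.update(data[:excl_start])
    h.update(data[excl_end:])
    return h.hexdigest()

# A toy "signed file": program body, then a certificate region at the end.
body = b"program code and data"
cert = b"<certificate blob>"
signed = body + cert

act_start = len(body)        # certificate table begins here...
act_end = len(signed)        # ...and runs to the end of the file

original = hash_excluding(signed, act_start, act_end)

# Attacker smuggles a payload into the excluded region.
tampered = signed + b"<hidden malicious payload>"
after = hash_excluding(tampered, act_start, len(tampered))

assert original == after  # the "signed" hash is unchanged
```

The point of the sketch is only that whatever lives in the excluded range is invisible to the hash; the real flaw Nipravsky describes is that the surrounding size checks fail to catch the growth of that range.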

The file size, however, is specified in three different places inside the file header, and two of those values can be modified by an attacker without breaking the signature. The problem is that Authenticode checks only the two modifiable file size entries, not the third.

According to Nipravsky, this is a design flaw in Authenticode's logic. Had the technology also checked the third, unmodifiable file size value, attackers wouldn't be able to pull off this trick while keeping the file signature valid, he said.

The malicious data added to the ACT is not loaded into memory when the modified file itself is executed because it's part of the header, not the file body. However, the ACT can serve as a hiding place to pass a malicious file undetected past antivirus defenses.

For example, attackers could add their malicious code to one of the many Microsoft-signed Windows system files or to a Microsoft Office file. Their signatures would still be valid and the files functional.

Moreover, to avoid false-positive detections that could delete critical files and crash the system, most security applications whitelist these files because they're signed by Microsoft, a trusted publisher.

The second part of Nipravsky's research was to develop a stealthy way to load the malicious executable files hidden inside signed files without being detected. He reverse engineered the whole behind-the-curtain process that Windows performs when loading PE (portable executable) files into memory. This procedure is not publicly documented because developers don't typically need to perform it themselves; they rely on the OS to execute files.

It took four months of eight-hours-per-day work, but Nipravsky's reverse engineering efforts allowed him to create a so-called reflective PE loader: an application that can load portable executables directly into the system memory without leaving any traces on disk. Because the loader uses the exact process that Windows does, it's difficult for security solutions to detect its behavior as suspicious.

Nipravsky's loader can be used as part of a stealthy attack chain, where a drive-by download exploit executes a malware dropper in memory. The dropper then downloads from a server a digitally signed file with malicious code hidden in its ACT and uses the loader to execute that code directly in memory.

The researcher has no intention of releasing his loader publicly because of its potential for abuse. However, skilled hackers could create their own loader if they're willing to put in the same effort.

The researcher tested his reflective PE loader against antivirus products and managed to execute malware those products would have otherwise detected.

In a demo, he took a ransomware program that one antivirus product normally detected and blocked, added it to the ACT of a digitally signed file, and executed it with the reflective PE loader.

The antivirus product only detected the ransom text file created by the ransomware program after it had already encrypted all of the user's files. In other words, too late.

Even if attackers don't have Nipravsky's reflective PE loader, they can still use the steganography technique to hide malware configuration data inside legitimate files or even to exfiltrate data stolen from organizations. Data hidden inside a digitally signed file would likely pass network-level traffic inspection systems without problems.
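Retrieving data hidden this way is equally simple in concept. In the real format, each certificate entry (a `WIN_CERTIFICATE` structure) begins with a `dwLength` field giving the entry's size; anything stored in the certificate table past the declared entries is free space. The sketch below uses a simplified toy layout, not the actual on-disk structure, to show the idea:

```python
import struct

# Hedged sketch with a toy layout: the first 4 bytes of the certificate
# region declare the legitimate entry's total length (mirroring the real
# WIN_CERTIFICATE dwLength field). Bytes past that length are smuggled
# data -- e.g., malware configuration or exfiltrated files.

def extract_hidden(cert_table: bytes) -> bytes:
    (dw_length,) = struct.unpack_from("<I", cert_table, 0)
    return cert_table[dw_length:]  # everything past the declared entry

# dwLength covers the 4-byte length field plus the 12-byte body.
legit = struct.pack("<I", 4 + 12) + b"certificate!"
table = legit + b"exfiltrated secrets"

hidden = extract_hidden(table)  # -> b"exfiltrated secrets"
```

Because the carrier file's signature still validates, inspection systems that trust signed files have little reason to look inside this region.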