This raises the question: why don’t the game developers make something that detects cheats or renders them useless? They have, and the hackers promptly worked around it. Overall it has turned into an arms race: a long-term fight over who has the best technology. So far the hackers have won that arms race, using three basic methods.

Note: I won’t be discussing statistical analysis in this article. For example, if you take samples of a player’s behavior you can heuristically determine whether they are a likely hacker. That is a useful tool, but this article is limited to programmatic detection of hacks.

Hackers Brute Forced the Arms Race

There are many hackers and only one creator of the game. For every patch, a slew of new hacks with slight variations and different signatures will be released. This method alone is enough to keep most game developers without deep pockets wasting their man-power fighting hacks instead of releasing new game features. Think about that from a business perspective, and balance it against the profit incentive the developers have, since a banned hacker will typically re-purchase the game.

Hackers Created Service Models

Instead of selling the cheat as a product that the game developers could inspect, they sell it as a service, typically for a monthly fee. This lets them obfuscate the program so that each copy is technically different, which makes it harder for systems like Valve Anti-Cheat (VAC) to correlate them, and raises the cost for game developers to reverse engineer the cheats. It also means the game developers end up paying the hack developers monthly just to obtain the cheats for analysis, which I find hilariously absurd.

Hackers Got Smart

Hack developers now compete with each other, and being “detected” is almost like being a milk company caught selling spoiled milk. This has led to some of the most sophisticated techniques in hacking overall. With a hack outside of gaming, you start without the user’s consent, trying to escape your shackles and escalate privileges. In concrete terms: usually you’re stuck in a web browser running JavaScript and you somehow have to get arbitrary code from another server on the Internet to execute. That’s pretty hard, and it happens all the damn time. Game hackers have a much easier time: the user gives them full consent and is willing to jump through whatever hoops are needed.

The game developers are now stuck in a war they can’t win. Hackers have code running in kernel space, above the OS layer. If the OS is doing its job decently, there should be no legitimate way for the user-space game to detect it. The game developers are also not afforded the same luxuries as the hack developers.

Cryptography Solves this Arms Race

Arms races suck. They divert resources away from things like building new game features and into keeping the game from being ruined. To really solve the problem, game developers have to escape the arms race they are stuck in. There is a solution in cryptography called homomorphic encryption.

Unlike traditional encryption, this method does not simply transmit a message to be decrypted. For example, when you access a secure web page (HTTPS), the server generates the HTML, encrypts it, and transmits it to you; you then decrypt and render the original HTML. With homomorphic encryption you instead encrypt the instructions needed to produce the result and send those. The operations are broken down in such a way that something as simple as calculating x + y is indistinguishable from x + z or even y * z. Your computer can run the program and calculate the results, but it is not capable of altering the code in any meaningful way. In our case this means the hackers cannot decrypt or access the memory of the program to find things like the positions of enemies.
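The core idea, computing on data you cannot read, can be demonstrated with a toy. The sketch below uses textbook (unpadded) RSA, which happens to be multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product. This is deliberately insecure and is nowhere near full homomorphic encryption; it only illustrates the principle that a party can produce a correct encrypted result without ever seeing the plaintext values. The key values are a standard small textbook example.

```python
# Toy illustration of a homomorphic property using textbook RSA.
# NOT secure and NOT fully homomorphic -- demonstration only.

# Small hard-coded RSA key (classic textbook example).
p, q = 61, 53
n = p * q      # modulus: 3233
e = 17         # public exponent
d = 2753       # private exponent: e * d = 1 (mod phi(n))

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

x, y = 7, 6
cx, cy = encrypt(x), encrypt(y)

# The party doing this multiplication never learns x or y,
# yet the result decrypts to x * y.
c_product = (cx * cy) % n

print(decrypt(c_product))  # 42
```

Real schemes (and the fully homomorphic ones the article is gesturing at) support both addition and multiplication on ciphertexts, which is what makes running arbitrary encrypted programs possible in principle.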

In practice this cannot work on current hardware, because current hardware does not directly run encrypted code. Also unfortunate is that the entire system would have to run encrypted code, because any component that doesn’t can be leveraged to break the encryption. For example, if only the CPU runs encrypted code but the GPU doesn’t, it would be easy to correlate which areas of encrypted code are responsible for rendering which things.

With this type of encryption, wall hacks in FPS games like Counter-Strike: Global Offensive and map hacks in RTS games like Starcraft II would not be possible. Making any type of hack at that point would essentially amount to emulating a human. For example, an aimbot could be a device watching the monitor’s output that scans pixels for enemies and reports them to a hacked mouse for aim correction. However, it would lack the game-state context needed to make perfect choices every time, which is what makes hacks so powerful currently.
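To make that "emulating a human" point concrete, here is a minimal, purely hypothetical sketch of the pixel-scanning idea: given a captured frame (modeled as a plain 2D grid of RGB tuples, with no real screen capture or input injection), find the first pixel matching an assumed distinctive enemy color and compute the offset from the crosshair to it. Everything here, the enemy color, the frame, the helper names, is an illustrative assumption, and the point is what such a hack cannot see: it operates only on rendered pixels, with no access to the game’s memory.

```python
# Hypothetical sketch: a pixel-scanner that only sees rendered output.
# It has no access to game memory, so it knows nothing it cannot see.

ENEMY_COLOR = (255, 0, 0)  # assumption: enemies have a distinctive color

def find_enemy(frame):
    """Return (x, y) of the first enemy-colored pixel, or None."""
    for y, row in enumerate(frame):
        for x, pixel in enumerate(row):
            if pixel == ENEMY_COLOR:
                return (x, y)
    return None

def aim_offset(frame, crosshair):
    """Offset needed to move the crosshair onto a visible enemy."""
    target = find_enemy(frame)
    if target is None:
        return (0, 0)  # nothing visible: the scanner is blind
    return (target[0] - crosshair[0], target[1] - crosshair[1])

# Toy 4x4 frame with one enemy pixel at (x=2, y=1), crosshair at (1, 1).
blank = (0, 0, 0)
frame = [[blank] * 4 for _ in range(4)]
frame[1][2] = ENEMY_COLOR
print(aim_offset(frame, (1, 1)))  # (1, 0)
```

Note that an enemy behind a wall produces no matching pixels at all, which is exactly why wall hacks stop working under this model.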