The only way to beat this game is to let a robot do it.

Alright, alright, it’s not the only way; people have managed to finish this game without cheating.

My project has been featured on Hackaday, yay!

Project idea

Anyway, that’s not the point. The point is that a bot for this game makes a really nice image processing project for getting started with OpenCV: simple shapes but plenty of effects designed to disturb the player, a fast pace that demands real-time processing, and very simple controls: rotate CW or CCW.

The idea came to me after struggling to beat the 3rd level out of 6 of Super Hexagon. At the time I already had too many projects going on, so I saved it for later. Three months later (still trying to beat level 4…), I proposed the idea for a software project at the end of the 2nd year of engineering school and it got accepted, yeah! I teamed up with my friend JP, and I’ll finally get to beat the game (hopefully!).

Grabbing the video, real-time

We first tried to record the screen with video capture software like Fraps or VLC and create a video stream that could be grabbed by OpenCV, but the received frames lagged up to 2 seconds behind.

The solution we used was API hooking with DLL injection. It lets you make a program jump to your own code (residing in a carefully crafted DLL) instead of executing a particular function from a DLL. This tutorial helped me understand how it works and how to set it up. In short: injecting the DLL triggers an “on load” function that replaces the first 6 bytes of the hooked function with a JMP instruction to your code, followed by a return. Every time the hooked function is called, your code runs, then the return instruction executes, skipping the rest of the original function. You can remove the hook by restoring the original bytes over the JMP instruction.
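The 6-byte patch can be sketched as follows. This is an illustrative reconstruction, not the project’s actual code; it assumes a 32-bit x86 target and a little-endian host, and `makeJmpPatch` is a hypothetical helper name:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Build the 6 bytes that redirect a hooked function to a detour.
// x86 "JMP rel32" is opcode 0xE9 followed by a 32-bit displacement,
// computed relative to the instruction *after* the jump (source + 5).
// The 6th byte is a NOP so we always overwrite a fixed-size block.
std::vector<uint8_t> makeJmpPatch(uint32_t source, uint32_t target) {
    std::vector<uint8_t> patch(6, 0x90);   // fill with NOP
    patch[0] = 0xE9;                        // JMP rel32 opcode
    uint32_t rel = target - (source + 5);   // displacement (wraps modulo 2^32)
    std::memcpy(&patch[1], &rel, 4);        // little-endian host assumed
    return patch;
}
```

In the real DLL these bytes are written over the hooked function’s prologue after changing the page protection.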

Using OllyDbg, I found that the game calls glutSwapBuffers() from glut32.dll. This function is called after all the graphics content has been drawn to the back buffer, just before the front and back buffers are swapped. Note that my code is now executed by the game itself, in its own context: I can simply read the back buffer and convert it to a cv::Mat image.

Of course, since we replaced the function with our own, we still have to actually swap the buffers, by explicitly calling the original glutSwapBuffers(). I did this by undoing the hook, calling the original function, and hooking it again.
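The unhook/call/rehook dance can be sketched like this. The `Hook` struct is hypothetical, and a real implementation must also change the page protection with VirtualProtect() before writing over code, which is omitted here; the sketch patches a plain byte buffer standing in for the function’s first 6 bytes:

```cpp
#include <cstdint>
#include <cstring>

// Minimal hook bookkeeping: save the function's original prologue bytes,
// overwrite them with the JMP patch, and restore them on demand.
struct Hook {
    uint8_t* target;   // first bytes of the hooked function
    uint8_t saved[6];  // original prologue, captured by install()
    uint8_t patch[6];  // JMP to the detour + NOP padding

    void install() {
        std::memcpy(saved, target, 6);  // remember the original bytes
        std::memcpy(target, patch, 6);  // divert calls to the detour
    }
    void remove() {
        std::memcpy(target, saved, 6);  // put the original code back
    }
};
```

Inside the detour the sequence is: `hook.remove();` call the original function; `hook.install();`.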

Image processing

The image processing library we use is OpenCV. Neither of us had ever used it before, but the community is large and tutorials are numerous, so it’s quite easy to get started with.

This game is full of color changes, rotations, perspective deformation and other effects aiming at destabilizing the player. Before any image recognition occurs, we have to preprocess each frame in order to be able to find the walls and the ship.

Preprocessing

Pipeline: original frame → masking → grayscale → normalizing → binarizing

Starting from the original frame, we mask the black and white text with the color at the center of the frame. Then the RGB channels are merged into one by converting to grayscale. Before binarizing, we normalize the image. Normalizing stretches the range of values: the darkest gray becomes full black, the lightest gray becomes full white, and the grays in between are scaled accordingly. The binarization is a simple threshold: everything above the threshold becomes white, everything below it becomes black.
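The normalize-then-threshold step boils down to a min-max stretch followed by a comparison. Here is a rough sketch of the math on a raw grayscale buffer (the real project uses cv::normalize and cv::threshold; the threshold value of 128 is an assumption):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Stretch a grayscale image to the full [0, 255] range, then binarize it.
// Mirrors cv::normalize(..., 0, 255, cv::NORM_MINMAX) followed by a
// cv::THRESH_BINARY threshold.
std::vector<uint8_t> normalizeAndBinarize(std::vector<uint8_t> img,
                                          uint8_t thresh = 128) {
    auto [lo, hi] = std::minmax_element(img.begin(), img.end());
    int min = *lo, max = *hi;
    for (auto& p : img) {
        // Min-max stretch; a flat image maps to 0 to avoid dividing by zero.
        int scaled = (max > min) ? (p - min) * 255 / (max - min) : 0;
        p = (scaled >= thresh) ? 255 : 0;   // simple binary threshold
    }
    return img;
}
```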

After these steps, we end up with the walls, the ship and the central shape in white, on a black background.

Ship

Now that the frame is cleaned and standardized, we can detect the ship, the small triangle rotating close to the center.

We crop the image to keep only the central part. Then we use a high-pass filter and the function findContours() to detect all the polygons in the image. We do the same with the ship’s model image and find the most similar polygon with matchShapes().

Once the ship is detected, we compute its center as well as its angular position. We then mask the ship with a black disk, since it is not needed for the next steps.
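Computing the pose from the matched contour can be sketched as below. The names are illustrative, not the project’s actual API, and the angle convention (radians, measured from the +x axis) is an assumption:

```cpp
#include <cmath>
#include <utility>
#include <vector>

struct Pt { double x, y; };

// Given the ship contour's vertices and the frame centre, return the
// ship's centroid and its angular position around the centre.
std::pair<Pt, double> shipPose(const std::vector<Pt>& contour, Pt frameCenter) {
    Pt c{0, 0};
    for (const auto& p : contour) { c.x += p.x; c.y += p.y; }
    c.x /= contour.size();                 // centroid of the vertices
    c.y /= contour.size();
    // Angle of the centroid as seen from the frame centre.
    double angle = std::atan2(c.y - frameCenter.y, c.x - frameCenter.x);
    return {c, angle};
}
```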

Central shape

Usually, the arena of this game is made up of 6 sectors (a hexagon), but sometimes sectors are removed and the central shape changes accordingly into a pentagon or a square. We need to detect this.

Central shapes: hexagon, pentagon, square

We reuse the list of contours from the ship detection and remove the polygons whose area is too large or too small. We also remove polygons that are not centered (cropped walls). The central shape is the remaining polygon with the smallest area. Its number of edges, from 4 to 6, tells us the shape, but we can extract more information about the arena.

The arena is constantly rotating, and we need to detect the angular position of each sector’s bisector. We do this by computing, for each edge, the normal that passes through the center of the frame. With some clever filtering and averaging over the 6 edges, we get a good estimate of how much the arena has rotated, with very little noise.
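The normal-through-the-center computation amounts to projecting the frame center onto each edge’s supporting line. A minimal sketch, as an illustrative reconstruction rather than the project’s code:

```cpp
#include <cmath>

struct Pt { double x, y; };

// Angle of the normal dropped from the frame centre onto one polygon edge.
// For a regular central shape this normal is the bisector of the sector
// behind that edge, so averaging it over the edges tracks the rotation.
double edgeNormalAngle(Pt a, Pt b, Pt center) {
    double dx = b.x - a.x, dy = b.y - a.y;
    // Scalar projection of (center - a) onto the edge direction.
    double t = ((center.x - a.x) * dx + (center.y - a.y) * dy)
               / (dx * dx + dy * dy);
    Pt foot{a.x + t * dx, a.y + t * dy};   // foot of the perpendicular
    return std::atan2(foot.y - center.y, foot.x - center.x);
}
```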

Detecting the walls

The walls are white polygons on a black background. We could have detected them all, but the AI I coded only needs 3 distances per sector:

d1: the distance from the center to the inside edge of the central shape.
d2: the distance from the center to the outside edge of the central shape.
d3: the distance from the center to the first wall.

d1 and d2 will be used to get rid of the scaling/zooming effect and d3 is the measure of how good this particular sector is. The further the first wall, the better.

I measure these 3 distances by “casting” a ray along the bisector of the sector. d1, d2 and d3 are the distances at which the pixel color along the ray goes from black to white, from white to black, and from black to white again. d3 is capped by a circle so that no direction is favored because of the rectangular shape of the frame.
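The ray cast reduces to scanning the binarized pixels along the bisector and recording the first three color transitions. A minimal sketch, assuming the ray has already been sampled into a 0/255 array:

```cpp
#include <cstdint>
#include <vector>

struct Distances { int d1, d2, d3; };

// Scan one sector's bisector, sampled as 0 (black) / 255 (white):
//   d1: first black -> white  (inner edge of the central shape)
//   d2: first white -> black  (outer edge of the central shape)
//   d3: next  black -> white  (first wall), capped at maxDist
Distances castRay(const std::vector<uint8_t>& samples, int maxDist) {
    Distances d{-1, -1, maxDist};   // d3 defaults to the circular cap
    int transitions = 0;
    for (int i = 1; i < (int)samples.size() && i < maxDist; ++i) {
        bool rising  = samples[i - 1] == 0   && samples[i] == 255;
        bool falling = samples[i - 1] == 255 && samples[i] == 0;
        if      (rising  && transitions == 0) { d.d1 = i; transitions = 1; }
        else if (falling && transitions == 1) { d.d2 = i; transitions = 2; }
        else if (rising  && transitions == 2) { d.d3 = i; break; }
    }
    return d;
}
```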

This is the result for different patterns:

Wall distances for patterns 1 and 2

AI strategies

The first AI I coded managed to stay alive for the first 40 seconds of level 1. It was a simple, temporary AI, though the final version is just a refinement of it.

For simple patterns, the best choice is always to head toward the sector with the greatest distance d3. As the walls close in, the way out is indeed one of these sectors:

Easy patterns 1 and 2

The AI then drives the ship to the best sector by simulating key presses (left or right), rotating in the shortest direction.
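Choosing the shortest rotation direction is a modular comparison. A sketch, assuming sectors are indexed 0..N-1 counterclockwise (the sign convention, and the tie-break at exactly half a turn, are assumptions; the actual key presses would go through the Windows input API):

```cpp
// Return +1 (rotate CCW), -1 (rotate CW), or 0 (already there) for the
// shorter arc from the ship's sector to the target sector.
int shortestDirection(int shipSector, int targetSector, int nSectors) {
    // CCW distance from ship to target, normalized into [0, nSectors).
    int diff = ((targetSector - shipSector) % nSectors + nSectors) % nSectors;
    if (diff == 0) return 0;
    // Half a turn or less: CCW is at least as short; ties go CCW.
    return (diff <= nSectors / 2) ? 1 : -1;
}
```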

Problems

As you can imagine, the 40-second barrier is due to more complex patterns appearing around this time.

Using only d3, the correct way out is found: the top-right sector. The shortest path is to rotate clockwise, but that way is obstructed by a wall. With d1 and d2, we can detect the wall: (d2-d1) is much greater than d1. By testing every sector on the way, the AI can choose to rotate in the other direction if a wall blocks the path.

Unfortunately, this is not enough:

In this pattern, the way out is detected, and both directions are free of walls. Or are they?… The sector that IS the way out is detected as a wall itself! This means both the CW and CCW rotations are considered invalid. The AI waits until the wall disappears and then moves, but by then it is too late…

New version

The solution I came up with is to search for the “best” sector once going CW and once going CCW. The wall detection is dropped and replaced by an accessibility condition. A sector is accessible if there is enough space (d3-d2 large enough) to let the ship pass into it from the previous one: d3(i-1)-d2(i) and d3(i)-d2(i-1) must both be large enough for sector i to be considered accessible from sector i-1.
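The accessibility condition translates almost directly into code. A sketch, with a hypothetical safety margin parameter standing in for “large enough”:

```cpp
struct Distances { int d1, d2, d3; };

// Sector `cur` (i) is reachable from its neighbour `prev` (i-1) only if
// the gap between the first wall and the central shape is wide enough on
// both sides for the ship to slip through. `margin` is an assumed safety
// margin in pixels.
bool accessible(const Distances& prev, const Distances& cur, int margin) {
    return (prev.d3 - cur.d2 >= margin) && (cur.d3 - prev.d2 >= margin);
}
```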

I should mention that the walls are not lethal by themselves. You lose the game if your ship gets crushed by a wall, but you can touch the side of an obstacle with your ship without any problem.

This AI works well for all the patterns, and it actually beat the game (all 6 levels!). It beats levels 1, 2, 3 and 4 almost every time; levels 5 and 6 require a couple of tries before a good run. You can see it performing on the 3rd level:

And on the 6th and last level: