AI

AI systems for games grow in complexity in proportion to how diverse you want your game to be. If you only need enemies that rush the player to hit them, as in tower defence games, you won’t need well-designed, modular machinery to back them up. Enemies in Hatchet, however, were designed to be intelligent, making various decisions, all readable to the player. Those decisions dictate how the player perceives the game: if done wrong, the player may reasonably expect a certain enemy behaviour and be misled when the AI fails to deliver it. So we set down what we wanted our AI to be:

Smart, but not too smart!

Predictable, but not too predictable!

And that’s a problem…

Making AI behave in a certain way, so that the player finds it realistic and human-like, is a matter of experimentation. Through testing and interacting, we find which behaviours work and which don’t, and we tweak the behaviour or the decision until it feels more natural to the player. The player who pays attention, that is.

Our misfortune was that we started building the AI before we realised how many small improvements would be necessary. This is our first game of a larger scale, so we based our AI on a decision tree, an industry standard for decision making. Improvements and special cases were difficult to pull off and required coding. That’s why we decided to build a second system, using the knowledge gathered from our failed experiment. It’s designed to make it easy to change what influences the decision for each action, so that the only coding we need to do is the actions themselves (unavoidable) and the system itself. The new AI is based on Utility AI principles, and it will be described in detail in future blogs.
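
To give a taste of how that works, here is a minimal sketch of utility-based action selection, in the spirit of what we are describing. Everything in it is illustrative: the `EnemyState` fields, the actions and the scoring curves are assumptions made up for the example, not our actual code.

```cpp
#include <algorithm>
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

// Hypothetical snapshot of what an enemy knows about the world.
struct EnemyState {
    float distanceToPlayer; // in metres
    float health;           // normalised to 0..1
    bool  playerVisible;
};

// A consideration maps game state to a score in [0, 1].
using Consideration = std::function<float(const EnemyState&)>;

struct Action {
    std::string name;
    std::vector<Consideration> considerations;

    // Utility is the product of all consideration scores, so any single
    // consideration near zero effectively vetoes the action.
    float Utility(const EnemyState& s) const {
        float u = 1.0f;
        for (const auto& c : considerations) u *= c(s);
        return u;
    }
};

// Pick the action with the highest utility for the current state.
const Action* ChooseAction(const std::vector<Action>& actions,
                           const EnemyState& s) {
    const Action* best = nullptr;
    float bestScore = 0.0f;
    for (const auto& a : actions) {
        const float u = a.Utility(s);
        if (u > bestScore) { bestScore = u; best = &a; }
    }
    return best;
}

int main() {
    const std::vector<Action> actions = {
        {"Attack", {
            [](const EnemyState& s) { return s.playerVisible ? 1.0f : 0.0f; },
            // Prefer attacking when close; falls off linearly over 10 m.
            [](const EnemyState& s) {
                return std::clamp(1.0f - s.distanceToPlayer / 10.0f, 0.0f, 1.0f);
            }}},
        {"Flee", {
            // The lower the health, the more attractive fleeing becomes.
            [](const EnemyState& s) { return 1.0f - s.health; }}},
    };

    const EnemyState state{4.0f, 0.8f, true};
    if (const Action* a = ChooseAction(actions, state)) {
        std::printf("chose: %s\n", a->name.c_str());
    }
}
```

The appeal of this structure is that tuning a decision means adjusting a consideration curve, which is data, rather than rewriting branch logic, which is exactly what made special cases painful in the decision tree.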