April 5, 2020

Back in 052 I updated blender-armature to add support for bone groups as a prerequisite for being able to play different animations on the upper and lower bodies of a character.

An example of when I need this is when a character is chasing down another entity while they're fighting.

The character's upper body would be playing an attack animation while their lower body would be playing a walk animation.

The reason that I put that on pause was that the game client's code was getting too complex, and adding that in felt like building on top of a Jenga tower.

Fast forward to today, where we're living our post-refactor life - I was able to fit it in pretty nicely.

Based on what the entity is currently doing, the SkeletalAnimationSystem selects the upper and lower body armature animations, calculates the dual quaternions, and then stores them in the SkeletalAnimationComponent so that the RenderSystem can later use them when rendering.
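As a rough illustration of the selection step (not the real system - the Activity variants and animation names here are hypothetical), the mapping from what an entity is doing to a pair of upper/lower body animations might look like:

```rust
// Hypothetical sketch of how the SkeletalAnimationSystem might pick
// separate upper and lower body animations. Variants and names are
// illustrative, not the real codebase's types.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Activity {
    Idle,
    Walking,
    AttackingWhileWalking,
}

/// Returns (upper_body_animation, lower_body_animation).
fn select_animations(activity: Activity) -> (&'static str, &'static str) {
    match activity {
        Activity::Idle => ("idle", "idle"),
        Activity::Walking => ("walk", "walk"),
        // The upper body attacks while the lower body keeps walking.
        Activity::AttackingWhileWalking => ("attack", "walk"),
    }
}
```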

Handling lower and upper body animations took a few iterations. I started with something incredibly complex with some traits and new type wrappers - and then after a few rounds of throwing things away and starting over landed on a simple struct.

```rust
/// The current action for the armature.
///
/// If the armature does not have an upper and lower body then only the
/// upper body should be used.
#[derive(Debug)]
pub struct CurrentAction {
    pub(super) upper_body: ActionSettings<'static>,
    pub(super) lower_body: ActionSettings<'static>,
}
```

After getting the skeletal animations working I moved on to working on combat.

I wanted to experiment with the way that we display hitpoints.

I started off by firing up Blender and working on a model to use.

Made a heart model in Blender to use for testing our hitpoints visual.

The first iteration was a display with multiple hearts.

A red heart was meant to represent one hitpoint, and other colors would represent 5, 10, 20, 50, etc.

I worked on some transitions between the display when you gain or lose hitpoints that would make the hearts grow/shrink and interpolate between colors.

The first prototype of the overhead hitpoints display. I didn't like it. The animations are placeholders.

I didn't like how busy it felt and how having the different colors didn't feel very intuitive. Seeing several different colors at once was a little confusing.

I also didn't like that you couldn't really tell how much damage you did. If you went from one 5-heart to three 1-hearts you were increasing the number of hearts displayed while the current hitpoints were decreasing. Unintuitive.

Throwing this code away, but kept a screenshot for the memories.

Now I'm leaning into a simpler system. There is one heart above your head when you've recently been attacked, and there is one color.

Work in progress - working on an improved hitpoints display.

I'm going to experiment with things such as having the heart's size grow/shrink as you gain and lose hitpoints. I'll also try adding a number next to the heart with the exact number of remaining hitpoints and interpolating between the numbers as you receive damage or heal.
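A minimal sketch of those two ideas - scaling the heart with remaining hitpoints and interpolating the displayed number toward the real value - could be as simple as a couple of lerps (the minimum scale and function shapes here are assumptions, not the game's actual code):

```rust
/// Linearly interpolate between two values. `t` is clamped to [0.0, 1.0].
fn lerp(from: f32, to: f32, t: f32) -> f32 {
    let t = t.clamp(0.0, 1.0);
    from + (to - from) * t
}

/// Hypothetical: scale the heart relative to max hitpoints, keeping a
/// minimum size so the heart never disappears entirely.
fn heart_scale(current_hp: f32, max_hp: f32) -> f32 {
    let min_scale = 0.5;
    lerp(min_scale, 1.0, current_hp / max_hp)
}

/// Hypothetical: animate the displayed number toward the actual
/// hitpoints as `t` advances over the transition.
fn displayed_hp(previous: f32, actual: f32, t: f32) -> u32 {
    lerp(previous, actual, t).round() as u32
}
```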

I want the combat in the game to feel satisfying, so I'm willing to invest in prototyping and scrapping ideas until we land on something smooth.

I'm also not sold on using a 3d model for the overhead display and could potentially move to a 2d sprite. I need to think and experiment more.

I first introduced the concept of Npcs making decisions based on what they know about the world in 049.

This is in contrast to a more common approach of having a planner process that has global state visibility and controls what NPCs do.

This complete decoupling of one NPC from all other NPCs, and the restriction of visibility to a view of state specific to just that NPC, allows NPCs to run the process of deciding what to do on any machine, not just the game server - as long as their state is sent to that machine.

In the code I'm referring to it as a distributed npc architecture.

The essence of it is that npcs are by design meant to be processed on other servers, and these servers then communicate back to the main game server with requests for what they want the npc to do.

The npcs use a system similar to goal oriented action planning to make decisions on what to do every tick. If they decide to take an action they must communicate over the same protocol that human players use.

An npc is no different from a human player in the server's mind. Both players and npcs have the ConnectedClientComponent. The game server just sends some state to you, and you send a ClientRequestBatch to the game server whenever you want to make something happen.
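To make that concrete, here's a hedged sketch of the shared protocol idea - the real ClientRequest and ClientRequestBatch types surely carry more than this, and the variants shown are only the ones mentioned in this journal:

```rust
// Sketch: npcs and players both speak to the server through the same
// request types, so the server doesn't care who sent the batch.
type EntityId = u32;

#[derive(Debug, Clone, PartialEq)]
enum ClientRequest {
    AutoAttack(EntityId),
    MoveToTile(u32, u32),
}

#[derive(Debug, Default)]
struct ClientRequestBatch {
    requests: Vec<ClientRequest>,
}

/// The server processes a batch the same way regardless of whether the
/// connected client is a human player or a distributed npc client.
fn process_batch(batch: &ClientRequestBatch) -> usize {
    batch.requests.len()
}
```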

This means that any behavior that I design for an npc can be given to players - and vice versa.

Another nice part is that the system is designed from the beginning to allow an npc to be processed on one server and then another server could process it on the next tick.

When an npc is processed it stores everything that it's paying attention to on disk (in production we use a Kubernetes volume), and when we process the npc again on the next tick it first reads from this cache.
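In miniature, the cache behaves like a key-value store keyed by npc - this sketch uses an in-memory map standing in for the on-disk volume, and the type names are assumptions:

```rust
use std::collections::HashMap;

// Sketch of the per-npc observation cache. In production this would be
// backed by disk (a Kubernetes volume) so that any machine can pick up
// an npc's state on the next tick.
#[derive(Debug, Default)]
struct ObservationCache {
    // npc id -> serialized observations from the last tick it was processed.
    store: HashMap<u32, Vec<u8>>,
}

impl ObservationCache {
    /// Persist everything the npc was paying attention to this tick.
    fn write(&mut self, npc_id: u32, observations: &[u8]) {
        self.store.insert(npc_id, observations.to_vec());
    }

    /// Read last tick's observations back before processing the npc
    /// again, possibly on a completely different machine.
    fn read(&self, npc_id: u32) -> Option<&[u8]> {
        self.store.get(&npc_id).map(|v| v.as_slice())
    }
}
```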

So since zero state is stored on an npc client we can:

Use cheap AWS spot instances for running our npc planning

Dynamically decide which distributed npc client to use to process an npc on a tick by tick basis. If one npc is repeatedly taking a long time to plan we can dynamically decide to process it on a larger server. And in general the system can auto-tune itself to maximize resource utilization and minimize dead time by storing heuristics on how long servers are taking to process npcs and moving these workloads around accordingly.

Actually now that I type the above - I could see a potentially better approach of just having a queue of jobs and having the clients pull from that queue. Yeah that would probably be much simpler. If a job is added and doesn't get processed in a reasonable amount of time then we'd just scale up the AWS auto-scaling group of clients. Nice!
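The queue idea from the paragraph above could be sketched like this - a shared queue of planning jobs that worker clients pull from, with the backlog size driving auto-scaling decisions (all names here are hypothetical):

```rust
use std::collections::VecDeque;

// Sketch of the queue-based alternative: planning jobs go into a shared
// queue and npc clients pull whatever is next, so nothing needs to
// assign specific npcs to specific machines.
#[derive(Debug, PartialEq)]
struct PlanningJob {
    npc_id: u32,
    tick: u64,
}

#[derive(Debug, Default)]
struct JobQueue {
    jobs: VecDeque<PlanningJob>,
}

impl JobQueue {
    fn push(&mut self, job: PlanningJob) {
        self.jobs.push_back(job);
    }

    /// A worker pulls the next job. If jobs sit here too long we'd scale
    /// up the auto-scaling group of clients rather than hand-picking machines.
    fn pull(&mut self) -> Option<PlanningJob> {
        self.jobs.pop_front()
    }

    fn backlog(&self) -> usize {
        self.jobs.len()
    }
}
```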

Granted none of this is needed right now so I haven't gone as far as building this sort of workload management, but I'm happy that when it is needed the architecture will support it.

For now I'm running the distributed client in a thread on the main game server.

The main cost of this approach is needing to serialize the npc's state and then send it over the wire, but this should pale in comparison to the cost of pathfinding through possible tasks in order to build a plan to meet a goal. So I'm hoping that this will end up scaling rather nicely.

If this architecture pans out and stabilizes over the next year or two I'll write a more detailed blog post on it.

Back in 049 the npc decision making was based on a random function. This week we implemented a system similar to goal oriented action planning by pathfinding through possible tasks to accomplish a goal.

The available tasks and their costs are dynamic based on what the npc knows about the world.

Right now I'd rate the code quality a C-.

The structure is in place and it's working - but it still needs to be exercised more for me to land on something that feels super clean to work with.

It took a couple of days to get to the C-. At the beginning I was just flailing and throwing unimplemented!() in left and right as I tried to land on a serviceable structure.

In the video above I attack an npc that has a StateRequirement::AvoidDeathByAttack(RiskTolerance::Medium).

```rust
// Dev Journal Note: In the future I can use two side by side vectors
// instead of having this nested indirection.

// Dev Journal Note: A weighted vec is just a vector where items can be
// randomly chosen based on their weighting. This allows me to add some
// variety in the order that goals are visited. I use the WeightedVec
// in a few other places in the codebase.
pub type StateRequirements = Vec<(WeightedVec<StateRequirement>, GoalTier)>;
```

This state requirement has a higher priority than any of its other state requirements.

The npc checks if it's met by keeping a local cache of facts that it has observed. In this case it leans on its tracking of every time it has been attacked.

```rust
/// Keep track of entities that have recently attacked the NPC.
///
/// After enough time has elapsed since an attacker has attacked this entity
/// they'll be removed from the NPC's memory.
#[derive(Debug, Serialize, Deserialize)]
pub struct RecentlyAttackedBy {
    attackers: HashMap<EID, AttackerInfo>,
    pub(super) last_attacked_at_tick: Option<u32>,
}

/// Information that we store about an entity that attacked the NPC, such as
/// when they first attacked and all of the damage that they've dealt.
#[derive(Debug, Serialize, Deserialize)]
struct AttackerInfo {
    first_attacked_at_tick: u32,
    last_attacked_at_tick: u32,
    hits: Vec<u16>,
}
```
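The "after enough time has elapsed" expiry could work something like this sketch - the forget threshold and field shapes are assumptions for illustration, not the real values:

```rust
use std::collections::HashMap;

// Hypothetical: drop attackers from the NPC's memory once enough ticks
// have passed since they last attacked. The threshold is made up.
const FORGET_AFTER_TICKS: u32 = 100;

struct AttackerInfo {
    last_attacked_at_tick: u32,
}

fn forget_old_attackers(attackers: &mut HashMap<u32, AttackerInfo>, current_tick: u32) {
    attackers.retain(|_, info| {
        current_tick.saturating_sub(info.last_attacked_at_tick) < FORGET_AFTER_TICKS
    });
}
```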

If the npc has been attacked recently the StateRequirement.is_met function will return false - and it will pathfind through Tasks that are able to solve for that requirement until it lands on a Vec<Task> plan.
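A stripped-down sketch of that planning step might look like the following - the real Task and StateRequirement types are richer, a real planner chains tasks into multi-step plans, and the cost values here are invented:

```rust
// Hedged sketch: if a requirement isn't met, search through tasks that
// claim to satisfy it and pick the cheapest, yielding a Vec<Task> plan.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Task {
    DestroyEntity(u32),
    MoveToTile(u32, u32),
}

struct CandidateTask {
    task: Task,
    cost: u32,
}

/// Pick the cheapest single task that satisfies the requirement. A real
/// GOAP-style planner would chain tasks together into a multi-step plan.
fn plan(requirement_met: bool, candidates: &[CandidateTask]) -> Vec<Task> {
    if requirement_met {
        return Vec::new();
    }
    candidates
        .iter()
        .min_by_key(|c| c.cost)
        .map(|c| vec![c.task])
        .unwrap_or_default()
}
```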

Then from tick to tick it will process the current task and when it is completed move on to the next task.

Since this system is so young there is only one applicable task here right now, Task::DestroyEntity(EntityId). So it selects this and then sends a ClientRequest::AutoAttack(EntityId) to the server, thus beginning to fight back.

In the future we can add more potential tasks to make things far more interesting.

For example, if an npc knows that the enemy attacking it is a 2x2 tile enemy and it knows of a nearby area that can only be reached by 1x1 tile entities it might decide to make a run for it using Task::MoveToTile .

I'm expecting this system to get really cool over time as I add more and more possible Task nodes into the system and the NPCs start to choose plans that I would've never expected.

Again, the distributed npc client code is still bad and not very organized or fluid - but we'll get there. Such is the nature of building something for the first time. It starts off messy - despite having decent test coverage in place.

I relish the moment when the code is still trash but you can see that underneath the rubble is a sparkling beam of light that will just take the honing that comes from needing to solve a few more problems before it lights up the sky.

Basic dependency injection into shaders via string replacements to share code between shaders. Removed a lot of duplicated shader code.
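The string-replacement approach can be sketched in a few lines - a shared snippet gets spliced into any shader source that contains an include marker. The marker syntax and GLSL snippet below are assumptions, not the project's actual convention:

```rust
// Hedged sketch of sharing shader code via string replacement: each
// (marker, code) pair substitutes a shared snippet into the shader
// source wherever the marker appears.
fn inject_shared_code(shader_source: &str, includes: &[(&str, &str)]) -> String {
    let mut out = shader_source.to_string();
    for (marker, code) in includes {
        out = out.replace(marker, code);
    }
    out
}
```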

What initially started as me taking a little break to read up on Apple's Metal Graphics API led to about six hours on the couch that ultimately led to PRing an example of headless rendering into metal-rs. Learned a lot!

Our financials for March 2020 were:

| item | cost / earning |
| --- | --- |
| revenue | + $4.99 |
| aws | - $107.82 |
| adobe substance | - $19.90 |
| GitHub LFS data pack | - $5.00 |
| photoshop | - $10.65 |
| ngrok | - $10.00 |
| chinedun@akigi.com Google email | - $12.00 |
| total | - $160.48 |

I've cancelled the Google email to save money. I can add it back when we need it.

I wanted to release/announce an alpha of the game on April 9th - yeah that's just not happening.

I was only expecting to have one or two things to do and then just grow from there, but we're still at zero.

So we'll aim for May 21st.

We're making good progress from week to week ever since finishing the refactor, so we just need to keep pushing and land on something that's fun.

Cya next time!

- CFN