Making a Hack’n’slash #8: Implementing a basic AI using a Finite State Machine 1/2

Let’s work on our enemies and give them some brains using state machines…

This article is also available on Medium.

Earlier in this series, we set up some basic interactions between our guard hero and a “Brute” enemy – we made sure that whenever we trigger our “attack” input, the guard throws a punch and, if the enemy is close enough, it takes a hit.

But of course, for now, the enemy stands completely still and doesn’t react to your actions. This makes our Hack’n’slash pretty uninteresting, right?

Well – let’s see how we can fix this by implementing a basic AI for our enemies based on finite state machines! Today and in the next episode, we will set up a simple algorithm to make the enemies spot the player, run towards the hero, attack, or return to their spawn point.

Important note: in these tutorials, we are going to work on the AI for our “normal” enemies, the creepers that are all around the levels. We will see in a later episode how to handle more special cases such as the boss patterns… 😉

A quick note on modelling AI

Almost every game needs to have some form of artificial intelligence, or AI, at one point – it can be more or less complex, and it can be very specific or very generic, but all in all, game designers have to put some brains in the creatures and the characters. And they can now play with various techniques to model this sort of system: ad-hoc logic, state machines, behaviour trees, planners…

If you want a really good overview of the most famous AI architectures (along with their advantages and drawbacks), I really recommend you check out this article by Dave Mark (it’s from 2012, though, so perhaps more recent architectures aren’t described there).

Always keep in mind that none of these techniques are silver bullets: usually, one technique will be best suited for a given situation and less relevant in another. It’s all a question of limitations, requirements and trade-offs.

For example, state machines and behaviour trees are (usually) very deterministic. They rely on many small elements that each encapsulate an action and that you compose into a larger system, but the links between those elements are hardcoded by the programmers. If this system becomes very large, state machines can get a bit too restrictive, so we can either take advantage of the behaviour trees’ modularity, or try other techniques like planners or utility-based AI. On the other hand, those methods create way more “random” behaviours because their structure is more dynamic. This can even lead to unexpected reactions from the AI that designers wouldn’t have thought of, which is great for having “human-like” interactions… however, it makes them harder to implement and debug for programmers!

Note: you can make stochastic state machines or behaviour trees, but it’s rarely “pure random”; more often than not, it’s about giving weights to specific transitions to make some chains of actions more frequent than others. So designers and programmers still need to decide on those weights and “prepare” this randomness…

Designing our Hack’n’slash enemy AI

Here, I want my simple enemy AI to have the following features:

  • there will only be a few possible actions for the AI: moving towards a position or moving towards a target, attacking at a given rate, going back to the initial spawn point, taking a hit and dying
  • these actions will be associated with Animator states (i.e. animations)
  • the players need to be able to anticipate the enemy’s actions because hack’n’slash games often imply mastery and a “death and retry” philosophy – this means we want a deterministic AI

All of this means that more “random” AI architectures like planners or utility-based AI are not relevant for our use case (we want to have a predictable behaviour) and that we don’t have a very complex system overall; so we should look into state machines or behaviour trees.

In fact, here, given the simplicity of our behaviour, state machines should be enough. And since they are a bit simpler to implement than behaviour trees, let’s go ahead and pick this technique for our enemy AI.

More precisely, we are going to implement a finite state machine, or FSM.

We actually already saw an example of a state machine with our Animator Controller! Basically, FSMs rely on a finite number of states and transitions: you initially define the set of states your entity can be in, and then you decide what inputs will trigger a change from one state to another. Finally, you simply decide what your initial state is and, from there, the entity will essentially “live a life of its own” by regularly updating its current state.

There are several ways of implementing a finite state machine for a game. For example, you can create several C# classes that each describe the logic and data for one state, and it’s the collection of all those classes that forms the entire state machine.

Note: if you want a little refresher on simple FSMs, you can check out this short Unity tutorial I made recently – or, for a longer dive into FSMs with an example of the multi-class implementation, you can also take a look at this other video tutorial 😉

But, in our case, the behaviour is quite simple, so we can just implement our state machine directly in the EnemyManager based on an enum. The script will simply switch its current state between the different enum values and then check this state to know which logic to run during the Update().

Ok, with that said: I listed before the actions I want for my AI – let’s now translate this into states for our FSM!

  1. when the AI is moving towards a target or a position, it will be in a “MoveTo” state
  2. when the AI is attacking, it will be in an “Attack” state
  3. when the AI is going back to its spawn point, it will be in a “Return” state
  4. when the AI has lost all its healthpoints and takes a fatal blow, it will enter the final “Die” state
  5. else, the AI will be in an “Idle” state (and won’t do anything)

Note: as you can see, the TakeHit() function we coded previously in our EnemyManager isn’t mentioned here – that’s because this specific method is run at a very specific moment that is “independent” of the AI’s actions, so it is not directly part of the state machine 😉

In other words, I can define an enum at the top of my EnemyManager class with this list of possible states, and have my state machine start in its “Idle” state:
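For example, here is a minimal sketch of what this could look like (the enum and field names are my own picks, so feel free to adapt them):

```csharp
using UnityEngine;

public enum EnemyState
{
    Idle,    // default state: the enemy just stands there
    MoveTo,  // chasing a target or moving to a position
    Attack,  // close enough to the target: attacking at a given rate
    Return,  // going back to the spawn point
    Die,     // final state after a fatal blow
}

public class EnemyManager : MonoBehaviour
{
    // current state of our finite state machine - we start in "Idle"
    private EnemyState _state = EnemyState.Idle;
}
```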

Now, we’ll have to dive deeper into the various features of the AI and discuss how the movement will work, what will trigger the transitions, what additional data about the enemy type our EnemyManager will need…

Detecting the player

First of all, in order to exit the “Idle” state and have our enemy do something, we’ll need it to spot that our hero is nearby! To do this, we can use a SphereCollider to represent the field of vision (FOV) of the entity and turn it into a trigger to easily detect any collider that enters this area.

The only problem is that, at the moment, our Brute model already has a BoxCollider component to be able to take hits from the guard’s punches. So, we won’t be able to use Unity’s pretty handy OnTriggerEnter() built-in function as-is, because we wouldn’t be able to tell whether the event came from the BoxCollider or the SphereCollider.

The solution is to add a level to the hierarchy of our object to differentiate between those two colliders.

At the top level, we have our actual “Brute” object (that’s what we need to turn into a Prefab, as we saw last time):

It has the SphereCollider that will be the entity’s field of vision and the EnemyManager. Note that this object is not on the “Enemy” layer, because that’s not the collider that we want to check for when the guard throws punches.

Then we have the inner level with the “Model” child object that has the Animator, the BoxCollider and the “Enemy” layer:

Also, a little trick is that if your FBX model’s orientation isn’t consistent with the root object’s direction, you can use this intermediary transform to re-adjust it – for example, here, I’ll need to rotate my “Model” object along the Y axis:

Now, we’ll be able to use OnTriggerEnter() in our EnemyManager to detect incoming colliders, compare their tags with our “Player” reference tag and run our state switching logic if need be. We’ll also mark the player as the current target so that we can track it as it moves around:
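A possible sketch of this detection logic (re-using the EnemyState enum and _state field from before):

```csharp
public class EnemyManager : MonoBehaviour
{
    // ...

    // temporary reference to the transform the entity wants to follow (null if none)
    private Transform _target = null;

    private void OnTriggerEnter(Collider other)
    {
        // only react to the hero entering our field of vision
        if (other.CompareTag("Player"))
        {
            _target = other.transform;
            _state = EnemyState.MoveTo;
        }
    }
}
```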

Important note: of course, make sure that the hero object (that is the player’s avatar) has the pre-defined “Player” tag! 😉

This _target is a temporary reference to the transform the entity wants to follow. It will only be used for “dynamic” destinations. So if it’s null, it means the enemy either has nowhere to go, or it has to reach a static precomputed position.

But of course, we need to set the radius of the SphereCollider to be the FOV of the entity! So first, let’s update our EnemyData class with this new piece of info:
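For example (a sketch: the fovRadius field name and the CreateAssetMenu path are my own picks, and the existing fields are elided):

```csharp
using UnityEngine;

[CreateAssetMenu(fileName = "EnemyData", menuName = "Scriptable Objects/Enemy Data")]
public class EnemyData : ScriptableObject
{
    // ... existing fields (health, etc.)

    [Header("AI")]
    public float fovRadius; // radius of the SphereCollider used as the field of vision
}
```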

And then, at the beginning of our EnemyManager, we’ll use it to initialise the SphereCollider parameters (it’s also a good time to use the [RequireComponent] attribute to ensure we do have this collider on our object):
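A possible sketch, assuming the EnemyData reference is a public field called data:

```csharp
using UnityEngine;

[RequireComponent(typeof(SphereCollider))]
public class EnemyManager : MonoBehaviour
{
    public EnemyData data;

    void Start()
    {
        // turn the SphereCollider into a trigger sized to the enemy's field of vision
        SphereCollider fov = GetComponent<SphereCollider>();
        fov.isTrigger = true;
        fov.radius = data.fovRadius;
    }
}
```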

Note: remember to update the “Brute” EnemyData Scriptable Object instance with a non-null FOV radius! 😉

Moving our enemy

The next step is to actually chase the player and get close enough to hit it!

This means we need two additional parameters in our EnemyData – the movement speed and the attack range (so that we know when we can stop chasing and start attacking):
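For example, still in our EnemyData Scriptable Object (field names are my own picks):

```csharp
// (in EnemyData)
[Header("AI")]
public float fovRadius;
public float moveSpeed;    // how fast the enemy runs towards its target
public float attackRange;  // distance at which it stops chasing and starts attacking
```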

Now: how are we going to move the enemy in an intelligent way? In particular, how can we make it avoid walls, holes and other obstacles? How can we make it plan the best path to our hero on its own?

Luckily, Unity has us covered once again thanks to its built-in AI navigation system! 🙂

Setting up the scene for AI navigation

In short, the idea behind Unity’s navigation system is to define a navigation mesh and agents that walk on it with pre-built pathfinding AI. This nav mesh re-maps the floors in your scene and creates a simplified representation of the walkable space as a mesh of small connected polygons. These polygons are then used by the nav agent’s pathfinding algorithm (based on the famous A*) to find the best path to the target point. The nice thing is that we can easily designate some objects in our scene as obstacles to instantly “cut holes” in the nav mesh and block the agent if it tries to compute a path that goes through these areas.

To tell Unity that our ground should be a walkable floor for the agents, we first need to open the “AI > Navigation” window.

Then, in the “Object” tab, with our “Ground” game object selected, we can turn it into a Navigation Static object. It will automatically be assigned the first default type of navigation area, “Walkable”, but you can change this with the dropdown if need be (or even define other types of nav areas in the “Areas” tab, at the top of the Navigation window!).

Then, go to the “Bake” tab and click the button to bake the nav mesh.

This will create a NavData asset in your project’s folder, next to your scene, because this data is tightly linked to the level so Unity directly puts it in the same directory 😉

Finally, make sure to toggle on the “Show NavMesh” tool in the Scene view… and you’ll see that the ground is now covered in a light-blue grid:

The only issue is that, for now, the walls in the scene aren’t defined as obstacles: an agent would consider these areas as walkable, too. To fix this, let’s add a new component on these objects – the NavMesh Obstacle. Also, we’ll have to toggle on the “Carve” option so that they really cut a hole in the nav mesh as expected:

If you open the Navigation window again and re-select your ground, you’ll see that there are now white zones around the obstacles that indicate these areas are not walkable for the agents:

The last step is to add a NavMesh Agent component on the Brute object (at the top-level) to declare it as an “intelligent” entity in this navigation system:

Note: feel free to tweak the settings of this component throughout the tutorial to see how they impact the movement of the character; I’ll stick with the defaults for now but they might not be suited to your own game’s look-and-feel! 😉

Thanks to this nav system, we’ll soon have an enemy that’s able to follow us while avoiding walls for example:

Using the NavMesh agent in our AI script

Now that we have configured our scene and our entity to use Unity’s navigation system, we can add a reference to the NavMesh agent component in our EnemyManager script and use it, for example, when the hero is spotted by the enemy or when it gets out of sight and the enemy should go back to its spawn point.

So first, let’s update our OnTriggerEnter() function to set the agent’s destination:
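For example (a sketch, assuming the agent reference is stored in an _agent field and grabbed in Start()):

```csharp
using UnityEngine;
using UnityEngine.AI; // needed for the NavMeshAgent

public class EnemyManager : MonoBehaviour
{
    // ...

    private NavMeshAgent _agent;

    void Start()
    {
        // ... (SphereCollider initialisation from before)
        _agent = GetComponent<NavMeshAgent>();
        _agent.speed = data.moveSpeed;
    }

    private void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))
        {
            _target = other.transform;
            _state = EnemyState.MoveTo;
            // tell the nav agent to start chasing the player
            _agent.SetDestination(_target.position);
        }
    }
}
```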

Then, let’s define the OnTriggerExit() built-in entrypoint (which is called whenever a collider leaves the trigger collider on the same object as the script) – the entity will need to switch to its “Return” state, reset the current target to be null and set the destination point of the nav agent to the spawn point position:
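A possible sketch, assuming we cached the spawn position in a _spawnPosition field at startup:

```csharp
// (in EnemyManager)

// remember where we spawned so we can go back to it later
private Vector3 _spawnPosition;

void Awake()
{
    _spawnPosition = transform.position;
}

private void OnTriggerExit(Collider other)
{
    if (other.CompareTag("Player"))
    {
        // the hero left our field of vision: drop the target and head home
        _target = null;
        _state = EnemyState.Return;
        _agent.SetDestination(_spawnPosition);
    }
}
```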

By default, when you change the destination point of the nav agent, it immediately updates its course while keeping its current velocity. This sometimes causes a bit of sliding on the ground and annoying large U-turns – depending on the type of game you’re making, you might prefer the enemy to instantly turn back to face its spawn point and then run in that direction.

To do this, simply reset the agent’s velocity to the Vector3.zero value and set the object’s forward vector (so that it looks at the target) when you change its destination:
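For example, with a small helper method (the name is my own) that we’d call instead of _agent.SetDestination() in our trigger callbacks:

```csharp
// (in EnemyManager)
private void _SetAgentDestination(Vector3 destination)
{
    // cancel the current momentum to avoid sliding...
    _agent.velocity = Vector3.zero;

    // ...snap the look direction towards the new destination (ignoring the height difference)...
    Vector3 lookDirection = destination - transform.position;
    lookDirection.y = 0f;
    if (lookDirection.sqrMagnitude > 0.001f)
        transform.forward = lookDirection.normalized;

    // ...and finally set the new destination
    _agent.SetDestination(destination);
}
```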

At that point, we have a simple enemy AI and the Brute can spot the player… but it doesn’t follow it very well and it doesn’t stop to attack when it reaches it!

Implementing the movement state logic

The reason is that we’ve set the player to be the target but we still need to set up some logic for when the AI is in the “MoveTo” or the “Return” state. Basically, in both those cases, we need to:

  • check the remaining distance to the agent’s destination point
  • if the enemy is close to the target and in the “Return” state, it should switch back to the “Idle” state and reset the rest of its variables
  • but if the enemy is closer to the target than its attack range and in the “MoveTo” state, then it should switch to the “Attack” state
  • finally, if the enemy is still away from its target, it needs to re-update its nav destination based on the target’s current position – which will automatically make the enemy run in this direction

Here’s the C# version:
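Here is a sketch of what it could look like, re-using the _state, _target, _agent and data fields from before:

```csharp
// (in EnemyManager)
void Update()
{
    if (_state == EnemyState.MoveTo || _state == EnemyState.Return)
    {
        // how far are we from the nav agent's current destination?
        float remainingDistance = Vector3.Distance(transform.position, _agent.destination);

        if (_state == EnemyState.Return && remainingDistance <= _agent.stoppingDistance + 0.1f)
        {
            // back at the spawn point: go idle again and reset our variables
            _state = EnemyState.Idle;
            _target = null;
        }
        else if (_state == EnemyState.MoveTo && remainingDistance <= data.attackRange)
        {
            // close enough to the player: stop chasing and start attacking
            _state = EnemyState.Attack;
        }
        else if (_target != null)
        {
            // still chasing a moving target: keep the destination up-to-date
            _agent.SetDestination(_target.position);
        }
    }
}
```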

After these improvements, the enemy is now able to turn towards the player and stop in front of it if it’s close enough:

Adding some animations

To wrap up this first part on our enemy AI, let’s set up a little run animation for the character! The idea is the same as for the player’s Animator Controller: we’re going to add a “Run” state based on the “Running” animation clip, a “Running” boolean parameter and two conditional transitions between the “Idle” and the “Run” state that check this parameter:

Then, in our C# script, we’ll re-use the hash technique to optimise the animation updates:
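For example (the field names are my own picks):

```csharp
// (in EnemyManager)

// reference to the Animator on the "Model" child object
[SerializeField] private Animator _animator;

// cache the parameter's hash once instead of hashing the string on every call
private static readonly int _runningParamHash = Animator.StringToHash("Running");
```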

And, finally, we’ll turn the “Running” parameter on or off in our various methods:
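Roughly like this (a sketch of where the toggles would go):

```csharp
// when the enemy starts moving (in OnTriggerEnter() and OnTriggerExit()):
_animator.SetBool(_runningParamHash, true);

// when it stops moving (when switching to "Idle" or "Attack" in Update()):
_animator.SetBool(_runningParamHash, false);
```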

And tadaa! The enemy now switches from one animation to the other when it changes its AI state 🙂

Conclusion

In this episode, we’ve started to implement a simple AI for our “normal” enemies using finite state machines. The Brute can now spot the guard, run in its direction and stop when it reaches it…

But we’re not done! Next time, we’ll continue coding up the AI to inject the attack pattern and fully integrate the “take hit” and “die” logics 😉
