Why you should use Unity’s new input system

Let’s see why Unity’s new input system is worth diving into! 🙂

This article is also available on Medium.

Since Unity 2019.2, the famous game engine now offers a second input system that is meant to ultimately supersede the current one. Although this new system is still in development and only available as an additional package, it already has some really interesting features and deserves at least a quick look.

I actually talked about this new input system a bit in my RTS tutorial series, during the episode on shortcuts. Although I was a bit critical of it in that tutorial and focused instead on how to implement a shortcut system by hand… I have to admit it was deliberately biased, because I wanted to push forward a handmade solution 😉

But in truth – Unity’s new input system has great advantages and is worth taking a look at…

So today, let’s see how to create a very basic cross-platform (gamepad/keyboard+mouse) player controller script using this new system!

An overview of the new system

How is the new system organised?

The new input system relies on a few core objects that are essential to understand if you want to dive deeper into this: the input actions asset, the action maps and the bindings.

The input actions are stored as an asset in your project. They are the top-level object that you’ll need to instantiate and refer to to access the input system.

This asset contains one or more action maps. Those maps define all the mappings between a key or a device input slot and an action, in a given context. This is very useful if you want your controls to perform different actions depending on the situation your player is currently in (for example: in the UI of the menu, in the 3d world on the ground, flying with a parachute…).

Finally, each action map contains one or more bindings. A binding is an action that can be listened to and reacted to in your C# script: it will be activated and “emitted” by the input system if the matching button or stick is used. The nice thing is that a binding can contain a list of device inputs and keys to directly handle cross-platform ambiguities.

Why is the new system interesting?

All in all, as OneWheelStudio explains in this video, although it’s a bit complex to grasp at first, the new input system is pretty powerful.

More precisely, this system shines in 3 areas:

  • it makes cross-platform controls easy: compared to the old input system, it is way faster to perform the same action via your keyboard, an Xbox controller, a Playstation gamepad, etc. and keep all devices consistent and in sync
  • there are lots of quick-wins & facilitators like binding compositing, setting the input by pressing it on your controller, switching between multiple action maps depending on the current context…
  • it can work thanks to events: this makes it more optimised for discrete inputs because you don’t need to continuously poll and check for them in your Update() function anymore; instead, you define callbacks that are managed on their own, automatically, if the input is indeed activated

Note: polling and regular update is still required for continuous inputs like sticks for example, as we’ll see later on in this tutorial 🙂

This new system even has a built-in input debugger that lets you quickly check which inputs are currently activated during runtime! And you can also install some additional samples to directly visualise your gamepad / basic inputs on-screen while playing:

So: even though it takes a bit more preliminary work and it requires you to really focus at first, the new input system actually does a lot of the heavy lifting under the hood and ultimately creates a more robust architecture.

Why you should use actions instead of direct inputs

I’ve said before that the new system relies on bindings to properly map player inputs to actions. But why is that a valuable technique? Why can’t we just say: “if I press the space-bar, then the player jumps!”? It would be way more readable, right?

Well…

It’s true that the most basic form of input is one that directly maps a specific piece of hardware, so a button or an axis value, to a function in the game:
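For example, with Unity’s old input system, a direct hardware-to-function mapping could look like this deliberately naive sketch (class and method names are hypothetical):

```csharp
using UnityEngine;

public class DirectInputJump : MonoBehaviour
{
    void Update()
    {
        // Hard-wired to specific hardware: the spacebar or the first gamepad button
        if (Input.GetKeyDown(KeyCode.Space) || Input.GetKeyDown(KeyCode.JoystickButton0))
            Jump();
    }

    void Jump()
    {
        // ... jump logic ...
    }
}
```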

The problem with that method is that it completely depends on the device and it is pretty hard to re-configure afterwards. If you want to handle another type of input controller, you’ll have to add another check in your if-statement… and if the player wants to re-map the controls to better suit their style, it’s simply impossible!

That’s why, usually, you don’t do direct binding but rather add an intermediary component: the action.

The idea is that instead of referencing a specific device button or axis, you say that your function will be triggered by an event that corresponds to an action. This action is purely abstract. The important thing is that it can be caused by one or more inputs and that these inputs can change throughout the game, without this change impacting the action-event-function part of the chain!

Let’s take back our jump example – you could have a gamepad or a keyboard run the “Jump” action, and then this action triggers the “Jump” event that, in turn uses the “Jump” function as callback:
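In plain C#, this decoupling could be sketched like this (all names hypothetical – this is just to illustrate the action-event-function chain, not the actual package internals):

```csharp
using System;

// An abstract "Jump" action: it doesn't know (or care) which device triggers it
public class JumpAction
{
    public event Action Performed;

    // Called by whatever input layer detected a matching binding
    public void Trigger() => Performed?.Invoke();
}

// Gameplay code only subscribes to the action's event:
//   jumpAction.Performed += Jump;
// Re-mapping the controls never touches this part of the chain.
```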

Or you could allow the players to do some re-mapping and specify their own controls instead of the default ones:

This action-based approach is way more flexible than direct bindings – and that’s why it’s at the heart of Unity’s new input system!

Importing and enabling the new input system in your project

Alright – enough talking, let’s get to work! 🙂

To use the new input system, you’ll need Unity 2019.2 or newer. Then, you’ll have to import this package into your project, because it is still a separate downloadable package for now. So, go to your package manager and, in the Unity Registry, install the Input System package:

Once it’s done installing, Unity should warn you about needing to “switch to the new system”.

To facilitate the transition and avoid too many breaking changes, Unity’s team has decided to let devs have both systems coexist, or pick just one. You can click “Yes” right now, or set this option later in the Project Settings > Player panel.

To benefit from the new system, you can choose either the “Input System Package (New)” option or “Both”. If you choose “Both”, then the old system will continue to work as well. This can be very sweet if you have other packages that rely on the old input system… like, in my case, Cinemachine for example (that allows me to track and follow my player with ease!).

Setting up our input maps and bindings

Creating the input actions asset

The first step before we can get to coding is to create our famous input actions asset. To do this, there are two possibilities: you can either create a brand new asset from scratch or use a default one with some pre-configured maps and bindings.

If you want to go the long way, you can simply right-click in your project folder and create a new asset of Input Actions type, as usual:

But I’d personally rather use (and adapt) the default one, because here we’ll design some pretty common schemes, so these defaults will already partly fit our needs.

Let’s first create an empty game object (we can call it “InputManager”) and add a new PlayerInput component to it:

This component will handle all the input processing and callback triggering for our single-player game; if you need multiple controllers at once, you’ll have to dig into the “PlayerInputManager” component instead. It’s a really nice feature (that you can check out in the package’s sample scenes if you’re interested) that enables you to quickly build lobbies, split-screens… 🙂

Then, in this component, click on the “Create Actions…” button. This will prompt a popup that lets you choose where you want to store the asset, and automatically create a new .inputactions asset with some bindings in it.

To inspect and edit it, simply double-click on it in your project folder; this should bring up the following panel (that you can dock somewhere in your layout if you want):

Note: you can also create action maps and bindings via C# scripting, but I won’t be diving into this in this post. If you want more info, you can check out the official docs 🙂

Examining and tweaking the bindings

You see that, because we created input actions based on the default ones, we already have a “Player” and a “UI” action map (in the left column). We’ll ignore the UI map in this tutorial and focus on the Player action map.

In this map, as you can see, we have 3 bindings for now: “Move”, “Look” and “Fire”. Inside of each binding, you see the associated inputs per device type (keyboard, gamepad…). The “Move” action also shows you an example of composite bindings: the keyboard WASD or arrow keys are actually treated as if they were gamepad sticks, only we “clamp” the values to the 4 up/right/down/left axes.

In our case, we won’t be needing the “Fire” action, so we’ll replace it with our “Jump” action. The other two actions (“Move” and “Look”) however can be used almost directly! 🙂

Take a closer look at those two bindings: the “Move” and “Look” are both actions of value type. On the other hand, “Fire” (which we will rename to “Jump” shortly) is a button action. The new system provides you with 3 action types:

  1. value actions: for inputs that need to be tracked continuously and that require disambiguation – typically used for sticks or keyboard arrows
  2. button actions: for discrete inputs that are pressed/released/held
  3. passthrough actions: similar to value actions, but they bypass the disambiguation process – which allows you to get and consume inputs from all controls at once, if need be

In our case, we need 2 continuous actions, “Move” and “Look”, and 1 button action: “Jump”.

Let’s take care of this “Jump” action!

In your actions editor, select the “Fire” action and rename it “Jump”. Then, click on the various device inputs and change them to be something more intuitive, like the “South” button of the gamepad and the “Spacebar” key of the keyboard.

Note: the “South” button is a nice placeholder for any button at the bottom of the “diamond” on a gamepad controller… be it the “A” from an Xbox controller or the “X” from a PS one 😉

You can do this by clicking on the dropdown in the right column that says “Path” and picking the path to the binding input; or just clicking “Listen” and then pressing the input key on your input device:

Important note: when you’re done editing the asset, don’t forget to save it! You can either click the “Save Asset” button at the top or enable the “Auto-Save” mode on the right – but the package is still in development and this sometimes causes some UI issues…

Auto-generating a C# class from the asset

The last thing to do before we start coding is to use this asset to generate a C# class that wraps it and makes our inputs easier to access and use.

Basically, it will save us from having to remember that our input lives in the “Player” map and is named “Move”; instead, we’ll have a class that gives our IDE auto-complete hints and guarantees that the path to the binding we wrote is valid.

It’s straight-forward: just select your input actions asset and, in the Inspector, toggle the “Generate C# class” checkbox. Optionally, you can change the asset path, name and namespace of the class – or just leave the defaults.

Finally, when you click “Apply”, the new C# class will be added to your project. If you keep the defaults, the name of this class will depend on the name of your .inputactions asset – in my case, it’s “DefaultPlayerActions”.

Ok! We’re now ready to use those actions in our code 🙂

Using our inputs to move our player object!

In this tutorial, I’m going to stick with a primitive for my player and use a little red sphere. I’ll put it in a simple scene that has a ground plane, 4 walls and a few cubes so that I can move around and test collisions.

All the objects in the scene (ground, walls and cubes) have colliders (either BoxCollider or MeshCollider components) so that there are collisions with the player.

I will use the left stick or WASD/arrow keys to move it around (using a rigidbody and physics-based movement scheme), the right stick or mouse to rotate the camera, and the “south” button or spacebar to jump.

Designing our player controller logic

All of this will be handled by our HeroController class – create a new C# script and add it as a new component on the “Hero” object:

Then, open it in your IDE and remove the auto-generated Start() and Update() functions. Instead, the first step is to create and get a reference to our input actions.

The idea is to instantiate the DefaultPlayerActions class we’ve just generated to have a unique input manager instance. We can create it in our Awake() function by using the C# class constructor:
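As a sketch (assuming the generated class kept its default name, DefaultPlayerActions), this could look like:

```csharp
using UnityEngine;

public class HeroController : MonoBehaviour
{
    // Single instance of the auto-generated wrapper around the .inputactions asset
    private DefaultPlayerActions _playerActions;

    private void Awake()
    {
        _playerActions = new DefaultPlayerActions();
    }
}
```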

Now, we can access the different bindings inside this input actions instance and enable/disable them in the relevant methods. Note that we need to import the UnityEngine.InputSystem package to be able to use the InputAction variable type:

You see that, thanks to the auto-generated class, we can easily access the various bindings with things like: .Player.Move 🙂
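For example, inside our HeroController class, the OnEnable()/OnDisable() methods could enable and disable the bindings like this (a sketch assuming the default map/action names from the generated asset):

```csharp
using UnityEngine.InputSystem;

// Inside the HeroController class:
private InputAction _moveAction;
private InputAction _lookAction;

private void OnEnable()
{
    _moveAction = _playerActions.Player.Move;
    _lookAction = _playerActions.Player.Look;
    _moveAction.Enable();
    _lookAction.Enable();
    _playerActions.Player.Jump.Enable();
}

private void OnDisable()
{
    _moveAction.Disable();
    _lookAction.Disable();
    _playerActions.Player.Jump.Disable();
}
```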

For the “Move” and “Look” bindings, since they are continuous, we’ll use polling in our FixedUpdate() function (I’ll use FixedUpdate() instead of Update() to be sure all physics are computed properly). For the “Jump” binding, we can directly set a callback function, OnJump, to be called when the button is pressed.

Note: if you want to learn more about how to use events in C#, you can check out another article I wrote on this topic a while ago 😉

To check all the inputs are wired up as expected, you can add some debugs in your script:
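A minimal debugging sketch could look like this (still inside HeroController, with the Jump callback subscribed via the action’s performed event):

```csharp
// Subscribe/unsubscribe the callback along with the action:
//   in OnEnable():   _playerActions.Player.Jump.performed += OnJump;
//   in OnDisable():  _playerActions.Player.Jump.performed -= OnJump;

private void OnJump(InputAction.CallbackContext context)
{
    Debug.Log("jump!");
}

private void FixedUpdate()
{
    // Continuous inputs are polled every physics frame
    Debug.Log($"move: {_moveAction.ReadValue<Vector2>()}");
    Debug.Log($"look: {_lookAction.ReadValue<Vector2>()}");
}
```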

At that point, if you run the game, you’ll see that you get the various debugs in the console:

  • if you have a gamepad controller and you move the left/right sticks or press the “south” button
  • or if you’re using the keyboard+mouse device and press the WASD/arrow keys, move the mouse or press the spacebar

Moving the player

Let’s create a physics-based movement logic that uses a Rigidbody component on our player. We’ll also have a collider to get collisions with the ground, walls and cubes:

Note that I am freezing the X/Z rotations to avoid the ball rolling around and “losing its up direction”, because I will soon need to check for the distance between the “feet” of the player and the ground 😉

We can get this component in our Awake() function and set its velocity in the FixedUpdate() method.
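Putting it together, the movement part could be sketched like this (the _moveSpeed value is a hypothetical tuning parameter):

```csharp
// Inside the HeroController class:
[SerializeField] private float _moveSpeed = 5f; // hypothetical tuning value
private Rigidbody _rigidbody;

private void Awake()
{
    _playerActions = new DefaultPlayerActions();
    _rigidbody = GetComponent<Rigidbody>();
}

private void FixedUpdate()
{
    Vector2 input = _moveAction.ReadValue<Vector2>();
    // Preserve the current vertical velocity so gravity (and jumps) still apply
    _rigidbody.velocity = new Vector3(input.x * _moveSpeed,
                                      _rigidbody.velocity.y,
                                      input.y * _moveSpeed);
}
```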

If you run the game again, you’ll see that you can now move the red sphere with the left stick or the WASD/arrow keys 🙂

Rotating the camera

The second action we need to implement is the camera rotation.

Since we decouple the player’s and the camera’s rotations, this could lead to inverted/inconsistent moves if the camera was rotated too much… so I’ll clamp the camera rotation to a pre-defined angle to avoid overly counter-intuitive movements!

Overall, the code is pretty straight-forward:
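One possible sketch (assuming the referenced camera transform can be rotated freely – with Cinemachine, you’d typically rotate the virtual camera or a follow target; all field names and values are hypothetical):

```csharp
// Inside the HeroController class:
[SerializeField] private Transform _camera;          // drag the (virtual) camera here
[SerializeField] private float _rotationSpeed = 60f; // degrees per second
[SerializeField] private float _maxYawAngle = 60f;   // clamp around the start yaw

private float _cameraYaw;

private void FixedUpdate()
{
    Vector2 look = _lookAction.ReadValue<Vector2>();
    // Accumulate and clamp the yaw so the camera can't fully spin around
    _cameraYaw = Mathf.Clamp(
        _cameraYaw + look.x * _rotationSpeed * Time.fixedDeltaTime,
        -_maxYawAngle, _maxYawAngle);
    _camera.rotation = Quaternion.Euler(0f, _cameraYaw, 0f);
}
```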

Here, I’m not taking Camera.main directly for my _camera variable because I use the Cinemachine package to auto-follow my player – so I need to pass in the Cinemachine virtual camera as reference. That’s why I’m turning the variable into a serialized field, shown in the Inspector, so I can drag the camera to it.

Note: by the way, if you’re interested in learning more about the Cinemachine package, leave a comment and I might do a tutorial on this! 😉

Run the game again – you’ll now be able to rotate the camera with the mouse or the right stick to better inspect the scene around the player.

Making the player jump

To wrap this up, let’s add our jump action! Whenever we press the “Jump” binding and trigger its OnJump() callback, we’ll want to add some vertical force to the rigidbody:
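A minimal version of this callback could be (the _jumpForce value is a hypothetical tuning parameter):

```csharp
// Inside the HeroController class:
[SerializeField] private float _jumpForce = 5f; // hypothetical tuning value

private void OnJump(InputAction.CallbackContext context)
{
    // Apply an instantaneous vertical impulse to the rigidbody
    _rigidbody.AddForce(Vector3.up * _jumpForce, ForceMode.Impulse);
}
```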

However, in this simple prototype, I don’t want the player to perform double (or infinite)-jumps – so I’d like this to only happen if the player is grounded.

To check for this, I’ll add a little empty game object at the “feet” of my player, “GroundCheck”, like this:

Then, I’ll reference it in my HeroController (don’t forget to assign it in the Inspector! 😉 ) and do a little raycast downwards: if the raycast hits the ground, it means the hero is grounded and can jump; otherwise, they’re currently mid-air.
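As a sketch (the check distance is a hypothetical threshold you’d tune to your player’s size):

```csharp
// Inside the HeroController class:
[SerializeField] private Transform _groundCheck;     // the "GroundCheck" child object
private const float GROUND_CHECK_DISTANCE = 0.2f;    // hypothetical threshold

private bool IsGrounded()
{
    // Cast a short ray downwards from the player's "feet"
    return Physics.Raycast(_groundCheck.position, Vector3.down, GROUND_CHECK_DISTANCE);
}

private void OnJump(InputAction.CallbackContext context)
{
    if (!IsGrounded()) return; // no double or infinite jumps
    _rigidbody.AddForce(Vector3.up * _jumpForce, ForceMode.Impulse);
}
```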

You can even optimise this check by putting the ground plane on a specific layer, “Ground”:

Note: if you want the black cubes in the scene to be walkable, then you need to also put them on the “Ground” layer 🙂

And adding this parameter to the raycast check:
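With a LayerMask field (assigned to the “Ground” layer in the Inspector), the check could become:

```csharp
// Inside the HeroController class:
[SerializeField] private LayerMask _groundLayer; // set to the "Ground" layer in the Inspector

private bool IsGrounded()
{
    // Only objects on the "Ground" layer count as walkable
    return Physics.Raycast(_groundCheck.position, Vector3.down,
                           GROUND_CHECK_DISTANCE, _groundLayer);
}
```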

Whether you add this optimisation or not, your player is now able to jump around, yay!

Conclusion

Unity’s new input system is still in development, but it is a very promising and powerful package that really helps with some essential aspects of input management: mapping switches, in-game binding remap, cross-platform binding sharing…

Even if it takes some getting used to (especially when you’ve spent years with the old input system), I definitely see advantages to this system that are worth looking at!

What about you: have you used this new input system? How do you think it compares to the old one? Feel free to tell me in the comments!
