Running async code in Unity… in edit mode!

Let’s see how to create custom panels in Unity – and have them run async code!

This article is also available on Medium.

Lately, I’ve been working quite a lot with Unity’s built-in editor features, and in particular all the tools it provides for creating your own UI. Did you know that, in fact, you can create all the windows and Inspectors you want in Unity… and that things can get pretty complex?

Here is some custom UI I’m currently working on for a procedural mesh generation lib – see all the inspectors and windows with blue icons? Those are my own! 😉

Note: by the way, the Unity team itself uses these tools to build the UI of the Unity editor, so it’s clearly quite powerful! 🙂

This system is amazing and it lets you create tailor-made utilities for your projects. This really helps whenever you start working on your own objects and classes because, oftentimes, the auto-generated Inspector Unity gives you is a bit off – and they’re not to blame here, you’re the one who’s doing crazy unexpected stuff. So what’s really nice is that, by giving us access to these editor features, they let us replace these auto-generated and “not-tweaked-enough” UIs with our own.

But it can get a bit intricate when you start interacting with time-dependent logic in your lib: for example, what if your custom editor is for a class that can run asynchronous code? Since you’re still in edit mode and time doesn’t flow like in the run mode, can you actually simulate time and get this code to execute properly?

Yes – but this involves using C# tasks! 🙂

So, today, let’s see how we can use this async concept to run delayed actions from a custom Unity UI editor!

A little bit of context

Now – why am I talking about this precise question, exactly? It does seem a bit far-fetched, no? Who would want to have async code in edit mode?

Well.

At the moment, I’m working on a little Unity project to create procedural meshes easily. So I’m instantiating shapes, changing their materials and colours, scaling them up or down, moving them around the scene… and this is all nice and fine, the usual Unity workflow works well for preparing and editing your scene by hand with all these contents!

I’ve created lots of custom windows or editors to easily modify and tweak those objects – all of which is pretty straightforward once you’ve gotten the hang of it. But as I was getting a bit more fluent with creating my own UI panels, I eventually hit another wall.

I decided I would add some animation to those shapes. Like, instead of moving my quad with the position handle in the scene, I wanted it to go from the origin point to a little offset point in 2 seconds. And because I’m a lazy dev, I didn’t want to have to wait for my scene to start each time I tried out my anim; so I wanted to have this animation be runnable while still in edit mode.

“Easy!”, I thought. I’ll just create a button that calls my Play() function in my custom editor, and then I’ll code up some coroutines to properly schedule the animation events.

Fair enough: I prepared a very basic test function that would print “Hello world!” after 1 second, linked it to the new button in my Inspector and pressed “Play”.

And then, nothing happened.

So this is the story of how I reforged my code to work properly using C# tasks… 🙂

How can you create custom editors and windows?

Just before we dive into the async programming part, let me quickly recap the basics of creating custom UI in Unity.

“Editor scripts”

First off, to make custom panels, you have to write a special type of script, called “editor scripts”. Contrary to the usual scripts you write for your in-game logic, which you can place anywhere in your assets folder, these special scripts have to be placed inside a folder named “Editor” (typically at the root of your assets directory).

Then, the classes you write inside them usually derive from one of two classes: the Editor or the EditorWindow (those classes are available once you import the UnityEditor package):

  • Editor-derived classes, or custom Inspectors: they are shown in the “Inspector” window, just like a normal Unity component that you drag onto an object and that creates a little slot in the Inspector, usually with some exposed properties you can tweak
  • EditorWindow-derived classes allow you to create brand new windows in your Unity editor: instead of just configuring and modifying the display of a slot in the Inspector, this time you start from scratch with a blank “canvas”, a new dockable window that you can show in the menus at the top and then stick in your editor layout somewhere

I won’t talk about the EditorWindow today – I’ll just focus on custom Inspectors derived from the Editor class.

Displaying some basic data with the default layout

When you create a new Inspector, you want to display data about a given component on your object. You could use this to re-style the Inspectors of Unity’s built-in components, but usually the goal is to create your own editor for a MonoBehaviour class that you wrote yourself.

For example, let’s say I have the following – very basic – class:
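A minimal sketch of such a class could be (the colour field’s name is my own pick; the other names match the fields discussed below):

```csharp
using UnityEngine;

public class MyScript : MonoBehaviour
{
    // public fields: serialised and shown in the auto-generated Inspector
    public float myPublicFloat;
    public string myPublicString;
    public Color myColor; // hypothetical name: a colour field is shown, but never named in the text

    // implicitly private: not serialised, not shown
    float myFloat;

    // explicitly private: not serialised, not shown
    private string myPrivateString;

    // private as well (we'll expose it later with [SerializeField])
    private int myInt;
}
```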

If I compile this code and drag my class to an empty object, I will get Unity’s auto-generated Inspector:

There are three things to note here:

  • the type of the variable determines the type of the field you see: for example, floats and strings are shown as basic field inputs while colours have a specific colour-picker input
  • variables have default values that also depend on the variable type (0 for floats, an empty string for strings, a “clear” zero-alpha colour for colours…)
  • we don’t see all of the variables in our script, only the ones that we declared as public

Now, this last point relates to the most important thing to understand when working on custom editors: data serialisation.

Data serialisation

Basically, when you have an object in your Unity project, it is saved as an asset somewhere on disk along with all the data it contains. But this save (and then reload) process implies that the data can indeed be written to the disk and then read back again, i.e. that it can be serialised (and later deserialised).

So a serialisable script is a piece of code that can be deconstructed and later reconstructed by Unity, to restore it to its current state, if you reload your project.

Default built-in types like floats, ints, strings and so on are serialisable; when it’s a class that you made, you can use the System.Serializable attribute to make sure Unity recognises it as serialisable:
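As a rough sketch (the class and field names here are purely illustrative):

```csharp
using UnityEngine;

// this custom class can now be serialised by Unity...
[System.Serializable]
public class MyCustomData
{
    public string label;
    public float value;
}

public class MyDataHolder : MonoBehaviour
{
    // ...so this field shows up in the Inspector of the MyDataHolder component
    public MyCustomData data;
}
```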

But then, because Unity’s serialisers have to work at runtime and therefore need to stay performant, Unity actually has a few rules about which fields can and can’t be serialised:

  • it has to be public or have the SerializeField attribute
  • it has to be of a serialisable type: either a built-in type, some specific Unity types like Vector2, Vector3, Color… or a custom class that you marked as serialisable
  • it can’t be static, const or readonly

Note that arrays and list containers can be serialised, but not dictionaries; and that your classes have to follow a few rules if you want to be able to mark them as serialisable.

Note: also, for the ones really into C# inheritance – sadly, Unity doesn’t handle polymorphism for serialisation!

This is why some of our variables aren’t visible for now: by default, C# variables are private and not serialised by Unity. So the myFloat and myPrivateString fields are private (implicitly and explicitly, respectively), not serialised, and they don’t show up in the Inspector.

You can force serialisation (within the previous constraints) with a SerializeField attribute, or conversely prevent the serialisation of a field with a NonSerialized attribute:
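For example, a sketch of how those two attributes can be used (the NonSerialized field is just there for illustration):

```csharp
using UnityEngine;

public class MyScript : MonoBehaviour
{
    // private, but forced into serialisation (and therefore visible in the Inspector)
    [SerializeField]
    private int myInt;

    // public, but explicitly excluded from serialisation (and hidden from the Inspector)
    [System.NonSerialized]
    public float myIgnoredFloat;
}
```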

This is a very quick peek at serialisation, of course, and it is a very deep topic that you should look into more if you plan on doing a lot of custom data and custom UI – but that’s enough for today!

Making a custom Inspector

Ok so – let’s say we want our Inspector for the MyScript class to have some additional buttons at the bottom to run some actions. We’ll have two functions:

  • Square(): that computes the square of the myPublicFloat field and prints it in the console
  • Multiply(): that multiplies the private myInt variable by the length of the myPublicString variable and prints the result in the console

Here are the two functions in the MyScript class:
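A sketch of the updated class (keeping only the fields we need for these two methods):

```csharp
using UnityEngine;

public class MyScript : MonoBehaviour
{
    public float myPublicFloat;
    public string myPublicString;

    // private, but serialised and shown in the Inspector thanks to the attribute
    [SerializeField]
    private int myInt;

    public void Square()
    {
        // print the square of the public float
        Debug.Log(myPublicFloat * myPublicFloat);
    }

    public void Multiply()
    {
        // print the private int times the length of the public string
        Debug.Log(myInt * myPublicString.Length);
    }
}
```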

Note that I also made the myInt show in the Inspector using the SerializeField attribute 🙂

And thus we want two buttons, “Square” and “Multiply”, and we want to show them below the field slots.

Let’s create our brand new MyScriptEditor class (remember to put it in an “Editor” folder at the root of your assets directory):
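A minimal skeleton for it could look like this:

```csharp
using UnityEditor;

// this editor will be used to draw the Inspector of the MyScript component
[CustomEditor(typeof(MyScript))]
public class MyScriptEditor : Editor
{
}
```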

Note: the “Editor” suffix at the end of my class name is just a convention: you usually add “Editor” or “Window” to your UI-custom classes to make them easier to identify.

Here, we just import the UnityEditor package and then say that this class is a custom editor for the MyScript class.

Then, we can override the OnInspectorGUI function: this is the function that Unity calls regularly to “redraw” the Inspector in the window panel. This function can use tools from four Unity classes: GUI, GUILayout, EditorGUI and EditorGUILayout.

GUI and GUILayout come from the UnityEngine package: they contain tools for the Immediate Mode GUI (or IMGUI), available both in the editor and in-game. EditorGUI and EditorGUILayout come from the UnityEditor package and they only work for custom in-editor UIs. All of these have utilities to create input fields, toggles, dropdowns, sliders, buttons…

The difference between the Layout variants and the others is that GUILayout and EditorGUILayout give you some auto-layout (like: they can place each UI element below the previous one), whereas the GUI and EditorGUI tools require you to provide precise pixel coordinates and sizes for the elements. Usually, the Layout tools are quicker and easier to use, but the others can be interesting if you have a very specific and/or demanding piece of UI that requires some super-precise placement 😉

Anyway – let’s add our two buttons! What we’re going to do is:

  • first, call the base.OnInspectorGUI() function: this calls the OnInspectorGUI() of the parent class, here the Editor class, and gives us Unity’s auto-generated layout of the field slots
  • then, add our two buttons at the end (see the sketch below)
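A sketch of this first version (the buttons aren’t hooked up to anything yet):

```csharp
using UnityEngine;
using UnityEditor;

[CustomEditor(typeof(MyScript))]
public class MyScriptEditor : Editor
{
    public override void OnInspectorGUI()
    {
        // Unity's auto-generated field slots
        base.OnInspectorGUI();

        // our two extra buttons (they don't do anything for now)
        GUILayout.Button("Square");
        GUILayout.Button("Multiply");
    }
}
```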

If you save this and recompile, your MyScript component Inspector will now have the same field slots as before, plus the two buttons 🙂

But there are two issues with this UI panel: the data is not properly serialised (so, it might not be saved) and the buttons don’t do anything!

If you change some values in your fields and deselect/reselect the object, you’ll see that your modifications are still there. But this doesn’t mean the save mechanism is completely ok: if your data was modified by something else, from another script for example, you would get inconsistent and outdated data in your Inspector.

To avoid this, we need to access the data that this editor is showing via the built-in serializedObject variable (that we inherit from our Editor parent class) and make sure to update its representation at the beginning (so we read the latest values) and apply our modifications at the end (so we save the latest values):
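Building on the previous sketch, that pattern could look like this:

```csharp
using UnityEngine;
using UnityEditor;

[CustomEditor(typeof(MyScript))]
public class MyScriptEditor : Editor
{
    public override void OnInspectorGUI()
    {
        // read the latest values from the serialised data before drawing
        serializedObject.Update();

        base.OnInspectorGUI();

        GUILayout.Button("Square");
        GUILayout.Button("Multiply");

        // write back any modification made during this GUI pass
        serializedObject.ApplyModifiedProperties();
    }
}
```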

For the buttons, we simply need to call the functions on our target MyScript component.

This time, we don’t want to access the serialised data but its “live” script version, the actual MyScript instance in our scene that we can call methods on. This is available via another built-in variable, the target variable.

Because this variable is typed as a generic Object, we need to cast it to our custom MyScript type to ensure we can access its methods:
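Put together, a possible version of the full editor (the _target field name is my own choice):

```csharp
using UnityEngine;
using UnityEditor;

[CustomEditor(typeof(MyScript))]
public class MyScriptEditor : Editor
{
    // the "live" MyScript instance this Inspector is editing
    private MyScript _target;

    private void OnEnable()
    {
        // cast the generic Object reference to our actual component type
        _target = (MyScript)target;
    }

    public override void OnInspectorGUI()
    {
        serializedObject.Update();

        base.OnInspectorGUI();

        // wire the buttons to the component's methods
        if (GUILayout.Button("Square"))
            _target.Square();
        if (GUILayout.Button("Multiply"))
            _target.Multiply();

        serializedObject.ApplyModifiedProperties();
    }
}
```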

I’m using the OnEnable() entry-point to set this variable: this function is called whenever the Inspector is activated, which here means whenever you select the object.

We’ve now linked the buttons to the actions and, if you click the buttons in your custom UI Inspector, you’ll get debug messages in the console depending on the current values in the Inspector field slots! 😉

Getting asynchronous!

What is asynchronous programming?

Ok – that was a simple example with a standard workflow: you update some values in the UI, then you call functions that instantly perform an action and return.

The functions we’ve made so far (Square() and Multiply()) are synchronous, meaning that the entire logic of the method is run all at once, from start to finish, and it’s only when the function is done that the rest of the script can regain control and schedule new things.

There are, however, lots of cases where this blocking behaviour is not recommended: say you are reading a large chunk of data from a file, a URL or a database; or you are waiting for an API to respond to a request. If you treat those processes as synchronous processes, you run the risk of having your program just “stop” for a while, until the data has finally been read or the API has sent back the response.

This could mean for example that, in Unity, your game “freezes” completely: everything is still and nothing else happens until the process is done computing.

Of course, this is a big issue that programmers have been working on for a while!

And, among other things, they have come up with a solution: asynchronous programming and threads.

The idea with async code is that, rather than starting a process and having it run entirely in a single shot, you run the logic in little baby steps from time to time.

So, at first, you start the process; it executes some of its logic and then, when it reaches some predefined “escape points”, it gives back control to your main routine so it can go on running some of its own logic. Then, the asynchronous process takes back control for a while and runs until the next “escape point”.

And so on…

What this allows us to do is to have multiple long-running processes at the same time that don’t collide with each other and run seemingly in parallel. True parallelism within a single thread is impossible (a thread is intrinsically a sequential beast), but you can simulate it by switching from one process to another very fast.

And to do this, there are lots of tools you can use: threads, coroutines, C# tasks

All of those are really complex and very interesting – but here, we’ll focus on the two that Unity provides us with to run delayed/async code: the coroutines and the tasks.

Coroutines and tasks

If you’ve browsed some of the Unity forums searching for a way to add a timer to your game, or a fade-out screen, you might have seen some people talk about incrementing a timer variable in your Update() function, and others talk about coroutines.

Roughly put, Coroutines are a specific type of function in Unity C# that allow you to “yield” and create those “escape points” I mentioned before. I gave more info about Coroutines in an episode from my series on how to make a RTS game in Unity, so if you’re not too familiar with them, make sure to check it out 😉

C# Tasks are another way of doing asynchronous programming. This built-in C# tool lets you create async code with the async/await keywords, just like Promises in JavaScript or recent versions of Python for example, making it really easy to code and to read.

Basically, by queuing tasks in an internal buffer and piping them together or placing them in parallel, you can easily have your code execute after a while, when a condition is met, once another task has finished running, or if it has failed… and all this, without bloating your code with heavy syntax! You simply define your functions as async and, from that point on, you can use the await keyword inside of the function body to get the result of an asynchronous task.
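As a quick, generic illustration (the class and method names here are just examples, not part of the project):

```csharp
using System.Threading.Tasks;
using UnityEngine;

public class AsyncExample : MonoBehaviour
{
    // an async method can await other tasks and hand a result back through Task<T>
    private async Task<int> ComputeAfterDelay()
    {
        await Task.Delay(500); // "escape point": control goes back to the caller here
        return 42;
    }

    public async void Run()
    {
        // execution resumes here once the awaited task has finished
        int result = await ComputeAfterDelay();
        Debug.Log(result);
    }
}
```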

Coroutines are quick to code and they can be executed directly using the StartCoroutine() method in any MonoBehaviour class. Tasks require you to import the System.Threading.Tasks package, and they rely on a (tiny) bit more boilerplate, but all in all it’s about as easy to implement.

An advantage of tasks over coroutines is that they can return values and that they can wait for each other, but coroutines are usually a nice choice because they can be injected in your code directly without as many changes as with tasks.

Note: this is because the Unity API is not thread-safe, so tasks can quickly lead to some pretty annoying threading issues…

So – why use tasks for an async in-edit mode process?

Going back to my problem: I wanted to run some process (namely, an animation) by clicking on a button in my custom UI panel. I tried with coroutines, and it failed miserably.

Why? Because there is no running time in edit mode. The coroutine simply doesn’t play; it never gets called. So I had to go with tasks instead, because those run independently of Unity’s edit mode “frozen time” 🙂

A basic example

Instead of diving into my own animation stuff, let’s see how this can work on a very simple example. I want to make an AsyncTaskManager class that can print “Hello world!” in the console after a given number of seconds.

So – I’ll just make a script with a delay float variable and one function called RunTask():
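Something like this, to start with:

```csharp
using UnityEngine;

public class AsyncTaskManager : MonoBehaviour
{
    // the delay (in seconds) before printing the message
    public float delay;

    public void RunTask()
    {
        // no async logic yet: this prints immediately
        Debug.Log("Hello world!");
    }
}
```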

For now, this function is a basic public C# function, with no async code whatsoever.

We can make a very basic custom Inspector to go along with it, like this:
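A sketch of that Inspector (I’m labelling the button “Play”, and the editor class name is my own pick):

```csharp
using UnityEngine;
using UnityEditor;

[CustomEditor(typeof(AsyncTaskManager))]
public class AsyncTaskManagerEditor : Editor
{
    private AsyncTaskManager _target;

    private void OnEnable()
    {
        _target = (AsyncTaskManager)target;
    }

    public override void OnInspectorGUI()
    {
        serializedObject.Update();

        // automatically picks the right field input for the "delay" property
        EditorGUILayout.PropertyField(serializedObject.FindProperty("delay"));

        // the button that triggers our (soon-to-be async) task
        if (GUILayout.Button("Play"))
            _target.RunTask();

        serializedObject.ApplyModifiedProperties();
    }
}
```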

The PropertyField is a GUI utility that automatically displays the right type of field input depending on the type of the selected property.

If I create a new empty game object and add my AsyncTaskManager component to it, I get a very basic Inspector with just my delay variable slot and the button beneath it:

I can already click the button, and I will instantly get a “Hello world!” debug in my console. But now: how do I wait for delay seconds before printing it?

The wrong way: with coroutines

As I said, I first tried with a coroutine, so I had the following code:
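Roughly, a coroutine version of the class looks like this (I’m re-using the _ExecuteTaskAfterDelay() name from the conversion steps below):

```csharp
using System.Collections;
using UnityEngine;

public class AsyncTaskManager : MonoBehaviour
{
    public float delay;

    public void RunTask()
    {
        StartCoroutine(_ExecuteTaskAfterDelay());
    }

    private IEnumerator _ExecuteTaskAfterDelay()
    {
        // wait for "delay" seconds... which only works while time flows, i.e. in play mode
        yield return new WaitForSeconds(delay);
        Debug.Log("Hello world!");
    }
}
```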

This worked fine in play mode: if you have this code, run the game and then click on the “Play” button in the custom Inspector, you do get a debug after delay seconds. But when I clicked the button in edit mode… nothing happened.

So let’s fix this and switch to tasks instead! 😉

The right way: with tasks

Changing this code to use tasks is quite straightforward:

  • at the top of our script, we import the System.Threading.Tasks package
  • then, in RunTask(), instead of calling the StartCoroutine() function, we have to create a new Task instance, and then await it; this means we have to make sure that our RunTask() function is async (to allow for the await keyword inside it)
  • finally, the _ExecuteTaskAfterDelay() function has to be async too, it has to return a Task object and, instead of using the yield instruction, we will await a specific Task.Delay instance

Be careful: the Task.Delay construct uses milliseconds and not seconds, so we have to convert our delay variable!

All in all, this gives us the following code for our AsyncTaskManager class:
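Here is a sketch of how the converted class can look:

```csharp
using System.Threading.Tasks;
using UnityEngine;

public class AsyncTaskManager : MonoBehaviour
{
    public float delay;

    public async void RunTask()
    {
        // get the Task instance and await it (instead of starting a coroutine)
        Task task = _ExecuteTaskAfterDelay();
        await task;
    }

    private async Task _ExecuteTaskAfterDelay()
    {
        // Task.Delay() works in milliseconds, so convert the delay (given in seconds)
        await Task.Delay((int)(delay * 1000));
        Debug.Log("Hello world!");
    }
}
```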

And with that fixed version, I can now click my button and run my async logic even in edit mode! 🙂

Conclusion

Making custom panels in Unity’s editor is very useful whenever you have very custom data and you need specific tooling. Unity gives us various utilities to create our own editors and windows.

These tools work great and pretty much directly for usual workflows but, when you start adding some async code to your scripts, it can get a bit messy. Because time is “frozen” in edit mode, using the Update() method or a coroutine won’t work – so we have to use C# tasks.

By the way, if you want to learn more about tooling and Unity’s editor features, don’t hesitate to have a look at a recent tutorial I made on visual debugging and editing in Unity using gizmos! 🙂

What about you: have you done some custom UI in Unity? Have you used the async/await keywords before? Let me know in the comments!
