Let’s see how to click on game objects in Unity at runtime, with 3 levels of difficulty!
This article is also available on Medium.
This tutorial is available either in video format or in text format – see below 🙂
In a previous tutorial, we discussed how game objects are at the core of the Unity dev workflow, at every step of the process.
But even though game objects can serve many purposes and carry very different components, the first thing that springs to mind when we talk about a game object is usually something you can see, move and interact with in your 2D or 3D scene.
Hmm – wait: interactions? How exactly can we touch, transform and generally interact with a game object? That's pretty obvious in edit mode – you just take your mouse, hover over some object in the scene view and press the left mouse button. But what about when the game is running?
Today, we’re going to focus on one type of interaction and see how we can “click” on a 3D object in our scene at runtime. Note that we won’t talk about UI and 2D stuff this time 😉
Ready? Let’s go then!
Level 1: Using OnMouseDown
Before you code anything, you need to make sure that Unity will be able to recognise the target object as something that is "interactable". By default, the objects in your scene are just visuals that the camera sees and renders on the screen while the game runs. In order to make these objects part of the virtual world and have them respond to the physics, you have to add some specific components to them.
Rigidbodies are for computing gravity and velocities on objects – that's active physics. Today, however, we're more interested in getting objects to exist in the scene as physical bodies in a passive way. To do this, we need to make sure they have a physics collider.
Unity provides us with various collider shapes: this diverse gallery of components should allow you to find one that more or less matches the shape of your object, no matter how complex it is.
If you really need something that matches the geometry exactly, then your best bet is probably the MeshCollider. However, this collider type takes a heavy toll on performance because it requires a lot of computation. At the other end of the spectrum, the BoxCollider is a very simple, crude component, but it is far cheaper to compute. This means that if you only need a global bounding box and you're fine approximating the bounds of your 3D object in terms of physics, it can be better to rely on simpler shapes.
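If you prefer to guarantee this from code rather than in the Inspector, here is a minimal sketch (the class name is just illustrative):

```csharp
using UnityEngine;

public class ColliderSetup : MonoBehaviour
{
    private void Awake()
    {
        // Make sure the object has at least a simple box collider,
        // so it can take part in physics queries such as raycasts.
        if (GetComponent<Collider>() == null)
            gameObject.AddComponent<BoxCollider>();
    }
}
```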
No matter which one you choose, once your game object has a collider, it will be part of the physics event loop and will catch things such as: “someone just clicked on me”.
And that event will automatically trigger one of Unity's built-in functions: the OnMouseDown() method. This function can be defined in any MonoBehaviour C# script to react to this event and run some custom logic:
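For example, a minimal sketch could look like this (the class name and the colour change are just illustrative):

```csharp
using UnityEngine;

// Attach this script to the game object that has the collider.
public class ClickableObject : MonoBehaviour
{
    // Called by Unity when the player presses the left mouse
    // button while the cursor is over this object's collider.
    private void OnMouseDown()
    {
        Debug.Log($"Clicked on: {gameObject.name}");

        // Example reaction: tint the object's material red.
        Renderer r = GetComponent<Renderer>();
        if (r != null)
            r.material.color = Color.red;
    }
}
```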
But it is essential to remember that only the scripts on the game object itself will receive the click event – not those on its parent or child game objects. This means that you'll want to put this new script containing the OnMouseDown() method on the same game object that has the physics collider – otherwise, the function will simply not get triggered! 😉
So – this OnMouseDown() method is pretty useful and straightforward to implement. And here we are: we can click on an object and log a debug message, or even change its colour! It sounds like we're done, right? Well, not quite…
First, there is an issue with this method: if you plan on using Unity's new input system (the one that is still in development but is very promising and should make cross-platform input handling a breeze), then you won't be able to use OnMouseDown() anymore. But more importantly, OnMouseDown() is actually a wrapper around a more fundamental concept that you should at least be aware of: raycasts.
Level 2: Using physics raycasts directly
Behind the scenes, OnMouseDown() relies on physics raycasts to check whether the player's mouse was over the 3D object when they pressed the left mouse button. So let's dive a bit deeper and deconstruct the mechanics to better understand how it works.
The basic idea is to create a ray with an origin point, a direction and a length; then, you ask Unity to list all the colliders it encounters on its path.
In our case, if we wish to refactor our "click" script to use a raycast, we'll want the ray to originate from the current position of the mouse cursor and to be cast along the camera's forward vector – but all of this needs to happen in the 3D world.
So, you can either use Unity's built-in camera space transformation methods to switch between the screen and world coordinate systems, or directly take advantage of the ScreenPointToRay() shortcut to create your ray. Then, all that's left to do is call the Physics.Raycast() function with this ray and store the resulting first hit, if there is one, in a RaycastHit variable. Finally, we can compare this hit against our own game object to check whether the player clicked on this specific object in the 3D scene:
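Here is a sketch of this raycast-based version of the "click" script (the class name is illustrative, and it assumes the scene's main camera is tagged "MainCamera"):

```csharp
using UnityEngine;

public class ClickableObjectRaycast : MonoBehaviour
{
    private Camera _camera;

    private void Start()
    {
        // Cache the main camera (requires the "MainCamera" tag).
        _camera = Camera.main;
    }

    private void Update()
    {
        // 0 = left mouse button.
        if (Input.GetMouseButtonDown(0))
        {
            // Build a ray going from the mouse position into the scene.
            Ray ray = _camera.ScreenPointToRay(Input.mousePosition);

            // Cast the ray and check whether the first collider
            // it hits belongs to this very game object.
            if (Physics.Raycast(ray, out RaycastHit hit)
                && hit.collider.gameObject == gameObject)
            {
                Debug.Log($"Clicked on: {gameObject.name}");
            }
        }
    }
}
```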
Of course, that’s what
OnMouseDown() does on its own and it looks way more difficult than the first method. But the important thing is that, from there, raycasting gives you a lot more possibilities.
Another nice thing with raycasts is that they can be tuned to only take into account some of your objects, thanks to Unity’s system of layers.
Level 3: Tuning the Raycast with layers
Sometimes, you want your click to go “through” some objects, to ignore part of your scene and only consider a subset of objects for the interactions. This is fairly easy to do thanks to Unity’s layers system.
When you edit a game object, you’ll see that, in the very top right corner, you have a little dropdown where you can choose which layer the object is on:
Unity will create some layers by default, and those can already be enough for simple cases. For example, placing an object on the "Ignore Raycast" layer will naturally prevent it from being caught by the physics raycast search, and so it will also prevent any OnMouseDown() callback. This can be useful if you want an object to act as a wall or a physical collider inside your scene, but you don't want it to block your mouse clicks on something behind it:
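Since layers are exposed on the game object, this can also be done from a script – a quick sketch:

```csharp
using UnityEngine;

public class LayerSetup : MonoBehaviour
{
    private void Awake()
    {
        // Put this object on the built-in "Ignore Raycast" layer:
        // it will still block physics, but raycasts (and so mouse
        // clicks) will pass right through it.
        gameObject.layer = LayerMask.NameToLayer("Ignore Raycast");
    }
}
```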
For more complex examples, you may need to create your own layers. To do this, just go to the dropdown and click on “Add layer…”, or open the Project Settings > Tags and Layers panel. Here, you’ll find the list of all the layers Unity created by default, and you’ll be able to define additional ones.
When you’re done adding your layers, don’t forget to actually assign them to your game objects! 🙂
Finally, back in your script, you can create layer masks and apply them to your raycasts and other collision check calls. These masks are defined as integers, but those ints are a bit peculiar: they are built using bit shifting. For example, in the previous step, I created a new layer in the "User Layer 6" slot. This means that I can reference this layer in my script with an int variable that equals 1 bit-shifted left by 6.
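In code, that gives something like this (assuming we already have a Ray called ray pointing into the scene, as before):

```csharp
// Layer mask for "User Layer 6": the int 1 shifted left by 6 bits.
int layerMask = 1 << 6;

// Only colliders on layer 6 will be considered by this raycast;
// everything else is ignored, even if it sits in front.
if (Physics.Raycast(ray, out RaycastHit hit, Mathf.Infinity, layerMask))
{
    Debug.Log($"Hit: {hit.collider.name}");
}
```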
The reason Unity uses bit shifting is that it is very fast, and that it makes it easy to "compose" layer masks together using bitwise operations: the bitwise OR | will combine layer masks, the bitwise NOT ~ will invert the mask to match everything but the selection, and so on.
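For example (the layer numbers here are just illustrative):

```csharp
// Combine layers 6 and 7 with the bitwise OR:
int layers6And7 = (1 << 6) | (1 << 7);

// Invert a mask with the bitwise NOT: everything EXCEPT layer 6.
int allButLayer6 = ~(1 << 6);

// Unity also offers a helper to build a mask from layer names:
int waterMask = LayerMask.GetMask("Water");
```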
This episode was a quick intro to interacting with objects in the Unity game engine, and it was also a nice opportunity to touch upon some basic concepts from the physics system. We've looked at 3 techniques to click on game objects in the 3D scene at runtime.
I really hope you enjoyed the tutorial, please feel free to tell me if it’s been helpful in the comments! And of course, if you want to see more of this content, you can support my work by liking and sharing this content 🙂