This article is also available on Medium.
In this post, we’ll talk a bit about graphical projections and camera setup, and then we’ll do some user input handling and reuse the notion of coroutines we saw last time.
At the end of this tutorial, we’ll be able to move the camera with the mouse or the keyboard, and to zoom in and out:
Graphical projections: perspective, orthographic, axonometric…
Video games are displayed on 2D screens. So, as more and more 3D objects appeared in our games, engineers and programmers had to find various ways of representing this 3D content in a 2D space. Luckily, graphical projections (techniques used to show 3D objects in a 2D view) are hardly new stuff – scientists have been drawing complex 3D machines on paper for centuries. Over the years, plenty of ideas have come up on how to best represent the 3rd dimension in 2D: some try and reproduce the way our human eye sees things by incorporating perspective with vanishing points, others decompose the 3D object to show all of its sides separately, others try and mix the two to show as much of the object as possible while not deforming it too much…
In an RTS game, we are confronted with this question since we have 3D objects (our units, the trees and rocks on the ground, etc.). In those games, the camera is very often orthographic – this way, you get a literal bird’s-eye view of the world, which helps with micro- and macro-management of your armies and production. More precisely, we usually use the isometric projection – this type of projection is a subtype of axonometric projection, which is itself a parallel orthographic projection, as shown below:
When computers were not as efficient as they are now, orthographic projections were an amazing way of simulating 3D for video games because they’d let artists create sprites that programmers could then paste next to each other in a neat grid – and you’d get a 3D feel. Also, you didn’t need to worry about scaling the visuals depending on the distance to the camera: your sprites had one set size and there was no need to dedicate compute power to recomputing it live. It also let the game take care of the camera for the players so they could focus on the game at hand, rather than moving both the characters and the camera.
Isometric projection in itself is interesting because it gives quite a comprehensive and well-proportioned 2D representation of our 3D objects: if you have a cube – i.e. a shape whose edges all have the same length – an isometric projection scales all the edges equally in the 2D representation, so the projected edges still have the same equal lengths.
However, it is much harder to reason about directions with this rotation. If you’re going “up” in your view, then you should actually be moving along a diagonal in the world space. This will make camera movement more complex, and it will become particularly cumbersome in later episodes, when we want to make a minimap (that will not be rotated 45°). So, instead, we’ll be using a pseudo-axonometric projection where we face the objects (on the left), compared to the “real” isometric projection (on the right)!
We can still simulate the isometric view (and in particular take advantage of it showing object proportions better) by rotating the objects’ meshes in the scene at a 45° angle on the global Y-axis. More precisely, we’ll apply a rotation on the “Mesh” sub-object in our unit prefabs, for example for the “House” building:
This way, we get the best of both worlds while reducing the mental overhead of computing the camera field of view 😉
Note: even in a true isometric projection, the vertical axis may not be scaled the same; plenty of old “isometric video games” actually used a dimetric projection, in particular to avoid pixel aliasing. Nowadays, computer graphics have improved enough for anti-aliasing to take care of this automatically, so we can revert to “true” isometric projections if we want. But most of the RTS games you might think of (for example the ones I cited in the first article of this series like Age of Empires, Caesar, StarCraft…) have this orthographic view that gives a unique feel to the game.
To create our pseudo-isometric camera in Unity, we need to place it a bit above the ground, give it a 30-0-0 degrees rotation and use the “orthographic” mode for the Camera component:
In orthographic mode, moving the camera closer to objects doesn’t have any effect on their size on screen – it’s the orthographic size property of the camera that determines the scale of 2D projections.
Note: you should play around with the orthographic size in order to get a zoom you like 😉
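If you prefer to do this setup from code, here is a minimal sketch (the script name, size and position values are assumptions for illustration – in this project, the configuration is done directly in the Inspector):

```csharp
using UnityEngine;

// Hypothetical helper: configures the camera for our pseudo-isometric view.
[RequireComponent(typeof(Camera))]
public class IsometricCameraSetup : MonoBehaviour
{
    void Awake()
    {
        Camera cam = GetComponent<Camera>();
        cam.orthographic = true;      // parallel projection: distance no longer affects size
        cam.orthographicSize = 10f;   // half the vertical extent of the view, in world units
        transform.rotation = Quaternion.Euler(30f, 0f, 0f); // tilt down towards the ground
        transform.position = new Vector3(0f, 20f, -15f);    // a bit above and behind the scene
    }
}
```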
Translating the camera
RTS games usually offer two ways of moving the camera: you can either use the arrow keys or the mouse. By pressing one of the arrow keys or sticking your mouse to the matching border of the screen, you’ll start translating the camera in that direction.
The camera is not allowed to rotate so we constantly keep the isometric (or dimetric) projection.
Note: it is possible to rotate the camera in some RTS video games, but we won’t implement it in this project.
Finally, the near and far clipping planes of the camera determine the valid distance range for scene objects — anything closer than the near clipping plane or further away than the far clipping plane will be invisible. Because we might have mountains on the terrain, we should make sure that the camera is far enough from the ground (so the mountains don’t get too close). We’re going to take care of that by having the camera follow the altitude changes of the terrain.
Setting up the arrow keys movement
Alright, we’ll start with the easy part: using arrow keys to translate the camera. We already know quite a lot about user inputs: we’ve written multiple snippets of code with functions from Unity’s input system, like Input.GetMouseButtonDown(). This time, we’ll simply switch to Input.GetKey(), another method that allows us to directly pass in the key code of the key we want (here, the arrow keys).
But we need to be careful that the camera goes the right way! Since our camera is rotated a bit (to point at the ground), we can’t move along the scene’s global axes, or else the camera will slowly drift off course and move towards the terrain. We need to use the camera’s local basis. Unity provides us with three local axes: X, Y and Z. The local X axis is perfect for moving from “left to right” on screen. However, the local Z is not exactly the one we want: instead of moving the camera horizontally from “bottom to top” on screen, it will move it closer to the ground, in the direction it is pointing. Basically, we have to compute a custom “forward vector” that is the projection of this local Z axis on the global horizontal plane:
This can be done thanks to Unity’s built-in vector methods like Vector3.ProjectOnPlane(). We’ll adapt the camera’s altitude to the terrain relief by casting a vertical ray from the camera down to the ground and then re-positioning the camera at a given distance above the hit point:
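The logic above could be sketched as follows – this is a minimal version, assuming field names like translationSpeed and altitude that may differ from the project’s actual script:

```csharp
using UnityEngine;

// A minimal sketch of the camera translation logic described above.
[RequireComponent(typeof(Camera))]
public class CameraManager : MonoBehaviour
{
    public float translationSpeed = 60f; // world units per second (assumed value)
    public float altitude = 40f;         // desired distance above the terrain
    public LayerMask terrainLayer;       // layer of the ground mesh

    private Vector3 _forwardDir;

    void Start()
    {
        // Project the camera's local Z axis onto the horizontal plane so that
        // "up" on screen moves the camera horizontally, not into the ground.
        _forwardDir = Vector3.ProjectOnPlane(transform.forward, Vector3.up).normalized;
    }

    void Update()
    {
        if (Input.GetKey(KeyCode.UpArrow)) _TranslateCamera(_forwardDir);
        if (Input.GetKey(KeyCode.DownArrow)) _TranslateCamera(-_forwardDir);
        if (Input.GetKey(KeyCode.RightArrow)) _TranslateCamera(transform.right);
        if (Input.GetKey(KeyCode.LeftArrow)) _TranslateCamera(-transform.right);
    }

    void _TranslateCamera(Vector3 direction)
    {
        transform.Translate(direction * Time.deltaTime * translationSpeed, Space.World);
        _FixAltitude();
    }

    void _FixAltitude()
    {
        // Cast a ray straight down and keep the camera a fixed height above the hit point.
        if (Physics.Raycast(transform.position, Vector3.down, out RaycastHit hit, 1000f, terrainLayer))
            transform.position = hit.point + Vector3.up * altitude;
    }
}
```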
Note: Unity’s built-in RequireComponent helper is a neat way of enforcing that whichever object has this script also has the given component. When your script absolutely needs the component to function properly, it makes sense to add this requirement 😉
Adding movement when we hover a screen border
Let’s now add the second translation input: putting the mouse on one screen border. To do this, we’re going to use UI elements in our Canvas: by placing some small bands all around our game view, we’ll be able to manage mouse events in these areas.
First, we can update our CameraManager script: the new variable _mouseOnScreenBorder is a reference to the screen border the mouse is currently on (or -1 if the mouse is not on any border); we simply need to check that the border is “valid” (i.e. not equal to -1) and, if it is, translate the camera like we did with the arrow keys before. The OnMouseEnterScreenBorder() and OnMouseExitScreenBorder() functions are the callbacks we will call from the UI elements to manage our mouse events. For now, they simply update the _mouseOnScreenBorder variable:
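As a sketch, these callbacks could look like this (the border indexing 0–3 and -1 for “no border” follows the description above; the exact signatures in the project may differ):

```csharp
using UnityEngine;

// Sketch of the border-hover logic: borders are indexed 0..3; -1 means
// the mouse is not on any border.
public class CameraManager : MonoBehaviour
{
    private int _mouseOnScreenBorder = -1;

    void Update()
    {
        if (_mouseOnScreenBorder >= 0)
            _TranslateCameraTowardsBorder(_mouseOnScreenBorder); // same move as the arrow keys
        // ... arrow key handling ...
    }

    // Called by the UI border panels' "Pointer Enter" event.
    public void OnMouseEnterScreenBorder(int borderIndex)
    {
        _mouseOnScreenBorder = borderIndex;
    }

    // Called by the "Pointer Exit" event.
    public void OnMouseExitScreenBorder()
    {
        _mouseOnScreenBorder = -1;
    }

    private void _TranslateCameraTowardsBorder(int borderIndex) { /* ... */ }
}
```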
We can then add our UI elements. First, we create a new panel, “ScreenBorders” – then, inside it, we can add 4 children panels that make 4 small borders all around the game view (shown in red below):
Make sure that the “ScreenBorders” parent does not allow raycast, otherwise it will block our raycasting for building placement:
Finally, on each border, we can add an EventTrigger component, and then add pointer events for mouse enter and mouse exit. Those events use the callback functions we defined in our CameraManager script.
At that point, if you put your mouse on an edge of the screen, the camera will move just as it did with the arrow keys. The only problem is that if you go to pick a building in the right panel… your camera will move a bit when your mouse hovers the right border UI element!
Improving the hover check by adding a little delay
To fix this issue, we have to integrate a little delay when the mouse pointer enters a screen border before actually moving the camera. This way, if the pointer exits before this delay has passed, we’ll cancel the move and we won’t have the unexpected-move problem anymore.
In the last tutorial, we saw how we can use coroutines and IEnumerators to add delays in our functions. So let’s change our callback functions to use a coroutine – and keep a reference to this coroutine so we can cancel it when the mouse exits the border area:
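Here is a minimal sketch of that idea, assuming the same callback names as before (the 0.3-second delay is the value used below):

```csharp
using System.Collections;
using UnityEngine;

// Sketch of the delayed hover activation: the border only becomes "active"
// if the pointer is still on it after 0.3 seconds.
public class CameraManager : MonoBehaviour
{
    private int _mouseOnScreenBorder = -1;
    private Coroutine _mouseOnScreenCoroutine;

    public void OnMouseEnterScreenBorder(int borderIndex)
    {
        // Start the timer instead of reacting immediately.
        _mouseOnScreenCoroutine = StartCoroutine(_SetMouseOnScreenBorder(borderIndex));
    }

    public void OnMouseExitScreenBorder()
    {
        // Cancel the pending activation if the pointer left early.
        if (_mouseOnScreenCoroutine != null)
            StopCoroutine(_mouseOnScreenCoroutine);
        _mouseOnScreenBorder = -1;
    }

    private IEnumerator _SetMouseOnScreenBorder(int borderIndex)
    {
        yield return new WaitForSeconds(0.3f);
        _mouseOnScreenBorder = borderIndex;
    }
}
```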
Now, the camera will only start moving after 0.3 seconds; if in the meantime the mouse has already exited the border area, the move will be canceled and briefly hovering the area won’t trigger a camera translation. You can change the delay as you wish but 300 milliseconds is a common value for user inputs.
Zooming in and out
Remember that in orthographic mode, zooming in and out simply means changing the orthographic size of the camera. To add the zoom in/out feature to our RTS, we’ll check the mouse wheel scroll value and use it to update the orthographic size. We don’t need to get the actual scroll amount but only if it’s positive (zooming in) or negative (zooming out).
It’s also good practice to limit the zoom within a given range. This prevents the camera from going through the ground or flying off way too high. To do this, we can simply add clamping to our zoom method.
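Putting the two ideas together, a zoom method could look like this (the speed and the [4, 20] size range are assumed values to tweak to your liking):

```csharp
using UnityEngine;

// Sketch of the zoom handling: only the scroll direction (not its amount)
// drives the orthographic size, which is clamped to a fixed range.
[RequireComponent(typeof(Camera))]
public class CameraZoom : MonoBehaviour
{
    public float zoomSpeed = 2f;
    public float minOrthographicSize = 4f;
    public float maxOrthographicSize = 20f;

    private Camera _camera;

    void Start() { _camera = GetComponent<Camera>(); }

    void Update()
    {
        float scroll = Input.GetAxis("Mouse ScrollWheel");
        if (scroll != 0f)
        {
            // Only the sign matters: scrolling up zooms in (smaller size),
            // scrolling down zooms out (larger size).
            float delta = -Mathf.Sign(scroll) * zoomSpeed;
            _camera.orthographicSize = Mathf.Clamp(
                _camera.orthographicSize + delta,
                minOrthographicSize,
                maxOrthographicSize);
        }
    }
}
```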
The final touch: an easy-to-miss bug to fix
There is, however, a little bug in our game at the moment – to be honest, I missed it at first! If you select a unit and move the camera, you’ll see that the healthbar we put over the unit does not adjust properly, and it starts floating around on the screen away from the unit.
To fix this, we need to update our Healthbar script so that it also checks for updates of the camera position. We just have to add a reference to the camera’s transform and compare its last and current positions in our update loop:
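As a sketch of the fix – field names and the screen offset are assumptions, not the project’s exact code:

```csharp
using UnityEngine;

// Sketch of the healthbar fix: reposition the bar whenever either the unit
// or the camera has moved since the last frame.
public class Healthbar : MonoBehaviour
{
    public Transform target;            // the unit this bar belongs to
    private Transform _cameraTransform;
    private Vector3 _lastTargetPosition;
    private Vector3 _lastCameraPosition;

    void Start()
    {
        _cameraTransform = Camera.main.transform;
    }

    void Update()
    {
        // Re-anchor the bar if the unit OR the camera moved.
        if (target.position != _lastTargetPosition ||
            _cameraTransform.position != _lastCameraPosition)
        {
            _SetPosition();
            _lastTargetPosition = target.position;
            _lastCameraPosition = _cameraTransform.position;
        }
    }

    void _SetPosition()
    {
        // Project the unit's world position to screen space to anchor the UI bar.
        Vector3 screenPos = Camera.main.WorldToScreenPoint(target.position);
        transform.position = screenPos + Vector3.up * 30f; // assumed pixel offset above the unit
    }
}
```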
Yay, we can now move the camera around! We did a bit of vector projection and talked about the difference between perspective and orthographic views.
Next week, the tutorial will be about a little on-the-side feature: a customizable day-and-night cycle to light our scene differently depending on the (in-game) time of day!
And in the meantime, there will be a small interlude tomorrow to improve our current event system…