Today, here’s a third batch of shaders from my Shader Journey – some screen post-processing effects!
This article is also available on Medium.
In the previous episodes of my new CG series, “Shader Journey”, I talked about some toon and hologram shaders that can be applied directly to objects. Today, I want to explore another type of shader: post-processing shaders, which affect the final render image output by the camera and transform this 2D data to make cool screen effects…
A quick overview
In this episode, I’ll talk about 5 shaders: the Colorize, the Invert, the Night Vision, the Blur and the Contour. Since they are post-processing shaders, I decided to have a very simple scene with just a Sprite Renderer that renders a CGI image I made for my Iso-architecture series:
But, of course, you can take whichever image you’d like! 😉
Anyway – as you can see, this image is pretty soft and dark overall. But post-processing can completely change the feeling! Here is the result after transforming this image with the various shaders:
Together, these 5 shaders combine all of the features that you see listed in the bottom-right corner of the image.
So, are you curious to see how I made them? Then, let’s dive in! 🙂
Foreword: Making the camera render post-processing effects
For today’s episode, since I’m not working with “on-objects” shaders but “on-camera”/”on-screen” shaders, I had to learn how to apply these post-processing effects in the render pipeline. Turns out: it’s pretty straightforward!
All you have to do is add a little C# script on your camera that contains the following code:
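Here is a minimal sketch of such a script for the built-in render pipeline (the class name is just an illustrative choice; the field matches the postProcessingMaterial slot mentioned below):

```csharp
using UnityEngine;

// Attach this to the camera. [ExecuteInEditMode] makes the effect
// visible even when the game is not running.
[ExecuteInEditMode]
public class PostProcessingEffect : MonoBehaviour
{
    // Drag a material using a post-processing shader into this slot.
    public Material postProcessingMaterial;

    // Called by Unity after the camera has rendered the frame.
    private void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        if (postProcessingMaterial != null)
            // Blit copies "source" to "destination" through the material,
            // binding "source" to the shader's _MainTex.
            Graphics.Blit(source, destination, postProcessingMaterial);
        else
            Graphics.Blit(source, destination);
    }
}
```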
And as soon as you put a material with a post-processing shader in the postProcessingMaterial slot, you’ll see the image transformed in your “Game” tab!
The idea is that Unity’s built-in OnRenderImage() entry point is called whenever the camera renders a new image and, with that script, your material is integrated into the render pipeline to apply the post-processing effect.
Be careful: the source render image (i.e. the image rendered for this frame before your post-processing effect) will be passed in to your shader in the _MainTex variable, so make sure you have it defined! 🙂
[ExecuteInEditMode] forces the script to run while in edit mode and ensures that you see the post-processing effect on your camera even when not running the game.
Shader n°1: Colorize
This first shader is really simple: it just multiplies the render image by a given colour to “tint” everything. In my case, I chose a red tint, so I get a reddish filter over everything:
The point of this first step was to get familiar with Unity’s post-processing shaders, prepare my workflow and get used to Unity’s built-in data structures and vertex shader for image shaders.
Writing a shader means, among other things, defining the data that you’ll be working with throughout the computation. This info can basically be split in two parts:
- the input data: this is what Unity will feed your shader at the start of its computation, all the info it will have to complete its computation. You can create your own data structure (often called appdata), but Unity also has some already defined in its built-in UnityCG.cginc include file.
When you work with 3D objects and “classical” materials, you can take advantage of Unity’s appdata_base structure, or sometimes appdata_full. These contain mesh-related data, like vertex positions, normals, tangents, etc.
But, of course, not all of these make sense when you work on post-processing shaders! Here, you’re not receiving 3D info but 2D info: your input is the camera render, which is a flat image. That’s why it’s interesting to use Unity’s appdata_img input data that just has the positions and UV coordinates of the render image.
- the interpolators: those are used between the vertex and fragment shader phases by the GPU to infer values for in-between points. Once again, for post-processing shaders, you can benefit from Unity’s v2f_img interpolator that only transfers the projected vertex positions and UV coordinates.
Finally, the associated vert_img vertex shader automatically uses the v2f_img construct to handle the entire vertex shader phase (you use it by defining the proper pragma in the shader file: #pragma vertex vert_img). So you’re left with only writing the fragment shader which is, to be honest, the real meat of post-processing shaders 😉
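For reference, these built-in helpers are defined in UnityCG.cginc and look roughly like this (simplified; the real definitions also include instancing macros):

```hlsl
// Simplified versions of the structures defined in UnityCG.cginc.
struct appdata_img
{
    float4 vertex : POSITION;   // object-space vertex position
    half2 texcoord : TEXCOORD0; // UV coordinates of the render image
};

struct v2f_img
{
    float4 pos : SV_POSITION;   // clip-space position
    half2 uv : TEXCOORD0;       // interpolated UVs for the fragment shader
};

// The built-in vertex shader just projects the vertex and forwards the UVs:
v2f_img vert_img(appdata_img v)
{
    v2f_img o;
    o.pos = UnityObjectToClipPos(v.vertex);
    o.uv = v.texcoord;
    return o;
}
```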
So here is the code of my first shader – it’s pretty short thanks to all these built-ins, and it’s just about sampling the main texture (so, the source camera render image) and tinting it with the user-defined colour:
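In spirit, it boils down to something like this (a minimal sketch; the property name _Color is an assumed convention):

```hlsl
Shader "ShaderJourney/Colorize"
{
    Properties
    {
        // The camera render is bound to _MainTex by Graphics.Blit().
        _MainTex ("Texture", 2D) = "white" {}
        _Color ("Tint", Color) = (1, 0, 0, 1)
    }
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            fixed4 _Color;

            fixed4 frag(v2f_img i) : SV_Target
            {
                // Sample the camera render image and tint it.
                return tex2D(_MainTex, i.uv) * _Color;
            }
            ENDCG
        }
    }
}
```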
Shader n°2: Invert
Similarly, the second shader was really quick to do. It inverts the image – this simply means that you have to “reverse the colour”, so you return 1 - <input color>.
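In fragment-shader terms, with the same vert_img setup as before, that’s just (a sketch):

```hlsl
fixed4 frag(v2f_img i) : SV_Target
{
    fixed4 col = tex2D(_MainTex, i.uv);
    // Invert the RGB channels, keep alpha untouched.
    return fixed4(1 - col.rgb, col.a);
}
```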
Because my original image is pretty dark, this gives me a very bright result with a white background:
This makes for a somewhat ghostly effect that I think looks very nice 🙂
Note that, if I were to take a more colourful image (like the “light” version of my Iso-architecture living room), I’d get crazier colours after applying my Invert filter:
Shader n°3: Night Vision
This is where things start to get a bit more tricky! This shader combines 4 effects: a lens deformation that bends the middle of the image, a grid overlay, a vignette and a bland green tint:
For the deformation, I drew inspiration from this tutorial by Alan Zucconi where he shows how to use a displacement texture to create a deformation in your image. In my case, I used one of Unity’s built-in textures, the “Default-Particle”, to get a smooth circular texture:
This deformation is done by directly manipulating the UVs and “moving” them around.
I then added a texture-based grid effect. The idea is just to sample a cross texture and have it repeat, then composite it with a low opacity to put it as an overlay:
For the vignette, I re-used the notion of signed distance functions that I talked about last time to get a simple blurry circle mask:
All of these preliminary effects are computed as floats, meaning that they are greyscale masks. The final step is just to use this greyscale along with the green tint.
To make it a bit more interesting, I also made the tint change very slightly over time, and the lines slowly move vertically:
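Put together, the fragment shader is structured roughly like this (a simplified sketch; the texture/property names and the magic numbers are illustrative, not the exact values I used):

```hlsl
sampler2D _MainTex;
sampler2D _DisplacementTex; // smooth circular texture (e.g. Default-Particle)
sampler2D _GridTex;         // small cross texture, set to repeat
fixed4 _Tint;               // base green tint

fixed4 frag(v2f_img i) : SV_Target
{
    // 1. Lens deformation: offset the UVs using the displacement texture.
    half2 offset = tex2D(_DisplacementTex, i.uv).rg - 0.5;
    half2 uv = i.uv + offset * 0.1;

    // 2. Grid overlay: tile the cross texture, scroll it slowly upwards.
    fixed grid = tex2D(_GridTex, uv * 40 + half2(0, _Time.y * 0.05)).r;

    // 3. Vignette: a soft circular mask from the distance to the centre.
    fixed vignette = 1 - smoothstep(0.3, 0.7, length(uv - 0.5));

    // 4. Greyscale luminance of the deformed render image.
    fixed lum = Luminance(tex2D(_MainTex, uv).rgb);

    // Composite the masks, then apply a slightly time-varying green tint.
    fixed4 tint = _Tint * (0.9 + 0.1 * sin(_Time.y));
    return lum * tint * vignette * lerp(1, grid, 0.2);
}
```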
Shader n°4: Blur
This shader relies on the common box blur effect to make a “fuzzy” image. The code is inspired by this article from Santosh Nalla but I re-adapted it to use both the X and Y axes and to have a user-defined blur “accuracy” (i.e. how many neighbours are used for the average computation).
To increase the “crazy” effect, I also added a property to control the overexposure of the image: by turning it up, you can voluntarily force the values up towards the whites and get a saturated image.
Here are some results with various blur sizes, blur strengths and overexposure values:
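The core of the box blur looks roughly like this (a sketch; the property names are illustrative, and _MainTex_TexelSize is filled in automatically by Unity):

```hlsl
sampler2D _MainTex;
float4 _MainTex_TexelSize; // (1/width, 1/height, width, height)
float _BlurSize;           // distance between sampled neighbours, in texels
int _Accuracy;             // number of neighbours per side
float _Overexposure;       // > 1 pushes values towards the whites

fixed4 frag(v2f_img i) : SV_Target
{
    fixed4 sum = 0;
    int count = 0;
    // Average a (2 * _Accuracy + 1)^2 box of neighbours around the pixel,
    // on both the X and Y axes.
    for (int x = -_Accuracy; x <= _Accuracy; x++)
    {
        for (int y = -_Accuracy; y <= _Accuracy; y++)
        {
            half2 offset = half2(x, y) * _MainTex_TexelSize.xy * _BlurSize;
            sum += tex2D(_MainTex, i.uv + offset);
            count++;
        }
    }
    return (sum / count) * _Overexposure;
}
```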
Shader n°5: Contour
My last shader uses some convolution and the Sobel operator to do some edge detection on the image and isolate the contours:
This shader is mostly a rewriting of Fearcat’s edge detection shader to properly fit Unity’s API.
The idea of this post-processing effect is to use the Sobel kernel on the X and Y axes to compute first the horizontal edges, then the vertical ones, and finally add the two to get the final edge mask. This mask can then be returned on its own (to get only the edges, see the image on the left) or as an overlay on the render image (see the image on the right).
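A sketch of that convolution in HLSL (property names are illustrative; _OverlayStrength switches between the edges-only and overlay outputs):

```hlsl
sampler2D _MainTex;
float4 _MainTex_TexelSize;
float _OverlayStrength; // 0 = edges only, 1 = edges over the render image

fixed sobel(half2 uv)
{
    // 3x3 Sobel kernels for the horizontal and vertical gradients.
    const half3x3 kx = half3x3(-1, 0, 1, -2, 0, 2, -1, 0, 1);
    const half3x3 ky = half3x3(-1, -2, -1, 0, 0, 0, 1, 2, 1);
    float2 g = 0;
    for (int x = -1; x <= 1; x++)
        for (int y = -1; y <= 1; y++)
        {
            half2 offset = half2(x, y) * _MainTex_TexelSize.xy;
            fixed lum = Luminance(tex2D(_MainTex, uv + offset).rgb);
            g.x += kx[x + 1][y + 1] * lum;
            g.y += ky[x + 1][y + 1] * lum;
        }
    // Add the horizontal and vertical edges into a single edge mask.
    return saturate(abs(g.x) + abs(g.y));
}

fixed4 frag(v2f_img i) : SV_Target
{
    fixed edge = sobel(i.uv);
    fixed4 col = tex2D(_MainTex, i.uv);
    // Either return the mask on its own, or overlay it on the image.
    return lerp(fixed4(edge.xxx, 1), col + edge, _OverlayStrength);
}
```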
Once again, this set of shaders taught me a lot about shaders in Unity! This time, I had to shift my frame of mind to work with images and apply post-processing effects on 2D data. But I could still re-use some previous core skills like texture sampling or signed distance function computation!
I hope you like this project so far – and as always, feel free to react in the comments if you have ideas of effects or shaders I could try 😉