Shader Journey #2: Holograms

Today, here’s a second batch of shaders from my Shader Journey – a few hologram and screen effects!

This article is also available on Medium.

Last week, I talked about my new CG series: “Shader Journey”. In this series of posts, I will log my adventure through the world of shaders! In the first episode, I showed a few basic toon shaders. Today, let’s work on something a bit more VFX-like: holograms 🙂

🚀 If you want to see the code for these shaders, you can take a look at my GitHub 🙂

A quick overview

In this episode, I’ll talk about 5 shaders: the Hologram Basic, the Hologram Wireframe, the Hologram Screen, the Hologram Texture and the Hologram Stand. Those are shown in the video above, and here is how they look on various shapes:

Together, these 5 shaders combine all of these features:

  • hologram VFX
  • “add” / “alpha” blend mode (with culling and z-write settings)
  • Fresnel (with customisable intensity and colour)
  • vertex displacement
  • texture mapping
  • signed distance function (SDF)
  • 2D Perlin noise

So, are you curious to see how I made them? Then, let’s dive in! 🙂

Shader n°1: Hologram Basic

For my first hologram shader, I decided to go with something pretty basic.

At the core, it’s just about adding the Fresnel effect I discussed last time to an unlit shader. But this “unlit” style is not just a plain colour, like the Simple White shader I showed in my intro article.

Rather, it relies on some depth buffer settings and a specific blending mode.

Depth buffer and blending modes

Something very important when writing shaders is to remember that objects don’t live alone in space: they are surrounded by other things that will also get rendered to the screen. So, when designing your shader and computing your fragment colour, you should also think about how it will interact with the other fragment outputs that have already been or will be rendered from the scene (and even with the “background” render target itself).

Deciding how your shader output will be combined with the others and the render target is done by setting its blend mode. The blending of your current shader output with the previous output is always written with the same form of math formula:

FinalColor = (A × SrcColor) OP (B × DstColor)

Here, we only control the “A”, “B” and “OP” (operation) parameters. But, even then, we can’t just pick any value we want for those factors! These blending constants have to be chosen from a list of predefined values that each correspond to well-known predefined blending modes.

In Unity shaders, this is specified via the Blend and BlendOp commands at the top of your Pass block. By default, blending is off, so you’ll get an opaque render. But you can then use one of the few common combinations of “A” and “B” to achieve specific blends (as explained in the Unity docs) – here, the BlendOp is left to its default “Add” value:

Blend SrcAlpha OneMinusSrcAlpha // Traditional transparency
Blend One OneMinusSrcAlpha // Premultiplied transparency
Blend One One // Additive
Blend OneMinusDstColor One // Soft additive
Blend DstColor Zero // Multiplicative
Blend DstColor SrcColor // 2x multiplicative

In addition to this blend mode, you also need to tell Unity when the object should be rendered in the pipeline (i.e. before/after which other object(s) it should be put on the render target image) and whether or not it writes to the depth buffer.

The pipeline ordering is done by specifying the RenderType and Queue keys in the Tags section of your Pass. By default, the object is “Opaque” (and it has an implicit “Queue” value of “Geometry”). This is what creates an opaque object.

By changing the RenderType and Queue values to “Transparent”, you tell Unity that the objects using a material with this shader should be rendered later on in the pipeline and expect some transparency blends.

Then, to tell Unity whether or not this shader writes to the depth buffer, you use the ZWrite command – in my case I turned it Off so that my objects would be transparent and “non-blocking” for the camera rays.
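Putting these pieces together, here is a minimal sketch of the ShaderLab settings for a transparent hologram pass (the exact property values are mine, not necessarily those from the article):

```hlsl
// Inside the SubShader block of a Unity shader (ShaderLab syntax)
Tags { "RenderType" = "Transparent" "Queue" = "Transparent" }

Pass
{
    Blend SrcAlpha OneMinusSrcAlpha // traditional alpha blending
    ZWrite Off                      // don't write depth: objects behind stay visible
    // ... CGPROGRAM with the vertex/fragment shaders goes here ...
}
```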

Here is for example the difference between the “opaque” mode (no blending, z-write enabled) and the “traditional transparency” mode (simple alpha blending, z-write disabled). In the first case, the green object blocks all of the camera rays and we hardly see anything behind; in the second case it is transparent and lets the camera see the blue and purple objects that are further away:

In my case, depending on the shader, I used both this traditional alpha blending mode and the additive blending mode for my hologram shaders.

This shader uses the additive blending mode – additive blends are usually great for VFX because they automatically overlay the object with the rest of the action and make bright visual effects:

Adding some vertex displacement

This shader, because it didn’t involve too much complicated code, was also a nice opportunity to play around with the vertex shader.

So far, the shaders I’ve shown aren’t doing anything in the vertex shader part – they are only passing data through to the interpolators so that this data can be read and consumed by the fragment shader.

But you can actually do stuff in the vertex shader, too!

In particular, the vertex shader is the perfect place for applying some displacement to your vertices. This can result in various effects: you can get spikes on a sphere, a deformed object, waves on water… or simply a “hovering” animation like here 🙂

Basically, the idea is to use Unity’s built-in time variable to animate the vertices of the mesh periodically and have them go up and down over and over again. To constrain this movement and make it periodic, I used a trigonometric function (the cosine), multiplied it by a (customisable) user-defined amplitude and applied this “shift” to all the vertices in my mesh.

So, every frame, the shader first moves the points of my objects according to this function and then passes this updated position (transformed to clip space) to the fragment shader (through the interpolators), resulting in a simple animation that is computed very efficiently without actually touching the transform of the object!
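A sketch of this “hovering” vertex shader could look like the following – note that the _Amplitude and _Speed property names are my own placeholders, not necessarily the ones used in the actual shader:

```hlsl
// Vertex shader sketch: periodic vertical displacement
v2f vert (appdata v)
{
    v2f o;
    // _Time.y is Unity's built-in time (in seconds);
    // cos() makes the offset periodic, _Amplitude scales it
    v.vertex.y += cos(_Time.y * _Speed) * _Amplitude;
    // then transform the displaced position to clip space as usual
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.uv = v.uv;
    return o;
}
```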

Bonus: A twist on my Fresnel

Just a final note on this shader: compared to my Toon Fresnel that used the local normal vector directly to compute the Fresnel effect, here I’m using the world normal vector. This is because I wanted the objects to rotate around their Y axis, but of course not the Fresnel!

With this little trick, I’m able to get a rotation-independent Fresnel effect 😉
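In code, a world-space Fresnel might be sketched like this (the _FresnelPower, _FresnelIntensity and _FresnelColor names are assumptions for illustration):

```hlsl
// In the vertex shader: convert the normal and view direction to world space
o.worldNormal = UnityObjectToWorldNormal(v.normal);
o.viewDir = normalize(WorldSpaceViewDir(v.vertex));

// ...then in the fragment shader: the usual Fresnel dot product,
// but with world-space vectors so the object's rotation doesn't affect it
float fresnel = pow(1.0 - saturate(dot(i.worldNormal, i.viewDir)), _FresnelPower);
fixed4 col = _FresnelColor * fresnel * _FresnelIntensity;
```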

Shader n°2: Hologram Wireframe

Should I do it right… or fake it?

When I started searching the net for wireframe shading, I quickly stumbled upon geometry shaders. Those are another “stage” in the code of a shader, somewhere in-between the vertex and the fragment shaders, that is notably powerful… and weird 🙂

The whole thing with geometry shaders is that, as their name implies, they have access to the geometry of your mesh. So, in addition to knowing the vertices, they also know the triangles (i.e. the faces) that link them together. And, even crazier: they can compute triangles on the fly to dynamically add geometry to your mesh.

Here is for example an article by Bill where he talks about vertex and geometry shaders in Unity and shows how he created a little head with only a sphere:

Now, for wireframe, this is pretty useful because it allows you to compute the distance from your fragment to the closest edge on your mesh, and therefore know if this specific fragment should be rendered as the “fill” part or the “wireframe” part.

Sounds nice, right?

Except…

I work on Mac. And as I soon discovered, geometry shaders don’t work by default in Unity on Mac. That’s because Unity, when on Mac, runs with the Metal API unless you configure it differently, and Metal does not support geometry shaders. So it is possible to get geometry shaders in Unity on Mac, but it requires a lot of configuration (to switch to the OpenGL API), and that switch comes with some drawbacks.

The geometry shader version…

Ok so – I decided to jump over to my Windows computer (that does support geometry shaders), drew inspiration from code snippets like this one (that is adapted from Unity’s official VR wireframe shader) and eventually ended up with this kind of renders:

And then, I realised: I’m actually not sure that the geometry shader-based wireframe shading is the effect I want. While this is nice and really interesting to make, the point for my hologram shader is not to have a “debug of the geometry”. It’s just to add a nice griddy-like effect. And, let’s be honest: triangles aren’t that neat here!

So, wait: why not just use a grid, then? 😉

… but I actually went with the texture-based version!

To simulate a wireframe but in a more controllable and aesthetic way, I decided to use a basic texture as a mask and have it colour my surface along those specific lines.

I simply made this “cross” texture (it’s available in the Github repository 🚀):

Then, by tiling it on my object, I got a basic grid mask that I could scale up or down and that wrapped nicely around my object:

Note: of course, if you’re using custom meshes, you’ll have to make sure you unwrap the UVs of your mesh properly 😉

This is simply a grayscale mask that allows me to distinguish between the “wireframe” and “fill” parts of the surface. By multiplying the wireframe colour and the fill colour by either this mask or its inverted version, I can apply my colours on either part of the surface and create my wireframe hologram effect!

Note: I’ve also used a custom level of detail (LOD) for this texture to get a smoother grid effect with the tex2Dlod() function!
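The masking step might be sketched like this – _GridTex, _GridLOD, _WireColor and _FillColor are placeholder property names, not necessarily the ones from my shader:

```hlsl
// Sample the grid texture at an explicit LOD for smoother lines,
// then use it as a grayscale mask between the two colours
float mask = tex2Dlod(_GridTex, float4(i.uv, 0, _GridLOD)).r;
// mask = 1 on the grid lines, 0 in the "fill" area (or vice versa)
fixed4 col = mask * _WireColor + (1.0 - mask) * _FillColor;
```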

Using an alpha blending mode

Once again, I used a specific blend mode to get more of a “hologram” effect – this time, it’s a common transparency blend where you combine the alpha values of the source and the destination to get a simple transparent effect:

I didn’t use an additive blend mode because, this time, I wanted to play around with another parameter: the culling!

Making it double-sided

As explained in the docs, culling is “the process of determining what not to draw”. It’s the parameter that tells the render engine whether it should display the faces facing the camera (default), the faces looking away from the camera, or both.

Here, I’ve used the Cull Off command in my shader to completely disable this process and have both the front and back faces be rendered.

That’s why, if you look carefully at the middle row of demo objects, you’ll notice that this second shader shows the back edges as well: you can see “through” the object to the inside of the back faces on the other side:

Note that I’ve also “reversed” the Fresnel mask compared to my other shaders: I’m lighting up the center of the mesh instead of the border 😉

Shader n°3: Hologram Screen

Although it is meant for a super simple geometry (a flat plane), this third shader was the longest to make, because it contains various features on top of each other!

Applying a simple texture

Ok so – the main part of this shader is a simple application of a 2D texture. To apply a texture on a surface, we simply use its UV coordinates, optionally re-tile/offset them via Unity’s texture sampler settings, and read each pixel of the texture to map it onto the object.

At this point, the shader gives a simple enough result – we’ve just mapped the image on our plane:

But to get a hologram-like shader, we don’t want to apply the texture as is. In truth, we want to completely disregard the actual colours and rather have one global hue that tints everything. Here, I want to have a blueish hologram – so I’ll simply take the input texture as a grayscale mask and multiply this mask with the user-defined tint colour.

This gives me something like this:
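This tinting step could be sketched as follows – _MainTex and _Tint are assumed property names:

```hlsl
// Read the texture, reduce it to a grayscale value, and tint it
fixed4 tex = tex2D(_MainTex, i.uv);
// a simple average of the channels gives the grayscale mask
float gray = (tex.r + tex.g + tex.b) / 3.0;
fixed4 col = gray * _Tint; // everything is now a shade of the tint colour
```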

Adding a grid effect

For this shader, I also wanted to have a grid effect – but this time, I wanted it to be generated procedurally, in other words to auto-compute the grid mask based on maths. This allows me to easily change the grid thickness and cell size with properties instead of being constrained by the grid texture.

To do this, I basically compare the current Y UV position to some periodic thresholds and add a smoothstep() to make it a real line with antialiasing. Doing this along the X and Y axes gives you a procedural grayscale grid mask:

Then, all that’s left to do is apply the grid colour where this mask is white and the base 2D texture we prepared before where the mask is black. For the grid colour, I simply computed a very dark version of the main tint to get a consistent tint overall:
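A minimal version of such a procedural grid could look like this, assuming _CellSize and _LineWidth properties (placeholder names):

```hlsl
// Position within the current grid cell, in the 0..1 range
float2 pos = frac(i.uv / _CellSize);
// distance to the nearest cell edge along each axis
float2 dist = min(pos, 1.0 - pos);
// smoothstep() antialiases the transition: 1 on a line, 0 inside the cell
float2 lines = 1.0 - smoothstep(0.0, _LineWidth, dist);
// combine the horizontal and vertical lines into one grid mask
float gridMask = max(lines.x, lines.y);
```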

Adding an animated “old TV-screen” band effect

Another nice procedural effect we can create is a little “band glitch”, like on old screens. Once again, it’s just about comparing the Y UV coordinate to a specific threshold and applying a little smoothstep() – for example, if the threshold is 0.5, we have a band in the middle of the plane:

But of course, the best part is that, thanks to shaders, we can actually make that band move over time! This is done again with Unity’s built-in time variable and a periodic function; this time, I used the frac() method to get a periodic discontinuous function (that’s why the band “loops around” to the bottom of the plane at the end of a cycle):
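The moving band might be sketched like this – _BandSpeed, _BandWidth and _BandBoost are placeholder names for illustration:

```hlsl
// frac() wraps the threshold back to 0 at the end of each cycle,
// which is why the band "loops around" to the bottom of the plane
float band = frac(_Time.y * _BandSpeed);      // moving threshold in 0..1
float d = abs(i.uv.y - band);                 // distance to the band centre
float bandMask = 1.0 - smoothstep(0.0, _BandWidth, d);
col.rgb += bandMask * _BandBoost;             // brighten inside the band
```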

Using signed distance functions (SDFs) to get rounded corners and a border

The final feature I want to add to this shader is the possibility to have a border and rounded edges for the screen. This will further improve the “old TV” style 🙂

To create rounded edges, the trick is to use a signed-distance function to create another procedural mask. We can also add quick antialiasing with the fwidth() function. Here, this gives something like this:

Of course, the size of the corners and of the border are customisable via properties! This is our “in-screen” mask. We just have to invert it to get the actual border mask:
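For reference, here is a common rounded-rectangle SDF formulation with fwidth()-based antialiasing – the function and property names (_BorderSize, _CornerRadius) are my own sketch, not necessarily the exact code of the shader:

```hlsl
// Signed distance to a rounded rectangle centred at UV (0.5, 0.5):
// negative inside, positive outside
float roundedRectSDF(float2 uv, float2 halfSize, float radius)
{
    float2 d = abs(uv - 0.5) - (halfSize - radius);
    return length(max(d, 0.0)) + min(max(d.x, d.y), 0.0) - radius;
}

// In the fragment shader: build the "in-screen" mask and its inverse
float sdf = roundedRectSDF(i.uv, 0.5 - _BorderSize, _CornerRadius);
float aa = fwidth(sdf);                          // ~1 pixel of smoothing
float screenMask = 1.0 - smoothstep(-aa, aa, sdf); // 1 inside, 0 outside
float borderMask = 1.0 - screenMask;
```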

And now by applying these masks to our previous effects, we’ll get a really nice TV screen!

Bonus: An inner light to better delimit the screen

By computing another smaller SDF and compositing it with the rest of the screen in additive mode, we can add a little inner light in-between the border and the screen:

And this gives us a really great hologram screen shader! 🙂

Shader n°4: Hologram Texture

Compared to the previous one, this shader is really simple: I simply applied my texture as a grayscale mask and coloured it with the user-defined Color property.

I’ve also re-used the “culling off” trick to get a double-sided effect, and went for an additive blend mode:

Shader n°5: Hologram Stand

My final shader was mostly a re-adaptation of this shader by Andrii that creates a procedural set of light rays using a common 2D Perlin noise.

I’ve simply removed some parameters that weren’t relevant for my use case and added a bit of vertex displacement to transform the base cylinder mesh into a widened cone.
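The cylinder-to-cone displacement could be approximated in the vertex shader like this (a sketch, with _ConeWidth as an assumed property name):

```hlsl
// Widen the cylinder towards the top by scaling the XZ offset
// proportionally to the vertex height
float widen = 1.0 + v.vertex.y * _ConeWidth;
v.vertex.xz *= widen;
o.vertex = UnityObjectToClipPos(v.vertex);
```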

Conclusion

This new set of shaders was another really interesting piece of work that taught me a lot about blending modes, vertex displacement, signed distance functions and even a bit of the basics of geometry shaders!

I hope you like this project so far – and as always, feel free to react in the comments if you have ideas of effects or shaders I could try 😉
