How I automated the marketing of my Blender 3D models (1/2)

Aka: “I’m tired of spending hours doing exports – let’s call Python to the rescue!”

This article is also available on Medium.

A couple of days ago, I decided to work on my CG skills again, and I eventually made a little low-poly knight character, fully rigged and with some basic animations.

I was fairly happy with the result but, as I was actually producing clips, pictures and wireframes to share on my socials, I quickly discovered something terrible: it took me nearly as long to prepare, export and organise those outputs as it did to make the model in the first place!

The thing is that creating these kinds of turntables, short animation previews, wireframe shots or quick playblasts is really useful whenever you want to share your 3D artwork with other people. Be it to fill your Instagram feed or to upload the model to a platform like CG Trader, it’s often pretty cool to have a gallery of “views” to show…

… so I couldn’t just accept that it was such a pain to make! And thus I went on to code a little Python plugin, the “Model Views Exporter” (MVE), to quickly automate this boring step and directly get a nice portfolio for my 3D models!

Note: for more info on why the Blender+Python combo is really interesting and how to setup Blender for Python scripting, check out these two articles I published a while ago!

Here is the first part of the dev log of how I wrote this plugin – I hope you’ll like it 🙂

An overview of the plugin features

My goal with this MVE plugin is to automate as much as possible the production of screenshots (and anim clips if they are animated) for my various 3D models.

I want to get outputs from different common points of view (POVs): the front view, each side view, the top view, a perspective view… I also include in these POVs more peculiar setups like the turntable, which shows the model rotating around.

I also want to be able to get outputs with and without the wireframe enabled, because showing the wireframe allows the viewer to quickly assess the complexity and topology of the mesh, but it makes for an overall heavier look.

For animated models, I obviously want all the animations I chose to be exported and named properly so that I can quickly browse through the portfolio later on.

This means that, in the end, the tool should be able to differentiate between “static” and animated models – to do this, we’ll simply take the currently selected object as reference and check if it is an armature or a basic mesh.

Finally, a really nice feature would be to automatically compute the camera size/position based on the model size…

Preparing the POVs

The first step is to prepare a list of all the points of view (POVs) available to the user. I’m going for a classic 3D workflow, so I’m going to consider 6 (+2) common POVs:

  1. front view
  2. left view
  3. right view
  4. top view
  5. “perspective” view
  6. turntable
  7. (back view: disabled by default)
  8. (bottom view: disabled by default)

The last two POVs are usually not that relevant, so I’ll make them available but disable them by default. The “perspective” view is a somewhat vague term for: “any camera that has a slight angle and shows more than one side at a time”. But it will actually be orthographic, for now!

My idea to properly frame the model in the shot is to first create an anchor that is placed at the model’s location but raised by half its height, and then create one camera for each POV that looks at that anchor. This way, I’ll just need to store a fixed offset for each POV, place each camera relative to the anchor using this offset, and let the tracking constraint directly compute the right rotation.

This anchor can be created with an empty Blender object, like this:
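Here is a minimal sketch of what this can look like (the “MVE_Anchor” name is my own choice):

```python
import bpy

# create an empty object (object data = None) to serve as the cameras' target
anchor = bpy.data.objects.new("MVE_Anchor", None)
bpy.context.scene.collection.objects.link(anchor)
```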

In our case, we want the anchor to be placed at the model’s location with a slight offset – we’ll say that the model is the currently selected active object. If no object is selected, we want to cancel the export process altogether:
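A hedged sketch of this check and of the anchor placement (raised by half the model’s height, as discussed above):

```python
import bpy

# the model is the currently selected active object
model = bpy.context.active_object
if model is None:
    raise RuntimeError("No model selected: cancelling the export!")

# place the anchor at the model's location, raised by half its height
anchor.location = model.location.copy()
anchor.location.z += model.dimensions.z / 2
```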

Ok – let’s take care of our cameras, now!

In this first version of the plugin, I’ll focus on orthographic cameras and leave the perspective cameras for later. The advantage of orthographic cameras is that you just need to define the size and direction of your camera, and you can forget about the exact distance to the target (as long as there is no clipping plane issue) 😉

So here is my list of available POVs:
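A possible shape for this list (the exact offset values are my own illustration – each offset is a unit direction relative to the anchor):

```python
# offsets are (x, y, z) directions relative to the anchor;
# "enabled" says whether the POV is exported by default
POVS = {
    "front":       {"offset": (0, -1, 0),  "enabled": True},
    "left":        {"offset": (-1, 0, 0),  "enabled": True},
    "right":       {"offset": (1, 0, 0),   "enabled": True},
    "top":         {"offset": (0, 0, 1),   "enabled": True},
    "perspective": {"offset": (1, -1, 1),  "enabled": True},
    "turntable":   {"offset": (0, -1, 0),  "enabled": True},  # handled specially
    "back":        {"offset": (0, 1, 0),   "enabled": False},
    "bottom":      {"offset": (0, 0, -1),  "enabled": False},
}
```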

You see that I simply defined some offsets, as well as a boolean flag to know if the POV should be enabled (i.e. used during the export) by default or not.

Then, here is a basic function that creates a camera for a given point of view – we simply pass it the anchor object we created before, make a new camera object, set some of its parameters and give it a “Track To” object constraint:
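A sketch of such a function (the `MARGIN` and `CAM_DIST` constants and the naming are my own assumptions):

```python
import bpy
from mathutils import Vector

MARGIN = 1.25    # hardcoded margin factor around the model
CAM_DIST = 10.0  # arbitrary for orthographic cameras (no perspective distortion)

def create_camera(pov_name, offset, anchor, model):
    """Create an orthographic camera at the given offset, tracking the anchor."""
    cam_data = bpy.data.cameras.new("MVE_Cam_" + pov_name)
    cam_data.type = "ORTHO"
    # frame the model: largest dimension plus some margin
    cam_data.ortho_scale = max(model.dimensions) * MARGIN
    cam = bpy.data.objects.new("MVE_Cam_" + pov_name, cam_data)
    cam.location = anchor.location + Vector(offset).normalized() * CAM_DIST
    bpy.context.scene.collection.objects.link(cam)
    # the "Track To" constraint computes the right rotation for us
    track = cam.constraints.new(type="TRACK_TO")
    track.target = anchor
    return cam
```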

For the orthographic size, I’m taking the largest dimension of my model with some hardcoded margin factor (MARGIN) to make sure the model is properly framed.

Finally, all I have to do is iterate through my POVs, check if they are enabled and create the matching camera:
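Assuming a `POVS` dict of offsets and “enabled” flags, and a `create_camera()` helper like the ones described above, this loop can look like:

```python
cameras = []
for pov_name, pov in POVS.items():
    if pov["enabled"]:
        cameras.append(create_camera(pov_name, pov["offset"], anchor, model))
```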

Note: I will talk about the specific case of the “turntable” POV in the second part of this article 😉

If I run this script, I see that my scene gets populated with the anchor and my 6 enabled cameras all pointing to it:

To clean up my scene and avoid stacking multiple cameras as I re-run my script, I can implement a little util function, delete_obj(), and call it on the anchor and the auto-generated cameras during my process:
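A minimal sketch of this util (also removing the associated camera data block so the .blend file doesn’t accumulate orphans):

```python
import bpy

def delete_obj(obj):
    """Remove an auto-generated object (and its data block) from the scene."""
    data = obj.data  # None for empties
    bpy.data.objects.remove(obj, do_unlink=True)
    if isinstance(data, bpy.types.Camera):
        bpy.data.cameras.remove(data)
```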

This will make sure that I restore my initial scene by the end of the script and that I don’t have weird objects lying around 😉

Exporting our images

Now that we can create our various cameras at the different POVs, the next step is to actually take a picture from each of these angles! The point here is to neatly prepare the scene render options so as to export the right visuals, and then write the image (or the movie) to disk at a logical, well-chosen path.

Preparing the output path

To create this path, I will define some prefix and suffix that I’ll concatenate to make my output path. These will eventually be exposed to the user but, at the moment, I don’t have any UI, so I’ll just write them directly in my script.

My prefix will be the name of my model: “knight”. My suffix will be computed automatically based on the current POV:
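A sketch of this concatenation (the `PREFIX` name and the placeholder path are my own – the article only names `BASE_PATH`; note that Blender appends the file extension itself based on the chosen format):

```python
import os

BASE_PATH = "/path/to/my/exports/"  # hypothetical export folder
PREFIX = "knight"                   # the name of the model

def get_output_path(pov_name):
    """Concatenate the prefix and the POV-based suffix into an output path."""
    return os.path.join(BASE_PATH, PREFIX + "_" + pov_name)
```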

Taking a screenshot

Alright, time to actually create some images! To begin with, let’s take care of still images and make basic screenshots.

I won’t be doing real renders for now: instead, I will directly do a “viewport render”, meaning that I take a screenshot of my current 3D view. It is way faster than an actual render, it will allow for wireframe display later on and it can even be enough if you want to show your model with a solid shading and just some basic materials or colours.

This means that I need to access my 3D view, because I need to make sure I’m in “solid” shading mode. This requires a little trick of looping through all the areas in the Blender screen, but all in all it’s fairly readable:
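The trick, roughly, is this (`space3d` is the name the rest of the script re-uses):

```python
import bpy

# find the 3D viewport among the areas of the current screen
space3d = None
for area in bpy.context.screen.areas:
    if area.type == "VIEW_3D":
        space3d = area.spaces[0]
        break

# make sure we are in "solid" shading mode
space3d.shading.type = "SOLID"
```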

Now that I have this reference, for each of my POVs, I can use it to do two things:

  • assign my scene camera and go into “camera view”
  • set my scene render shading mode in my new export_pov() function

For now, let’s output PNG images with a transparent background:
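Here is a hedged sketch of `export_pov()` (assuming the `space3d`, `BASE_PATH` and `PREFIX` names from before):

```python
import bpy

def export_pov(pov_name, camera):
    """Viewport-render the given camera to a PNG with a transparent background."""
    scene = bpy.context.scene
    scene.camera = camera
    # switch the viewport to "camera view"
    space3d.region_3d.view_perspective = "CAMERA"

    scene.render.image_settings.file_format = "PNG"
    scene.render.image_settings.color_mode = "RGBA"
    scene.render.film_transparent = True  # transparent background
    scene.render.filepath = BASE_PATH + PREFIX + "_" + pov_name

    # "viewport render": a screenshot of the 3D view, not a full render
    bpy.ops.render.opengl(write_still=True)
```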

And tadaa! I now have 6 images in my given BASE_PATH folder, one for each point of view 🙂

Exporting an animation

Creating an animation is not really more complicated than taking a single picture; we simply need to pass the animation=True parameter to the bpy.ops.render.opengl() function.

But, to avoid overloading my disk with too many images, and to avoid a second merging step, I don’t want to output a series of PNGs – I want Blender to directly make a movie. To keep it light, I’ll use the AVI JPEG format.

To do this, I’ll add an animation parameter to my export_pov() function and change my scene render options depending on its value (to include the name of the animation in the output path).

I also need to make sure that I toggle the “transparent” option off because this is not supported for JPEGs. And last but not least, I should also make sure that I properly set the frame range so that it matches my animation:
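A sketch of the extended `export_pov()` (here I assume the animation name matches an action data block – part 2 covers assigning it to the armature):

```python
import bpy

def export_pov(pov_name, camera, animation=None):
    scene = bpy.context.scene
    scene.camera = camera
    space3d.region_3d.view_perspective = "CAMERA"

    if animation is None:
        scene.render.image_settings.file_format = "PNG"
        scene.render.film_transparent = True
        scene.render.filepath = BASE_PATH + PREFIX + "_" + pov_name
        bpy.ops.render.opengl(write_still=True)
    else:
        scene.render.image_settings.file_format = "AVI_JPEG"
        scene.render.film_transparent = False  # not supported for JPEGs
        scene.render.filepath = BASE_PATH + PREFIX + "_" + pov_name + "_" + animation
        # match the frame range to the animation,
        # dropping the last frame for a seamless loop
        action = bpy.data.actions[animation]
        scene.frame_start = int(action.frame_range[0])
        scene.frame_end = int(action.frame_range[1]) - 1
        bpy.ops.render.opengl(animation=True)
```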

Note: I’m removing the final frame from the animation range to have a slick loop without any “pause” in the middle!

If I call my export_pov() with an animation name in the animation parameter, then I’ll get an AVI movie that contains the right frames (instead of a single PNG image).

Note: next time, I’ll talk about assigning the right animation to the armature and making sure I am exporting the animation I’m naming 😉

Improving the render

To end this first part, let’s see how to improve this viewport render by removing the overlays, gizmos and other util tools that Blender shows in the 3D view by default, and by showing the textures of the mesh.

For now, the outputs we get are polluted by a lot of extra objects, relationship lines, handles, etc.:

To avoid this, we can re-use our space3d variable and set some options to toggle the various overlays. I’m going to make a setup_scene() and a reset_scene() method to make sure I restore my initial parameters when I’m done:
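A minimal sketch of this save/restore pair (toggling just the overlays and gizmos – the real plugin can store more settings the same way):

```python
import bpy

def setup_scene(space3d):
    """Hide overlays/gizmos and remember the initial settings."""
    initial = {
        "overlays": space3d.overlay.show_overlays,
        "gizmos": space3d.show_gizmo,
    }
    space3d.overlay.show_overlays = False
    space3d.show_gizmo = False
    return initial

def reset_scene(space3d, initial):
    """Restore the settings saved by setup_scene()."""
    space3d.overlay.show_overlays = initial["overlays"]
    space3d.show_gizmo = initial["gizmos"]
```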

I can even force the colour mode to textures and hide the armatures to get a cleaner shot:
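For instance, something along these lines (using the solid-shading colour options and per-object viewport hiding):

```python
import bpy

# show textures in solid shading mode
space3d.shading.color_type = "TEXTURE"

# hide the armatures so the bones don't pollute the shot
for obj in bpy.context.scene.objects:
    if obj.type == "ARMATURE":
        obj.hide_set(True)
```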

With these little improvements, I now have nice pictures or movies that are prepared and exported automatically, with pretty names and easy-to-read options 🙂


The “Model Views Exporter” is already pretty useful as is, but there are still lots of features to implement! In particular, we need to add a UI so that it’s easier to use; we’ll also turn it into an installable Blender plugin and we’ll add lots of options to better tweak and choose the export parameters.

See you next Monday for the second and final part of this dev log on the MVE plugin!

As always, feel free to tell me in the comments if you have ideas of articles I could write about CG and automation, or of useful Blender plugins I could make 🙂
