3rd place THU Golden Ticket Challenge winner Julian Santiago shares how he made Misdirection using Blender
I created this piece for 2015’s THU Golden Ticket Challenge and it won 3rd place! Because of the tight deadline, I had to use a few tricks to speed up my workflow without losing quality. This was also the first time I used Blender as my main program, so it was a big learning experience for me.
The theme of the contest was “The Tribe,” so I knew right away that I would need to create multiple characters. I chose to have the characters front and center and to only have a simple background, so it was important to create characters with varied proportions and personalities. The idea was to have a band of misfits taking on a physically imposing foe, so I tried to design a group of warriors who don’t look like they should be warriors.
I started out by sculpting the characters using Blender’s multiresolution sculpting and dynamic topology sculpting. For those familiar with Sculptris, dynamic topology is similar: it allowed me to focus on silhouette, proportion, and design without having to worry about the topology. I only sculpted the girl in the front, the Roman soldier, and the horse head. The girl and the soldier were then used as the basis for the remaining three characters.
Modeling and Retopology
After sculpting, I proceeded with retopologizing the sculpt using Blender’s retopology tools. Because this project was only meant to be a still image, I focused more on having evenly spaced geometry and didn’t bother too much with having a mesh optimized for animation. To speed up the process, I took some of my old models and reused some of their body parts, namely the eyes, teeth, and hands. Afterwards it was a simple process of reshaping and combining models to complete all the characters.
I posed the characters using a very basic armature. This is also the stage where I added hair and sculpted the clothes, facial expressions, and deformation fixes. The sculpted clothes were rendered as-is and were not retopologized, which sped up my workflow at the cost of higher RAM usage at render time. After this step, the project entered a “lock-down phase.” This meant that I could no longer make major changes to the posing and positioning of the characters and the camera because of how I planned to do the textures.
I created the trees using a free program called ngPlant. Blender has its own foliage generation add-on called Sapling, but I found that I had more control over the look of the trees using ngPlant. I generated a number of tree variations and placed them in the scene using group instancing.
Projected UVs and Texturing
This is the step that I believe helped me the most in making the deadline. Instead of unwrapping and individually texturing everything, I projected the UVs from the camera. I then used the external editing function of Blender’s Texture Paint mode to take a square 8K screengrab of the viewport from the render camera. This screengrab was used as a template to paint maps in Photoshop that were shared between all objects. Because all of the maps were shared, I could simply apply a finished shader to any object and have it look right without having to adjust anything.
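The idea behind camera-projected UVs can be sketched outside of Blender: any point the camera sees maps to a normalized screen position, and that position doubles as a UV coordinate that every object in the frame can share. The following is my own minimal illustration of that mapping (a simple pinhole projection, not Blender's internal code):

```python
# Illustrative sketch of camera-projected UVs (not Blender's actual code).
# A point in camera space projects to normalized [0, 1] screen coordinates,
# which every object in view can share as a single UV map.

def project_to_uv(point, focal_length=1.0):
    """Project a camera-space point (x, y, z with z > 0 in front of
    the camera) to UV coordinates for a square image plane."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point is behind the camera")
    # Perspective divide onto the image plane at distance focal_length.
    sx = focal_length * x / z
    sy = focal_length * y / z
    # Map the [-0.5, 0.5] image plane to [0, 1] UV space.
    return (sx + 0.5, sy + 0.5)

# A point straight down the camera axis lands at the center of the texture.
print(project_to_uv((0.0, 0.0, 2.0)))  # (0.5, 0.5)
```

Because the same projection is applied to everything, a map painted over one screengrab of the camera view lines up with every object at once, which is exactly why a single shared texture set works.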
When Projecting Textures Won’t Do
While most of the image was textured using the previous technique, there were situations where I had to take a more traditional approach, such as with the patterned cloth and the girl’s hammer. For these, I just created a secondary UV map and textured them with traditional texturing techniques. For the other objects that needed tiling textures but had no second UV map, I used blended box mapped textures. Blended box mapping is a feature that is a bit difficult to set up in the other programs I’ve used but is a built-in feature in Blender.
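Blended box mapping (often called triplanar mapping) projects a tiling texture along each of the three axes and blends the three projections based on how the surface normal aligns with each axis. A toy version of the blend-weight computation, written as my own illustration rather than Blender's implementation:

```python
# Illustrative blend weights for blended box (triplanar) mapping.
# Each axis projection is weighted by how strongly the surface normal
# points along that axis; the three weights always sum to 1.

def triplanar_weights(normal, blend=1.0):
    """Return (wx, wy, wz) blend weights for a unit surface normal.
    A higher `blend` exponent sharpens the transition between projections."""
    wx, wy, wz = (abs(c) ** blend for c in normal)
    total = wx + wy + wz
    return (wx / total, wy / total, wz / total)

# A face pointing straight up is textured only by the top (Z) projection.
print(triplanar_weights((0.0, 0.0, 1.0)))  # (0.0, 0.0, 1.0)
```

This is why no UV map is needed: the object's position and normals alone decide how the tiling texture wraps around it.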
My approach to lighting was to first get a realistic lighting setup before placing art-directed but still plausible light sources. I started with an overcast HDRI, a light source for the sun, and light blockers to simulate dense leaf coverage. I then took an initial render and examined the scene for possible sources of additional light. I set up the lighting so that the three background characters would blend in with the background the most, the Roman soldier would have mostly rim lighting, and the foreground girl would have the highest contrast between light and shadow.
To finish off the asset creation phase, I created a smoke simulation for the fog and a cloth simulation to replace the girl’s sculpted cape.
Because of my general unfamiliarity with rendering in Blender, I chose to render only a beauty pass and matte ID passes for the various elements of the scene. I split the scene into three elements: the trees, the characters, and the fog. Post-production work was done in Blender’s compositor and consisted mostly of color grading and noise reduction. I isolated various parts of the image using the ID mattes and masks and carefully went over the entire image until I got the mood and colors I wanted.
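An ID matte works like a stencil: each element writes its ID into a pass, and the compositor applies a grade only to pixels whose ID matches the element being adjusted. A toy sketch of that per-element isolation, assuming a flat list of per-pixel values and IDs (my own simplification, not Blender's compositor code):

```python
# Toy sketch of isolating pixels by an ID matte for selective grading.
# `ids` holds one element ID per pixel; the grade is applied only where
# the ID matches the element being adjusted, leaving other pixels alone.

def grade_by_id(pixels, ids, target_id, grade):
    """Apply `grade` (a per-pixel function) only where ids == target_id."""
    return [grade(p) if i == target_id else p
            for p, i in zip(pixels, ids)]

# Brighten only the pixels belonging to element 2.
pixels = [0.2, 0.5, 0.8]
ids = [1, 2, 1]
print(grade_by_id(pixels, ids, 2, lambda p: min(p * 1.5, 1.0)))
# [0.2, 0.75, 0.8]
```

Rendering only a beauty pass plus these mattes trades flexibility for speed: you cannot re-light in post, but you can still grade the trees, characters, and fog independently.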
After the compositing and grading phase, all that was left to do were minor cleanups and paint-overs, which were done in Krita. I cleaned up some render artifacts and painted in some additional foreground fog. I used Krita for this step for no other reason than that I wanted to try it out.