Character Breakdown

Hamish Bryant



Character Artist


Hey, my name is Hamish Bryant, and I've been working in the games industry for just under four years now.
I started out working predominantly on real-time hair for games but have since moved to character art in general, in addition to specializing in hair.


In that vein, I've been wanting to update my portfolio, as it predominantly showed my professional hair work but lacked recent character artwork.

Initially, the goal of this project was simply to make a bust of the actor Christopher Lee, but then feature creep took hold, and it evolved into a variety of exercises such as material shaders, a full robe, etc.

I tend to always experiment when I model; I always want to learn new ways of doing things to close the gap between software and what I want to achieve.


Reference & Inspiration

When I decided to fully model the amazing Christopher Lee in his role as Saruman The White, it was the perfect excuse for me to rewatch the films.

I was always hugely fond of the Lord of the Rings trilogy growing up, and Saruman had such gravitas on screen that it inspired me to try and capture that stern and brooding look in a real-time character model of my own.

Once I was set on making the likeness, I was also excited to make the robes and full character inspired by the behind-the-scenes footage of Ngila Dickson, costume designer on the Lord of the Rings trilogy, explaining the thought process behind the design of the robes.

In the movies, the robe looks white but is composed of varying tones of cream. This, combined with the embroidery, makes something that could have been quite dull on screen have a lot of personality and detail for the eye to explore.

With this project, it became a goal for me to see if I could create an interesting-looking garment without relying too heavily on many added elements and different materials.


This character was worked on in two main pieces, the head and the robes (plus the time sinks, more on that later). For the head, producing the likeness involved a lot of trial and error.

I’ve always found likeness to be one of, if not the most difficult part of character modeling. A slight change can have a lot of impact, so it can be quite frustrating, but it’s also an incredible exercise in learning to better read references and understand the relations between the shapes of the face and what makes a face identifiable to a specific individual.

As you can see in these shots, there was a lot of iteration, and things only really started to come together towards the end. None of it looked very good early on, as I was still finding my footing.


If I have any advice, it’s to be patient, especially if you are new to likeness, and also find time to do anatomy studies like copying scans starting from a sphere.

You can do this in ZBrush with the split-screen function; simply make sure the scan is the only other visible SubTool, and you will have a duplicate of your camera view to copy the scan from.

You will quickly start to pick up why things feel off and improve at a much steadier pace with a bit of practice copying scans.

The face was easily the most difficult part of this project, as covered a bit earlier.

There isn’t much to say except likeness requires experience and time to achieve. Having a good grasp of the fundamentals of anatomy is crucial before even considering making a likeness.

I’m also guilty of making this mistake and trying to jump right into the fun part early on as a student.

As for brushes, I didn't use much more than "Clay Buildup" and "Move", and maybe the "Standard" brush here and there. I also recommend getting the "Gio" brush from "ZBrush Guides"; it's an improved "Dam Standard" that also adds some pinching, perfect for skin folds, organics, etc.

I think gathering good references is key. Especially with older people, it’s very important to make sure your reference is coming from the same couple of years.

Faces change quite a lot over time, and some years bring more dramatic changes than others.

For celebrities, Getty Images is incredibly useful for this; you can filter by a specific date range, which is particularly useful when basing a likeness on an appearance in a certain movie or show.


When gathering references, also remember to keep in mind the lighting; strong lighting with shadows is great because you get to see how light behaves across the surface of the face, and it will help you get a better understanding of the shapes on the face.

Compare these two images: one provides a lot of information on the plane changes, while in the other the lighting is too flat.

Also, keep in mind the focal length of the image; this can be hard to guess but will affect the face significantly.

A good rule of thumb is if the picture is taken from the front and both ears are visible, then the focal length is high; if the face is distorted and the ears are less visible, it is low.

This is an extreme example, but it can affect how you perceive the shapes of the face.

I would recommend sticking to a focal length of 50mm in ZBrush when producing a likeness, simply because it generally feels closest to most references.

Fun fact: for his role as Saruman, Christopher Lee wore a prosthetic nose.

These sorts of things can throw you off when trying to get a likeness, so either look up whether the actor wore prosthetics for the role, or strictly use images from the movie you want to base the likeness on, though that is quite difficult.


For tertiary detail and skin, I used VFace, partially following Amy Ash's tutorial here.

But I was also using my own topology and UVs, so to break down the steps:

  • Use ZWrap to wrap the chosen VFace scan to my character's head
  • In xNormal, set my character's head as the bake low poly and the wrapped VFace as the high poly
  • Bake the base color and the displacement map (in two separate bakes) as the base textures
  • The textures are now mapped to my UVs! Make sure you bake as an EXR so you don't lose detail.

As a bonus step, you can split the channels of the displacement map, import them into ZBrush as displacement textures, and create a layer for each to gain more accurate control over the intensity of details.

The great thing about these displacement maps is that the R, G, and B channels control different frequencies of detail: R is the large forms, G is the mids, and B is all the skin pores, etc.

I chose to transfer the maps this way because I wanted to minimize the amount of lost information and keep the purest form of that data, which is the displacement textures that come with the VFace.
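To make the channel-splitting idea concrete, here is a minimal numpy sketch of what those frequency bands and ZBrush layer sliders amount to once the EXR is loaded as a float array. This is illustrative only; the function names are my own, and in practice you would load the VFace EXR with an image library and do the splitting in ZBrush layers rather than in code.

```python
import numpy as np

def split_frequency_bands(disp_rgb: np.ndarray):
    """Split an (H, W, 3) displacement map into its three frequency bands:
    R = large forms, G = mid detail, B = pores and fine skin detail."""
    return disp_rgb[..., 0], disp_rgb[..., 1], disp_rgb[..., 2]

def recombine(low, mid, high, w_low=1.0, w_mid=1.0, w_high=1.0):
    """Re-blend the bands with per-band weights, like ZBrush layer intensity sliders."""
    return w_low * low + w_mid * mid + w_high * high

# Tiny synthetic array standing in for a loaded displacement EXR:
disp = np.zeros((4, 4, 3), dtype=np.float32)
disp[..., 0] = 0.5   # strong large-form displacement
disp[..., 2] = 0.1   # subtle pore-level detail
low, mid, high = split_frequency_bands(disp)
combined = recombine(low, mid, high, w_high=0.5)  # halve the pore intensity
```

The point is that because the bands are stored separately, dialing pores up or down never disturbs the large forms.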

Afterward, the head is baked like a regular high poly in Marmoset.

The albedo I got from the scan was very close to what I needed, so editing was minimal beyond adding more sunspots and showing a little more age and fatigue in the skin.


The main blockout of the robe was made in Marvelous Designer; when working in Marvelous, I try my best to keep things simple. I have fallen into the trap of trying to add every tiny seam and make things perfect in Marvelous, but for this project, it wasn’t necessary.

For this project, I didn’t intend to make the whole body under the robes, so I made a quick sketch based on reference of Christopher Lee to try and get the body proportions close enough.

This gave me a base to work on in Marvelous Designer; it isn’t perfect, but I knew I would adjust the robe manually after simulating.


I recommend getting the cloth to a good spot with major folds and then adjusting the robe in ZBrush afterward.

The job of Marvelous Designer is to provide realistic fabric simulations, but we are artists, and we may want the fabric to sit slightly differently on the shoulders or have the sleeves in a slightly different position, so don’t spend too much time fighting with Marvelous; let your artistic sensibilities come into play.


Some small tips for working with Marvelous:

If you have fabric layered on top of other fabric, remember to freeze the bottom layers and create the next ones on top to stop the first layer from sliding.

Use the cloth layer function to help Marvelous understand which is the bottom layer and which is the top, etc. It should help reduce clipping.

On top of that, don’t be afraid to use the pinning tool to get things to stay in certain positions.

Use rough proxy models of elements and take advantage of morph targets to get effects like tension from a belt around the waist etc.

One thing you might notice when looking at the ZBrush sculpt of the robe is that it is fairly restrained: I didn’t sculpt a lot of micro folds, damage, etc.


I knew from the start of this project that I wanted to achieve a high level of detail scalability; by this I mean that even when looking at the clothing from very close, I wanted the textures to hold up.

So instead of trying to bake this detail down from the high poly model, I made heavy use of tiling textures.

Texturing with Tileables

The robe itself is super long and is a homogeneous piece of clothing. This doesn’t leave much room for splitting it into separate materials, so any texture map would have to be very large to accommodate a high level of detail.

To avoid having 8K textures or a large number of materials, I planned to use detail maps extensively.

This means I’m tiling a smaller texture across the surface in real-time in-engine to get more detail.
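Mechanically, the tiling itself is very simple; as a sketch (the function name is mine, and in UE this is just a TexCoord node multiplied by a scalar parameter), the detail map's UVs are the base UVs scaled up and wrapped back into the 0-1 range:

```python
import numpy as np

def detail_uv(uv: np.ndarray, tile: float) -> np.ndarray:
    """Scale and wrap UVs so a small detail texture repeats `tile` times
    across the surface instead of stretching one large texture over it."""
    return np.mod(uv * tile, 1.0)

# A point a quarter of the way across the base UVs, tiled 8 times,
# lands back at the start of a repeat of the detail texture:
print(detail_uv(np.array([0.25, 0.75]), 8.0))  # -> [0. 0.]
```

Because the detail texture only ever covers one repeat, its texel density stays constant no matter how large the garment is in UV space.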


I have to shout out Laura Gallagher’s (from the Outgang platform) Textile Generator tool, which was a huge time saver compared to creating all the weave patterns myself in Substance Designer.

I was able to leverage this to create my linens, etc., which in combination with some Substance Designer graphs of patterns for the embroidery allowed me to create these textures.

I also used a second UV set to allow me to tile the “lapel trim” on the robe so it didn’t lose detail. The fact that these are repeatable patterns makes this a really interesting way to texture.

These were also supported by base color, normal, and roughness maps for each material, which added more mid-frequency detail.

This way any blurry texture from the large UV coverage was hidden and blended.

The material graph in UE was quite a piece of work, and I’m sure tech artists are pulling their hair out right now at the optimization, but for practice on this project, it worked.


For a basic rundown of the material:

  • An RGBA ID map was used as a way of masking the different detail normal sets on the fabric
  • Each detail map came with three textures: normal, alpha, and height. The alpha here was used solely to mask where the silver embroidery would be.
  • The first detail map was blended onto the base normal and subsequently each normal was then blended on that result (but masked to their respective areas)
  • Each detail texture had its own properties, which were also blended in the same order.
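The rundown above can be sketched outside the node graph as a masked blend loop. This is a hedged illustration, not the actual material: the function names are mine, and I use the common "whiteout" normal blend where the real graph might use UE's angle-corrected blend node instead.

```python
import numpy as np

def blend_normals(base: np.ndarray, detail: np.ndarray) -> np.ndarray:
    """'Whiteout' blend of two (H, W, 3) tangent-space normals in [-1, 1]:
    sum the XY slopes, multiply Z, then renormalize."""
    n = np.stack([base[..., 0] + detail[..., 0],
                  base[..., 1] + detail[..., 1],
                  base[..., 2] * detail[..., 2]], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def apply_detail_sets(base, details, id_mask):
    """Blend each detail normal onto the running result, but only where its
    channel of the RGBA ID mask is on, mirroring the material's layer order."""
    result = base
    for i, detail in enumerate(details):
        blended = blend_normals(result, detail)
        m = id_mask[..., i:i + 1]          # this detail set's mask channel
        result = m * blended + (1.0 - m) * result
    return result

# Flat base normal, one flat detail set active everywhere:
flat = np.zeros((2, 2, 3)); flat[..., 2] = 1.0
detail = np.zeros((2, 2, 3)); detail[..., 2] = 1.0
mask = np.ones((2, 2, 4))
out = apply_detail_sets(flat, [detail], mask)
```

The order of the loop matters for the same reason it does in the graph: each set is blended onto the result of all previous sets, so later sets sit "on top".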

I gave myself the option to adjust the properties of the embroidery in the material editor, rather than having to export a new set of maps from Substance each time I wanted to make a change.

In this case, the details are so small that anything more than a single 0–1 value for parameters like roughness, metalness, etc. would have been a waste, hence why I didn’t create texture maps for these.

This is an example of what options each detail set had; the UV set 02 also had similar options.


Time Sinks

There are several elements when making personal projects that should be the last 10% of things to do but somehow take 90% of the time, either because they are lengthy or because they are just pretty distractions.


I’ve acquired a fair bit of experience in the creation of real-time hair over the years, and I’m hoping in the future to release a tutorial going over some of the fundamentals and tools, if I find the time.

To break down quickly what I did here, I used GS CurveTools to place my hair cards and XGen to generate my textures.


GS CurveTools is a great set of tools, but I recommend getting the BCM GS CurveTools helpers by Cristian-Marius Bulliarca.

This just allows us to use marking menus and in-viewport controllers to adjust the twist etc. of our cards.

Some edge cases make working with the curve tools a bit more difficult. The curves tend to spin when editing the roots, making very flat hair like this tricky. The distribution is also not based on the curvature of the card, so to get a smooth curve on the hair, we need a lot of divisions. You can fix this by adjusting the topology later.

I tend to make use of the bind curve, control curve, and soft select to control the curves with GS CurveTools.
This is essential when working on modern AAA-budgeted titles, as the cards become too many to manage individually.


Grouping is very important as well. On this asset, I use the tool’s layer system to group cards of similar density together, and I use regular groups in the Outliner to control the visibility of different sections.

This can change depending on the asset, however; sometimes I use the layer system to control different sections of the hair.


I used XGen for my cards; there are several tutorials on this, so I won’t go over it here, but it’s fairly simple.

XGen can look scary, but what we use is quite basic. I mostly just use the sculpt brushes to get the shapes I want.


I have tried using FiberShop, and the hand-drawn strands function is very cool, but the tool is quite unstable, and there is no way to manually tweak the procedural cards.

If this were improved upon, I could see myself using it more in a professional setting.


The Palantir’s place in the project ballooned, not because it was too difficult, but because I enjoyed myself trying to match how it looks in the movies.

The material is built in multiple layers to produce the final effect. The eye is masked using the camera vector so it is always looking at the viewer, and it is faded out around the edge of the sphere. This way, it looks as if it’s inside.

Then we have the fire, which uses a displacement texture and the Motion_4WayChaos function to help warp the effect.

I then have a separately controlled texture to mimic the clouds in the movie version when the Palantir is interacted with. This is simply a texture panned diagonally across the UV space of the sphere.
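The camera-vector masking of the eye boils down to one dot product. Here is a minimal numpy sketch of the idea, under my own assumptions (the function name and the sharpening exponent are illustrative; the real material computes this per pixel with the camera vector node):

```python
import numpy as np

def facing_mask(normal: np.ndarray, view_dir: np.ndarray, power: float = 2.0):
    """Opacity mask from N dot V, clamped to [0, 1] and sharpened by `power`:
    1.0 where the sphere faces the camera, fading to 0.0 at the silhouette,
    which makes the masked eye layer read as if it sits inside the sphere."""
    ndotv = np.clip(np.sum(normal * view_dir, axis=-1), 0.0, 1.0)
    return ndotv ** power

view = np.array([0.0, 0.0, 1.0])     # camera looking down +Z
centre = np.array([0.0, 0.0, 1.0])   # surface normal facing the camera
edge = np.array([1.0, 0.0, 0.0])     # surface normal at the silhouette
```

Because the mask is driven by the view direction rather than fixed UVs, the eye also follows the camera, which is exactly the "always looking at the viewer" behavior described above.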


For the renders, I used the MetaHuman lighting project provided by Epic on the Marketplace. This is a great base for getting some lights down, and from it, I created a bunch of variations and new scenes to get a variety of lighting scenarios.

For this, I would recommend experimenting, keeping the three-point lighting system in mind as a base.

Then, on top of that, try adjusting the focal points with key lights and fill lights. I definitely would not say I’m advanced when it comes to lighting; it’s an entire discipline of its own after all!

Don’t ignore your camera or post-process settings! Focal length and exposure can change the look of your renders.


This was by far one of my most enjoyable projects to work on; my love for the Lord Of The Rings trilogy carried me through it, and trying out loads of different techniques and new things is really what I enjoy most about personal projects.

I love to learn new things, and everywhere on this asset I experimented with new approaches.