Agent Kruger

Character Breakdown

Sten Chermain


Sten Chermain

Character Artist

Introduction

I am a self-taught 3D Character Artist from Reunion Island, France.

Over my career, working as a Senior on many AAA titles... wait. I am just a student with zero professional experience yet. Who should you listen to? The LEADS/AD! Not me, please. But I still wrote this novel for the most curious among you ;)

Project

For context, this is the artwork I will break down:

Also, let’s thank my mentors once more: Brad Myers, for all the advice, tips, and feedback he gave me throughout almost the entire process; and Vincent Menier, who gave me feedback during the final stretch of the project.

If you have any questions or think I should have done things differently, feel free to comment. I’d love to learn new ways too!

References & Inspiration

I decided to go with the ‘Earth’ version of Kruger (instead of the ‘Space’ version) because I like the concept of dirty, grounded sci-fi mixed with the military uniform (which I also consider sci-fi if you project yourself 1000 years into the past).

Earlier in my journey, I told one of my mentors that the reason I always picked ‘complex’ characters was because I thought it would help me get a job more easily. But this is absolutely not true.

The reason is simply that I love the feeling of ignorance or gaslighting that runs through my body when I look at a piece of human technology that goes FAR beyond my comprehension, like a rocket engine. If I don’t understand it, it’s magic, but if it’s real, it’s SCI-FI, baby!

00-REFS

Tools

  • ZBrush, Marvelous Designer, and Maya for modeling (95% of the modeling was done in ZBrush; only the pants and the corpse’s cloth were made in Marvelous Designer, with some hard-surface pieces in Maya (SubD))
  • Maya for Retopo and UVs
  • Substance Painter
  • Marmoset for baking
  • Unreal for the final render
  • Davinci Resolve for post-process
  • Photoshop for various tasks and post-process
  • …and yes, XGen Legacy

Goals

My goals for this project evolved a lot throughout the process. Originally, I simply wanted to make a faithful 3D “translation” of the Agent Kruger character from Elysium. I didn’t even have a real “quality benchmark” notion at the start.

The lord himself, Dmitry Bezrodniy, had a huge influence on me. This environment and the corpse wouldn’t be here if it wasn’t for the highly story-driven flavor he puts in his artwork, which completely changed the way I think of a “portfolio piece.”

F***! I don’t want to make portfolio stuff! I want to make ART; I want to create SOULS. I want to tell stories! I want to change the world, bro…

Along with the environment and the pose, my reference already had a spicy expression, but if it didn’t, I would have added one anyway because this plays a major role in the storytelling potential.

In my eyes, it’s the most important part of an artwork, not technique. Of course, the technique shouldn’t be so bad that it breaks the immersion by introducing an uncanny valley: things like unrealistic textures, bad anatomy, clipping geo, etc.

The story is the WHY, while the technique is the HOW. This very storytelling aspect will be the focus of my future projects.

To come back to the goals: the turning point of this project happened around June 2023 when I witnessed Sefki Ibrahim’s Pedro Pascal likeness made in Unreal.

This is the moment when my brain just clicked! All of a sudden, I was aiming for Callisto Protocol-level characters… this shift in mindset literally happened overnight! And that’s because I knew I had enough information up my sleeve to achieve this kind of quality!

The “achieving photorealism in Unreal” exercise

After this high moment, I needed to be honest with myself: I had never done anything like this before and didn’t know if that “information up my sleeve” would suffice. But I needed to see how far that knowledge could lead me.

So, I took an entire month and a half off the project to improve my facial sculpting skills and familiarize myself with all the variables involved in creating this kind of photorealistic portrait in Unreal Engine.

At first, I started by isolating the variables. I focused on the “rendering and lighting” variables initially, which meant no modeling, no texturing, and minimal “lookdev.”

To do this, I used scan head meshes from 3D Scan Store and Texture XYZ for the geometry and textures. I used Metahuman base mesh and materials to handle the lookdev side.

The process is actually pretty quick once you get the hang of it. Here are the results I achieved in Unreal:

Kruger’s face

After seeing those results, my dopamine was fired up! I was even more motivated! If I could pull off something like that for Kruger, it would be a home run!

The two remaining challenges were the modeling and the texturing.

The texturing, which later revealed itself to be extremely important, pushed the facial quality to a whole new level once properly harnessed with the Unreal material (more on that later).

My objective for this likeness was to do my absolute best to match this picture one-to-one:

01-MAIN_REF

So, I started by taking my neutral likeness model and began sculpting the expression using ZBrush’s Spotlight.

I matched my Spotlight reference to the millimeter (or so I thought), baked it, and then imported it into Unreal. I also applied the basic, unedited Texture XYZ scan texture, with the pores cavity map for the specular. Everything else was the native Metahuman head skin material (roughness and SSS).

Here is how the first iteration turned out:

02-START

Head material (Skin)

Before carrying on, I thought I’d give a quick explanation of the material I used.

So for the skin material, I started with the basic Metahuman head skin material (M_Head_Baked) and changed the nodes for the subsurface scattering and normal.

To understand the material node modification (for the normal nodes), you need to understand how I handled the normal map for the head:

For the pores, the highpoly’s subdivision level needs to be pushed to the highest level (7 in this case).

The consequence of this is that it takes a VERY long time to export, update in Marmoset, and then bake it (this mesh is about 25 million points).

So the shortcut I found in this situation was to bake the pores ONCE and bake the expression separately.

Baking the expression by itself didn’t require super high resolution as I had already baked the pores previously on a separate normal map.

So it only required 5 or 6 subdivision levels which was a lot faster to export and bake.

Once I had both maps baked, I would blend the two together in Substance Painter afterward. This also allowed me to increase the normal strength for the pores in very specific areas (and decrease it in others). Same thing for the expression normal map.

But the real strength of this process isn’t in the control it gives over the normal intensity…

This method made the whole iteration process a LOT smoother! Can you guess how?

*Note: for the pores, I also baked the cavity map.

Let’s come back to our modified Metahuman material, which had custom normal and subsurface scattering nodes.

For the normal, the change I made was basically aimed at making the iteration process much quicker.

Instead of having to go into Substance Painter to mix the pore and expression maps each time I modified the high-poly expression, I thought: “wouldn’t it be nice if I just had to refresh the map in Unreal with one click?!”

And so this is basically what I made: I created an alternative “debug” mode which blended the base pore normal map with the expression normal map.

Of course, this was not meant to be used in the final render; it only served as a preview.

But when you are trying to hit a likeness down to the pores and wrinkles, which requires hundreds of back-and-forths between four pieces of software, this trick was a divine gift!
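If it helps to picture what that “debug” blend boils down to, here is a rough NumPy sketch (this is not my actual node graph; the idea is a UDN-style blend of two tangent-space normal maps, and the file names are placeholders):

```python
import numpy as np
from PIL import Image

def load_normal(path):
    """Load an 8-bit normal map and remap it from [0, 1] to [-1, 1]."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    return img * 2.0 - 1.0

def blend_normals_udn(base, detail, strength=1.0):
    """UDN-style blend: add the detail's XY slopes onto the base, keep the base Z, renormalize."""
    out = base.copy()
    out[..., :2] += detail[..., :2] * strength
    return out / np.linalg.norm(out, axis=-1, keepdims=True)

# Placeholder file names: the pore map is baked once from the subdiv-7 mesh,
# the expression map is rebaked from subdiv 5-6 at every iteration.
pores = load_normal("pores_normal.png")
expression = load_normal("expression_normal.png")
blended = blend_normals_udn(pores, expression)

Image.fromarray(np.uint8((blended * 0.5 + 0.5) * 255)).save("head_normal_preview.png")
```

In the material itself, the equivalent is just a few vector operations on the two sampled maps, which is exactly why refreshing the expression bake became a one-click operation.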

Now, for the subsurface scattering, the original Metahuman material only had a single parameter.

The customizations I made brought more control over the SSS. These modifications enabled the use of an SSS map with basic functions like contrast adjustment and changing the minimum and maximum values of the map.

Additionally, I incorporated an area map (painted in Substance Painter) that allowed me to locally tweak the SSS intensity for the eyes, nose, mouth, and ears areas.
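To make the idea concrete, here is a small sketch of the math those extra SSS controls boil down to (the parameter names and values are mine, for illustration, not the actual node names):

```python
import numpy as np

def remap_sss(sss_map, area_mask, sss_min=0.1, sss_max=1.0, contrast=1.2, area_strength=0.5):
    """Contrast the SSS map around a 0.5 pivot, remap it to [sss_min, sss_max],
    then locally boost/attenuate it with a painted area mask (eyes, nose, mouth, ears)."""
    sss = np.clip((sss_map - 0.5) * contrast + 0.5, 0.0, 1.0)
    sss = sss_min + sss * (sss_max - sss_min)
    return np.clip(sss * (1.0 + area_mask * area_strength), 0.0, 1.0)
```

The output simply feeds whatever drives the subsurface amount in the material; the point is that the SSS stops being a single hard-coded value.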

A good tutorial I used to get into material editing in Unreal was “Unreal Master Material for Skin” by Nick Rutlinh.

For the eyeballs, lacrimal fluid mesh, and eye ambient occlusion mesh, I used the base Metahuman material (I did not change the nodes there, but I changed the values of the parameters of course).

Hundreds and hundreds of iterations later…

I grinded and grinded again… and finally, I started getting somewhere!

At the beginning, I was scared to “change the scan texture” because “it’s a scan, so it’s perfect, and I can only make it worse by hand painting it.” Needless to say, I had to get my hands dirty and break that limiting belief.

We need to be ready to change it drastically to make it fit the vision and achieve the goal. Or, in other words: We are here to make a massacre out of this skin! And that’s what I did.

The areas that required the most work were the eyes and the wrinkles around the metal implants.

Aside from the base color and normal map, the SSS, roughness, and specular maps played a big role in the wrinkles. Dropping the SSS value for the wrinkles’ cavities worked significantly better. I also used a lower SSS value for dirt spots and dried skin.

Anyway, you can see what I did in the buffer views of the face I shared below.

05-CHANNELS

I didn’t use procedurals for the face texture; everything was hand-painted or stencil-painted on top of the scan base.

Another challenge I encountered was that the base scan didn’t have interesting pores on the eye bags, so I ended up sculpting the skin pores around the eyes and on the wrinkles.

Then I hand-painted the base color variations for each of these pores to match the reference.

For the metal implant, I had to increase the subdivision level on the head to ensure the skin transition maintained an organic look.

Improvements

After looking at this comparison GIF, I actually think I shouldn’t have subdivided it once more… The change in the skin transition around the metal implant isn’t worth the extra subdivision level, in my opinion… OK, enough mourning.

Lighting

Let’s talk about lighting now.

One important thing about the lights is that 90% of the lights described below are only active on the face channel and do not affect the body/outfit channel (except for the 2 main directional lights).

Main default: the main directional light for the entire scene, so the main light for the body and environment as well.

DirectionalLight8: the “secondary main light” which was designed to create a very specific highlight on the nose that I couldn’t get with the main directional light.

I also had it affecting the body and environment because it created interesting highlights.

From the experiments I ran, rect lights are generally better at creating sharp and precise reflections/highlights on the skin than directional lights and spotlights.

  • Rect6: Main reflection for the right eye
  • Rect7: General underlit fix
  • Rect4: Left eye underlight fix
  • Rect13: Forehead slight brightness increase
  • Rect12: “Ambient” eyeball reflection (separate channel)
  • Rect10: “Hard” eyeball reflection (separate channel)
  • DL_Nose: Main reflection for the nose and frontal region/temporal ridge.
  • SpotLight4: General light for the right side
  • Directional9: Light from underneath to eliminate some unwanted shadows

Eye reflections

Ah! And something else I wanted to talk about: creating specific reflections on the eyeballs.

To achieve this, I created two rect lights: one “ambient light” and one “hard light.” I then switched the eyeballs and those two lights to another dedicated channel so those lights could only affect the eyeballs.

One important detail is that these rect lights had a texture plugged in. The ambient light used a basic HDRI texture, while the hard light used a custom Photoshop doodle I made to land those harsh reflections exactly where they needed to be.

And yes, dozens of iterations were dedicated to ensuring the reflection shapes on the eyeballs matched the reflection in the reference picture.

Skin reflection

Sometimes the reflection on the skin just didn’t want to cooperate, despite my light source perfectly facing the surface. In those instances, I changed the roughness and specularity to “force” the reflections.

I’m not 100% sure if this is the right way to handle this situation, but that’s how I handled it here and still wanted to share my method.

Why both specularity and roughness, you may ask? From the test iterations I conducted, I preferred the result when both channels were employed rather than just roughness.

However, be careful with this method; you don’t want the spec/roughness to dictate the shape of the highlight, as this would give a wet look. Only the normal map and light should drive the highlight shape. The spec/roughness should only help it.

Hair

For the hair, I prioritized quality over methodology. Look, I am not a hair specialist, so I didn’t bother and went with an Alembic groom instead of hair cards.

It is A LOOOOOOOOOOT easier to achieve better quality with splines.

So, when it comes to the XGen descriptions:

  • For the beard I used 3: one for the main “body” of the beard, one for the whiter hairs, and one dedicated only for the hair underneath the mouth.
  • For the hair, I ended up using 5 descriptions: one for the side + back, one for the top part, one focused on the front part, one for details, and one for strays.
  • And then the standards: 1 description for fuzz, 1 for eyelashes, and 1 for eyebrows.

I followed a single tutorial for making the hair in XGen (even though it’s not really a tutorial).

One challenge I faced was matching the reference’s transition between the skin and the implantation of the beard hairs.

As you can see on the map, the skin under the beard has a darker tone; this helped the transition but also served as contact ambient occlusion.

But the magic started happening when I began modifying the Metahuman hair material. I added a gradient feature to the material, which allowed me to change the base color of the hair roots.

Then I simply tried my best to have the root color match the skin underneath.

11-BEARD_ROOT00-1
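For the curious, the gradient itself is nothing fancy. Here is a rough sketch of the logic (names and values are placeholders; ‘t’ stands for the normalized root-to-tip position along the strand, which the groom material can provide):

```python
import numpy as np

def smoothstep(edge0, edge1, x):
    t = np.clip((x - edge0) / (edge1 - edge0), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def hair_color(t, base_color, root_color, root_length=0.25):
    """Blend root_color into base_color over the first `root_length` of the strand."""
    w = smoothstep(0.0, root_length, t)      # 0 at the root, 1 past root_length
    return root_color + (base_color - root_color) * w[..., None]

t = np.linspace(0.0, 1.0, 8)                 # samples along one strand, root to tip
base_color = np.array([0.08, 0.05, 0.03])    # dark brown strand color
root_color = np.array([0.35, 0.22, 0.16])    # matched to the skin tone underneath
print(hair_color(t, base_color, root_color))
```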

Face

To summarize, the process felt like a fight on 9 different battlefronts simultaneously, namely:

  • Normal
  • Lowpoly
  • Albedo
  • Roughness
  • Specular
  • SSS
  • Hair
  • Lights
  • Post-process in engine

Below I have attached the video retracing the progression for the making of the face. Those are all high-resolution screenshots I took during the process. This pretty much retraces 95% of the whole process.

ZBrush Likeness

Just wanted to sneak in this tip before moving on: for the likeness in ZBrush, try to find out the focal length the camera used and whether the head was in the middle of the frame when the shot was taken (or if the picture is a crop).

Because if it is a crop, there could be some distortion affecting the shape of the face, depending on the focal length used.

(The lower the focal length, the more distortion there will be near the edges of the frame.)

I am saying this because this is actually a mistake I made in this very project, and it is why there is this Jamie Lannister look; the cheekbones and the cheeks are far too prominent.

There is (in my opinion) a distortion affecting my reference picture, which I believe is the root cause of this issue, and I failed to address it while doing the ZBrush sculpt.

The nose is off as well, but the enlarged nose wing was done on purpose here; I just preferred the expression when the nose wing was enlarged this way. (In this case, I sacrificed a bit of likeness for a more pronounced expression.)

Another Tip

Instead of Unreal, my final “viewport” for the face close-up was Photoshop.

After most iterations, I would send the screenshot to Photoshop, where I made sure all the changes I had made matched the reference.

I actually also did this for the “main bust shot,” which was a whole other story… more on that later.

Alright, now that the face has been covered, let’s move on to the outfit.

Modelling

As I said earlier, the majority was done in ZBrush (most of the hard surface and the anatomy).
But what I want to emphasize in this section is the cloth. The shirt was made from a scan that I cleaned.

Thanks to David Shrivers for providing the base scan.

So the process is quite straightforward:

Consider the base scan as a sketch sculpt and bring that sketch to a clean high poly stage by sculpting the folds. That’s it.
The base scan already has the primary shapes blocked in.

And depending on the quality of the scan, the amount of secondary shapes and tertiary shapes will vary.

Cloth sculpting

A brief explanation of the MEAT of this practice could be this (this is my personal way of thinking about it, not something I saw in any tutorial or course):

There are 2 aspects to creating realistic folds (as I call them): blockout and rendering.

First, what do I mean by blockout?

In this context, I don’t think in terms of primary, secondary, and tertiary shapes because the lines between those can get blurry at times.
Primaries sometimes flow and morph into secondaries. And some shapes start as secondaries and become tertiaries.

Here, the blockout means the “composition.”

Not at the scale of the whole character, rather at the scale of the folds themselves. It’s about how the folds realistically “fit” into and around each other.

Second, the rendering.

In this context, I am not referring to rendering in Unreal or Arnold.
By rendering I mean the polishing of the SHADING/CURVES. Ultimately, polishing those to a “high quality scan” state.

That’s the stage where you take care of all the gradients of the cloth.
You make sure that those gradients match the behaviors of the gradients of the material reference.

If we think of the blockout as the armature, the rendering is what you dress the armature with.

Scan Cleaning

Besides the scan itself, I had 2 other resources:

First, the pictures of the scanned object AT THE MOMENT of the scan.

This helped a lot for completing the blockout where the scan didn’t come out well, not so much for rendering though because the texture and lighting can hinder the perception of the gradients.

Second, a high-quality 3D scan of a similar material (from 3D Scan Store).

This resource helps tremendously for the rendering aspect.

And after looking at this sort of untextured reference for a very long time, you will start to see the photos with a gray shaded mesh filter in your eyes.

So you become able to read the references even better afterward. You become able to extract gradient data out of a textured photo.

A trick I discovered later:

Using Marvelous Designer garments as a reference for the “rendering” aspect can also help, but make sure you are using the same material as the scan you are cleaning.

I would advise against using this for “blockout” references, though. Let’s move on.

As you can see below, I recorded the steps while cleaning the scan:

13-SCANCLEANING-1

It’s just like the fill-in-the-blank exercises we used to do back in elementary school, but here the words are replaced by brush strokes.

The difference is that you need to extract the data from the references (with observation) to fill in the blanks.

Marvelous Designer

After completing the scan cleanup, my visual library was flooded with realistically rhythmed references, and I needed to express those in another cloth tool, Marvelous Designer (as I did not have a scan for the pants).

I found a technique to “enforce” a more realistic rhythm in the folds and get rid of MD’s bias toward the “laboratory-grown folds” look (as I call it).

This technique relies 100% on pins and moving those during simulation until you get something decent.

I am not completely sure about this, but pins seem to preserve 100% of the shapes, so I prefer them over freezing or deactivation, as I have had enough bad experiences with the freeze function still smoothing my shapes after a couple of simulations.

Export Process

After generating the folds in Marvelous (up to a high resolution, i.e., a low particle distance), I bring everything back to ZBrush.

Oh! And how the hell do I export from MD to ZB? Here is my process:

  1. For the first step, we want to export 2 meshes, HIGH and LOW, both from Marvelous (settings: single object, weld). Always start with the high with this technique. The high is simply your final garment at a low particle distance. THEN we do the low. I recommend saving your file here before moving on. For the low, you need to change from triangulated to quads, then raise your particle distance to something much higher (this depends on the scale of your scene, but if you have a “normal”-sized scene, 15-20 should do it).
  2. Import both meshes in ZBrush and auto-group with UV.
  3. Split non-manifold geometry or, if not possible, remove sewing in MD. (Optional: ZRemesh becomes viable after this step, but I found that sticking to the original low topology always gives better results.)
  4. Then we want to “polish” the high: subdivide, then polish (there is no particular number here; judge how strongly you need to polish by eye), and project the details of the high onto the low: subdivide and project all. Make sure you project, then subdivide once, project, then subdivide once, and so forth until you have enough resolution. Once you are done projecting, I recommend subdividing 1 or 2 more times (without reprojecting) so you can add some tertiary detail (final subdivision level count: 5-6).

After this step, you can throw away the high-res model from Marvelous.

Then, if you want to add thickness: delete the subdivision levels and use Group Loops + pull in with the Move brush, or Panel Loops. Careful here, though: this won’t let you reconstruct more than 4 levels.

If you wish to go up to 5 or 6 subdivisions, you need to add those last 2 after creating the thickness. DO NOT reproject the high-poly after creating the thickness because it will just kill the rims.

Unwrap the polygroups, and you’re ready to sculpt!

ZBrush Finishing

In ZBrush, I will add the stitches, some noise, some contrast, and some memory folds with alphas (if they are present in the references). I do not add the fabric details at this stage.

At times, I may also tweak the folds in some areas to make them fit my vision when I think it will improve the composition.

That’s it for the modeling of cloth.

16-MATERIALS01

UVs

22 materials – So how did this end up with 22 materials? I organized the texture sets following 2 rules.

First Rule: “Material Types”

This character is composed of lots of different elements ranging from skin, cloth, metal, screens, and masked parts such as the fans. In Unreal Engine, each of those elements requires different shading models and sometimes different blend modes.

So the first rule I followed was to keep elements of different “groups” in separate texture sets (one group for cloth, one for metal/plastic (default lit), one for skin, etc.).

The second rule: once the groups have been sorted, the next mission is to end up with the fewest texture sets per group.

Texturing

I don’t think of texturing as one isolated process. I will often refer to it as “texture/lookdev” because, at the end of the day, the textures are meant to work hand in hand with the material network they are plugged into.

I think of textures as “mask painting” that will tell the engine how the light should behave once it hits a given surface.

What this also means is that the Substance Painter viewport cannot be trusted. Or, as some other people say: you are working in the dark, because you are not viewing the texture with your UE lighting setup or with your specific materials.

The Substance Painter viewport is an arbitrary viewport.

So, a mandatory habit I developed is to regularly check in Unreal whether the changes I am making to the map are getting me closer to my goal (or not). Plus: does it look better or worse than what I had before?

When I feel like I made a big change to the texture, instead of overwriting the previous map, I will sometimes keep that old map so I can compare it with the new one.

However, I want to mention that I use Tomoko Studio as the environment (in Substance). I think it helps to assess the quality of the texture when you are not looking at it in Unreal.

Another tip for texturing: I heavily rely on stencil-based paint and stencil-based procedurals. By stencil-based procedurals, I mean anchor-based layers.

It will probably be easier to understand with the video:

18-SP_STENCIL

I still use procedurals, though, but the stencil layers really are the star of the show because they introduce this realistic “rhythm” into the texture. The procedural layers are much better suited to creating “ambient” masks.

If I want “wiped dirt” at very specific places or if I am making edge wear, I just paint it. I am not going to try hard to make it all fit with some ambient grunge.

That’s a bad habit I had in the past.

I tried to do it all with procedurals, and I ended up with 30 modifiers on a single layer. Not only did this make it very hard to keep track of what was going on, but it also turned Substance Painter into a lagging hell!

So keeping things simple, named, and straight to the point was a definitive mantra I held during the texturing stage.

Material Lookdev

The metal, alpha, and screen materials are straightforward.

You can see that for the non-cloth material, I multiplied the base color with the ambient occlusion.

This is because, in Lumen, the ambient occlusion map isn’t being used, and you need to uncheck “allow static lighting” and set “global illumination” to “screen space” if you want the engine to show it.

So, multiplying the albedo with AO is a way to have it showing in Lumen (Note: the result will not be as good as the static lighting method).

19-AOEDALB
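If you prefer baking it into the texture instead of doing it in the material (in the material it is literally one Multiply node), the offline version is as trivial as it sounds (file names are placeholders):

```python
import numpy as np
from PIL import Image

albedo = np.asarray(Image.open("basecolor.png").convert("RGB"), dtype=np.float32) / 255.0
ao = np.asarray(Image.open("ao.png").convert("L"), dtype=np.float32) / 255.0

# Darken the base color in occluded areas; the in-material version is the same multiply.
aoed_albedo = albedo * ao[..., None]
Image.fromarray(np.uint8(aoed_albedo * 255)).save("basecolor_ao.png")
```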

The material that I want to talk about here is the cloth material (for the outfit).

There were cases where I had some non-cloth parts in a cloth material (either because the non-cloth part was directly merged in the geometry or because I couldn’t fit it on a non-cloth texture set for lack of space).

So, instead of plugging in a new “cloth mask map,” I generated it from the roughness map. As the cloth had a significantly higher roughness value than the non-cloth parts it was merged with, it was easy to separate them using a contrast function in the nodes.
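Sketched out, the mask generation is just a steep contrast around a roughness threshold (the threshold values here are made up for illustration):

```python
import numpy as np

def cloth_mask(roughness, threshold=0.6, hardness=20.0):
    """~1 where the roughness is above the threshold (cloth), ~0 for the shinier
    metal/plastic parts merged into the same texture set."""
    return np.clip((roughness - threshold) * hardness + 0.5, 0.0, 1.0)
```

In the material, that is just a Subtract, a Multiply, and a Clamp, and the resulting mask can then gate the cloth-specific behavior (fresnel, detail, etc.).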

Another important factor was the fresnel function. This really made a significant difference.

You can also see that there is a normal detail section. This wasn’t used in the final render because the plain 4K maps were obviously better, but when the map resolution is lowered to 2K or 1K, it makes a big difference.

On top of the normal detail, I added an “albedo detail,” which is simply the alpha version of the same normal detail texture. This texture serves as a mask for mixing a solid color into the cavities.

One very important thing when using these is to make sure you use the same alpha as in Substance Painter and that you exactly match the position, rotation, and size of the tiles.

This is important to have the normal detail be synchronized with the albedo and roughness; otherwise, it creates a mess and everything turns into chaos.
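As a sketch, the “albedo detail” mix is just a masked lerp toward a solid cavity color, driven by the tiled detail alpha (same alpha and same tiling transform as in Substance Painter; the names and values here are placeholders):

```python
import numpy as np

def apply_albedo_detail(albedo, detail_alpha, cavity_color, intensity=0.5):
    """Tint the albedo inside the fabric cavities picked out by the tiled detail alpha."""
    w = np.clip(detail_alpha * intensity, 0.0, 1.0)[..., None]
    return albedo * (1.0 - w) + np.asarray(cavity_color, dtype=np.float32) * w
```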

For the arm skin, I used Saurabh Jethani’s skin material – with the AO multiplied into the albedo.

Pose

For the pose, this time I went with Character Creator 4 instead of Mixamo (under my mentor’s recommendation).

Character Creator’s guided auto-rigging was a huge step up compared to Mixamo’s auto-rig because you can tell the software which parts of your model should be treated as non-deformable and which parts should deform.

https://www.youtube.com/shorts/LFUXhyjSMdc?feature=share

23-CC4

That being said, the time spent in CC4 only represents 10% of the total time spent on the pose. The remaining 90% was spent on manual cleanup in ZBrush.

The purpose of the cleanup is basically to make sure there is no overlap or clipping geometry, to unstretch some UVs, and to pose the straps.

This round trip can cause some issues if you have smoothing groups.

As you know, ZBrush wipes out all smoothing groups when importing (also all materials!). What this means is that you have to reconstruct the smoothing groups each time you make a pose iteration in ZBrush (and reassign the materials).

Fortunately, there is a way to automatically reconstruct the smoothing groups and not lose the material attribution.

Important: This method uses vertex order, so you must not merge or split the original subtools in ZBrush.

Here is the method:

  1. Open two Maya instances. In the first instance, import the “uncleaned” mesh with the smoothing groups. In the second instance, import the cleaned version exported from ZBrush, which has no smoothing groups.
  2. Copy the part you want to restore (from the second instance) and paste it into the first instance (where you have the smoothing groups). First, select the mesh from ZBrush, then shift-click the other one, and run a Transfer Attributes with the options depicted below. Important: make sure you freeze transforms, reset transforms, and delete history before running this operation.
test_03

This will basically make the smoothing-group mesh’s shape match the ZBrush mesh.

Tip: For this method, it is preferable to never overwrite the unclean FBX with the cleaned FBX when exporting from ZBrush. I advise overwriting the original uncleaned FBX only once you have restored the smoothing groups and material for the cleaned FBX.
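The Transfer Attributes options do the heavy lifting here, but since the whole method hinges on matching vertex order, the same idea can be scripted. Here is a minimal Maya Python sketch of it (mesh names are placeholders; it copies positions by vertex index instead of going through the Transfer Attributes dialog):

```python
import maya.cmds as cmds

def copy_positions_by_vertex_order(source, target):
    """Move every vertex of `target` to the position of the same-index vertex on `source`."""
    count_src = cmds.polyEvaluate(source, vertex=True)
    count_tgt = cmds.polyEvaluate(target, vertex=True)
    if count_src != count_tgt:
        raise RuntimeError("Vertex counts differ - the subtool was probably merged or split.")
    for i in range(count_src):
        pos = cmds.xform(f"{source}.vtx[{i}]", query=True, worldSpace=True, translation=True)
        cmds.xform(f"{target}.vtx[{i}]", worldSpace=True, translation=pos)

# source = cleaned mesh exported from ZBrush, target = original mesh with smoothing groups and materials
copy_positions_by_vertex_order("kruger_vest_posed", "kruger_vest_smoothed")
```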

Environment

A quick word on the environment: the important thing to keep in mind when adding an environment is not to let it draw the viewer’s attention away from the character.

Adding an environment can be risky but can also build more immersion.

Rendering

Things started getting really challenging there.

All the work I did for the face worked fine in the viewport, but everything started falling apart when I rendered with the Movie Render Queue. The light was different, and there were shadows popping up from nowhere.

The skin looked different. It felt like all the time I spent fine-tuning the face went out the window. I started panicking!

Then I made a decision: I will “render” the face with high-resolution screenshots separately from the body and render the body with Movie Render Queue. I will then merge the face renders and the body renders together in post.

26-RENDERFACE02-1

One other mention: I used an ACES workflow for this project, with OCIO for the color output.

I was making sure to introduce the depth of field “within the character depth.” Sometimes, I would also have 2 areas of depth of field: one before and one after the focus area.

This helps create a sense of depth and avoids the “toy look” or “miniature look” we sometimes see in people’s work, which is a massive immersion killer.

Post-Process and “Fine Tuning”

Once I had my ACES renders ready, I brought them into DaVinci Resolve, where I did very slight grading adjustments. Then I brought everything into Photoshop for the home stretch.

In Photoshop, the first part of the work I did was basically mixing some renders. I had one render with AO, one without AO, one with the environment FX, and one render for the background sky.

Then I merged the face renders.

After this, the remaining work was about adjusting the composition and adding some noise.
Having alternative lighting renders also helped fine-tune the composition.

Outro

To wrap this up, I will say first: I hope this wasn’t too boring of a read…

And second: practice and constantly raising my standards made me go through many “evolutionary” stages, thinking hard and experimenting with all sorts of weird solutions.

Dealing with new challenges popping up every day that I clearly thought I wouldn’t be able to overcome… but did.

Like the beard root transition, the wrinkles, the eye reflections, the skin-specific reflections, the front hair, the chin hairs, the freaking eye-bag pores, the metal implant transition, and the SSS geometry.

And that’s only for the head… don’t get me started on the cloth, the environment/FX, the pose/composition, the ACES workflow, or the final image finishing touches in Photoshop…

As a last word… thanks for reading this far! Your hippocampus size just doubled, my friend!