After publishing my work, I observed that it garnered a considerable amount of attention from budding artists.
In this presentation, I’ll shift my focus from detailing the entire model creation process—which isn’t significantly different from what’s available in free tutorials or other articles—to explaining the logic behind certain decisions I made during the process, as well as addressing challenges.
Evolution of the Project
The phrase “Transform your daily routine into Nodes,” said by Daniel Tiger during a Substance Day talk (a stellar artist whom I highly recommend getting acquainted with if you haven’t already), struck a chord with me.
This notion, which emerged around 2018, prompted me to meticulously dissect my workflow, identifying recurring actions that could be converted into pre-made materials.
By avoiding the task of creating materials from the ground up, you can devote more time to refining aspects requiring a holistic approach—an undeniable advantage. This led me to the creation of a comprehensive texturing material.
However, while it rendered impressive detail on larger objects, it didn’t resonate as effectively with smaller ones; the intricate details were often lost. This challenge birthed the idea for this project.
My criteria for the object: a paint-to-bare-metal ratio similar to that of vehicles, a moderate assortment of shapes, and a comparable range of materials.
Referencing always comes first
Initially, my journey was about seeking references. I strive to determine the object’s full name. If fortune smiles upon me and I find a factory blueprint, it’s a win. If not, the hunt continues through various channels, commencing with the actual name and eventually relying on visual descriptions.
Pinterest, with its exceptional search algorithm, is instrumental in fetching visually engaging images. In my case, the two most beneficial platforms were the ones selling these devices.
Often, the images provided were incomplete or, as is typical, distorted. At one point, I even approached the support team of one of these platforms in hopes of acquiring better visuals. My request seemed to go unnoticed, leading to some amusement on my end.
A cardinal principle I adhere to is gathering images from all possible angles. Even when a single angle seems sufficient, multiple perspectives invariably enrich the perception of shape.
The human mind is inherently lazy, often assuring you that your creation is impeccable. Such confidence is frequently misplaced.
Selecting Visual Guidance
An essential facet in sourcing photographs that’ll alleviate your texturing woes is obtaining visual guidance. The ideal scenario is stumbling upon pictures of the exact object you’re modeling that already look appealing.
The challenge lies in striking a balance across all elements like paint, damage, oxidation, rust, and dirt. Transposing the visual state of one item onto another is commendable but comes with its own set of challenges.
For example, if the object you admire frequently contacts the ground and yours doesn’t, you’d have to extrapolate from the reference only those elements influenced by ground contact.
To simplify: if you’re a newcomer aiming for a high-caliber asset and you’re struggling to find attractive images of your subject, it might be worth considering a different object.
For compiling reference boards, I use PureRef. Here’s a glimpse into its organizational structure:
I advocate categorizing your images by varying angles to reduce search time. If your visual guide isn’t precisely of your item, create a distinct section for it.
The central theme is to maintain a clear foundational modeling structure. Overloading with excessive photos can be distracting.
Refining Forms Using Photos
The first step I take before delving into modeling, once I’ve gathered all the necessary references, is outlining the form based on the photos. Any tool will do. I do the detailed processing in Photoshop.
If I need to quickly understand something on the spot or grasp proportional relationships, I use Lightshot tools. Here are a couple of examples:
Deriving Proportions in Blockouts
Typically, I commence the blockout process either from the largest or the most complex form. I outline all the major and intermediate forms. I consider the blockout phase completed when all the fundamental lines are properly positioned.
To achieve this, you can use Photoshop and overlay a screenshot of the wireframe onto the reference photo from the correct perspective angle. To determine the perspective angle, you can use a free tool like fSpy or the perspective tool within Photoshop.
There are tutorials available on YouTube for this, as I recall.
Approach to Modeling and Scene Structuring, from High-Poly to Low-Poly
I can immerse myself in my work rather successfully, but at times, almost unconsciously, I find myself wandering around the object. Just rotating it and viewing it from different angles.
This isn’t inherently bad; it aids in revisiting parts that have already been worked on. However, it also increases the chance of leaving errors behind when I take these detours. Over time, I’ve learned to give myself micro-tasks within the object.
For instance, “Now I need to complete this particular part entirely.” This way, I set boundaries for my attention. “Let’s get this section right, and then we can take a short break.” This practice helps maintain focus and ensures that each component is given due attention before moving on.
It’s a strategy that prevents overlooking errors caused by distractions during the modeling process.
In essence, my modeling process doesn’t vastly differ from what you can find on the internet, but as a game dev artist, I’ve developed a specific logic for transitioning from one modeling stage to another. Two to three years ago, I switched from Maya to Blender, and in the latter, this logic gained even more significance.
At its core, it goes like this: Mid-Poly – High-Poly – Low-Poly/UV.
In the mid-poly stage, I bring out all the forms in their detailed versions. All necessary Booleans are present, but wires aren’t converted from curves yet. Complex geometry transitions are simplified, areas aren’t merged, and bevels are virtual (in Blender).
One crucial detail: since I almost always work within a specific polygon count, even at this stage, the geometric forms have the same number of edges as the final low-poly (LP) version. I see no point in doing the work multiple times.
For example, if the object requires 18 edges on a cylinder and it looks satisfactory with proper smoothing, there’s little sense in using 32 subdivisions for the high-poly (HP) version, only to optimize it down to 18 for LP.
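The reasoning behind keeping 18 edges can be checked with a bit of geometry: the worst-case gap between a regular n-gon and its circumscribed circle is 1 − cos(π/n) of the radius. A minimal sketch (pure Python, numbers are just the two edge counts mentioned above):

```python
import math

def ngon_deviation(n: int) -> float:
    """Maximum gap between a regular n-gon's edge midpoint and the
    circumscribed circle, as a fraction of the radius: 1 - cos(pi/n)."""
    return 1.0 - math.cos(math.pi / n)

for n in (18, 32):
    print(f"{n} edges: deviates by {ngon_deviation(n) * 100:.2f}% of the radius")
```

An 18-sided cylinder deviates by about 1.5% of the radius, a 32-sided one by about 0.5%; with smooth shading both read as round at typical game-asset viewing distances, which is why optimizing 32 down to 18 later is wasted work.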
Of course, there are various scenarios, but based on my experience, a significant portion of geometry can be modeled in a similar manner.
After migrating to Blender, the HP and LP/UV processes sometimes occur almost simultaneously.
Following the logic described above, the only difference between LP and HP is a few modifiers and UV unwrapping.
Here’s an example:
I make an effort to create UVs even during the mid-poly stage, once I’m satisfied with the form. There are instances where UVs might need adjustments due to fixes, but the majority of the UVs typically remain intact.
Some minor relaxation might be necessary, but it’s still quicker than starting the UV process from scratch. Another advantage of having UVs in place is that they can serve as convenient selection tools during the high-poly stage.
I followed the same logic when modeling the periscope. I first established the mid-poly version and made additional form fixes (sometimes shapes become clearer when minor details are present, which is why some artists begin modeling from detailing).
Then, I refined the fine details (cutouts, steps, bolts), created UVs, and saved the complete mid-poly model.
I proceeded to anchor the support edges and address complex geometry transitions, then saved it as the high-poly version.
For the periscope’s workflow, I reopened the mid-poly version and imported the high-poly model as a reference (in Maya, you can place it in a layer and adjust the Transparency setting).
While refining the model to the high-poly stage, you’ll likely make adjustments along the way. By keeping the high-poly version in the scene with the low-poly model, you’re less likely to overlook anything during the refinement process.
I optimized the LP following game-asset principles, although I’m not sure it was necessary in this specific case.
The sealant proved to be an intriguing component of the process. Blender came to my aid in this aspect.
Unfortunately, the file with the stages didn’t get saved, so I’ll demonstrate it in a more schematic manner.
As this project was meant to be an experiment in micro surface detailing, I embarked on finding the optimal texel density. To achieve this, I allocated 8k alpha to the object and scaled the largest shells until I attained a satisfactory level of texture detail.
In my case, this turned out to be around 8500 pixels per meter (ppm). However, due to how the texture was displayed in Maya’s viewport, it seemed a tad excessive.
Yes, I had the capacity to paint even the tiniest scratches, but these might not be notably visible in the final render. It’s likely that a density of 6500-7000 ppm could have sufficed.
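The texel-density measurement above boils down to simple arithmetic: pixels of texture per meter of surface. A minimal sketch (the shell width of 0.24 m and the half-tile coverage are hypothetical numbers chosen so the result lands near the 8500 px/m mentioned above):

```python
def texel_density(texture_px: int, uv_coverage: float, world_size_m: float) -> float:
    """Pixels per meter along one axis: texture pixels spanned by the shell,
    divided by the real-world size of the surface it covers.
    uv_coverage is the fraction of the UV tile's width the shell occupies (0..1)."""
    return texture_px * uv_coverage / world_size_m

# Hypothetical shell: 0.24 m wide, spanning half the width of a 4096 px tile
print(texel_density(4096, 0.5, 0.24))  # ~8533 px/m
```

Scaling the largest shells and re-measuring, as described above, is just iterating this ratio until the detail level looks right in the viewport.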
UV Unwrapping, Baking, and Microsurface Normal Maps
My approach to UV unwrapping involves starting with the largest shell and then attaching adjacent sides to it. This manual approach lets me ensure there are no errors on the largest shells. I then proceeded with automatic UDIM packing in Rizom.
Due to the high texel density, there were instances where the largest shell and its sides ended up on different UV sets. What was more challenging was when the entire sub-object was on one UV set while a couple of its smaller shells were on another.
That was the UV packing engine’s decision. In total, I ended up with 15 UV sets at 4K resolution (one of them at half size). It might have been easier to use fewer 8K textures, but I wasn’t confident that my system could handle such a workload at the time.
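The trade-off between many 4K sets and a few 8K sets can be sanity-checked with rough memory math. A sketch under simplifying assumptions (uncompressed RGBA8, one channel-packed map per set, no mipmaps; real in-engine and in-Painter footprints differ, but the ratio holds):

```python
def mib(res: int, bytes_per_texel: int = 4) -> float:
    """Uncompressed size of one square res x res RGBA8 map in MiB."""
    return res * res * bytes_per_texel / 2**20

fifteen_4k = 15 * mib(4096)  # fifteen 4K sets
four_8k = 4 * mib(8192)      # roughly the same texel budget in 8K sets
print(fifteen_4k, four_8k)   # 960.0 MiB vs 1024.0 MiB per map type
```

Since one 8K map holds exactly four 4K maps' worth of texels, 15 sets at 4K is roughly the budget of four 8K sets, with far more seams and per-set loading overhead.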
For the primary normal map, I used Marmoset for baking. It’s straightforward and quite fast. However, certain elements did require re-baking in Maya, where chamfers and transitions tend to bake better.
Moving forward, I created a microsurface normal map. This step is essential to attain a more detailed cavity and ambient occlusion map. A strong, vibrant normal map is a pivotal ingredient for a beautifully textured object.
For this particular object, I generated a simple surface noise using a noise generator. It will be posted a little later on my Gumroad. Around 60% of the noise was generated this way. The remaining details were added manually using brushes or stencils.
This stage requires careful planning since you don’t want the noise to interfere with your work later. A visual guidance reference is highly useful in this phase.
Typically, I work on three separate Substance Painter projects whose output maps are linked together:
Cavity > Baking > Texturing.
This approach prevents overloading the texture project with unnecessary layers, although there are several exports for revisions.
Texturing Approach, Test Renders, and Stencils
And here we come to the most complex part. Let’s start with scene pre-configuration. I usually use a standard shader and color profile – ACES.
If you rendered mesh maps in a separate project and exported them through the standard Mesh Maps preset, then after adding them to the project, they should automatically apply to all UV sets.
The logic of texture building roughly mirrors real life: Metal – Primer – Paint – Dirt – Extra. In texturing, I follow a fundamental “Rule of Three”: everything that takes up a lot of texture space or will be seen in close-up should have three states: current, older, and newer.
In terms of color expression, depending on the object, it will be approximately: normal (the most coverage), lighter, and darker. The ultimate goal is always the same: for the object to look as good as possible.
Therefore, you should use all available methods. In some cases, edges are well emphasized in light colors, in others in dark colors. Although this goes against the approach of honest PBR, if the transition of geometry is not clear, you can emphasize it with a subtle color shift.
Since I was planning to render in Octane, I needed to approve the paint color there first and foremost. What’s present in the reference doesn’t always look good in the render.
Furthermore, I would recommend examining all stages in the render. Finished the basic metal? Make sure the color and roughness look good. Added dirt? Verify that it’s noticeable.
Not all areas of the texture require a complex approach and close attention. In this regard, setting up the render scene is helpful in finding favorable angles, and based on them, understanding what requires detailed work and what can be kept relatively simple. The entire periscope was a significant practice for me in working on a large painted object.
An additional complexity is that it’s an electronic device. It quickly starts to look odd when too worn and dull when too new. As a result, I went through three complete overhauls from scratch before settling on this version.
I’ll note that it’s crucial to assess the results of your work from different distances. What looks appetizing up close might become noise from afar. In this project, I enjoyed adding another layer of paint between the primer and the main paint on the surface.
Firstly, it softens the transition to scratches; secondly, it adds age. A small trick that allows separating these layers is adding a bit of Perlin noise with height and roughness to the old paint, as well as shading the edges darker.
(They were used and rubbed against; at some point, something scraped off the top layer).
I actively use height information to enhance the surface. I can’t remember who astutely pointed out that height adds tactility to surfaces. To achieve this, you can use stencils or simply brushes, depending on the effect you want to achieve.
Height works particularly well on metal due to its reflective nature. For metal, I add a slight noise that creates a sense of “granularity.” All the small chips were just painted in two layers (depressions and protrusions). The rule of three is most illustrative on metal.
Gray represents the normal state, light gray-blue (tint at your discretion) is polished metal, and dark gray (slightly purple or blue) is oxidation. If you can only see a thin strip under the paint chip, you don’t need to complicate things with multiple colors. One color will suffice.
Another significant challenge was finding the balance between dirt and paint. Dirt not only tells the story of your object but also helps differentiate planes from each other and emphasize geometry transitions.
If it’s too heavily dusted, everything looks monolithic; if there’s too little, everything will be uniformly shiny. I found a lot of help in Illya Dolgov’s breakdown. His idea with the dust cap helped separate the upper surfaces from the sides. I highly recommend reading it.
One thing I always use for texturing is stencils. At such a high resolution, this has become crucial because brush strokes become very noticeable. Almost all the dirt, most of the chips and scratches, as well as absolutely all the drips, are stencils.
My basic approach here is to create a custom generator that fills all the hard-to-reach areas, which establishes the foundation but doesn’t cover the main planes.
I can only add what’s necessary and practically don’t need to worry about painting unnecessary areas. Here are a few packs that I use most often:
Since Painter only provides settings for its default stencils, I’ve come up with a simple tool for myself that allows me to adjust brightness, contrast, flip, and invert.
You can use it both for stencils and as a substitute for fill layers in the layer stack (it has settings, unlike the default fill layer). The only limitation is that it only works with squares; it will stretch rectangles.
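The adjustments that tool provides are all standard image operations. A minimal re-creation of the idea (not the actual Painter filter — a hypothetical NumPy sketch operating on a grayscale image in the 0..1 range, with contrast pivoting around mid-gray):

```python
import numpy as np

def adjust_stencil(img: np.ndarray, brightness: float = 0.0,
                   contrast: float = 1.0, flip: bool = False,
                   invert: bool = False) -> np.ndarray:
    """Brightness/contrast, horizontal flip, and inversion for a
    grayscale stencil with values in 0..1 (hypothetical sketch)."""
    out = (img - 0.5) * contrast + 0.5 + brightness  # contrast pivots on mid-gray
    if flip:
        out = np.flip(out, axis=1)  # mirror horizontally
    if invert:
        out = 1.0 - out
    return np.clip(out, 0.0, 1.0)
```

Pivoting contrast on 0.5 keeps mid-gray stable, which matters for stencils because mid-gray is the “no effect” level when the mask is used as a height or opacity source.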
There’s a good way to create variety from a single mask: using the gradient node.
Here’s how I do it:
As I mentioned earlier, all oil leaks are alphas. Usually, I try to paint them (just because I enjoy painting such things), but in this case, the painterly look was noticeable immediately.
So, using the new feature that snaps alphas to the surface, I simply duplicated the alphas while erasing the excess. The updated Painter that allows duplicating fill layers while holding Alt is truly a blessing.
Rubber and wires are quite straightforward. On the wires, height plays the main role (later it will be used for displacement in rendering).
On the rubber cushions for the face, I combined an alpha with longitudinal cracks and larger cracks. Smoother rubber clashed with the rest of the object. It seemed newer. For glass and lenses, a combination of 4-5 dirt stencil layers was used. Only Base Color and Roughness were utilized. The importance of rich and varied roughness in game development cannot be overstated.
You can use layers with only roughness to create “history marks.” When balanced well, it makes the object more interesting. This alpha pack fits in perfectly for this purpose.
There can be numerous ideas: spilled but later wiped-off oil, dried water marks, wear from usage, and so on. I’ll remind you that this was my first project in Octane.
One problem was that Roughness looked different under different light sources than under the HDRI alone. Since I didn’t have a final lighting setup, I decided to compare textures under the same HDRI used during texturing.
The result was that after adding light sources, a part of the roughness was simply lost. Overall, there’s a noticeable difference in how textures are displayed between rendering in Painter’s viewport and the path tracing engine.
This is something worth considering.
Have you ever noticed that at some point, you can just rotate the model, and everything seems to please you?
Do you take joy in those small details that you’ve scattered here and there? That’s the moment when you become biased. You stop seeing the flaws. When I feel like I’ve exhausted my ideas, I put the work aside for a period of time to take a break and “forget” about it.
Afterward, I critique myself, marking regions that look odd or uninteresting. Once those are fixed, it’s time to show it to someone you can trust with this role.
I can’t say which approach is better: getting as much feedback as possible from everyone and selecting only the most valuable or choosing a few individuals who can provide you with the highest quality feedback. Try both and decide for yourself.
- The visual guidance looked great in photos, but recreating it in the render didn’t work out, and the result appeared dull. I thought that with a significant amount of geometry, the texture loses its priority, but that’s not always the case.
- The project turned out to be very heavy, and my RTX 2080 Ti took about 40 minutes to open the project, with an additional 20 minutes spent unpacking the UV sets on the first stroke. Things improved significantly after updating my PC, but the sheer number of UV sets remained a problem. After the upgrade, it would have been better to spend the time consolidating UV sets into 8K textures. Fewer seams, less loading. Consider not only your own capabilities but also your PC’s.
- I couldn’t get the “paint through UV sets” function to work in Painter and had to resort to the traditional method. It would have been better to find a solution.

Lastly, what I’d like to say about texturing is a small recommendation. If you feel confident in the process but realize there’s some barrier you don’t know how to overcome, try investing 100-200 hours into learning Substance Designer. Start with the basics to get oriented in the program, then go through 5-6 tutorials covering various materials. One definite advantage of Designer is that it teaches you to better understand the logic of the process, and you’ll also gain experience from industry experts.
If you’ve never encountered Octane before, it can seem quite complex from the start. The YouTube channel “Final Result” and videos by Lino Grande were really helpful for me.
Here’s what my render scene looks like:
HDR serves as a pseudo-fill light for shadows and sets the overall tone of the scene. The key light defines the direction and is the strongest light source in the scene (usually at an angle to the camera and the object).
The actual fill light softens shadows and provides additional illumination in dark key areas (usually on the opposite side of the key light and at 30-60% of its intensity).
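The key/fill relationship above is just a ratio. A trivial helper (the 30-60% bracket is the rule of thumb from the text; the intensity unit is whatever your renderer uses):

```python
def fill_intensity_range(key: float, low: float = 0.3, high: float = 0.6) -> tuple:
    """Fill-light intensity bracket derived from the key light,
    per the 30-60% rule of thumb."""
    return key * low, key * high

lo, hi = fill_intensity_range(100.0)
print(f"fill between {lo:.0f} and {hi:.0f}")
```

Staying inside this bracket keeps shadows readable without flattening the key light's direction.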
I’m a big fan of cinema, and I like to add cinematic feeling to scenes whenever possible. This is one of the reasons why I chose a cold color tone. To achieve more dramatic shadows, you can use planes that are invisible to the camera.
For image output, I used the 32-bit OpenEXR format. I also rendered it in the ACES color space. Here’s a tutorial.
For most angles, I needed between 5000 to 7000 samples, but it was more challenging with glass. Sometimes even 30000 samples weren’t sufficient, but rendering only the glass region came to the rescue.
Keep in mind that the BSDF model can yield quite different results (it’s a dropdown parameter in the settings of the Universal Material). In my case, I used Octane for all solid surfaces and GGX for transparent surfaces.
The Light Pass ID is an incredible feature. You can assign IDs to lights and disable a specific light on specific objects. For example, turning off HDR or fill light reflections on lenses to avoid messy reflections. Or conversely, creating a light that only reflects on the lens to achieve a beautiful soft glint. Love it!
I will separately mention lenses and glass. All shader settings will be on the slides. For glass, a relatively simple material is sufficient. The only thing I can say from observations: if you want to emphasize dirt more, try to catch a glare from a light source at a sharp angle on the glass.
Lenses were adjusted separately, external and internal (beneath the glass). Here are the settings themselves. Here’s an excellent tutorial on glass in Octane.
Rendering disproportionately long objects in an interesting way can often be challenging. When framing, you have to choose what to keep and what to discard. Excess negative space can also pose problems.
Things can get even trickier when the scale of the main object isn’t conveyed well. During the first test render, all these issues immediately surfaced, and the solution was to place additional objects next to the main one.
The cable, besides being a different color, also has curvature, adding an “organic” feel to the image. Wrapping around the periscope, it gives the composition a more triangular shape and serves as a guide in some shots.
The soldiers (purchased on TurboSquid) serve the same purpose: creating lines that lead to the focal point. In one of the concept art tutorials (sorry, I don’t remember who), the artist mentioned that it’s preferable to guide the viewer’s gaze in the artwork.
To prevent anything from diverting the gaze, you can create a kind of cyclical triangle of contrasting elements that compel the viewer to keep exploring your work. I’ll show examples on these slides.
Among the main challenges:
- The representation of roughness isn’t quite clear: Glossiness appears different under varying light sources and HDRIs (I will delve into this in detail in the next project).
- Difficulties with adjusting dirt on transparent surfaces: I stumbled upon the fact that it becomes noticeably visible when a light source is placed at a very sharp angle to the camera and the glass, but it’s barely noticeable under normal soft lighting.
- The metal didn’t turn out as I had planned and was quite different from what was in Painter (I’ll also work on this).
- The first attempt at the final renders (the first of three revisions) was ruined because I used an empty not only for camera aim but also for focus. Most of the detail was lost to blurring.
Here, it’s quite straightforward. I render with AOVs (Arbitrary Output Variables) from the start to have better control over aspects that might not have turned out as expected in the render.
Is a highlight too strong in one area? You can tone it down with diffuse. Is the metal lacking shine in the focal point? Apply reflections again. Need to tweak the background color a bit? The ID map for the object will provide a better mask. As usual, I use Photoshop.
I overlay all the AOVs, enhance shadows, perform color correction, and use the Camera Raw filter for the focal point and shadows. In the past, sharpening was commonly used on renders, but the Camera Raw filter has a texture parameter that works much better.
It doesn’t invent anything new; it simply enhances the existing information.
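The AOV recombination described above works because, in many path tracers, the beauty pass decomposes approximately additively into its lighting components in linear color space. A hedged NumPy sketch (pass names and gains are illustrative, not Octane's exact AOV names):

```python
import numpy as np

def recombine(aovs: dict, gains: dict) -> np.ndarray:
    """Re-sum linear-light AOV passes, scaling each by a gain, e.g.
    taming a hot highlight by lowering the reflection contribution.
    Assumes the beauty decomposes additively into these passes."""
    return sum(gains.get(name, 1.0) * img for name, img in aovs.items())

# Illustrative 1x1 "images": reduce reflections to half strength
diffuse = np.full((1, 1, 3), 0.2)
reflection = np.full((1, 1, 3), 0.3)
beauty = recombine({"diffuse": diffuse, "reflection": reflection},
                   {"reflection": 0.5})
print(beauty)  # 0.2 + 0.5 * 0.3 = 0.35 in each channel
```

This is exactly what stacking AOVs in Photoshop with adjusted layer opacities does, provided you stay in linear (32-bit EXR) until the final grade.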
In conclusion, I’ll mention some commonplace things. Watch and engage in more tutorials (even if you have 20 years of experience, there will always be someone who came up with something brilliant). Always strive to find your visual guidance.
Set challenges for yourself and emerge as a victor from the struggle, but even if you can’t, you’ve gained experience. Develop new skills and, whenever possible, make time for creative hobbies.
If you encounter a problem you don’t know how to solve, Google it (finding the answer is much quicker than spending an hour trying to solve it yourself).
I’m someone who tends to complicate everything. Don’t be like me.