The Guardians of Light – Environment Breakdown – Son Nguyen
My latest environment, “The Guardians of Light”, is one of the most ambitious projects I’ve worked on. It involves creating a large-scale playable environment with several different locations. The environment, in essence, consists of two main parts: the forest and the hidden temple. In this article, I’ll deliver a full breakdown of the project, covering every aspect of environment creation, from blockout, modelling, materials, and lighting to my approach to foliage creation.
There will be some reflections about technical issues that I encountered as well as the thought process behind decision-making during the project.
About the Artist
My name is Son Nguyen, and I’m a graduate environment artist from London, United Kingdom.
A bit of history about me: I obtained a Bachelor’s Degree in Graphic Design from FPT University Hanoi, Vietnam. Right after graduation, I started my career as a background artist at a local animation studio and stayed there for two years. Over time, I felt I was capable of more than what I was doing, and my career stood still despite my constant effort. “It’s time to make a move,” I thought to myself.
Taking the opportunity to study abroad was not just a way to get out of a stagnant situation and reaffirm my career direction; it was also a breath of fresh air in terms of experiencing and engaging with a different culture. It was definitely a life-changing decision. I took a Master’s program in Games Art & Design at Goldsmiths, University of London, and graduated with a first-class degree. Among all the modules, the one focusing on 3D art drew me towards creating environments for games, which I found was right up my alley. Since then, I’ve been honing my skills for the environment art role and getting into my stride. I’m currently looking forward to landing a junior role at a game studio to help create some fun game experiences.
I am always amazed by how game developers are willing to help each other and contribute their understanding and knowledge to the community. I love reading articles where they share the thought process behind each decision made to solve a technical problem or achieve a certain artistic effect. This article is structured similarly and covers four main sections: the research, the story, the design, and the technical breakdowns.
I hope you find something useful in this article and can apply these methods to your own work.
Without any further ado, let’s get started!
Research & Pre-Production
I always set goals when embarking on a new project. A project might never be considered finished, as we always have the opportunity to come back later and add five or ten percent of extra work. By setting goals, I know exactly when I’m done with my project: when I’ve ticked all the boxes on the checklist.
The original purpose of “The Guardians of Light” was to create an aesthetically striking game-ready environment; on top of that, I wanted to understand how to integrate trim sheets into the asset creation workflow and become fluent in foliage creation using industry-standard software such as Maya, Zbrush, the Substance suite, Speedtree, and Photoshop. Another box that needed ticking was to minimize the use of external resources in the creation process as much as possible. In the end, apart from a few rock assets that I got from Quixel Megascans, everything else was created by myself.
I spent days studying the ecology of rainforests, mostly through the informative websites of National Geographic and Mongabay. To my surprise, this process was nowhere near as daunting as I had anticipated; instead, I came across a wealth of useful information that fed my curiosity. The rainforest is incredibly diverse, complex, and scary. Most rainforests are structured in four layers: emergent, canopy, understory, and forest floor. These layers are interdependent, and each has unique characteristics based on differing levels of water, sunlight, and air circulation.
Even though diving into ecology for only a few days is just scratching the surface, it gave me an idea of what the composition of a rainforest should look like and which prominent features should be visible in the foliage. For example, due to the low-light conditions in the understory, its plants usually have dark undersides and deeper-colored, larger leaves than the canopy plants only several meters above.
Likewise, a true shrub layer is extremely unusual because of the shaded conditions; only scattered vines and shrubs are present. That information helped me create and scatter foliage assets more accurately based on the characteristics of the rainforest. Nevertheless, an important aspect to take into consideration is that pushing environment art towards realism limits an artist’s creative freedom. There are always some unavoidable trade-offs between the realistic and the artistic in any scenario. In my case, I only wanted to accentuate the impression of a forest, not replicate an actual one. At the end of the day, what we usually look for is to be fully immersed in an aesthetically imaginative world rather than surrounded by still-life photographs.
It has always been fascinating for me to explore and ponder unanswered questions. I think it might be human nature, too, to be obsessed with undiscovered things rather than a clear, comprehensive answer about an object or a phenomenon. Absurd as it sounds, there are serious scientific expeditions every couple of years into the Congo’s rainforest looking for living, breathing dinosaurs.
The purpose of my project “The Guardians of Light” is based on a similar idea: it leaves questions for curious people to answer for themselves. Why is there a strangely-shaped, mysterious tree in an ordinary forest? Has the temple been discovered by anyone before? What is the purpose of the fox shrines? My ultimate goal was to arouse the players’ curiosity about what is hidden and protected by the shrines inside the temple. You definitely don’t want to miss the rewards for exploring.
A very useful method I’ve learned from Kieran Goodson’s article is to look for a piece of music that captures the feeling I’m trying to express, even if I don’t plan to make a video. This lets me visualize the cinematography along with the music, so I can quickly draw out imaginative keyframes and validate the direction of the artwork. I knew I had found the right one when I came across The Last of Us’s theme by Gustavo Santaolalla, which carries a heavy feeling of mystery and danger juxtaposed with a sense of optimism. It was a perfect fit for the theme of the project.
I also applied Tim Simpson’s method for gathering references. I split the reference photos into three major categories: key, lighting, and the AAA benchmark. Referencing AAA benchmarks is undoubtedly necessary, as it lets you intuitively compare your artwork’s quality with the industry standard, so you have an idea of what quality you should push yourself to reach. I also created another board with backgrounds from animated movies and concept art for games. In fact, I referenced lots of keyframes from Ghibli’s movies and concept art from Naughty Dog’s Uncharted 4: A Thief’s End. It is inspiring, as you can study their composition, lighting, or color schemes and even borrow elements from different pieces to make your own artwork.
For a more specific treatment of the foliage creation, I gathered an enormous number of reference photos for various plants. These plants were then categorized into different boards in PureRef based on their species and layers, so I knew I had found an adequate amount of references once I had at least one or two plants for each layer of the forest. The temple, however, was not developed around architectural or historical accuracy.
More importantly, I wanted to give it a unique look, so I gathered references to different kinds of buildings, like Cambodian ruins and Indian rock-cut temples. My guardians, the fox statues, were referenced from the Inari shrines at Japanese temples. These Inari shrines, typically meant to ward off evil spirits, play a vital part in conveying the story. I like the peacefulness and optimism inherent in the appearance of the shrines, but they can also be scary under unusual lighting.
Blockout & Scene Setup
I blocked out the scene with Unreal Engine’s landscape sculpting tools and very simple placeholders modeled in Maya. This way, I could quickly iterate to get a dynamic composition for the scene and estimate how many assets needed to be created. It took several rounds of composing these placeholders and moving things around until I felt the composition was firm and readable. From there, I was able to replace the placeholders with my building kit and refine it where needed.
Setting up the Master Material was the next thing I did in this initial phase. This initial setup allows me to tweak different material attributes and also enables switching between separated textures (in case I want to use Megascans assets) and packed textures.
A very useful method I’ve learned from Kem Yaralioglu for an organized, tidy, and easy-to-understand graph is to assemble functioning groups of nodes into different Material Functions. Besides keeping the graph from becoming a dish of miscellaneous noodles, this lets me intuitively stack additional functions on top of the basic ones as I move into later stages. For instance, a very handy moss covering system, which generates moss on top of objects like rocks, walls, and statues, was added this way: it was condensed into a Material Function and added to the Master Material right after I had finished setting up the Vertex Blending function.
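A minimal sketch of the idea behind such a moss mask, outside the engine (my own illustration using the common world-normal approach, not the author’s exact Material Function; the `coverage` and `hardness` parameters are hypothetical names):

```python
import numpy as np

def moss_mask(world_normal, coverage=0.5, hardness=4.0):
    """Approximate an up-facing moss mask from a world-space unit normal.

    coverage: how far down the sides the moss creeps (0..1).
    hardness: sharpness of the transition between moss and bare surface.
    """
    up_facing = world_normal[..., 2]                 # dot(N, (0, 0, 1))
    # Surfaces facing upward past the threshold get moss, with a soft falloff
    mask = (up_facing - (1.0 - coverage)) * hardness
    return np.clip(mask, 0.0, 1.0)

def apply_moss(base_color, moss_color, world_normal):
    """Lerp between the base material and the moss using the mask."""
    m = moss_mask(world_normal)[..., None]
    return base_color * (1.0 - m) + moss_color * m
```

In the material graph, the same math is a DotProduct of the world normal against the up vector, a remap, and a Lerp between the two material layers.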
With the initial setup and blockout done, I had grounded the direction of the environment art and moved forward to the asset creation phase.
Foliage and Vegetation
My foliage workflow stemmed from Peyton Varney’s walkthrough of his foliage creation. Initially, I created base leaf meshes in Maya before bringing them into Zbrush for sculpting. In Zbrush, I generally use a 2048 x 2048 document size. Then, I divide the base mesh into two or three subdivisions and work from the large shapes towards the medium shapes, finally adding the small details. Once I am satisfied with the leaf, I duplicate it and slightly modify it into three or four variants.
My favorite brushes for sculpting foliage are Clay, Move, and Dam Standard. You might not want to spend too much time sculpting a perfect leaf; it will not make a huge difference unless you have a really close-up shot of the foliage. Another tip from my experience: don’t worry about varying the size of the leaves, as you can do this in Speedtree later. Switching to flat shading mode is also helpful, as it omits the lighting values, making it easier to focus on shape design.
Then, instead of using vertex painting inside Zbrush (or Photoshop) to create the Albedo map, I do it in my favorite software, Substance Designer. For me, the procedural approach is always preferable to painting textures manually. However, it still took numerous iterations and attempts to add and remove different features before I finally nailed down a master graph in Substance Designer. The graph only requires a Height map (which can be baked directly in Zbrush using the Grabdoc function) and can generate many different features for the leaf, like color variation, vein coloring, insect damage, witheredness, and even water droplets.
That was also the point when I was thinking about turning this graph into a fully parameterized custom node, which would make texturing a huge number of foliage assets much quicker. But since I hadn’t pulled that off, I still had to do it diligently: duplicating the graph and manually tweaking the attributes inside it for each of my plants. The outputs for each asset are then packed into an RGBA texture (albedo and opacity) and an RGB texture (ambient occlusion and roughness) before being exported to Unreal Engine.
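As a sketch of that packing step (the channel layout is inferred from the description above; the actual tool used for packing isn’t specified):

```python
import numpy as np

def pack_foliage_textures(albedo, opacity, ao, roughness):
    """Pack the maps as described above: an RGBA texture holding
    albedo + opacity, and a second texture holding AO and roughness.

    All inputs are float arrays in 0..1; albedo is (H, W, 3), the rest (H, W).
    """
    rgba = np.dstack([albedo, opacity])        # R, G, B = albedo; A = opacity
    # AO in R, roughness in G; B left free for another grayscale map
    rgb = np.dstack([ao, roughness, np.zeros_like(ao)])
    return rgba, rgb
```

Packing grayscale maps into the channels of one texture like this saves texture samples in the shader compared with importing each map separately.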
This also led to my introduction to Speedtree. You can create the geometry for the plants in Maya too, but personally, I find Speedtree much more convenient. Its procedural approach to modeling foliage and vegetation saves a huge amount of time on each iteration. The demonstration series on Speedtree’s YouTube channel grounded my foundation with this powerful tool; I would highly recommend checking it out. In essence, any kind of vegetation, whether its structure is simple or complicated, can be fully created inside Speedtree.
My workflow for creating ground foliage assets is rather simple. I bring in the textures from Substance Designer and create the meshes in Speedtree. I normally start with a Zone to lay out the plant, followed by a Branch Generator to create the shoots. The last step is either a Leaf Generator or a Frond Generator, depending on the particular plant.
For a more complicated setup, you can follow along as I create this unique oak tree:
Making the tree structures
– Establish the base tree structure by creating the trunk and several rounds of branch generators.
– Set the mode of the branch generators to bifurcation. You should pay attention to the noise properties as this mode generates branches on curves or bends. Also, tweak the bifurcation properties inside the Gen tab (Threshold, Spacing, Balance, Align) until you are satisfied with how the branches are situated.
– Modify the length of the branches. The length of a child branch should be partly, but not entirely, dependent on its parent branch. You have to find the right balance between the ‘Absolute’ and ‘% of parent’ parameters in the Spine tab to achieve this.
– Also in the Spine tab, play around with the properties in the Orientation section. In some instances, you might have to draw out the curves inside the Start Angle parameter to resemble a particular type of tree branch. For my oak tree, I used a bell curve, which means the child branches fan in at the beginning and end, yet fan out in the middle.
– Last, add the frond generator to simulate the leaves of the tree. You can use the default textures in Speedtree, download atlases from Megascans, or, if you want to create your own textures, follow me through the next steps.
Making the cluster textures
– This cluster texture can be created in another Speedtree document. Similarly, you can set up the base structure with several bifurcation branch generators.
– Plug a leaf generator into the last branch generator. You can vary the leaves by introducing a bit of randomness in their size, deformation, and orientation.
– Add some knots to the main branch and tweak some noise in the Displacement tab. This can bring a great deal of realism to the cluster textures.
– Switch the camera to the XZ plane and frame the branch cluster in a square texture. You might need to manually remove a few clipping leaves or rotate some of them to make the silhouette of the cluster more readable. Capture the cluster (File > Export Material) and export the needed output maps.
– (Optional) You can also create a leafless version by simply deleting the leaf generator. Having a leafless version helps break up a tree canopy that would otherwise be merely green clusters.
Refining & Decorating
Whether you used textures from elsewhere or created them yourself, I suppose you now have a decent-looking tree, which means you can move on to the refining stage. There are a couple of things I like to do at this stage. Modeling some undulating shapes on the surface of the tree trunk, for instance, is a good start. You can do this by playing with the properties inside the Displacement tab. Also, though it’s a bit nitpicky, I normally switch to manual mode and slightly adjust the length or angle of any odd-looking branches.
Adding the roots and some knots was the last thing I did before optimizing the poly count. It’s just the icing on the cake, but you can also generate some epiphytes growing on or attached to the primary branches. There are different ways to achieve this, including other modeling software, but I’ve found a quick, iterative solution inside Speedtree. The essence of it lies in zoning out the upward-facing surfaces of the branches, where epiphytes naturally grow. Then you can generate mosses or epiphytes in these areas with the same treatment as the ground foliage. You can locate those areas with the following setup:
– Add another branch generator connected to the primary branch generator. You can think of these generated branches as locators. Switch its Gen mode to Interval.
– In the Skin tab, switch the Type to Spine only to hide it; we don’t need it to be visible. Tick the Weld start option, then increase the Down properties in the Prune section until the spines pointing downward are omitted.
– Create a Zone generator and link it to the former generator. In the Generation tab, decrease the Position attribute until the zones sit only at the lowest points of the spines.
Now the zones cover only the upward-facing surfaces of the primary branches, and that is it, mission accomplished. You can grow anything inside these zones and it will sit naturally on the primary branches.
Above is my approach to creating the foliage assets inside Speedtree. The plants can then be exported directly to Unreal Engine. Setting up the materials for the foliage is also essential for achieving a believable look in the engine. I would highly recommend checking out Ben Cloward’s series of tutorials on creating foliage materials. The subsurface color is extremely important in the shader, as it allows light to pass through the leaves like in the real world. You can sample a separate subsurface texture too, but in my case, the subsurface color simply stems from the albedo in the shader. This also keeps the shader cheap: a dedicated subsurface map would mean another RGB texture, and you don’t want to sample too many textures in the shader.
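In shader terms, deriving the subsurface color from the albedo is just a tint and a scale. A rough sketch (the tint and intensity values are illustrative, not the author’s):

```python
import numpy as np

def subsurface_from_albedo(albedo, tint=(1.0, 0.9, 0.4), intensity=0.6):
    """Derive a subsurface color from the albedo instead of sampling a
    dedicated texture. The tint pushes transmitted light toward a warm
    yellow-green, which tends to read well for backlit leaves."""
    return np.clip(albedo * np.asarray(tint) * intensity, 0.0, 1.0)
```

In the material graph this is a Multiply of the albedo sample by a color parameter, fed into the Subsurface Color input of a two-sided foliage shading model.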
I created my modular architectural assets in Maya and Zbrush. These architectural assets were also embedded with an ornament trim to give them more detail without actually modeling it.
This was done by using a separate second UV channel for the trim. The approach was experimental for me; it took a while of iterating between Substance Designer, Maya, and Unreal Engine before I felt used to it. For the trim texture, I model the base meshes for the ornaments in Maya and sculpt them in Zbrush. I then capture separate Height maps for each ornament using the Grabdoc function.
Used as pattern inputs for the Tile Sampler node, these maps were then laid out in a square texture in Substance Designer. You might even embellish it with more details, such as cracks or surface undulation. The normal map is the only output my shader in Unreal Engine needs.
I follow a traditional workflow of creating assets in Maya and Zbrush. Taking this stone pillar as an example, here is how I make it:
– The pillar is modeled in a basic shape in Maya before switching to Zbrush for some sculpting.
– In Zbrush, I carve the edges of the mesh, using several custom brushes to add cracks or sculpt undulating surfaces. One helpful trick: you can use the Displace brush to project noise from rock textures (CGtextures.com is your best friend) to introduce some intense details. Sculpting the micro details is unnecessary, as they can be created later in Substance Painter. I then use the hPolish brush to refine the details and clean up noisy areas.
– Duplicate the mesh once for the low-poly mesh. Then decimate it down to around 2,000 to 3,000 polygons.
– Unwrap the UVs for the low-poly mesh in the default channel in Maya. Create another set of UVs for the trim in the second channel (check out Tim Simpson’s tutorial on unwrapping UVs for trim sheets). You only need to unwrap the parts of the mesh where you want the ornaments to appear. Export the mesh.
– Import the mesh to Substance Painter for baking and texturing. Then export the textures.
The mesh and textures are now ready to be imported into Unreal Engine. We need a custom shader to render the ornament trim on the meshes. Below is the setup for the normal in my shader; I’ll explain it simply. First, I sample two normal textures, one for the mesh and one for the trim (which is set to use the second UV channel).
They are plugged into two FlattenNormal nodes with two separate parameters to independently control their intensity before being blended using the BlendAngleCorrectedNormals node. Optionally, I also used an alpha mask for the trim to blend them more subtly.
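The math behind those two nodes is easy to sketch outside the engine: FlattenNormal is a lerp toward a flat normal, and BlendAngleCorrectedNormals implements reoriented normal mapping. A minimal version, assuming unit-length tangent-space normals:

```python
import numpy as np

def flatten_normal(n, flatness):
    """Lerp a tangent-space normal toward flat (0, 0, 1), like UE's
    FlattenNormal material function, then renormalize."""
    flat = np.array([0.0, 0.0, 1.0])
    out = n * (1.0 - flatness) + flat * flatness
    return out / np.linalg.norm(out)

def blend_angle_corrected(base, detail):
    """Reoriented normal mapping, the math behind UE's
    BlendAngleCorrectedNormals node: the detail normal is rotated into
    the frame of the base normal instead of just being added to it."""
    t = base + np.array([0.0, 0.0, 1.0])
    u = detail * np.array([-1.0, -1.0, 1.0])
    r = t * np.dot(t, u) / t[2] - u
    return r / np.linalg.norm(r)
```

A useful sanity check: blending any detail normal over a flat base normal returns the detail normal unchanged, which is exactly what makes this blend "angle corrected".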
I like the iterative nature of creating game environments in general: you can experiment with different materials and shaders at any stage of development. My ground textures, for instance, were created fairly early in the project. I used them until the very end, when I had some time left and decided to replace them with better-quality ones.
It didn’t take much time to create a new forest ground material, as I had already made one in the initial stage. The process basically involves creating the ground dirt (you can follow Substance by Adobe’s tutorial for this), then covering it with leaves, rocks, needles, branches, etc. I also took advantage of the many leaf textures from my earlier foliage creation stage. Their height maps were dragged and dropped into Substance Designer and scattered all over the ground using the Shape Splatter node.
One tip I think might be helpful: mask only a particular area of the ground for densely scattered leaves and leave the rest sparse, so the material is not overwhelmed by detail. This is also true in nature: dead leaves are distributed unevenly, usually in big clumps, due to the effects of wind, rain, and surface undulation.
The next things I added to the material were some branches on the ground, scattered in the same manner. Those branches originally came from Megascans, and I baked their Height maps in Zbrush. Last, I covered the ground with needles and rocks.
You can also learn how to make these in Javier Perez’s tutorial on creating a realistic dirt road. With everything laid out, it’s time to refine the details. In this step, I adjusted attributes like the scale, distribution, and density of each component (leaves, rocks, branches…) to simulate the natural appearance of the material. Finally, I played around with one of my favorite nodes, the Water node, to create several puddles lying on the ground. With the final version of this graph, it took very little time to make two more versions: one with denser leaves and one with a higher degree of moisture, so I could vary the ground textures better in the engine.
The bark material, on the other hand, was my first experience with Substance Alchemist. This powerful tool let me quickly create a realistic material entirely from a single image found on the web. The main challenge was to avoid a computer-generated look and maintain a natural, easy-to-read material. I also took the output texture maps from Alchemist into Designer to refine some elements, mostly modifying some height data and removing lighting information from the albedo. With only a small amount of time invested, I’m really satisfied with how this material turned out.
Lighting & Post-Processing
When it comes to setting up lighting for a scene, there are three important factors an artist should take into consideration: the realistic, the artistic, and the technical constraints. For the first two, which are bounded by the last, I would highly recommend the book “Color and Light” by James Gurney. It provides a clear and comprehensive understanding of how light behaves in different real-life scenarios. As for the technical side, it would be rude not to mention the Unreal 4 Lighting Academy series by Daedalus51.
The scene uses fully dynamic lighting. It does cost some performance, but on the other hand, the idea of baking the lighting across the whole map just to use static lighting was out of the question. In real life, illumination involves three different systems: the sun, the diffuse light coming from the sky, and the light reflected from illuminated objects. These correspond, respectively, to the Directional Light, Sky Light, and Reflection Captures in Unreal Engine 4. So I set up my scene with a Directional Light and a Sky Light, and set both to “Movable”, which means they only cast dynamic lighting. The Reflection Captures, on the other hand, did not really impact the overall look, as there aren’t many reflective objects in the scene.
The setup for the Directional Light was quite simple: I left almost everything at the default settings except for tweaking the intensity and the temperature. I also turned on Light Shafts to achieve some nice god rays through the gaps in the trees. Shadows get softer and blurrier the farther away the light source is, which means the sun itself should only cast soft shadows. One trick I knew to simulate that type of soft shadow is to decrease the resolution of the Cascaded Shadow Maps. The catch is that if the resolution is too low, it won’t render the shadows of the small ground plants, so it takes some tweaking to find just the right Cascaded Shadow resolution.
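For reference, the cascaded shadow map resolution can be capped with console variables, for example in DefaultEngine.ini (the values here are illustrative, not the author’s; tune them per scene):

```ini
[SystemSettings]
; Lower the cascaded shadow map resolution to soften the sun's shadows.
; Too low and small ground plants stop casting shadows, so tune carefully.
r.Shadow.MaxCSMResolution=1024
; Fewer cascades also cheapens dynamic shadows on a large map.
r.Shadow.CSM.MaxCascades=3
```

The same variables can be changed live from the in-editor console to compare shadow softness before committing a value.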
In the Sky Light, I switched the Source Type to SLS Captured Scene. I also replaced the default sky with an HDRI map. This provides more accurate shadows that match the color of the sky rather than emitting light based on a specified cubemap. I grabbed lots of HDRI maps from HDRIHaven.com and tried out which one suited the scene best.
I also placed several Spot Lights around the scene for some fake lighting. This helps light up dark places and enhance the readability of the composition.
I utilized Mesh Distance Field Ambient Occlusion (DFAO). The DFAO has a huge impact on the realism of the scene, as it provides smooth, soft shadowing around objects. It gives a good sense of depth, but I felt that soft shadows all over the foliage weren’t enough to portray how we observe plants or shrubs in a dense forest. In nature, there are always some really dark shadows in the gaps between the leaves and canopies.
One solution I’ve found effective is to use Screen Space Ambient Occlusion (SSAO) in the Post Process Volume to achieve more concentrated shadows, combined with the soft shadows from the DFAO. You might need to switch the Occlusion Combine Mode in the Sky Light to Multiply to get this result. Also, in the Buffer Visualization, switch to Ambient Occlusion to visualize the SSAO and tweak its properties until the AO becomes dense but fades in the far distance. You can compare the results in the images below.
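The Multiply combine mode matters because it lets both occlusion terms contribute instead of one winning outright. A one-line sketch of the difference (my illustration, not engine code):

```python
import numpy as np

# DFAO gives broad, soft occlusion; SSAO gives tight contact darkening.
dfao = np.array([0.9, 0.6, 0.9])
ssao = np.array([0.9, 0.9, 0.4])

minimum = np.minimum(dfao, ssao)   # minimum-style combine: one term wins
multiply = dfao * ssao             # Multiply mode: both terms reinforce
```

Since both terms sit in 0..1, the product is always at least as dark as the minimum, which is where the denser shadowing in the gaps comes from.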
There are just a couple of things I wanted to do in my post-processing phase. In the Post Process Volume, I turned on Bloom and Chromatic Aberration, added a subtle amount of Vignette, and applied a Sharpening material. Last but not least is color grading. It plays a huge part in the final look of the scenes. The choice of tools for color grading comes down to individual preference, but I personally have more control grading my scenes outside the engine, in DaVinci Resolve.
Besides its comprehensive color editing features, this tool also provides in-depth analysis of color data. This comes in handy, as I usually trust the data more than my eyes, probably because I doubt the accuracy of my screens. Sometimes comparing your color data with that of AAA benchmarks, especially those with similar lighting setups, can be helpful, as it suggests how your color data should look.
My color grading settings in DaVinci Resolve were exported as a 65-point CUBE file. I then brought it into Photoshop and applied the grading to the default Look-up Table (LUT) texture. The graded LUT could then be dragged directly into Unreal’s post-processing.
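Conceptually, applying a baked LUT is just a 3D table lookup per pixel. A minimal sketch with a nearest-neighbor lookup (real pipelines, including Unreal’s, interpolate between table entries):

```python
import numpy as np

def neutral_lut(size=16):
    """An identity LUT: lut[r, g, b] maps back to (r, g, b).
    UE4's default LUT texture is a 16x16x16 table like this, unwrapped
    into a 256x16 strip; grading that strip in Photoshop edits the table."""
    axis = np.linspace(0.0, 1.0, size)
    r, g, b = np.meshgrid(axis, axis, axis, indexing="ij")
    return np.stack([r, g, b], axis=-1)        # (size, size, size, 3)

def apply_lut(image, lut):
    """Grade an image (H, W, 3 floats in 0..1) by looking each pixel up
    in the LUT. Nearest-neighbor for brevity; use trilinear in practice."""
    size = lut.shape[0]
    idx = np.clip(np.rint(image * (size - 1)).astype(int), 0, size - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]
```

Applying the neutral table leaves an image unchanged, which is a handy sanity check that the grading pipeline itself isn’t shifting colors.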
Advice, Tips & Challenges
A common mistake I’ve seen quite a lot in beginners’ environment art is that they tend to cram redundant detail into the scene. I made that mistake too. It seems we have an obsession with detail because it is usually what rivets our attention in masterpieces. The truth is that, in those pieces, the foundation of composition, shape language, mood, and storytelling has already been built up so well that nothing confuses us, and detail is then the last thing our brain reads.
My advice is to consistently take screenshots and check their values and shapes in Photoshop, or compare them with previous ones after every significant change. It is a good habit to get into. One thing you can do in Photoshop to simplify the screenshots for better visualization of values and shapes is to apply the Cutout filter (Filter > Filter Gallery > Cutout; Levels: 3-5; Edge Simplicity: 6-8; Edge Fidelity: 1). This makes it easier to detect noisy areas and check whether the main shapes are clear enough. Then you can apply the necessary changes in Unreal Engine.
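That value check can also be automated. A rough stand-in for the Cutout filter (not an exact match for Photoshop’s algorithm): convert to luminance, then quantize to a few value bands.

```python
import numpy as np

def value_bands(rgb, levels=4):
    """Collapse a screenshot (H, W, 3 floats in 0..1) into a few value
    bands: grayscale first, then quantize so only the big value
    groupings remain, making noisy areas easy to spot."""
    lum = rgb @ np.array([0.299, 0.587, 0.114])    # perceptual luminance
    q = np.minimum(np.floor(lum * levels), levels - 1)
    return q / (levels - 1)
```

Running this over two screenshots and diffing the results gives a quick, objective read on whether a change muddied the value structure.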
The main challenge of the project was maintaining consistent quality in every single shot across the large-scale map. It involved tweaking the lighting settings and, especially, finding the right angle that works across different shots. I also scrapped the idea of making a perfect composition for every single shot with golden ratios and eye-leading lines, aiming instead for a firm, readable composition. It still cost me a huge amount of time, but eventually it ended up working quite well in every shot.
Lots of mistakes were made, yet lots of lessons were learned too. If it had not been for the constructive feedback of the Dinusty and Experience Point communities, the project would never have ended on such a positive note.
I hope you’ve enjoyed reading about my workflow and tips. If you have any further questions, feel free to email me at [email protected], or message me on Artstation.
Thank you so much for reading, see you next time.
Thanks to Son for allowing us such an in-depth look at his process. If you liked this environment breakdown and want to see more like it from other inspiring artists, make sure to follow us on: