The Dungeon

Environment Breakdown

Leevï Galita


3D Artist


Hi, my name is Leevï Galita, and I’m a 3D artist working in the game industry.
I like creating stylized art, and it’s important for me to keep sharpening my skills whenever I have the opportunity.


  • Blender
  • ZBrush
  • Adobe Substance Painter
  • Adobe Substance Designer
  • Unreal Engine 5


Last year, I made a real-time scene called The Hideout. It was a way for me to dive into Unreal 5’s newest features, such as Nanite and Lumen, and see how they could benefit my personal workflow. I really enjoyed the process and wanted to continue this exploration with a new project.

One key decision during the making of this scene was to only rely on mid-resolution models, and drop the high-to-low poly baking process. As I said in the dedicated breakdown, it’s just less data to manage overall, and I like it.

However, The Hideout was mostly composed of simple hard surface assets. With that approach in mind, I wanted to see how I would deal with a more organic scene, full of sculpted elements that require denser topology.
I saw this concept by Andrew Porter and immediately loved it. I want to thank him first for letting me use it. Since I don’t have a lot of spare time I can dedicate to 3D, this indoor scene with just a few duplicated objects was the perfect scope.



I didn’t use many references for this scene; the concept provided enough information to envision the work ahead.
My main concern was how the materials would read. I gathered references for gold, stone, and moss to help me figure out how I would treat those surfaces.



As many have already stated, blocking out the space is a critical step in scene creation. The more you figure out at this stage, the easier the rest of the process will be.
I started by modeling simple shapes inside Blender and assembled them in Unreal Engine. The concept was easy to break into modules and reusable elements, due to the symmetrical staging. For most of the circular pieces, I created just sections to save time.

The thing I like to establish early while building an environment is the lighting intention. At this stage it’s usually pretty rough, of course, but it helps define the areas of focus.
Here, I just placed a simple directional light piercing through a broken ceiling, and a spotlight directed at the top of the statues. It was more than enough to start. This type of lighting scenario is where you can appreciate the benefits of Lumen GI.

It works out of the box, with no particular fine-tuning yet. It’s also the step where I define some key camera shots.


The modeling approach was straightforward. I imported my blockout meshes into ZBrush and started sculpting with a few default brushes. I used Mark Ehlen’s bold pencil brush for some details, along with the already well-known Orb Cracks.


One advantage of working with Unreal 5 is Nanite. Whenever a sculpt was done, I could just use “Decimation Master” in ZBrush to reduce the polycount and send the mesh into the engine.
It was a quick way to validate the visual and adjust the intensity of certain details. I like checking assets in context as much as possible since it’s easy to overlook things while sculpting.

One important thing was to drastically decimate those assets without compromising their visual integrity.
I still needed to be able to unwrap them for texturing, so finding the right balance between preserving the look and the polycount was crucial.

Dealing with dense UVs

Even with decimated assets, the topology was still quite dense, so making UVs was more challenging and time-consuming. That’s why it was actually the last thing I did in the process.
I used Blender’s “Smart UV Project” to generate usable UVs. They are not optimal, but decent enough to import the meshes into Substance Painter and start working.
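As a rough sketch, this auto-UV step can also be scripted from Blender’s scripting tab. The snippet below assumes Blender’s bpy API, and the parameter values are illustrative defaults, not the settings I actually used:

```python
# Illustrative sketch: batch Smart UV Project over the selected meshes.
# bpy only exists inside Blender, so the import is guarded to keep the
# snippet importable elsewhere.
try:
    import bpy
except ImportError:
    bpy = None


def smart_uv_selected(angle_limit=1.15, island_margin=0.003):
    """Run Smart UV Project on every selected mesh object.

    angle_limit is in radians (~66 degrees here); both parameters are
    illustrative, not tuned values.
    """
    for obj in bpy.context.selected_objects:
        if obj.type != 'MESH':
            continue
        bpy.context.view_layer.objects.active = obj
        bpy.ops.object.mode_set(mode='EDIT')
        bpy.ops.mesh.select_all(action='SELECT')
        bpy.ops.uv.smart_project(angle_limit=angle_limit,
                                 island_margin=island_margin)
        bpy.ops.object.mode_set(mode='OBJECT')
```

Scripting it this way makes the “crappy auto-UVs” pass nearly free to redo after each re-sculpt and decimation.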

When the texturing was done, I checked my assets in the engine. If at this stage I was not happy with something in the model, it was still easy to address: I could simply go back to ZBrush, re-sculpt, decimate again, redo the auto-UVs, and reimport into Substance Painter. All the texturing work done before would be reapplied, thanks to the “stroke preservation” feature in Substance Painter.

As long as I wasn’t 100% sure that I was done with the visual, I kept my assets with “crappy” auto-UVs.
Only when I was satisfied with the final render did I take the time to do proper unwrapping, just once. Doing the UVs at the end helped me maximize my UV space and increased the perceived quality of my painting work.


To create UVs on dense models, I used the “Pick Shortest Path” feature in Blender in “Tag Seam” mode. It traces a UV seam from point A to point B every time I click. It’s the best way for me to handle heavy assets.
Of course, this whole process will never be as fast as unwrapping a proper low poly; keep that in mind. But I still favor having less data to manage overall.
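Conceptually, “Pick Shortest Path” is just a shortest-path search over the mesh edge graph. As a hypothetical illustration (plain Python, not Blender’s actual implementation), here is the idea with Dijkstra’s algorithm on a tiny edge graph:

```python
import heapq


def shortest_edge_path(edges, start, goal):
    """Dijkstra over a mesh edge graph.

    edges maps vertex -> [(neighbor, edge_length)].
    Returns the list of vertices along the shortest path, i.e. the
    chain of edges a seam-picking tool would tag.
    """
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    visited = set()
    while heap:
        d, v = heapq.heappop(heap)
        if v in visited:
            continue
        visited.add(v)
        if v == goal:
            break
        for n, w in edges.get(v, []):
            nd = d + w
            if nd < dist.get(n, float("inf")):
                dist[n] = nd
                prev[n] = v
                heapq.heappush(heap, (nd, n))
    # Walk back from goal to start to recover the path.
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]


# Tiny made-up graph: a direct chain 0-1-2-3 and a longer detour 0-4-3.
edges = {
    0: [(1, 1.0), (4, 2.5)],
    1: [(0, 1.0), (2, 1.0)],
    2: [(1, 1.0), (3, 1.0)],
    3: [(2, 1.0), (4, 2.5)],
    4: [(0, 2.5), (3, 2.5)],
}
print(shortest_edge_path(edges, 0, 3))  # → [0, 1, 2, 3]
```

This is why the tool feels instant even on heavy decimated meshes: the search cost depends on the edge graph, not on texture resolution or sculpt detail.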


When it comes to the actual painting process, I kept things quite simple. I relied heavily on smart materials created inside Substance Painter. Let’s take this module as an example.


First, I created a tileable texture that served as a starting point for the base color. I like to do it in Photoshop for a more graphic look. I projected that texture onto my mesh in tri-planar mode. I already detailed this method in more depth here if you are interested.

Then I layered various weathering effects like dirt, color variations, edge damage, etc. At this stage, I relied only on smart masks, so everything is procedural. I converted this layer stack into a smart material when I was done with the tweaking.
The final step was to do some manual painting, to add more unique and graphic flavor to the mesh.

Thanks to the smart material I created earlier, I could speed up the process for the rest of my kit. I used that approach for other assets too.

Textures for the red wall and sand were created inside Substance Designer. Here is a little recap of the main materials present in the scene.



For the vegetation, I sculpted the leaves, grass blades, and flowers in ZBrush, and baked them onto a plane.


Then I had to cut them out, since I decided not to use an alpha mask texture here, and created a few plant variations in Blender.

The moss took me a few more attempts to figure out, since I was not completely satisfied with the look of the early tests.


I ended up using small cards with an alpha texture. That mesh was plugged into the foliage tool in Unreal and, with the fill option, distributed at high density onto underlying geometry faking the volume.
The reason I used an alpha mask here, contrary to the leaves and grass, is iteration: I changed the shape of the moss strands a few times, so remodeling each time wasn’t an option.


Note that the normals of the mesh cards face up, and the moss does not cast shadows in the engine. This helped reduce visual noise and flatten the look.


A final note on the vegetation: with that approach, everything obviously needed to be flagged as Nanite, otherwise it could have a huge impact on performance. In general, everything in Unreal 5 that can be supported by Nanite should be.
(That doesn’t mean it has to be high poly!)
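Flagging many assets can be batched through Unreal’s Python editor scripting. This is only a hedged sketch: the `unreal` module exists only inside the editor, and the `nanite_settings` / `enabled` property names are my assumption about the editor API and may differ between engine versions.

```python
# Hedged sketch: enable Nanite on a list of static mesh assets.
# Assumes Unreal's Python editor scripting; property names
# ("nanite_settings", "enabled") are assumptions that may vary
# between engine versions.
try:
    import unreal
except ImportError:
    unreal = None


def enable_nanite(asset_paths):
    """Flag each static mesh at the given /Game/... paths as Nanite."""
    for path in asset_paths:
        mesh = unreal.EditorAssetLibrary.load_asset(path)
        settings = mesh.get_editor_property("nanite_settings")
        settings.enabled = True
        mesh.set_editor_property("nanite_settings", settings)
        unreal.EditorAssetLibrary.save_asset(path)
```

A pass like this over the whole kit is a quick way to make sure no dense sculpt slipped through without the Nanite flag.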


The initial lighting intention didn’t change too much while building the scene. With the assets and materials coming in, it was more a matter of adding some supporting lights and balancing them together. The setup is still quite simple:

  • I kept the initial directional light piercing through the ceiling.
  • The spotlight directed at the statues got cooler and larger.
  • I added a warmer spotlight coming down on the golden chest, to make it pop a little more.
  • There is also a really dim rectangle light off-screen, behind my cameras, to help brighten dark areas just a bit.

VFX & Post Process

For the falling sand, I used the tutorial and resources provided by PrismaticaDev. I just made a few tweaks to have a more continuous effect. Thanks to him for sharing this for free.


I also want to thank my colleague Catherine Morin-Laporte for giving me a little Niagara crash course, which helped me achieve the little dust effect.

In the scene, I have a post-process volume adding bloom, chromatic aberration, and color grading using a LUT.


As I said at the beginning, I made this scene to see how my decision to skip the high-to-low poly process would hold up in a more organic environment. I’m satisfied with how this project turned out, since I didn’t face any particular friction in the process.
I will continue to explore that path for my personal work. I totally see this as a viable workflow for real-time content inside of Unreal 5, and even for certain types of games.

However, when it comes to games, the industry as a whole is not there yet. Unreal Engine 5 is still in development and is just one engine among others. I would still encourage game artists, especially those who want to break into the field, to be mindful of proper optimized methods for real-time content.

At the time of writing, Fortnite is the only Unreal 5 project available. Epic Games published two interesting articles about their implementation of Nanite and Lumen in the game. They are worth reading if you are curious about practical applications.

I’m really excited to see how things will evolve in the coming years with Unreal, and maybe other engines exploring similar technologies.
If you want to see more of my work, feel free to visit my portfolio. You can also find me on Twitter.

Finally, I want to thank Games Artist for the opportunity, and thank you all for reading!