Criminal's Phone

Prop Breakdown

Javier Benitez


3D Artist


Hi! My name is Javier Benitez, and I’m a 3D artist from Madrid, Spain. I started modeling in 2012 after realizing that traditional education wasn’t the way to achieve my personal goals.

Although most of my knowledge was gained from interacting with the community and exchanging feedback with other artists, mainly on Polycount, my first steps in game art began after I received training at a local art school here in Madrid, where I learned the basics of modeling, optimization, and texturing.

I had the privilege of starting my first job in games in 2014 at an outsource studio called Elite3D (currently 2K Valencia), where I was able to participate in a wide range of AAA projects before moving on to in-house positions. Some of the titles I have contributed to are Homefront: The Revolution, Red Dead Redemption II, Dead Island II, and a bunch of other projects at NVIDIA, Artstation Learning, and Dekogon.

After my experience at Artstation Learning, I started to realize that I really enjoy mentoring. So, together with Onyx Studio, I had the pleasure to craft the Onyx Academy, where I currently mentor and direct the operation as a whole, at no cost for my students.


As with most of my projects, it started with a problem I'd like to solve. Nowadays it's really difficult to do something new; if you're aiming for non-fictional realism, someone has most likely already made what you had in mind.

Standing out from the crowd is hard too, and not everyone has the time to work on portfolio pieces outside of work, so the question arises: how can we make a simple model, like a phone, stand out from the crowd?

It was also a great first exercise to introduce my students to CAD modeling pipelines, using Fusion360, Moi3D, Plasticity, etc., to achieve clean hard surface models.


Ideation and References

References are the foundation of your 3D models. Neglecting them can easily make your art fall apart, but too many conflicting references can result in an amalgamation of details that don't work together. Two or three good references, each explaining one area or detail, are my way to go.

I divide my references into modeling references, where I look for the shapes from all angles, texturing, where I aim to capture the details I’d like to see in my PureRef board, and context/storytelling, where I gather ideas on how to present my objects.

It's important to research how your subject works; how the pieces move, for example, determines how they wear out.

Reference boards help with this greatly, but it’s also good to use video platforms such as YouTube to find videos of people using the objects, fabrication videos “How it’s Made” style, unboxings… All in all, make sure you do your research before starting a model to make sure you respect its realism.


After an initial pass on references, it's always good to go back and add more if some of them fall short, but I get easily confused when my ideas are convoluted by too much information, so I move on with what I have before adding anything else.

A great way to find references is second-hand marketplaces like eBay. For this project, though, I used the product pictures as my modeling references and decided to change a few things.

The buttons don’t have the same shapes, the speaker is a bit different… just the basics to adjust the model to my liking. I tend to do that a lot as an extra layer of creativity, as I don’t personally enjoy 1:1 representations of already existing objects.


For the story as a whole, this really varies between projects. I love a good afternoon with popcorn and a movie, and I do this often enough that it overlaps with whatever project is ongoing, so I gather a lot of ideas from what I'm watching.

For this one in particular, True Detective’s season 4 was it. There’s a scene in which you get to see the inside of an evidence room with the actors holding a few bags of evidence, so I had a eureka moment and noted that down as an idea to explore.

I'm also a huge Resident Evil fan, and while chatting with a friend, we remembered that he had gifted me Resident Evil 8 for my birthday. That game has a model-inspection interface for the storytelling items you find in the world, and that was it.

This makes me remember something I read recently. Resting, going outside, and doing seemingly off-topic activities also help us explore the world as artists.

I got this story just from that, making friends, watching a series with my loved ones… I call this “Actively Resting” – It doesn’t wear you down or contribute to burnout (quite the opposite) but also helps you grow!


Overall, and after good research in all departments, I keep things very simple. I use PureRef to gather a small collection of references – Shapes, textures, and stories.

I am sometimes very loose with my PureRef boards because I tend to collect things that I think would be useful as a reference.

Visual reference is great, but when you can touch, move, and feel the weight of the object you’re making… that’s next level. I didn’t have evidence bags at hand, but freezing bags did just as well together with my reference board.

Modeling the Phone

To model the phone, I used Fusion360 and ZBrush for the high poly, 3ds Max for the retopology, RizomUV for the UVs, Marmoset Toolbag for baking and rendering, and Substance Painter for texturing.


Fusion360 coupled with ZBrush is a really strong way to get results quickly. It allows me to focus on the step I am currently in. All I need to worry about for the high poly is the shapes.

Clean topology really isn't necessary at this point.

I didn’t use any special tools inside this particular workflow, just aimed to get my basic shapes first, and started to add layers of detail accordingly, with my larger details first, then medium, and small.

After all the details are in, I spend some time revisiting any shapes that don't help the overall composition. The total time spent on the Fusion pass was about 2-3 hours while conducting a real-time session with my students.

After all the shapes are in, I export them with the highest possible refinement (amount of geometry). These settings vary per object, so exporting each piece individually and doing a deep check on the quality of the export is key for the next steps because of how ZBrush interprets geometry.

ZBrush won’t smooth your mesh unless there’s actual geometry in it or you subdivide your objects, which is not an option here since we are importing triangulated meshes. The solution is always to bring that subdivision from the exporting program, in this case, Fusion.

A rounder object with more shape variation, taken from a past model, illustrates the point above better.


As you can see, the density of the mesh varies depending on the complexity of the shapes it’s trying to convert from solid to mesh. Flat areas get almost no subdivision while cylinders and other round parts get as dense as we need them to be.
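The link between curvature and tessellation density can be sketched with the standard chord-deviation ("sag") formula. This is my own illustration of the geometry, not Fusion360's exact tessellator: tightening the refinement tolerance rapidly densifies round parts, while a flat face always needs only a couple of triangles.

```python
import math

def circle_segments(radius: float, tolerance: float) -> int:
    """Segments needed so the chordal deviation ("sag") of a
    tessellated circle stays below `tolerance`.

    For an arc of angle a, the sag is radius * (1 - cos(a / 2)),
    so the largest allowed arc is a = 2 * acos(1 - tolerance / radius).
    """
    max_arc = 2.0 * math.acos(1.0 - tolerance / radius)
    return math.ceil(2.0 * math.pi / max_arc)

# A tighter export refinement (smaller tolerance) densifies round parts fast:
for tol in (1.0, 0.1, 0.01):
    print(f"tolerance {tol}: {circle_segments(radius=10.0, tolerance=tol)} segments")
```

The numbers here are illustrative, but the shape of the curve is why a single refinement setting rarely fits every part, and why checking each export individually pays off.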

After I’ve got my pieces exported from Fusion, I can begin the ZBrush polishing process.

Using Dynamesh to convert my geometry into something ZBrush can work with, masking by smoothness to select the hard edges, polishing the unmasked areas to remove minor imperfections caused by lack of refinement or Dynamesh artifacts, and finally polishing crisp edges on the fully unmasked object gives me a clean, smooth high poly.


If you look closely at the first of the four images above, you can see how there’s a slight stepping coming from ZBrush’s inability to smooth geometry without density.

After repeating this step with all the parts of the phone, I have a final high poly that I can use to bake my normals from. Note that each material has a different level of edge smoothness.

In general, rubber and wood can be smoother than metals or molded polymers and hard plastics, like the one the phone case is made of.

All of this will help our texturing to be much more believable later.


For the low poly part of this workflow, there are a few ways to get it done. I like a more manual and handcrafted approach, so I clean up my exported models manually, reconstructing the areas that are more difficult to clean.

You can use tools like PiXYZ to automate the optimization process, but they still require some touch-ups later.

If you decide to go with a more manual approach, like me, the process begins with getting a non-triangulated version of the model from either Plasticity or Moi3D.

Unfortunately, Fusion360's exporter doesn't offer an option to export a non-triangulated mesh. The meshes we can export from CAD have a vast number of errors, like the top corner I marked below, where the loops don't match and the shape must be entirely reconstructed.

Since this asset is a portfolio piece, I wanted the ability to do close-ups, which means keeping roundness in the silhouette.

Having so much density on the asset brought some new issues, and I wasn't particularly proud of the low poly I generated for this model at first. Let me explain.


A good way to start cleaning this up is to generate a model that has no n-gons and contains close to the final amount of geometry we want to use.

I tend to apply a few general rules, like triangulating cylindrical corners, as they would otherwise triangulate with face overlaps, applying the correct amount of density relative to the view distance, and constantly checking the silhouette of my model from all angles to ensure I'm putting detail where it matters.

Also, as a good rule of thumb, if you activate the wireframe of a model and from your closest shot distance some areas of that model get flooded with the wireframe color, you probably have more density than necessary.

The results are good but they contain a few long and thin triangles that could affect the performance of the model, so those need to be taken care of as a secondary pass, which consists of detecting these together with areas where the density is high, and refining areas that can auto-triangulate incorrectly.


But these thin long triangles in models look fine, they unwrap, bake, and texture well… so where exactly is the issue? My students ask questions like these all the time, and they’re not simple to answer.

I did some digging around and I found this video which provides the basic logic for why thin long triangles can, while maintaining a lower poly-count, be damaging to performance.

Rough TL;DR: GPUs shade pixels in small square blocks, so every pixel a triangle touches gets its whole block computed, but with a long, thin triangle there's a big chance that only a tiny fraction of each block actually contains part of the triangle.

The GPU has to shade the whole block, then throw away most of that work, and this repeats across however many blocks the triangle covers… which results in less efficient rendering.
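The idea above can be made concrete with a toy rasterizer. Real GPUs typically shade in 2×2 pixel quads with far more machinery than this, so treat the numbers as an illustration of the waste, not a benchmark:

```python
def edge(ax, ay, bx, by, px, py):
    # Signed area test: which side of the edge (a -> b) the point p lies on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def coverage_stats(tri, w=64, h=64):
    """Rasterize one triangle on a w x h grid and report
    (pixels actually covered, shading lanes launched),
    assuming every touched 2x2 block is shaded in full."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    covered = set()
    for y in range(h):
        for x in range(w):
            px, py = x + 0.5, y + 0.5  # sample at pixel centers
            s0 = edge(x0, y0, x1, y1, px, py)
            s1 = edge(x1, y1, x2, y2, px, py)
            s2 = edge(x2, y2, x0, y0, px, py)
            if (s0 >= 0 and s1 >= 0 and s2 >= 0) or \
               (s0 <= 0 and s1 <= 0 and s2 <= 0):
                covered.add((x, y))
    quads = {(x // 2, y // 2) for x, y in covered}
    return len(covered), len(quads) * 4

thin = ((0.0, 0.0), (63.0, 1.0), (63.0, 0.0))   # long, thin sliver
compact = ((0.0, 0.0), (8.0, 0.0), (0.0, 8.0))  # similar area, compact
for name, tri in (("thin", thin), ("compact", compact)):
    c, lanes = coverage_stats(tri)
    print(f"{name}: {c} pixels covered, {lanes} lanes launched")
```

The sliver launches roughly twice the shading work per covered pixel compared to the compact triangle of similar area, which is the cost the paragraph describes.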

And this is what I love about mentoring. Even though we have this conception that mentoring doesn’t provide any production experience, it forces you to understand the logic underneath your process so that you can explain it.

When something rings the bell of "I don't know why I do that", you must dig deeper and deeper so as not to fail your mentees, to the point where a lot of your knowledge is constantly questioned, re-researched, and updated.

And while the ego gets a bit hurt (how did I not know that), the feeling of progression is much stronger than ever. But I’m getting super carried away, so let’s continue.

Phone UVs

Unwrapping an asset like this is quite simple with the right tools. As part of my unwrapping and baking process, I always start with my object shading.

In order to get good bakes, you should generate hard edges every time there’s a different UV island in your model, and always aim to preserve good shading in your low poly to avoid getting too much shading compensation in your normals, like ugly gradients following your triangulation.

This can be a wasteful approach if done wrong, since different UV islands can have an extra cost on your model’s performance. For smaller parts, this can quickly start to add up to many different smaller islands.

One good solution to this problem is to reduce the angles between faces that would otherwise require a hard edge to shade correctly.

You can easily do this by applying a chamfer the size of your high poly’s edge smoothness, or in other words, that maintains the silhouette of your high poly.

This will result in a poorly shaded model, so as a last step, I always apply weighted normals, which help me achieve a flatter, more appealing result, even without any bakes.

Below are images:
From left to right, original low poly, chamfered low poly, low poly with weighted normals.


After applying this, we can now unify the UV islands of this model since we are only using soft edges on this one.

I tend to apply this technique whenever I see an asset starts to have too many small islands, prioritizing smaller islands first, and trying to find a nice balance between adding too much extra geometry and getting too many small UV islands.

This also can affect how textures downscale, as small islands can get so small they don’t have enough pixels to use at lower resolution textures.

For any islands that I wasn’t able to unify, or it just wasn’t worth it, I automatically cut my UVs by selecting all faces and hitting ‘Flatten by Smoothing Groups’, which nicely breaks all islands as I need them.

Once I have a base for my UVs, I move on to RizomUV, where I just straighten any islands that don’t relax properly.

This causes a small amount of distortion, which in the end is hard to notice, especially when using any tri-planar mapping, but it provides a lot of room for efficient packing, making your UVs cleaner and more optimal, and it’s a lot easier to apply any directional patterns on your textures.


All my edges that must be horizontal are marked as purple, and the vertical ones as green.

This is a very repetitive process, and it’s really scary sometimes, but it yields the best results and it’s super fast to make in Rizom if done methodically.

I like to optimize my time by assigning shortcuts to the actions I use the most.

After straightening all islands and ensuring there are no overlaps, my UVs are good to pack.


UV Packing in Rizom is extremely fast and easy. To get a consistent size for the UV islands, we can apply our desired texel density, 81.92 px/cm at 2048×2048, which is a really high density for a prop like this, but since it's a showcase piece, this is completely fine.

Doing this also adjusts the margin and padding for the whole scene, which is nice!
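As a quick sanity check of the texel-density arithmetic (the 25 cm surface span is implied by the stated numbers, not given in the text):

```python
def texel_density(texture_px: int, surface_cm: float) -> float:
    """Pixels of texture per centimetre of surface."""
    return texture_px / surface_cm

def required_texture(surface_cm: float, density_px_per_cm: float) -> float:
    """Texture width needed to hold that density over that span."""
    return surface_cm * density_px_per_cm

# 81.92 px/cm at 2048 px means the texture spans 25 cm of surface:
print(texel_density(2048, 25))       # px/cm
print(required_texture(25, 81.92))   # px
```

The same two functions make it easy to see what halving the resolution does: at 1024 px over the same span, density drops to 40.96 px/cm.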

By setting our scale optimization range to ‘Off’ and our Orientation Optimization to 90º, we can get a nice first look at how much space the UVs for the model will occupy at the desired texel density.


As you can see on the left, we have a lot of space that I can’t fill in unless I scale up the UVs. This will mess up the texel density, so it’s not an option for me.

We have two other options in this scenario to make this work while respecting the texel density. One, we can further optimize the UVs by cutting them up and stacking them based on symmetries, for example, or two, we can use a 2:1 texture instead, a 2048×1024.

I chose to pursue the second option as I’m going all in with unique details that further push the storytelling of this asset.

After this decision has been made, I can add a rectangle on the top half of my UV space to limit where Rizom will pack my UVs. I will then start to use the auto-packer to organize the islands in batches, starting with larger islands, then medium, and small.

This way I can get the most amount of used space. The following video illustrates how this works.

As you can see towards the end of the video, without this method Rizom isn’t able to calculate a good packing for the islands and leaves some out.

This could be improved by augmenting the iterations used for the packing, but I prefer to not wait around and get a hybrid approach, with some manual work helping the tools work more efficiently.


After some final tweaks, like allowing 0.95 – 1.05 room for adjustment to the scale optimization range, and playing around with the precision of the packing algorithm, I’m able to get a nicely efficient UV set using around 85% of the 1:1 space.

You may see some smaller islands in some areas and wonder why they are there after optimizing the UVs to avoid exactly these scenarios.

Some islands belong to lesser, non-visible faces that only prevent some parts from showing through when parts of the object are moved around, like the buttons.

They are flooded with Ambient Occlusion and shadows anyway, so if the textures are reduced, the consequences won’t be seen at all. It’s also not worth the extra geometry, as these are the first candidates to be removed if there needs to be a performance improvement pass for this asset.

The Bag

For the bag, I prioritized quality over performance. This runs counter to the usual philosophy applied to game development, but I am a firm believer that the art I do should serve at least two of three purposes: learning, solving a problem or filling a gap in production, and having fun.

I had already learned my lesson after the extra density management on the phone, and my students had never had an introduction to transparent materials, so I decided to aim to have fun and try out new things.

The bag is modeled using 3ds Max's cloth simulation modifier. While I could have used Marvelous Designer, I wanted to set a good example by using as few licensed tools as possible, replicating work at a lower-budget production, where you don't always get the software you need.

First, I modeled a sleeve from a box. This sleeve respects the volume of the phone inside and is meant to be simulated later, so I need to keep the density even (all quads).

After this first pass, the bag is subdivided, a Cloth modifier is added to it, and a simplified representation of the phone is placed inside the bag and added to the Cloth modifier as a collision object.


It’s important to have correct UVs before simulating, as this will ensure the deformation of your texture correctly matches the one caused by simulating and moving the vertices around.

The UVs for this bag are just two planar projections made on the object before simulation, one for the front and one for the back. You may notice one wider island; that's the one containing the text.

The front island is mostly transparent, so it needs almost no resolution. Once I have my UVs, I can start with a first-pass cloth simulation.

By removing the gravity from the simulation, and adding negative pressure values – which makes the bag shrink onto the phone – I can achieve good results from the first simulation.

To add the fold on top, I grouped the vertices according to the areas I didn’t want to create a fold on, and preserved them.


The result is OK but it needs a further pass.

To get to the end, I simulated once more with a bit more negative pressure and fixed the results by using soft selection and carefully removing any unwanted errors, like too much distance between the flap at the top and the main body of the bag, or extreme thickness near the phone area.


All the steps above have the correct UVs but for showcasing purposes, I only added the texture on the first bag.

While working on a simulation, shape is important but ensuring the textures deform correctly is crucial.
I recommend checking for this at every stage of the simulation.

Texturing the Phone


First, I baked all my normal maps using Marmoset Toolbag.

Nothing fancy here, just the usual workflow as described in their ‘The Toolbag Baking Tutorial’. It’s really important to use the bake groups function in particular.

There's one small detail: I don't use skew painting to fix the bending of details on the flatter areas of the model. Instead, I bake two normal maps, one with a soft cage and another without a cage.

This gives me the flexibility to mask between the two maps and reimport the models in case I need to tweak anything outside of Marmoset.

Unfortunately, painted skew fixes tend to disappear when models are reimported, so with this workflow I avoid painting the masks more than once.
I also use the opportunity to remove any details that I will replace later in Painter.


After making sure my bakes are clean and free of artifacts, I can start with texturing. I tend to break down this process into three stages: materials definition, detailing, and storytelling.

A good rule of thumb for these is to first get your materials as they would look when coming out of the factory, then after transport and around 1-2 days of light use, and finally to add any elements that relate to your story, like more wear specific to the actions the object has been through in its life.

Each detail or group of details should say something about the history of your asset.

The material definition is quite simple if you break down the logic behind what makes your material look like it does. Glossy surfaces usually have a very flat microscopic surface, which leads to low roughness levels.

Plastics are molded from hot but not free-flowing melts, so the material doesn't get as good an opportunity to lie perfectly flat, and we get a bumpy surface.

Following this logic, we can start to analyze each material, comparing it to our reference, and assigning single values that make your material look and feel realistic. It’s important to get subtle variations between the different materials in terms of color.

The keys on this one, for example, are slightly lighter than the body, which helps break the flatness of those surfaces.

Below are images:

From left to right, material pass, base colour, metallic and roughness.


Making a semi-metallic plastic is quite a challenge. I believe that we, as artists, are sometimes super scared to play with the values and we tend to forget that results matter and that as long as the material looks good under all lighting conditions, we can do pretty much anything with the textures we create.

The value for the semi-metallic rubber buttons is around 0.65 metallic, while the body of the phone sits around 0.95.

After getting my base materials, I add any markings, logos, or stickers by using projections on various fill layers masking flat colors.

This way I have the freedom to come back in the future to reposition any details that feel out of place.

After this first pass, I started to add surface details based on textures and noises. Let’s look at the metallic plastic’s full process.

Setting up the materials in a way that allows maximum control from one layer is crucial for my workflow.

For example, the main plastic on the body contains this micro pearl that makes it feel both plastic and metallic.

To achieve something like this with a good amount of flexibility, I can use a “User0” channel renamed to “MasksAnchoring” to capture a Cells 1 noise into an anchor point that is later used on the base color, roughness, and normals.


The normals input for this layer contains a simplified version of Ben Redmond’s Faceted Normals Filter that I replicated using Substance Designer. It helps me bring some anisotropy to the surface using the same “Cells_Tiling” anchor input, further accentuating the flaked look.

As a disclaimer, you probably don’t want to be applying details from such an up-close view. I only zoomed in this far to get a good shot for this article, but details should hold together under all lighting conditions and view distances.

After getting the smallest details in, I add a layer of dust to the roughness to break up the uniformity of the surface. I tend to leave the strength controls handy to have an easier time during the polishing stages.

At this point, I’m happy with the second pass, so I can move on to applying more detail to the rest of the materials.


Screen Glass

So far, we have established the basics of base materials and surface detailing. We can easily apply this to the glass, which is composed of only 2 layers!

It’s important to first have a way to preview our work.

By enabling the opacity channel and choosing the “pbr-metal-rough-with-alpha-blending” shader, we can now start to tweak the opacity values of our fill layers.


Nearly everyone has a phone nowadays. I encourage you to leave PureRef aside when you can and feel how these surfaces behave in real life.

For this one, I noticed the few details my own phone screen had and replicated them with exaggeration to achieve a more worn-down look. Let’s start with fingerprints and smudges.

My process starts with photo-based textures. I recently found a great library that I love, which shines at providing nice results for things like these.


After applying one of these surface imperfections to the fill layer, I start playing around with levels and generate a more subtle effect.

Anchoring the results allows me to once again add them to the faceted normals filter, which nicely breaks up the specularity of the lights, to then combine it with roughness to get the best of both worlds.


I’m comfortable now with the base pass for the materials, so it’s time to move on to the storytelling pass. During this pass, I will try to mimic certain actions, which require movement and directionality.

For the scratches on the screen, I have a very similar setup to the smudges, just a layer taking its information from the channels I will use: base color and roughness, and an anchor point to wear down the logos based on the scratches.

Overall, I try to use combinations of very simple layers, especially when introducing my students to more advanced setups like anchor points or passthrough layers.


Based on the screen’s details, I can later decide where to place the wear on the rest of the model. A scratch on the screen must come from something scraping it, which will also damage other areas in its direction.

After identifying the strongest details, their accumulation, and direction, I can add a few areas of heavy damage to my model. I also added the details to any exposed areas, like some of the corners.


For this project, I also wanted to explore capturing and processing my brushes, so I used a set of alphas I photographed from a family member’s van.

I loved the directionality I was able to get because of it, and the 4K+ resolution of these alphas allowed me to scale them up or down as I saw fit. Extracting these alphas was easy, just inverting, adding some levels, and cropping the image does the trick.


Once I have a nice collection of these, I can import them into Substance Painter to use as stencils. The layer setup for the damaged areas is also quite simple.

Layering detailed base materials on top of each other can sometimes result in a lot of noise, so when I can, I try to keep the detail in the masks rather than the surfaces. The transitions will be more than enough to finalize any definition.

In red, the area with more importance, followed by green as a secondary, lower opacity layer, and blue, with much lower opacity. I used one paint layer per weight and used two extra layers to convert details from primary to secondary or tertiary.

This way I have the flexibility to add detail and blend the edges to generate a layered wear effect even after painting in each respective layer.

As you can see, it’s a slow but steady way of building the details you want. It’s also really flexible, which in production always ends up saving time during feedback rounds.

Buttons & Misc Areas

Adding wear to the rest of the object was nothing more than repeating the previous steps while seeking slightly different results. For the buttons, most of the wear I added was just meant to tie them in with the wear on the rest of the object.

I did replace the normals on the top speaker area with mesh detail, created by tiling a default cloth pattern and making it metallic. There’s not much to this process other than adjusting opacities and ensuring the look is unified.


After adding a few extra alphas and masking the damages until I’m happy with the results, the body of the phone is completed. I mentioned before that I like to keep my layers simple and allow a good amount of detail to come from the blending of simple layers.

Maybe because I love product shots, this is key for me: an easy-to-look-at model with areas of interest as well as resting points, which also brings more attention to the overall composition.


It’s important to note that the details I added to this model had to be exaggerated quite a bit to ensure they were visible with the plastic bag on top.

Normally, I wouldn't push the damage so much in areas like the screen, since these details should only be visible under stronger lights. As you will see later, this push on the details is compensated for when the plastic gets added to the model.


This part of the model was by far the most complex and fun of them all. The process starts by creating the two layers that compose the graphics on the screen: the screen saver and the information overlay.

I got the first one from Pexels, and the second one is a reconstruction of a reference image.


Once I have my source materials, I can start setting up the screen layer. The first step is to create a mask that represents the pixels on a screen.

To achieve this, I used a square shape that I tiled until each tile was the size of a large screen pixel. Using a larger pixel ensures the detail is kept at further viewing distances.

After this, I added my information layer. This looks OK thanks to the mask, but it behaves unrealistically. Pixels on a screen are either entirely lit at whatever brightness or unlit completely. Right now, some pixels are split, and only a part of the pixel is lit.

To get the information layer to fill a pixel before deciding its brightness, I blurred the layer and used the mask to apply slope blur to the results.

As you can see now, pixels are as filled as they would be in reality.
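The "a screen pixel is fully lit or fully off" rule can be sketched outside Painter too. The block-quantize function below is my own approximation of the same goal, not the blur + slope blur setup described above: each block (one "screen pixel") is averaged, then either kept at a uniform brightness or turned off entirely.

```python
def quantize_to_screen_pixels(image, block=4, threshold=0.5):
    """Force every `block` x `block` cell to be uniformly lit or unlit,
    based on its average brightness. `image` is a list of rows of
    floats in [0, 1]; returns a new image of the same size."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            cell = [image[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            avg = sum(cell) / len(cell)
            # Keep the average brightness if the cell is mostly lit,
            # otherwise switch the whole "screen pixel" off.
            lit = avg if avg >= threshold else 0.0
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    out[y][x] = lit
    return out
```

Fed an image where a block is only partially covered by the graphics, the whole block either lights up or goes dark, which is exactly the behavior the slope-blur trick produces in Painter.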


After achieving this for the information layer, I can anchor it for future recovery to do the same for the background. Recoloring the background provides some nice pop to the colors and differentiates the texture from its source.

Recovering the information layer is easy since it’s a black-and-white anchor point, which I can add to the background by blending it with “Linear Dodge”.


Adding the dead pixels was easy once the displays were set up. They consist of a few fill layers with flat magenta and cyan colors on the emissive.

They are placed using the UV projection and adjusted to follow specific lines and individual pixels.

Setting up the Materials for the Phone

I exported my textures from Substance Painter and plugged them into two materials, one for the body of the phone and a second one for the glass on the screen.

This second material uses refraction to mimic the transparency of the glass and is applied to a separate object, replicating the real assembly. This is the only difference from the body's material.


This refraction layer uses the logo as a mask to ensure there’s no transparency on the “NVAVN” letters.

The LCD uses the base color as an input for the emissive as well, making sure the screen reads well through the smoked glass.


All the transmissive materials on the scene need ray tracing enabled to use their full potential, which will also benefit the calculations of the lighting.

Texturing the Bag

Texturing the bag was a challenge I had never faced before, as my whole experience with transparent materials relied on the ability to customize the shader without coding with tools like Unreal, Unity, or in-house engines.

Marmoset Toolbag has a nice collection of parameters and ways to extend the functionality of base shaders. At first, my logic told me plastic should be, as in reality, a refractive surface, so I enabled “Refraction” on the shader’s “Transmission” tab.

Very quickly after starting the setup, I realized that the main driver for how my textures should look was the shader itself, so I developed any textures that I needed to solve the new problems I was going to face.

I started the process with a very basic refraction setup. The first result shows a very thick refraction, looking more like glass than plastic. Lowering the refractive index from 1.5 (Glass) to 1 (no refraction) helps.

The plastic I’m going for is almost hair-thin, so it should not dominate the phone or the background this much.
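To put numbers on why an IOR of 1.5 reads as glass, here is a quick sketch (my own illustration, independent of Toolbag’s internals) of the Fresnel reflectance at normal incidence, which is the quantity the index of refraction drives:

```python
def normal_incidence_reflectance(ior: float, medium_ior: float = 1.0) -> float:
    """Fresnel reflectance at 0 degrees (F0) for a dielectric surface in air."""
    return ((ior - medium_ior) / (ior + medium_ior)) ** 2

# glass (IOR 1.5) reflects about 4% of light head-on
print(round(normal_incidence_reflectance(1.5), 3))  # 0.04
# IOR 1.0 matches the surrounding air: no reflection or bending at all
print(round(normal_incidence_reflectance(1.0), 3))  # 0.0
```

This is why dragging the index down toward 1 weakens the glassy look: both the bending of the background and the head-on reflections fade out together.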


I played around with the refraction settings, but I was completely unable to get something I liked from it. I struggled quite a bit until I realized that to further lower the opacity of this material, I needed to resort to an extra setting, Transparency.

Adjusting the transparency helped get rid of the glass-like feeling of the object. After a lot of headaches, I also discovered that the sky I was using had really wide lights, which made the material not feel glossy, even if the roughness was set to 0.

Adding a few lights with smaller diameters, I was able to better capture a glossy surface, but now the reflections were dim.


At this point, I remembered a colleague once told me to add metalness to glass to boost its reflections. So I switched the Reflectivity model to “Advanced Metalness”, which exposes a specular slider that gives the reflections a lot more power on top of boosting the metallic value, letting me achieve the desired results.

This is sometimes a controversial topic. “Metalness should be either 0 or 1” is what a lot of people will say. I’m not a PBR “under the hood” expert, but I think everything we use to develop our art is a tool that serves art direction.

Artistic and technical rules are great for getting good results with a unified direction, but they can be broken if that means achieving what otherwise you couldn’t.

In this case, adding a bit of metalness to the plastic was the key to achieving the direction I was going for.
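For context on why a touch of metalness boosts the reflections: in the standard metalness PBR workflow, the surface’s base reflectivity (F0) is blended from a small dielectric value toward the base color as metalness rises. A minimal sketch, assuming the common Unreal-style specular remap (0.08 × specular); Toolbag’s exact internals may differ:

```python
def lerp(a: float, b: float, t: float) -> float:
    return a + (b - a) * t

def f0_from_metalness(base_color, metalness, specular=0.5):
    """Per-channel F0: dielectrics reflect ~4% (scaled by the specular slider),
    metals reflect their base color. Assumed remap, for illustration only."""
    dielectric_f0 = 0.08 * specular  # specular 0.5 -> F0 of 0.04
    return tuple(lerp(dielectric_f0, c, metalness) for c in base_color)

# pure dielectric plastic: F0 sits at ~0.04 regardless of base color
print(f0_from_metalness((0.8, 0.8, 0.8), 0.0))
# a touch of metalness pulls F0 toward the bright base color
print(f0_from_metalness((0.8, 0.8, 0.8), 0.2))
```

Even metalness 0.2 raises F0 from 0.04 to about 0.19 here, roughly five times brighter reflections, which is the non-physical but art-directable boost described above.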

Adding Textures and Masks

After getting a good hint that this was going to be possible, I started to work on the maps I was going to need. Looking at my references, I saw that these bags have some non-transparent labels.

One of my references was almost a perfectly flat scan of a real evidence bag, so I went ahead and used it as my base.

I got a Base Color texture and an alpha mask first by cleaning up the edges and reflections in Photoshop. The front of the bag is just a flat color.


Thanks to the nature of this material, the only areas that mattered were the paper ones, since the transparent plastic is better off without having much information on the color at all.

I can also use the mask I generated or variations of it to control the refraction, metallic, roughness, and transparency.

Let’s check out the results under better lighting. Applying the concept from before, smaller lights provide a better-defined specular.

Changing the sky/background can also help reveal flaws. In this case, the refraction always cast a darker tint over the background.

With darker skies this goes unnoticed, but once the background gets lighter, you can see how wrong it looks.


Solving this was yet another headache and a lot of playing around.

Since we are using transparency we could adapt any transmission properties our surface has to achieve a milky effect like the one on the reference below.


The best solution I found was to switch my transmission model from “Refraction” to “Thin Surface”.

There’s one more problem to solve. The material makes the plastic look way too thick. This is because of the evenness of the specular reflections.

Breaking these reflections up makes an otherwise solid-looking surface read as far more malleable and thin. To achieve this, I added a wrinkle normal map I generated from a creases texture in Substance Painter.

It blends from the edges in, giving the feeling that this is a much more delicate and thin plastic.


Extra shader controls would have helped immensely here: for example, a Fresnel mask I could adjust and contrast to get the correct amount of transparency and specularity at the edges while keeping the flatter areas more subtle.

I don’t know shader development outside of node-based approaches, so that was not an option for me. I could, however, fake the Fresnel effect by adding a gradient to the transparency map where the phone sits, and by drawing white lines on the scatter mask and transparency texture so they become whiter, simulating the welds these bags have.
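For reference, the effect being faked here is the Fresnel falloff, which shaders commonly compute with Schlick’s approximation: reflectivity stays near its base value where a surface faces the camera and climbs toward 1 at grazing angles, which is exactly why the bag’s edges should read more specular and less transparent than its flat front. A quick sketch:

```python
def schlick_fresnel(cos_theta: float, f0: float = 0.04) -> float:
    """Schlick's approximation of Fresnel reflectance.
    cos_theta is the cosine of the angle between view direction and normal."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# surface facing the camera: close to the base reflectance
print(round(schlick_fresnel(1.0), 3))  # 0.04
# grazing angle: almost fully reflective
print(round(schlick_fresnel(0.0), 3))  # 1.0
```

A hand-painted gradient mask is effectively a baked, camera-fixed version of this curve, which is why it only holds up for a known camera angle.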

This all results in the final maps, and the texturing for the bag is concluded.


I’d love to say that this bag was a huge effort in terms of texturing, but the reality of this material is that it was a really fine balance when it comes to lighting.

Strong lights would make the surface of the plastic look good, but blow out the phone’s brightness, and the correct lighting for the phone would turn the plastic invisible or too dark against the background.

The textures above help the details shine where they need to, but the key to this model was to light it correctly, which I’ll return to in a moment.

Scene Setup and Lighting

Before we can start lighting this scene it’s important to have good camera and render settings. I always start by setting up my field of view.

A very low field of view will result in nearly orthographic views, which will considerably flatten your object’s three-dimensionality. I like my cameras to use around a 50-135mm lens for shots like these.
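For those who think in field-of-view degrees rather than millimeters, focal length converts to horizontal FOV with a simple formula. A small sketch, assuming a full-frame 36 mm sensor width (Toolbag’s camera may use different conventions):

```python
import math

def horizontal_fov_deg(focal_length_mm: float, sensor_width_mm: float = 36.0) -> float:
    """Horizontal field of view in degrees for a given focal length."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# longer lenses give narrower FOVs and flatter perspective
for f in (50, 85, 135):
    print(f"{f}mm -> {horizontal_fov_deg(f):.1f} degrees")
```

Under this assumption, the 50–135 mm range works out to roughly 40 down to 15 degrees, which is why these shots keep perspective distortion subtle without going fully orthographic.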

It’s also very important to enable ACES tone mapping, which provides a much deeper, more tone-rich result. In the image below, Linear is on the left and ACES is on the right.
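Toolbag implements ACES internally, but to give a feel for what the curve does, here is Krzysztof Narkowicz’s widely used per-channel approximation of the ACES filmic fit (a sketch for illustration only, not Toolbag’s exact implementation). Highlights roll off smoothly toward 1 instead of clipping, which is where the deeper, richer tones come from:

```python
def aces_approx(x: float) -> float:
    """Narkowicz's approximation of the ACES filmic tone curve (per channel)."""
    mapped = (x * (2.51 * x + 0.03)) / (x * (2.43 * x + 0.59) + 0.14)
    return min(max(mapped, 0.0), 1.0)

# linear values at or above 1.0 would clip; ACES compresses them gently
for v in (0.18, 0.5, 1.0, 2.0):
    print(f"linear {v:.2f} -> aces {aces_approx(v):.3f}")
```

Note how a linear value of 1.0 maps to about 0.8 and 2.0 still stays below 1.0: bright areas retain gradation instead of blowing out to flat white.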


I further accentuate this richness with a very slight “S curve” on the curves editor, with an extra touch of interest by coloring the brightest highlights blue with another custom curve on the blue channel.


Enabling “Frame Opacity” and setting the appropriate render width and height (1920×1440 for this one) is also crucial to establishing a well-composed shot.

Speaking of composition, I like to think about mine as “weights”, where everything added to the frame, including lighting, adds or removes weight to the areas it affects.

Very bright or very dark areas will have certain weights and so will saturated colors or busy areas of detail and shapes.

Compositing the Shots

Further explaining this concept, the way I think of it is if I were to pinch the image with my fingers, these weights would balance the image from falling to one side or the other.

If I pinch the image below from the center, it would fall to the side, but if I were to pinch it from the “Point of Balance,” it would hold still.

Drawing a few lines that cross your most prominent details can help you detect which areas have the most chance of appealing to the viewer’s eyes when those paths intersect.

These lines are called “Leading Lines,” which are kind of the reading order of the composition.


Concepts like these give me a deeper understanding of where I want my objects on screen, and of how to make the composition feel right from the object placement I need by reinforcing these leading lines.

For this piece, I wanted to replicate the UI from Resident Evil 8, which had a very bottom-left heavy influence on the composition, as this is where they place large texts and the controls.

The UI is added using planes with alphas, which are parented to the camera. The icons are from Julio Cacko’s FREE Input Prompts Pack.

(Thanks, Julio, for creating such a useful pack!).

To adjust for this and create this point of balance, I had to bring more importance to the opposite side of the image, the top-right.

Following something as basic as the rule of thirds is always a great start, but in this situation, I needed to break it a bit by moving my object’s weight farther to the right and creating larger, brighter specular reflections on the plastic, for example.

The image above has been converted to black and white to remove distractions from color, with the hopes of providing a clearer example.



These points of balance don’t have to really be anywhere in particular. I find it interesting to play with compositions until I get pleasing results, which usually means I have a balanced image with a few points of interest with different ranges of importance, far enough from each other that I can look at them without getting distracted by the rest of them – they don’t fight each other.

If you want to understand some of the basics of composition from a less subjective perspective than the one I provided, this article is a great place to get started.

I separate my lights into categories, and I tend to explore a different look every time I make a new asset. Still, I have a few golden rules for achieving realism: good contrast in my lighting, allowing for both shadows and almost pure white bright spots, and softening the shadows by increasing the light source’s size.

Starting from the smallest speckles, I add detail. Once the smaller specular reflections are in, I love adding color to the dimmer lights, as it helps the shadows catch a bit more color and interest.

With the colors in, I can start adding the brighter lights that hold the most importance.

For this setup though, I couldn’t increase any light source sizes since this would increase the specular size as well, which would, as discussed before, remove the material definition from the plastic.


To compensate for this, I created variations of my main light with slightly different positions and brightness – “Main_Boost”, “Main_A”, and “Main_B” – which together blur the hard shadows cast by a really small, bright light while keeping the specular reflections sharp.

To bring it all together, I can now enable my sky and get a good balance between the background’s brightness and the object’s lighting.


The results look compelling and in my opinion, well balanced.

Translating these concepts across the remaining shots – varying the background color to show things in different contexts, and switching up the backgrounds and angles – provides enough variation to showcase the model in a small but digestible repertoire of shots.


If I could go back and change something in the last two shots, I’d make the wrinkle normal map read more clearly under those lighting conditions.

That’s more proof of how sensitive this material is: what looked great in one shot needed touch-ups in another. Lighting was key on this piece, and so was paying really close attention to the concepts I described here.

Writing breakdowns is great for this. You are forced to understand what you did so you can explain it, and this settles a lot of the key concepts that make things work.

This type of analysis is what makes us grow, so I highly encourage anyone to do this exercise. It takes time, but it helps you understand what your path for growth can be, even if the post only ends up being published on your blog.

And that’s it.

Conclusion and Advice

It was a blast to break down this model. Choosing a portfolio piece to make is not easy, so I hope this one serves as a good example that, given the right context, even a simple object can quickly be elevated into a piece you can be proud of.

Before heading off, I’d like to thank my students at Onyx Academy for supporting my journey and giving my career a new meaning, and everyone who supported me to get to where I am today, especially Onyx Studio, and of course the Games Artist community for allowing me to spread my words to a larger audience.

I hope you enjoyed this breakdown and if any questions arise, please reach out to me via Artstation.

Thank you for reading, and have a great day!