A vista – defined by Merriam-Webster as “a distant, pleasing view through or along an avenue or opening.”
As Environment Artists, we are often taught to focus on the assets closest to the camera or the player and lavish the most detail and attention upon them. For the most part this is a solid methodology, as our time and resources are limited. There are technical limitations to consider as well: draw distance, volumetric fog, billboards and the like all affect the quality of the distant elements in our scenes.
But the distant is often as critical as the close. Vista shots are, in my opinion, vital for establishing the scale and mood of a scene as a whole. For example, Gears 5 has some amazing vista shots which use a combination of meshes and matte painting cards to create some breathtaking views.
(Vistas for Gears 5, Matthew Ellis: https://www.artstation.com/artwork/28Ww8K)
The multidisciplinary nature of vista art is really interesting to me. The way 3D meshes, matte paintings, HDRIs and other components come together to form a cohesive scene that reads as completely 3D is fascinating. So the genesis of this project was to learn more about vista creation and compile some techniques that I can (hopefully) use in the future.
Before I carry on with the method I used, here are some great artists focusing on vistas to draw inspiration from!
Matthew Ellis: https://www.artstation.com/matellis
Ethan Ayer: https://www.artstation.com/ethanayervfx
Bryan Adams: https://www.artstation.com/delta307
Tony Arechiga: https://www.artstation.com/tarechiga
There are a number of world-builders out there for generating heightmaps and data for landscape creation, such as World Machine and QuadSpinner Gaea. They come with a few limitations, however.
First, if you’ve never used node-based software such as Substance Designer, these programs can be confusing to learn, and there isn’t an abundance of tutorial content available for them. Even then, most of the tutorials I’ve personally found produce the same generic hills and mountains that don’t create a lot of visual interest in a scene on their own.
Second, once you’ve generated your landscape, there’s still a lot of work left to be done.
You need splat maps to define where different material blending will take place, you may need Landscape Grass Types (UE4) set up to populate your terrain with trees and foliage, and you will definitely need to supplement your rocky, more vertical areas with additional meshes just to flesh things out. These are all established methods which work well; I just couldn’t help but wonder if I could try something which diverged from the norm a bit.
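To make the splat-map idea a bit more concrete, here is a minimal sketch of deriving blend weights from terrain slope. The function name, threshold and two-channel layout are my own assumptions for illustration (using NumPy), not any engine’s actual pipeline:

```python
import numpy as np

def slope_splat(height, cell_size=1.0, rock_threshold=0.7):
    """Derive grass/rock blend weights from a heightmap's slope."""
    # Gradient magnitude approximates slope steepness per cell.
    gy, gx = np.gradient(height, cell_size)
    slope = np.sqrt(gx ** 2 + gy ** 2)
    # Anything at or beyond the threshold is fully rock.
    rock = np.clip(slope / rock_threshold, 0.0, 1.0)
    grass = 1.0 - rock
    # Channels sum to 1, ready to drive a two-layer material blend.
    return np.stack([grass, rock], axis=-1)
```

In practice you would paint or procedurally refine a mask like this further, but slope-based weighting is the usual starting point for separating cliff material from grass.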
The challenge for me when starting out this project was to find a method which could mitigate some of these problems, and hopefully create a result which can realistically be used in production.
I had a huge ‘what if’ moment when considering how I could go about this project. I have a fair bit of photogrammetry experience- that is to say, I coerced my university professor into letting me do my final engineering thesis on the subject (despite it having nothing to do with construction)- so I felt I had a rather good grasp of the process of getting terrain from raw scan into engine.
The problem was that I don’t have a drone, nor any particularly interesting landmarks in my area which would make for visually captivating backdrops.
Then it came to me- what if I used Google Earth? There is some astounding detail to be found in the work Google has done with its 3D map, especially in well-known locations in the US where they’ve done the most scanning. Of course, using heightmaps from real-world data is nothing new, and websites like https://terrain.party/ allow you to download maps for use, but I wanted to try a photogrammetry approach, which could perfectly capture real-world colour, dimension and form.
I decided to try and capture smaller landmarks to see what the upper limit for this process would be. How detailed are Google’s scans up close, and how much screen space can they occupy in-engine before they don’t hold up to scrutiny?
I decided to use rock formations from the Cathedral Rock area in Arizona, USA. This was partly due to the fact that the quality and coverage of scans in the United States and surrounds seems to be much greater when compared to other parts of the world. They also look pretty cool!
The scan process is actually made easier by using Google Earth. It removes a lot of real-world limitations, like drone battery life or time constraints. You can simply tab into fullscreen mode in your browser using F11 and save screenshots as you orbit your POI. I just used the Windows + Prt Scrn shortcut, which automatically saves your screen to the Pictures\Screenshots folder.
Once you have your multiple screenshots saved somewhere safe, you can import them into the photogrammetry software of your choice for processing. I used RealityCapture, which is paid software, but you can also use free alternatives like Meshroom: https://github.com/alicevision/meshroom.
I have not used the latter in recent years, so I can’t speak for its results.
Your ‘flight paths’, so to speak, should look something like this- with multiple orbits around your point of interest- attempting to capture it from every angle for reconstruction. I captured a few areas in the region for both the midground and background of my composition.
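To illustrate what even angular coverage looks like, here is a small hypothetical helper (not part of RealityCapture or any capture tool- the names and defaults are my own) that generates staggered orbit viewpoints as azimuth/elevation pairs:

```python
def orbit_viewpoints(orbits=3, shots_per_orbit=24,
                     min_elev_deg=15.0, max_elev_deg=60.0):
    """Generate (azimuth, elevation) pairs for orbit-style capture."""
    views = []
    for ring in range(orbits):
        # Spread rings evenly between the min and max elevation.
        t = ring / max(orbits - 1, 1)
        elev = min_elev_deg + t * (max_elev_deg - min_elev_deg)
        # Stagger each ring so shots don't stack in vertical columns.
        offset = (360.0 / shots_per_orbit) * t * 0.5
        for i in range(shots_per_orbit):
            az = (offset + i * 360.0 / shots_per_orbit) % 360.0
            views.append((az, elev))
    return views
```

Three rings of 24 shots gives 72 overlapping views- more than enough parallax between neighbouring shots for the solver to reconstruct the formation.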
After this comes the process we all love the most- cleanup and retopo! The raw scan data is pretty detailed, and you could of course add more detail in ZBrush if you wish. For my needs, however, I just stuck with a quick ZRemesh, unwrapped the mesh and transferred my diffuse map, as well as baking out normal and AO maps.
You could also cut interesting formations from the photoscan and clean them up to use as a base for other rock sculpts! This is useful if you would like to set up formations in the normal way, with masks and multiple tileables being layered.
Now for some technical stuff- getting the scans into engine and making sure they look as good as possible. Tri count isn’t as big a factor as I would’ve previously thought. The closer mountains originally consisted of just under 500k tris, but by playing around with the LOD bias inside UE4 I found that around half that amount would also suffice (keeping in mind that these formations take up a large amount of screen space).
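The screen-space reasoning can be sketched as a toy heuristic. The thresholds and names below are my own assumptions for illustration, not UE4’s actual LOD selection logic- but they capture the idea that halving the tri budget is fine once a mesh covers less of the screen:

```python
def pick_lod(screen_coverage, base_tris=500_000, num_lods=4):
    """Map a mesh's screen coverage (0..1) to an LOD and tri budget."""
    lod = 0
    threshold = 0.5  # full detail above half the screen (assumed)
    # Each LOD step halves both the coverage threshold and the tris.
    while lod < num_lods - 1 and screen_coverage < threshold:
        lod += 1
        threshold /= 2.0
    return lod, base_tris // (2 ** lod)
```

A formation filling a third of the screen would land on LOD 1 here- around 250k tris, which matches the “half the original count still suffices” observation above.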
Next comes the step that I found most tricky- delighting the assets. There’s a reason artists prefer overcast/cloudy weather for photogrammetry: with no direct lighting, the subject is lit evenly from all angles, which in the best case means little to no delighting is required. Unfortunately, the people at Google understandably did not have these considerations in mind when they were out scanning the world.
This means that the lighting information in my scans was very harsh and resulted in high-contrast shadows which were difficult to remove. I tried my best using the delight filter in Substance Alchemist; however, the results were far from perfect. In the future, I might make sure to use terrain from Google Earth which has more favourable lighting conditions for an overall better result.
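For the curious, the core idea behind many delighting filters can be sketched as dividing out low-frequency shading. This is a crude NumPy-only illustration (the function name, kernel size and epsilon are my own choices, not Alchemist’s algorithm), and like the real tools it struggles with exactly the hard shadow edges I ran into:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def naive_delight(albedo, kernel=31, eps=1e-4):
    """Divide low-frequency shading out of a scanned albedo.

    albedo: float array in [0, 1], shape (H, W, 3).
    """
    lum = albedo.mean(axis=-1)
    pad = kernel // 2
    # Box-blurred luminance stands in for the baked-in soft lighting.
    padded = np.pad(lum, pad, mode='edge')
    shading = sliding_window_view(padded, (kernel, kernel)).mean(axis=(-2, -1))
    # Normalise so overall brightness stays roughly the same.
    ratio = (shading + eps) / (lum.mean() + eps)
    return np.clip(albedo / ratio[..., None], 0.0, 1.0)
```

Soft gradients get flattened nicely, but a crisp shadow edge changes faster than the blur can track, so it leaks through- which is why harshly lit scans stay difficult no matter the tool.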
Finally, once your assets are in engine and you’ve decided on your final composition/layout, it’s time for the part which I think adds the most to the overall fidelity: supplementing your photoscans. In this instance, I noticed that the foliage from the photoscan was the biggest eyesore. As expected, it resulted in blocky green formations which did not look good at all.
So I went into SpeedTree and created a simple bush which I could place over these areas, and then tinted the albedo and the subsurface colour to more closely match the scan to help with blending. It helped a lot for those smaller details, and the subsurface scattering in the sunlight was quite effective at breaking up the silhouette of the scan. This kind of supplementation can be done with other assets as well. You could add some trees where you like or merge some Megascans rocks to increase the level of detail where needed.
To finish off, let’s talk about optimization a little bit. I originally baked the scans at 8K and 4K, respectively. Inside UE4, I could effectively downscale the smaller mountain to 2K or even 1K without much noticeable loss in detail. Due to the size of the larger formation, its quality tended to drop more quickly.
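As a rough back-of-the-envelope illustration of why that downscaling matters, here is a hypothetical helper (my own naming and assumptions, figuring uncompressed RGBA8- real block-compressed formats like BC1/BC7 are several times smaller):

```python
def texture_memory_mb(resolution, bytes_per_pixel=4, mipmaps=True):
    """Approximate memory for a square texture in megabytes.

    bytes_per_pixel=4 assumes uncompressed RGBA8. The full mip
    chain adds roughly one third on top of the base level.
    """
    base = resolution * resolution * bytes_per_pixel
    total = base * 4 / 3 if mipmaps else base
    return total / (1024 * 1024)
```

By this estimate an 8K base level is 256 MB uncompressed versus 16 MB at 2K- a 16x saving per texture, which is why dropping resolution on distant scans pays off so quickly.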
It stands to reason then that this technique would offer better results when taking up less screen space. Needless to say, the scans should not be used as a playable area, as the fidelity will deteriorate extremely quickly at that point- not to mention collision and other issues. The traditional route of heightmap terrain and foliage assets will definitely work better in that case, for areas the player has access to.
That’s pretty much it! That’s the entire process I used from start to finish for creating realistic vista shots for your scenes. I’d love to see if anyone uses or improves upon this method. A small disclaimer before I finish: I am not entirely sure of the legality of using Google’s scans for commercial use, so make sure you look up the relevant laws and legislation before doing so! If you have any questions or results to show off, I can be reached via message on ArtStation or on Discord at Cairo#7688!
I hope this small breakdown was helpful to some of you, and thank you to Games Artist UK for the opportunity to write this!