Unreal Engine for Mobile VR - Adventures in Optimization

  • andreww290
  • 3 days ago
  • 3 min read



Last year, MotR delivered one of the largest projects in our history: a 40-minute epic VR adventure in which multiplayer groups explore the ancient underground palace of Qin Shi Huang, the first emperor to unify China. We produced this amazing project in Unreal Engine for Wevr and HTC.

The Underground Palace of Emperor Qin Shi Huang

HTC's Vive Vision is a headset developed for large-scale venue use, with hot-swappable batteries and a rugged design. It has some great advantages; however, the chip it uses for mobile VR rendering is only roughly equivalent to a Quest 2's. So we had our work cut out for us: render a massive-scale VR experience with minimal horsepower in Unreal Engine, which is known for being "too demanding and bloated". We clearly disagree!


View from above the burial palace and surrounding city

Unreal does indeed default to an "everything on" setup. Out of the box, UE5 enables Lumen, Nanite, and most of the AAA bells and whistles. There's a fair amount of work involved in disabling these features and limiting the engine to classic low-end light baking. Starting by removing unnecessary plugins, we did a great deal of performance testing to learn our polygon limits and which rendering features we needed to disable. One good attribute of the Vive Vision headset is that it has a generous amount of memory, so we were able to load more (and higher-resolution) textures than we expected. However, texture resolution does affect performance, so we chose wisely where to use higher resolutions and where we could reduce them. The headroom definitely resulted in some impressive scenes that seem to defy the hardware's limitations.
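As a rough sketch of what "turning everything off" looks like, the relevant console variables live under the renderer settings in DefaultEngine.ini. The exact values below are illustrative and may vary by engine version and project, but they capture the idea of trading Lumen and virtual shadow maps for baked lighting on mobile VR:

```ini
[/Script/Engine.RendererSettings]
; Use no dynamic GI (0 = None) instead of Lumen, and no dynamic reflections
r.DynamicGlobalIlluminationMethod=0
r.ReflectionMethod=0
; Skip virtual shadow maps and mobile HDR rendering
r.Shadow.Virtual.Enable=0
r.MobileHDR=False
; Keep classic static (baked) lighting available
r.AllowStaticLighting=True
```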


How many Terracotta warriors CAN we render?

One of the most critical things we could do to render large numbers of objects was intelligent instancing. While UE has functions to automatically instance identical objects, we found that being very organized and strategic was the most effective approach. We made extensive use of PLAs (Packed Level Actors) and ISMs (Instanced Static Meshes) to massively reduce draw calls. Our draw call budget hovered around 100-200 in any active view. A very handy measurement tool we used a lot was Tools > Audit > Statistics.

This tool can run during PIE playback, letting us keep a close eye on exactly what was consuming our draw calls with a simple, clean readout.
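The draw-call savings from instancing come down to simple bookkeeping: without instancing, every placed object costs a draw call, while ISMs and PLAs collapse everything sharing the same mesh-and-material pair into one. The scene contents and counts below are hypothetical, purely to illustrate the arithmetic:

```python
from collections import Counter

# Hypothetical scene: one (mesh_asset, material) entry per placed object.
scene = (
    [("SM_TerracottaWarrior", "M_Clay")] * 1200
    + [("SM_Pillar", "M_Stone")] * 80
    + [("SM_Lantern", "M_Bronze")] * 300
)

# Without instancing, every object is its own draw call.
naive_draw_calls = len(scene)

# With instancing, all objects sharing a (mesh, material) pair
# collapse into a single instanced draw call.
instanced_draw_calls = len(Counter(scene))

print(naive_draw_calls, instanced_draw_calls)  # → 1580 3
```

With grouping like this, even thousands of warriors fit comfortably inside a 100-200 draw call budget.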


Workers assembling the terracotta warriors in the Workshop area

Another huge optimization we made very early in development was eliminating UE's built-in specular rendering. The default materials were all too complex to render in the HMD at solid frame rates, so we toggled Fully Rough on all of our materials and leaned into art styles that were more illustrative and less physically correct to account for this limitation. But we still needed reflective surfaces. We faked them by plugging simple reflection panoramas into the emissive channel of certain materials, which kept the materials very performant with only some reduction in realism.
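The material graph for this trick isn't code, but the math behind it is small: reflect the view direction about the surface normal, then use that direction to look up an equirectangular panorama texture. A minimal sketch of those two steps (vectors and the test values are illustrative):

```python
import math

def reflect(view, normal):
    """Reflect the view direction about the surface normal (both unit vectors)."""
    d = sum(v * n for v, n in zip(view, normal))
    return tuple(v - 2.0 * d * n for v, n in zip(view, normal))

def panorama_uv(direction):
    """Map a unit direction vector to equirectangular (u, v) in [0, 1]."""
    x, y, z = direction
    u = 0.5 + math.atan2(y, x) / (2.0 * math.pi)
    v = 0.5 - math.asin(max(-1.0, min(1.0, z))) / math.pi
    return u, v

# Looking straight along -X at a wall facing +X: the reflection
# points back along +X, which lands at the center of the panorama.
r = reflect((-1.0, 0.0, 0.0), (1.0, 0.0, 0.0))
print(panorama_uv(r))  # → (0.5, 0.5)
```

Sampling a pre-rendered panorama through the emissive channel costs a single texture fetch, which is why it stays so cheap compared to real reflections.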


Many treasures and gold decoration required custom fake reflection materials

There were also serious limitations regarding the CPU and how many skeletal meshes we could run at a time. We used multiple techniques to improve this. Our host character is the most complex, with the most joints, including all fingers. Many tertiary characters' hands were simplified to mitten joints from the outset (all fingers joined into a single joint chain, except the thumb, which gets its own small chain). Another method we implemented was joint LODs on characters: not only does the geometry get LOD'd, but the joint count also drops as characters move away from the viewer. This frees up enough CPU time to allow multiple characters on camera at the same time, such as in the battle sequence.
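Since skinning cost scales with the number of animated joints, the savings from joint LODs are easy to sketch. The joint counts, distance thresholds, and crowd layout below are hypothetical, just to show how quickly the budget recovers when distant characters drop joints:

```python
# Hypothetical joint-LOD table: joints animated at each LOD level.
# LOD0 is the full rig with fingers; higher LODs drop joints.
JOINT_LOD = {0: 120, 1: 60, 2: 30}

def joint_lod_for_distance(distance_m):
    """Pick a joint LOD from viewer distance; thresholds are illustrative."""
    if distance_m < 5.0:
        return 0
    if distance_m < 15.0:
        return 1
    return 2

# Battle-sequence sketch: one near character, a few mid, many far.
distances = [3.0] + [10.0] * 4 + [25.0] * 20

full_cost = len(distances) * JOINT_LOD[0]
lod_cost = sum(JOINT_LOD[joint_lod_for_distance(d)] for d in distances)
print(full_cost, lod_cost)  # → 3000 960
```

In this toy scene, joint LODs cut the animated joint count by more than two thirds, which is exactly the kind of headroom a crowded shot needs.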



LODs for every static mesh that isn't a simple plane were a must for performance as well. We used Unreal's LOD visualizer heavily in every level to make sure assets were behaving as expected, and the built-in Unreal LOD reduction tools for both static and skeletal meshes made generating and managing LODs relatively simple.
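The reduction tools boil down to a triangle budget per LOD level. As a simplified sketch, assuming each LOD keeps a percentage of the previous level's triangles (the engine's settings can also be expressed relative to the base mesh), the resulting chain looks like:

```python
def lod_triangle_counts(lod0_tris, percent_per_lod):
    """Apply a per-level 'percent triangles' reduction schedule.

    lod0_tris: triangle count of the full-detail mesh (LOD0).
    percent_per_lod: fraction of the *previous* LOD kept at each level.
    """
    counts = [lod0_tris]
    for pct in percent_per_lod:
        counts.append(int(counts[-1] * pct))
    return counts

# Hypothetical asset: 20k triangles, halving at each of three LOD levels.
print(lod_triangle_counts(20000, [0.5, 0.5, 0.5]))  # → [20000, 10000, 5000, 2500]
```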


LOD Visualizer showing R, G, B as levels of detail from LOD1 to LOD3. White/grey is LOD0

There are so many other methods and optimizations we used to make this huge VR experience perform on mobile hardware. I couldn't really cover them all here, but I will be presenting some talks in the future, so stay tuned!


Andrewww


© 2024 by Millions of Tiny Robots, Inc.