Real-time ray-tracing is hardware intensive: it requires dedicated hardware to run the complex calculations behind advanced lighting effects, and it is also heavy on VRAM. This has led to situations where 8GB video cards like the GeForce RTX 4060 Ti run into bottlenecks in heavy RT workloads.
According to a recently published patent filed by Microsoft, the idea is to use level-of-detail (LOD) to scale ray-tracing performance and quality as needed. LOD systems already exist in games: objects in the distance are rendered with less geometry or detail to boost performance. The same applies to things like texture quality and foliage, which look their best the closer your in-game character gets to a particular spot.
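To illustrate the basic idea, here is a minimal C++ sketch of distance-based LOD selection. The function name, thresholds, and number of tiers are illustrative assumptions, not anything taken from the patent or from a real engine.

```cpp
#include <cmath>
#include <cstdio>

// Hypothetical LOD picker: objects farther from the camera get a coarser
// level of detail. The thresholds below are purely illustrative.
int SelectLOD(float distanceToCamera) {
    if (distanceToCamera < 10.0f)  return 0; // full geometry and textures
    if (distanceToCamera < 50.0f)  return 1; // reduced geometry
    if (distanceToCamera < 200.0f) return 2; // low-poly proxy
    return 3;                                // billboard / impostor
}

int main() {
    const float distances[] = { 5.0f, 30.0f, 120.0f, 500.0f };
    for (float d : distances)
        std::printf("distance %.0f -> LOD %d\n", d, SelectLOD(d));
}
```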
The patent describes a new “residency map for a sub-tree corresponding to a bounding volume hierarchy of objects,” where ray-tracing detail is calculated based on an LOD system. Distant ray tracing would be lower quality, with greater detail available up close, freeing up resources like VRAM.
The map would allow developers and DirectX to determine ray-tracing quality on the fly. Based on the patent, which you can read in full here, current RT rendering doesn’t include an LOD system, which could be one reason it’s so hardware-intensive. The result could be improved RT performance across GPUs like the GeForce RTX 4060 Ti and the GeForce RTX 3080 10GB, and even consoles like the PlayStation 5, which has around 12GB of memory accessible to developers.
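As a rough sketch of how a residency map might interact with BVH traversal, the C++ below clamps the LOD used for a ray to whatever is currently resident in VRAM for that sub-tree, so distant rays never touch the finest geometry. All type names, thresholds, and the traversal rule are assumptions for illustration; the patent does not publish an API.

```cpp
#include <vector>
#include <cstdint>
#include <cstdio>

// A BVH node tagged with the LOD it represents and the sub-tree it belongs to.
// These structures are hypothetical and only meant to convey the concept.
struct BVHNode {
    int subtreeId;   // index into the residency map
    int lod;         // level of detail of this node (0 = finest)
    int firstChild;  // -1 if there is no finer geometry below this node
};

// residency[subtreeId] = finest LOD currently streamed into VRAM for that sub-tree.
using ResidencyMap = std::vector<uint8_t>;

// Pick the LOD a ray is allowed to trace against: the coarser of what the
// hit distance calls for and what is actually resident in VRAM.
int EffectiveLOD(const ResidencyMap& residency, int subtreeId, float rayDistance) {
    int wantedLod   = rayDistance < 25.0f ? 0 : (rayDistance < 100.0f ? 1 : 2);
    int residentLod = residency[subtreeId];
    return wantedLod > residentLod ? wantedLod : residentLod;
}

// During traversal, stop descending once the node's LOD matches the effective
// LOD, so far-away rays are resolved against coarse, low-memory geometry.
bool StopDescent(const BVHNode& node, const ResidencyMap& residency, float rayDistance) {
    return node.firstChild < 0 ||
           node.lod >= EffectiveLOD(residency, node.subtreeId, rayDistance);
}

int main() {
    ResidencyMap residency = { 1 };            // only LOD 1 and coarser resident
    BVHNode node{ 0, /*lod=*/1, /*firstChild=*/4 };
    std::printf("near ray stops here: %d\n", StopDescent(node, residency, 10.0f));
    std::printf("far ray stops here:  %d\n", StopDescent(node, residency, 300.0f));
}
```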
This isn’t a replacement for upscaling technologies like DLSS, FSR, and XeSS, as ray-tracing will always require serious GPU power. Still, it is another smart software-driven approach to increasing visual fidelity for all gamers.