Alongside the RTX Global Illumination SDK, Nvidia also launched Deep Learning Super Sampling (DLSS) 2.0 today. At the heart of DLSS 2.0 is an artificial neural network that runs on Nvidia RTX Tensor Cores to boost frame rates while generating sharp frames whose quality approaches or exceeds native rendering.
DLSS 2.0 was trained on tens of thousands of high-resolution images rendered offline on a supercomputer at very low frame rates, using 64 samples per pixel. With the neural network's weights set, DLSS 2.0 then takes lower-resolution images as input and constructs high-resolution images. Nvidia distributes this trained deep learning model to RTX-based PCs via NVIDIA drivers and over-the-air (OTA) updates.
Utilizing Turing’s Tensor Cores, which provide up to 110 teraflops of dedicated computational power, DLSS 2.0 runs twice as fast as its predecessor. Nvidia claims this increased computational power makes it possible to run an intensive 3D game and a deep learning network simultaneously in real time. To further enhance performance, DLSS 2.0 uses temporal feedback techniques to render only one-quarter to one-half of the pixels while still delivering image quality comparable to native resolution.
Moreover, unlike the previous iteration, which required the neural network to be trained for each new game, DLSS 2.0 trains on non-game-specific content. The result is a generalized network that works across multiple games, which ultimately leads to faster game integrations and more DLSS titles.
DLSS 2.0 offers three image quality modes that control a game's internal rendering resolution: Quality, Balanced, and Performance. Of the three, Performance mode enables up to 4X super-resolution upscaling (e.g. 1080p → 4K).
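To make the relationship between quality mode and internal rendering resolution concrete, here is a minimal sketch. Only the Performance-mode factor (2x per axis, i.e. 4X the pixels, matching 1080p → 4K) follows from the figures above; the Quality and Balanced per-axis factors used below are illustrative placeholders, not official Nvidia values.

```python
# Illustrative per-axis upscaling factors for each DLSS 2.0 quality mode.
# Only "Performance" (2.0x per axis = 4X super resolution) is from the
# article; the other two are hypothetical values for demonstration.
PER_AXIS_SCALE = {
    "Quality": 1.5,      # assumption: renders ~44% of output pixels
    "Balanced": 1.7,     # assumption: between Quality and Performance
    "Performance": 2.0,  # 4X super resolution: 1080p -> 4K
}

def internal_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    """Return the (width, height) a game would render internally
    before DLSS upscales it to (out_w, out_h)."""
    scale = PER_AXIS_SCALE[mode]
    return round(out_w / scale), round(out_h / scale)

if __name__ == "__main__":
    for mode in PER_AXIS_SCALE:
        w, h = internal_resolution(3840, 2160, mode)
        frac = (w * h) / (3840 * 2160)
        print(f"{mode:12s} renders {w}x{h} ({frac:.0%} of output pixels)")
```

Under these assumed factors, a 4K target in Performance mode is rendered internally at 1920x1080, i.e. one-quarter of the output pixels, consistent with the "one-quarter to one-half" range quoted above.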
Furthermore, DLSS is now built into a custom branch of the Unreal Engine 4 codebase. If you are interested in developing with DLSS, you must first link your Epic Games and GitHub accounts and then access the development branches on GitHub. Further details can be found here.