A technique that gets smarter over time.
NVIDIA’s new GeForce RTX graphics cards launched with several tantalizing hardware-based features that promised to improve performance and image quality. In addition to ray tracing acceleration, NVIDIA announced something called Deep Learning Super Sampling, or DLSS, which uses artificial intelligence to improve performance in games at higher resolutions and settings.
While DLSS was technically available on the RTX cards at launch, it wasn’t until recently that you could take advantage of it in games like Battlefield V and Metro Exodus. Here’s what you need to know about the new technology.
WHAT IS DLSS?
DLSS can be thought of as a new form of anti-aliasing. That is something of an oversimplification, but it’s useful as a concept. The technology uses a deep neural network – running on the Tensor cores in RTX GPUs – to pull out multi-dimensional features of a particular scene and then combine details from multiple frames to construct a high-quality final image. Aliasing refers to the jagged outlines of objects in a scene, and anti-aliasing mitigates this by filling in the gaps between pixels. This is all very much an approximation, though, and there are many different techniques, such as multi-sample, fast approximate, and super-sample anti-aliasing.
Ultimately, DLSS uses fewer input samples than traditional techniques such as temporal anti-aliasing (TAA), while avoiding the difficulties that arise in situations like a semi-transparent screen floating in front of a background that’s moving differently. Because it renders at a lower input sample count, DLSS is faster, and it then infers a result at the target resolution that can match TAA in image quality with roughly half the shading work.
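As a rough illustration – not NVIDIA’s actual implementation – here is a hypothetical Python sketch contrasting the two approaches. Traditional super-sampling shades far more pixels than the display needs and averages them down, while a DLSS-style path shades fewer pixels and relies on a trained network to fill in the rest. The `render` and `upscale_model` functions here are dummy placeholders.

```python
import numpy as np

def supersample_aa(render, target_h, target_w, factor=2):
    """Traditional super-sampling: shade factor^2 times as many pixels as the
    display needs, then average blocks back down to the target size."""
    hi_res = render(target_h * factor, target_w * factor)          # expensive: 4x the shading work at factor=2
    hi_res = hi_res.reshape(target_h, factor, target_w, factor, 3)
    return hi_res.mean(axis=(1, 3))                                # box-filter downsample

def dlss_style(render, upscale_model, target_h, target_w, scale=2):
    """DLSS-style path: shade fewer pixels than the display needs, then let a
    trained network infer the detail that was never rendered."""
    low_res = render(target_h // scale, target_w // scale)         # cheap: a fraction of the shading work
    return upscale_model(low_res)                                  # network output is target_h x target_w

# Dummy stand-ins so the sketch runs; a real engine and trained network go here.
render = lambda h, w: np.random.rand(h, w, 3)
upscale_model = lambda img: np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)  # naive 2x "upscaler"
print(supersample_aa(render, 540, 960).shape, dlss_style(render, upscale_model, 540, 960).shape)
```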
HOW DOES IT WORK?
DLSS starts by extracting many aliased frames from the game and then generating a corresponding “perfect frame” using either super-sampling or accumulation rendering. These so-called perfect frames are rendered with 64x super-sampling and represent the ideal level of detail and anti-aliasing quality. These pairs of frames are then fed to NVIDIA’s supercomputer, which trains the DLSS model to recognize aliased inputs and produce a high-quality anti-aliased image that matches the “perfect frame” as closely as possible.
After many training passes, DLSS learns to produce results that approximate the quality of 64x super-sampling while avoiding problems like blurring and transparency artifacts that plague approaches like TAA.
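To make the idea concrete, here is a minimal, hypothetical training loop in PyTorch-style Python. The model architecture, loss function, and data pipeline below are assumptions for illustration; NVIDIA has not published its exact setup.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: 'model' is some convolutional network and 'frame_pairs' yields
# (aliased_frame, perfect_frame) tensors, where the target was rendered with 64x
# super-sampling. None of these names come from NVIDIA's actual pipeline.
def train_dlss_model(model, frame_pairs, epochs=10, lr=1e-4):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()  # penalize per-pixel difference from the "perfect frame"
    for _ in range(epochs):
        for aliased, perfect in frame_pairs:
            predicted = model(aliased)            # network's anti-aliased guess
            loss = loss_fn(predicted, perfect)    # how far from the 64x SS reference?
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model

# Dummy stand-ins so the sketch runs end to end.
toy_model = nn.Conv2d(3, 3, kernel_size=3, padding=1)
toy_pairs = [(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)) for _ in range(4)]
train_dlss_model(toy_model, toy_pairs, epochs=1)
```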
Next, the process is repeated, but this time the model is trained to generate additional pixels instead of simply applying anti-aliasing. This effectively increases the resolution, and together the two techniques allow the GPU to deliver higher resolutions at higher frame rates than it could by shading every pixel natively.
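As a hedged sketch of what that second stage might look like – the layer choices, channel counts, and internal render resolution below are assumptions for illustration, not NVIDIA’s architecture – the key point is simply that the network’s output has more pixels than its input:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the upscaling pass: instead of mapping a frame to an
# anti-aliased frame of the same size, the network emits extra pixels.
upscaler = nn.Sequential(
    nn.Conv2d(3, 48, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(48, 3 * 4, kernel_size=3, padding=1),
    nn.PixelShuffle(2),                      # rearranges channels into a 2x larger image
)

low_res = torch.rand(1, 3, 1080, 1920)       # assumed internal render resolution
high_res = upscaler(low_res)                 # shape: (1, 3, 2160, 3840), i.e. 4K output
print(high_res.shape)
```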
ARE THERE CERTAIN SITUATIONS WHERE DLSS IS MORE USEFUL?
Supposedly, yes. DLSS results can vary from game to game, because different engines produce images with different characteristics. Other factors matter as well, such as how complex a scene is and how much time is spent on training. NVIDIA says it continues to train its deep learning neural network even after a game’s launch, and improvements are passed on to users via software updates. Just as importantly, developers need to add support for DLSS, so you can’t use it in every game.
That said, you’ll see the most benefit from DLSS when your GPU is the limiting factor in performance and is working hard. Conversely, if your GPU were already chewing through a game with ease, DLSS would not be available, because the GPU’s frame rendering time could already be shorter than the DLSS execution time. In that case, there wouldn’t be a tangible benefit to enabling DLSS.
On the other hand, if you’re pushing less than 60FPS, DLSS could step in to give you a boost. In practice, this is more likely to happen at higher settings and resolutions, which is why DLSS isn’t available at 1080p for cards like the GeForce RTX 2080 and 2080 Ti. There are a few problems with this argument, though, including the fact that you still need a very powerful card to drive high-refresh-rate 144Hz and 240Hz displays. Ultimately, it would make more sense for NVIDIA to offer the option everywhere and leave it to users to decide whether it works for them.
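As a back-of-the-envelope illustration – the timings below are invented, not NVIDIA’s – the trade-off comes down to whether the shading time saved by rendering fewer pixels outweighs the fixed cost of running the network each frame:

```python
# Hypothetical check: DLSS only pays off if the time saved by rendering fewer pixels
# exceeds the time the network itself takes to run.
def dlss_worth_it(native_frame_ms, scaled_frame_ms, dlss_cost_ms):
    return scaled_frame_ms + dlss_cost_ms < native_frame_ms

print(dlss_worth_it(native_frame_ms=25.0, scaled_frame_ms=14.0, dlss_cost_ms=2.0))  # True: GPU-bound at 40 FPS
print(dlss_worth_it(native_frame_ms=6.0,  scaled_frame_ms=4.0,  dlss_cost_ms=2.5))  # False: already fast; DLSS cost dominates
```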
IS DLSS ALWAYS THE BEST OPTION?
Not necessarily. The technology is still very much a work in progress, and further improvements are still to come. For example, there have been reports of blurry frames with DLSS, especially at lower resolutions. Part of the reason for this is that NVIDIA reportedly concentrated more on high resolutions during development.
A 4K target gives DLSS far more input pixels to work with – around 3.5 to 5.5 million – compared to just 1 to 1.5 million at 1080p. In the latter scenario, there’s less data available, so it’s harder for DLSS to detect features in the input frame and predict the final frame. That said, NVIDIA says it is working on additional training data and new techniques to teach the deep neural network to handle lower resolutions better. Until that happens, you may want to stick with a more traditional technique like TAA at lower resolutions.
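For a rough sense of scale, here is the arithmetic with hypothetical internal render resolutions (NVIDIA doesn’t publish an exact figure for every mode, so these are assumptions chosen to fall within the ranges above):

```python
# Rough pixel-count arithmetic; the internal render resolutions are assumptions,
# used only to show why DLSS has less data to work with at a 1080p target.
resolutions = {
    "4K output, ~1440p internal render": 2560 * 1440,    # ~3.7 million input pixels
    "1080p output, ~900p internal render": 1600 * 900,   # ~1.4 million input pixels
}
for label, pixels in resolutions.items():
    print(f"{label}: {pixels / 1e6:.1f}M pixels")
```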