How Excited Should You Be About NVIDIA's New GeForce RTX Cards?

It all comes down to game developers. By Koh Wanzi

A small part of me expected NVIDIA's GeForce RTX 20-series cards to be something special. After all, it's been over two years since NVIDIA released the GeForce GTX 1080 and its Pascal architecture, a lifetime when set against the frenetic cadence of yearly product refreshes. NVIDIA had to have been working on something big, right?

As it turns out, NVIDIA did deliver, sort of. At Gamescom 2018, the company unveiled the GeForce RTX 2080 Ti, 2080, and 2070. While some expected the Turing cards to debut under an 11-series name, NVIDIA decided to skip ahead to the 20-series. The choice of name is symbolic: NVIDIA clearly intends to suggest that Turing GPUs are a generational leap, not an iterative upgrade.

The big new feature in Turing is what NVIDIA calls RT cores, dedicated hardware designed to accelerate ray tracing. Ray tracing is more than just a way of simulating realistic lighting; it's an alternative approach to rendering a scene, distinct from traditional rasterization, that works by tracing the paths of light rays as they bounce around the scene.

Put simply, ray tracing enables more accurate images than rasterization, but it requires a lot more computational power. Games that take advantage of RTX will be able to produce lighting and shadows that are truer to life, so game worlds look lusher and better than ever.
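
To make that concrete, here's a minimal toy sketch of the idea in Python: one ray is cast per pixel, tested against a single sphere, and shaded by how directly the surface faces a light. It illustrates the general technique only, and bears no relation to how NVIDIA's RT cores are actually implemented.

    import math

    # Toy scene: a single sphere and one directional light (values are arbitrary).
    SPHERE_CENTER = (0.0, 0.0, -3.0)
    SPHERE_RADIUS = 1.0
    LIGHT_DIR = (0.577, 0.577, 0.577)  # unit vector pointing toward the light

    def ray_sphere_hit(origin, direction):
        """Return the distance along the ray to the sphere, or None on a miss."""
        ox, oy, oz = (origin[i] - SPHERE_CENTER[i] for i in range(3))
        dx, dy, dz = direction
        b = 2.0 * (ox * dx + oy * dy + oz * dz)
        c = ox * ox + oy * oy + oz * oz - SPHERE_RADIUS ** 2
        disc = b * b - 4.0 * c  # direction is unit length, so the 'a' term is 1
        if disc < 0.0:
            return None
        t = (-b - math.sqrt(disc)) / 2.0
        return t if t > 0.0 else None

    def render(width=48, height=24):
        """Cast one ray per pixel from the origin and print a crude ASCII image."""
        shades = " .:-=+*#%@"
        for y in range(height):
            row = ""
            for x in range(width):
                # Map the pixel onto an image plane at z = -1, then normalize.
                px = 2.0 * (x + 0.5) / width - 1.0
                py = 1.0 - 2.0 * (y + 0.5) / height
                norm = math.sqrt(px * px + py * py + 1.0)
                d = (px / norm, py / norm, -1.0 / norm)
                t = ray_sphere_hit((0.0, 0.0, 0.0), d)
                if t is None:
                    row += " "
                else:
                    # Lambertian shading: surface normal dotted with light direction.
                    hit = tuple(t * d[i] for i in range(3))
                    n = tuple((hit[i] - SPHERE_CENTER[i]) / SPHERE_RADIUS for i in range(3))
                    lambert = max(0.0, sum(n[i] * LIGHT_DIR[i] for i in range(3)))
                    row += shades[min(len(shades) - 1, int(lambert * len(shades)))]
            print(row)

    render()

Even this crude example casts one ray per pixel and intersects it with a single object; real games need millions of rays per frame against millions of triangles, which is exactly the workload the RT cores are meant to absorb.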

Then there are the Tensor cores, another new addition. These are dedicated to machine learning workloads, so it's not immediately obvious what use they would have in graphics rendering. However, NVIDIA has some cool new tech called Deep Learning Super Sampling (DLSS) that utilizes these cores to upscale images with less of a performance hit.

Digital images are composed of pixels, which can give rise to jagged lines and blocky images at lower resolutions. This is usually overcome using one of a number of anti-aliasing techniques such as MSAA or SSAA, but the performance penalty can often be brutal.
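
Here's a rough sketch of the idea behind supersampling (the "SS" in SSAA): each final pixel is averaged from a grid of sub-samples, which smooths jagged edges but multiplies the shading work. The toy scene and numbers below are my own, purely for illustration.

    def coverage(x, y):
        """Toy 'scene': returns 1.0 above a diagonal edge, 0.0 below it."""
        return 1.0 if y > x else 0.0

    def shade_pixel(px, py, samples_per_axis=1):
        """Average an N x N grid of sub-samples within a single pixel.

        samples_per_axis=1 is ordinary one-sample-per-pixel rendering;
        samples_per_axis=8 shades 8 x 8 = 64 samples, i.e. the work behind
        64x supersampling, which is where the performance penalty comes from.
        """
        n = samples_per_axis
        total = 0.0
        for sy in range(n):
            for sx in range(n):
                # Sample the scene at the centre of each sub-cell.
                total += coverage(px + (sx + 0.5) / n, py + (sy + 0.5) / n)
        return total / (n * n)

    # On a pixel straddling the edge, one sample gives a hard 0-or-1 value
    # (hence the jaggies), while 64 samples give a smooth in-between value.
    print(shade_pixel(10, 10, samples_per_axis=1))  # 0.0 -- aliased
    print(shade_pixel(10, 10, samples_per_axis=8))  # 0.4375 -- anti-aliased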

However, DLSS supposedly offers quality on par with 64x SSAA, among the highest quality and most demanding forms of anti-aliasing, while being far less taxing. It's not difficult to see why that's attractive, since you're technically getting more for less. In fact, NVIDIA claims that games using DLSS could see up to two times better performance on the GeForce RTX 2080 than on the GeForce GTX 1080 at 4K resolution.
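
Some quick back-of-the-envelope arithmetic (mine, not NVIDIA's figures) shows why brute-force 64x SSAA is so punishing at 4K, and why approximating its output more cheaply is such an appealing idea:

    # Back-of-the-envelope shading cost at 4K -- illustrative arithmetic only.
    pixels_4k = 3840 * 2160     # 8,294,400 pixels per frame
    ssaa_64x = pixels_4k * 64   # 530,841,600 shaded samples per frame

    print(f"4K pixels per frame:  {pixels_4k:,}")
    print(f"64x SSAA samples:     {ssaa_64x:,}")
    print(f"At 60 fps:            {ssaa_64x * 60 / 1e9:.1f} billion samples per second")

That's roughly 31.9 billion shaded samples every second, which is why nobody actually plays with 64x SSAA enabled, and why a shortcut that approximates its output is worth getting excited about.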

All this sounds fantastic, but the problem is that the potential gains all hinge on new technologies and their adoption by game developers. At the time of writing, no game can take advantage of the RT and Tensor cores, so the vast majority of gamers are probably more concerned about how the 20-series cards will perform with traditional rasterization techniques.

Unfortunately, NVIDIA's presentation at Gamescom concerned itself almost exclusively with ray tracing, paying little heed to conventional measures of performance. It's going to be great, we were told, as long as games are coded for the new specialized RT and Tensor cores.

That's a huge caveat, and even Shadow of the Tomb Raider – one of the biggest games to support NVIDIA's RTX platform – will only be able to take advantage of it in a post-release patch that currently has no release date. Of the 21 games with confirmed RTX support, there are a couple of heavy hitters on board, including Metro Exodus and Battlefield V, but we're still some way off from truly mainstream support.

Furthermore, NVIDIA has taken to quantifying performance in GigaRays/s, a measure of how quickly the cards can handle real-time ray tracing. That's an unfamiliar metric, and one that's not necessarily fair. For instance, the GeForce RTX 2080 Ti is rated for up to 10 GigaRays/s, whereas the GeForce GTX 1080 Ti tops out at 1.21 GigaRays/s. By this measure, the RTX 2080 Ti is up to 8 times faster than its predecessor, but that's hardly representative of the performance you'll actually see in games.
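
For what it's worth, the arithmetic behind that headline figure is easy to check using the numbers quoted above:

    # NVIDIA's quoted ray-tracing throughput figures, as above.
    rtx_2080_ti = 10.0   # GigaRays/s
    gtx_1080_ti = 1.21   # GigaRays/s

    print(f"{rtx_2080_ti / gtx_1080_ti:.1f}x")  # 8.3x -- hence "up to 8 times faster"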

Since no games currently support the RT or Tensor cores, how well the cards handle ray tracing is practically a non-factor for now. What matters more is conventional performance, and NVIDIA's own estimate of a 30 to 40 percent increase there falls far short of the nearly 80 percent improvement we saw moving from the GeForce GTX 980 Ti to the 1080 Ti.

In terms of real, tangible gains, Pascal remains the high-water mark, and the Turing cards, which NVIDIA is heralding as the start of a new generation of graphics, deliver only modest improvements by comparison.

Real-time ray tracing is exciting technology, allowing light to interact realistically with the environment and enabling small details like softer shadows. After all, this is the same technology used in movie post-production, so the ability to do it in real time in games is pretty groundbreaking.

However, the problem is that most folks won't have a card that can do this for some years yet. This means there's less incentive for game developers to add support for ray tracing, which is really what's needed for the technology to take off. Furthermore, DLSS is NVIDIA's proprietary tech, which might turn away developers wary of favoring NVIDIA over AMD.

This could very well be the future of PC graphics, but you probably won’t feel its impact until a couple of generations later.

PICTURES 123RF, NVIDIA