The hottest graphics cards in history launched in the past 12 months. There’s just one problem: you can’t buy any of them. Luckily, some of us have been offered an olive branch that can keep our frame rates climbing ever higher, despite a dearth of fresh new silicon. That’s come in the form of upscaling technology, AI or otherwise.
Upscaling technologies take many forms, but they all aim to do roughly the same thing: take a frame and make it bigger. The difference between them lies in how they retain picture quality during that process. Basic upscaling simply stretches pixel values like Mike Teavee, resulting in lacklustre image quality and a lack of clarity. Smarter upscaling technologies, meanwhile, use clever algorithms to infer information from a scene or enhance certain features to deliver a more defined, crisp image.
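To show what 'basic' upscaling amounts to in practice, here's a minimal sketch (assuming Python with NumPy, purely for illustration and not representing any vendor's actual technique) of nearest-neighbour upscaling, which only copies existing pixels rather than inferring new detail:

```python
import numpy as np

def nearest_neighbour_upscale(frame: np.ndarray, scale: int) -> np.ndarray:
    """Naively upscale an (H, W, 3) frame by repeating each pixel.

    No new detail is created; existing pixel values are simply
    stretched, which is why the result looks soft and blocky next to
    smarter, inference-based upscalers.
    """
    return frame.repeat(scale, axis=0).repeat(scale, axis=1)

# Example: stretch a 1080p frame to 4K (2x in each dimension)
frame_1080p = np.zeros((1080, 1920, 3), dtype=np.uint8)
frame_4k = nearest_neighbour_upscale(frame_1080p, 2)
print(frame_4k.shape)  # (2160, 3840, 3)
```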
Upscaling isn’t a new concept by any means, but it is a tool in the PC gaming toolbox that’s becoming a whole lot more important. Not only because genuinely faster GPUs are hard to come by, but because the leaps in resolution and rendering demands are arriving so large, and so fast, that it’s becoming ever more difficult, and certainly more expensive, to deliver a correspondingly large leap in PC performance.
The pinnacle of PC gaming resolution today is 4K. Perhaps 8K for a chosen few, but let’s stick to the more realistic of the two. The shift to gaming at 4K has been a slow one. It’s achievable natively nowadays, even at high frame rates, but one key accelerant to its proliferation among PC gamers has been the arrival of upscaling technologies that push silicon beyond what brute force alone could manage.
I’m talking about Deep Learning Super Sampling (DLSS), FidelityFX Super Resolution (FSR), and Temporal Super Resolution (TSR). Once relegated to anti-aliasing duty, these sorts of technologies have since become pivotal to playing the latest games at high resolutions and high fidelity.
The demands of games are increasing at such a pace that even the best graphics cards in the world have trouble keeping up. Don’t get me wrong, an Nvidia GeForce RTX 3090 will see you unbothered by the most demanding games for a while yet, but even yesteryear’s proud stallion, the GeForce RTX 2080 Ti, will struggle at 4K with ray tracing enabled in many games today.
The simple fact is: if graphics architectures are developing at a rate of knots, so too are game development, fidelity, texture quality, models, environments, monitor technology, and plenty else besides.
It’s upscaling technologies that have allowed us to buck the new GPU trend while retaining solid performance, and their role is only going to grow. However great DLSS is today, after only a few years in development, can you imagine its importance in 10 years’ time? This technology has already come on leaps and bounds, from DLSS 1.0, which was decent but notably worse than native rendering, to DLSS 2.0 and 2.1, which come close to the real deal with huge improvements in performance.
It’s difficult to imagine today, but there’s a high possibility that the next big leap in PC graphics comes from a software-implemented—perhaps hardware accelerated in some capacity—upscaling technology, the likes of which we’ve only seen in nascent form today.
I would suppose it’s even how the leap to 8K will be made in earnest. DLSS is already how Nvidia envisages RTX 3090 owners hitting that almighty pixel count without their PCs melting down into a sad little puddle.
Despite the naming convention, 8K isn’t simply twice the resolution of 4K. A true 4K frame requires a graphics card to render 8,294,400 pixels; at 8K that increases to 33,177,600. That’s four times as many pixels, a 300% increase, and one that will not be surmounted easily or affordably with hardware alone.
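The arithmetic, laid out plainly (a quick back-of-the-envelope sketch in Python, assuming the standard 16:9 resolutions of 3840×2160 and 7680×4320):

```python
# Pixel counts for standard 16:9 resolutions
pixels_4k = 3840 * 2160   # 8,294,400
pixels_8k = 7680 * 4320   # 33,177,600

print(pixels_8k / pixels_4k)                 # 4.0  -> four times as many pixels
print((pixels_8k - pixels_4k) / pixels_4k)   # 3.0  -> i.e. a 300% increase
```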
GPUs are already pushing close to the reticle limit, the physical limit of the lithographic processes used to manufacture these chips, and it’s the cost of producing such tremendously large chips that is going to cause headaches for pretty much everyone involved. Yields, packaging, power… you name it, there’s a limit to what’s realistically possible without sacrificing something.
That’s not to say there’s an end to GPU development—of course there’s not, you have some of the best boffins in the biz working on that problem—but there is a cost/performance ratio to weigh up. You could load up a multi-chip GPU with heaps of cores and call it a day, which is an eventuality I can absolutely see happening, but is there a ‘cheaper’ way of netting yourself a major performance gain alongside that?
The obvious answer in my mind is yes, with upscaling technologies taking over ever-larger portions of the work. DLSS has proven itself a mighty tool in the RTX toolbox, but that is by no means the end to this experiment. The success of DLSS will only spur on further development into the utility of upscalers and AI algorithms, as it already has with AMD’s FSR, and bigger and better upscalers will be looking to achieve the performance gains that were once assumed to be in lockstep only with bigger and better silicon.
So what might that look like? Epic’s solution, Temporal Super Resolution, is baked right into Unreal Engine 5, an important step towards wider implementation across a range of games. Then there’s AMD saying FSR is just the beginning of its journey, whether that continues with FSR or something different entirely. And both of those solutions are helpfully hardware agnostic.
Similarly, Microsoft seems more than interested in a DirectML-powered upscaling technology to rival the best, using its own machine learning API. It’s not a prime focus for us PC lot, but imagine the impact a powerful upscaling technology would have on the battle for console dominance. It could be a massive mid-gen performance boost the likes of which is rarely, if ever, seen in a console generation.
And if that’s being launched via a DirectX-based API then it’s also something which could easily follow DirectStorage from its console origins and find a home in a new OS, such as Windows 11.
Then, of course, there’s the current reigning champ of AI upscaling, Nvidia. DLSS 2.0 and its earlier iterations are already impressive enough, but the thought of 3.0 is nearly as exciting as the thought of the next graphics architecture after Ampere, and I don’t say that lightly.
Yet machine learning models can’t infer anything from what’s not there, so dedicated and powerful silicon isn’t going anywhere. But if these upscaling techniques are simply the first of many to exploit upscaling algorithms for more performant gaming experiences at higher resolutions and fidelity, there’s undoubtedly a rich vein of performance still left untapped.
That’s a pretty exciting prospect, don’t you think?
This article was first published in July this year, and with the global shortage of GPUs still very much in evidence, it rings as true now as it did then.