It’s been a wild ride over the last few days since we first posted about Nvidia’s DLSS 5, and it’s become clear that the issues raised by machine learning in this guise should have given us pause before we went live with our coverage. We posted too quickly when we needed time to process everything we’d seen.
The community has concerns, and developers have raised various issues with us privately. While it has been clear for a while that the future of next-generation graphics technology will have a strong ML component, the question is whether DLSS 5 represents the next big evolution in games technology or if it crosses a line in terms of artistic integrity – especially since the demos we saw were running on what is effectively existing “art”, sometimes with radical differences.
We were excited by what we saw in our private demo, and the scale and ambition of Nvidia’s technology are astonishing. In effect, it has created a video-to-video generative AI solution unlike any other. It doesn’t have access to original game assets, geometry, depth or per-material metadata, yet it’s still able to produce images with remarkable precision and temporal coherence.
Putting the facial rendering aside for a moment, scenes with under-exposed characters, flat materials or weak contact shadows gain believable depth and dimensionality. Plausible reflections, shadows and material response take us one step closer to the offline rendered look. Hair rendering is also impressive. Processing path-traced strand lighting is massively expensive for the GPU, but in DLSS 5 you can see hair that looks more natural and comparable to real-world photography.
At its absolute best, DLSS 5 hints at a future where a neural network with an understanding of how light behaves and how cameras work could complete some kind of neurally rendered “finishing pass” on game visuals. So, in that sense, there’s definite potential here to see technology like this as a powerful tool for both new and existing games.
But this is more than a finishing pass – for good or bad, it’s transformative. The question is whether DLSS 5 is more intrusive than it should be. That’s where the criticism of the technology’s facial processing comes into focus. On close-ups, there’s plenty of geometric and shading information to guide it, leading to plausible changes – enhanced skin textures and improved depth. However, if the camera pulls back, there are fewer hard cues for the model to work with, leading to what could be called a more “speculative” interpretation – and perhaps the cause of the massively controversial Grace image that fronted the DLSS 5 reveal. This is not good when dealing with familiar game characters who can look like they’ve had a face transplant. Are we looking at an output defined more by the model than the game data? If so, that shouldn’t be happening.
It may even raise questions of consent and artistic integrity. On site, witnessing the demos in motion, these concerns seemed less pressing, because the games we saw had been signed off by the studios that made them – the contentious assets we’ve seen, likewise. Everything Nvidia released from the DLSS 5 reveal has been approved by the studios that own those games. But perhaps the issue isn’t just about specific approvals by specific developers on agreed DLSS 5 integrations, but rather the whole concept of a GPU reinterpreting game visuals according to a neural model that has its own ideas about what photo-realism should look like.
While we’ve seen endorsements from Bethesda’s Todd Howard and Capcom’s Jun Takeuchi, to what extent does that consent apply to the entire development team and other artists associated with the production? And by extension, there is also the question of whether now is the right time to launch DLSS 5, when the games industry is under enormous pressure, jobs are on the line and cost-cutting is a major focus in the triple-A space. The technology itself cannot function without the work of game creators – it needs final game imagery to work at all – but the extent to which it could be viewed as a worrying sign of “things to come” cannot be overstated, bearing in mind the reactions elsewhere to generative AI.
Right now at least, the concerns surrounding DLSS 5 can be tempered by practical realities. By its very nature, the technology can’t work on every target device – only Nvidia GPUs, and likely only high-end Nvidia GPUs, based on the current dual RTX 5090 set-up. A standard rendered version of the game will be required for every other hardware scenario – and indeed DLSS 5 itself requires that standard rendered output to function. So in that sense, it presents more like an advanced post-processing mod than a mandatory component of the game engine. Developers can choose not to support it. And gamers may not wish to use it even if they do have the hardware, as there will inevitably be a performance penalty.
But we’re fairly sure that the die is cast and the direction of travel is clear. It’s been years since we spoke to Intel’s Tom Petersen in Berlin when XeSS was revealed, and he was talking about something that sounds a lot like DLSS 5 in that interview. And looking at the hardware building blocks in RDNA 5, it’s all about increased fidelity via ray tracing and a move away from simply boosting standard shader throughput towards a more balanced GPU set-up involving ML hardware. Neural rendering – in one way or another – is coming.
DLSS 5 as it stands is an astonishing piece of technology – but also the start of the big debate about the importance of machine learning in the next generation of games, where the conversation must include some kind of consensus on training models, the source of the data for those models, control of the outputs and some kind of answer to the authorship question. I suspect the ultimate answer is a game engine with heavy machine learning assistance, but with significant control from the development team. DLSS 5? Perhaps more of a first-generation ML image processor than the ultimate solution – but certainly the catalyst for discussion on the future of generative AI in gaming.
