Nvidia's DLSS 5 is starting a debate about the role of AI in game development

Last month, Nvidia revealed the latest version of DLSS, which has drawn criticism from gamers and developers alike. The criticism stems from the demos Nvidia used to showcase the technology, which changed the art direction and style of games such as Resident Evil and Starfield.

Since the DLSS 5 announcement, Nvidia CEO Jensen Huang has responded to the public outcry a handful of times, saying that gamers are completely wrong in their stance on the technology. In his most recent comments, Huang, unsurprisingly, once again defended DLSS 5, saying it's another way for developers to improve visual fidelity. He took it a step further, describing all AI-generated content as “beautiful.”

However, developers like Quinn Henshaw, an expert in the Unity game engine and a Unity instructor at the Vancouver Film School, see the latest iteration of DLSS as a step in the wrong direction. From his perspective, DLSS 5 has only a negative impact on game development and art direction as a whole.

What is DLSS?

Nvidia's Deep Learning Super Sampling (DLSS) is an AI-powered graphics technology designed to increase visual fidelity and improve performance. It lets a game render at a lower resolution, then uses AI to upscale the image to a higher target resolution. It's similar to Sony's PSSR, which also leverages AI to upscale each frame.

DLSS uses a type of AI called a deep neural network (hence the “deep learning” part of its name) to enhance images. DLSS was trained on supercomputers to recognize high-quality images and learn how to reconstruct the detail that a low-resolution render misses, thereby enhancing the final image.
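To illustrate the problem DLSS is solving, here is a minimal sketch (not Nvidia's algorithm, just a hypothetical illustration) of naive nearest-neighbor upscaling in plain Python. This kind of simple pixel-repetition adds resolution but no new detail; DLSS replaces it with a trained neural network that infers the missing detail instead:

```python
def upscale_nearest(image, factor):
    """Naively upscale a 2D grid of pixel values by repeating each pixel.

    This is the 'dumb' baseline AI upscalers improve on: every source
    pixel simply becomes a factor x factor block of identical pixels.
    """
    out = []
    for row in image:
        stretched = [px for px in row for _ in range(factor)]
        out.extend([stretched] * factor)
    return out

low_res = [[10, 20],
           [30, 40]]
high_res = upscale_nearest(low_res, 2)
# high_res is a 4x4 grid; each source pixel now covers a 2x2 block
```

A learned upscaler, by contrast, would fill those new pixels with plausible detail recovered from its training data rather than copies of their neighbors.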

Nvidia announces DLSS 5. Image via Bethesda.

It's not something developers use to build games; rather, it runs alongside a game to improve its image quality and performance. Developers can tune it to varying degrees, but DLSS must be implemented and enabled by developers in order to function.

DLSS improves performance by reducing the amount of work a computer's GPU has to do. If a game is rendered at 4K, the GPU is processing over eight million pixels per frame. With DLSS, a game can be rendered at 1440p or even 1080p, with AI upscaling the result to 4K.
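The arithmetic behind that saving is straightforward, as this quick sketch shows:

```python
def pixels(width, height):
    """Total pixel count for a given resolution."""
    return width * height

native_4k = pixels(3840, 2160)      # 8,294,400 pixels per frame
render_1080p = pixels(1920, 1080)   # 2,073,600 pixels per frame

# Rendering at 1080p and upscaling to 4K means the GPU shades
# only a quarter of the pixels it would at native 4K.
saving = render_1080p / native_4k   # 0.25
```

That factor-of-four reduction in shaded pixels is where most of DLSS's performance headroom comes from.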

Previous versions of DLSS have not been problematic. In fact, image upscaling is becoming increasingly common, and is generally seen by developers as a positive technological advance in game fidelity. “All that upscaling tech is so clever, and [Nvidia is] at the forefront of this; almost every software developer now has their own version,” said Henshaw.

Nvidia CEO responds to DLSS 5 criticism. Image: Nvidia, via Bethesda.

AMD, for example, offers FidelityFX Super Resolution (FSR) as an upscaling tool. Unlike DLSS, however, it does not require dedicated AI hardware such as Nvidia's Tensor Cores.

“I'm pretty on board with a lot of the technology that Nvidia has created over the last few years,” Henshaw said. He has used earlier versions of DLSS to create technical demos in the past. Generative AI was first introduced to DLSS in 2022 with DLSS 3. This marks the point where Nvidia moved from AI-assisted rendering to AI-generated frames.

DLSS 3 used generative AI for frame generation: the GPU creates a new frame that sits between the frames rendered by the game's engine, boosting frame rates. DLSS 3.5 took this a step further with ray reconstruction, using AI to reconstruct ray-traced lighting. DLSS 4 improved on the technology with multi frame generation and deeper integration with ray tracing.
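The idea behind an in-between frame can be sketched very crudely. Real frame generation uses motion vectors and a neural network rather than a plain average, so the following is only a conceptual illustration, not how DLSS works:

```python
def interpolate_frame(frame_a, frame_b):
    """Create an in-between frame by averaging two frames pixel by pixel.

    This is the simplest possible stand-in for frame generation: the
    generated frame is inserted between two engine-rendered frames.
    DLSS instead predicts where objects move between frames.
    """
    return [[(a + b) / 2 for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]

rendered_1 = [[0, 100]]
rendered_2 = [[100, 0]]
generated = interpolate_frame(rendered_1, rendered_2)  # [[50.0, 50.0]]
```

A naive average like this smears moving objects, which is exactly why production frame generation relies on motion data instead.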

Nvidia announces DLSS 5. Image via EA.

However, DLSS 5 doesn't just use generative AI to upscale images so games run smoother or look better. It starts to influence the final image far more aggressively, potentially changing how the scene itself looks. That is the issue developers and gamers are raising.

“It's not something that I can see any serious development studio, either at the indie level or at the triple A level, really getting into, unless there's a huge pressure from publishers to reduce costs,” Henshaw said.

In a video by YouTuber Daniel Owens, Nvidia's Jacob Freeman provided some more detail on how DLSS 5 works. It doesn't actually change anything at the game engine level; instead, it essentially takes a 2D image and runs it through Nvidia's generative AI technology to tweak the image. Artists can control things like color gradients and filter intensity, but they don't appear to have control over the final output. Freeman also explained that this is an early preview of the technology. DLSS 5 is expected to be released sometime this fall, so it stands to reason that the full extent of developer controls will be made clear around that time.

Developers argue that DLSS 5 takes away their autonomy

DLSS 5 is more aggressive with its AI generation than previous iterations of the technology. It has been shown to alter lighting, materials, and fine details, going beyond enhancing an image to potentially changing how those elements appear. It draws on its training data to reinterpret the image in pursuit of higher fidelity.

This is why Grace's face looks so different from her original design – it's what DLSS 5 thinks it should look like. The end result has been panned by gamers and developers as “AI slop,” with comparisons to an Instagram filter. “I think anybody who's at least a little bit familiar with the internet and gamers would have known that it was immediately trashed everywhere. So it was kind of a shock to me,” Henshaw said.

Nvidia's DLSS 5 has developers worried. Image via Nvidia.

It's also where Nvidia CEO Jensen Huang first took issue with the public's reaction to the technology. “Well, first of all, they're completely wrong,” he said in an interview with Tom's Hardware. Huang later softened his comments in an interview with Lex Fridman, saying that he understood the criticism but that he still sees all generative AI as “beautiful.”

Huang wasn't the only executive taken with the technology's potential. “When Nvidia showed us DLSS 5, and we got to run it in Starfield, it was amazing how it came to life. We played it. We can't wait for you all to do the same,” Bethesda executive producer and game director Todd Howard said during the initial reveal of DLSS 5. Starfield was recently featured in a DLSS 5 tech demo, with 12 minutes of the technology shown in action.

This goes back to what was pointed out by Mat Piscatella, video game industry analyst and executive director of games at Circana. It may seem obvious, but as he put it: “CEOs are generally going to be optimistic about a technology that can save them a lot of money.” It's a sentiment that has artists like Henshaw worried about the potential implications of Nvidia's latest tech.

“The difference with DLSS 5 is that it's pushing artistic quality and taking control away from developers,” Henshaw said. “I see this as a big, big deal. If studios start implementing it, they're going to cut a huge number of artists.”

Henshaw worries that executives will see the tool as an opportunity to cut labor and costs. Even if the final product doesn't do well with consumers, the money saved by not paying artists could balance out the loss in the eyes of leadership. He also said that consumers are generally already sick of how much AI-generated content is out there, and are likely to respond with their wallets.

In Huang's latest interview, he touted DLSS 5 as another tool developers can use to improve the visual fidelity of their games. But Henshaw argues that it can only push art in a more homogenized direction, a common criticism of generative AI.

Generative AI in video games is generally frowned upon by the community. Current controversies include Crimson Desert developer Pearl Abyss using generative AI assets in its finished game. The developer took to Twitter to apologize, but this is only the most recent example to draw player backlash. Similar controversies have previously surrounded Baldur's Gate 3 developer Larian Studios and Clair Obscur: Expedition 33 developer Sandfall Interactive.

The debate over developer adoption of DLSS 5 underscores a growing concern in game development as a whole. As AI tools become increasingly integrated into game dev, developers have little say in how they are used. While most developers seem to have no objection to using AI in workflows such as programming or early concept stages, they are generally opposed to generative AI being used on the creative side of things like voice acting, art, sound design, and asset generation.

“With the industry as it is now, [developers have] very little pushback. Maybe at smaller studios, but at bigger studios, there's going to be zero pushback. Developers have very little advocacy and power when it comes to big publishers and studios,” Henshaw said.

Ultimately, DLSS 5 highlights a growing divide in how AI is viewed in the industry. While companies like Nvidia and potentially Bethesda see it as a way to advance visual fidelity and streamline development, many artists and developers worry about what could be lost in the process. Whether DLSS 5 is widely adopted or faces resistance will likely depend on how much control developers are willing to give up and how players react to AI having a big hand in shaping how their games actually look.
