Early reactions to Nvidia’s DLSS 5 were swift and skeptical, with some observers likening the technology to an Instagram-style filter applied over gameplay footage. Nvidia CEO Jensen Huang denied the characterization, but subsequent clarifications have helped outline how the system actually works – and where it can fall short.


Wasn’t DLSS working fine before, wtf did they do to it?
The waste is the point.
It needs to be more expensive, because that can be leveraged for higher valuations.
Haven’t you heard? Everything must contain generative AI now.
DLSS stands for “deep learning super sampling.” It was always gen-AI. Those extra details weren’t being revealed, they were being generated.
While true, the way DLSS 2/3/4 does it is to take a bunch of low-res renders of the game over time while wiggling the camera very slightly (sub-pixel jitter), and stitch them all together to generate a new, higher-res image that very closely matches what a native render would have looked like. The gen-AI part is essentially just a very advanced temporal blending function that’s really good at detecting and smoothing out edges.
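To make the jitter-and-accumulate idea concrete, here’s a toy sketch in Python. Everything in it is illustrative: the 2× scale factor, the analytic “scene” function, and the hand-picked jitter offsets are my assumptions, and real DLSS additionally uses motion vectors and a learned network to blend frames when the camera and objects actually move. But it shows the core trick: four low-res frames with different sub-pixel offsets carry enough information to fill in every pixel of a 2× higher-res grid.

```python
import numpy as np

# Toy model of jittered temporal accumulation (the non-AI core of
# DLSS-style upscaling). Not Nvidia's actual pipeline; a static-scene sketch.

SCALE = 2           # upscale factor: low-res -> high-res
LOW = 8             # low-res frame is LOW x LOW
HIGH = LOW * SCALE  # reconstructed frame is HIGH x HIGH

def scene(y, x):
    """'Ground truth' image, sampled at continuous coordinates in [0, 1)."""
    return np.sin(6.0 * x) * np.cos(4.0 * y)

def render_low_res(jy, jx):
    """One low-res render with a sub-pixel camera jitter (jy, jx, in low-res pixels)."""
    ys = (np.arange(LOW) + 0.5 + jy) / LOW
    xs = (np.arange(LOW) + 0.5 + jx) / LOW
    return scene(ys[:, None], xs[None, :])

# Four frames whose jitters land exactly on the four sub-pixel
# positions of a 2x grid (jitter = slot/SCALE - 0.25 for slot in {0, 1}).
jitters = [(-0.25, -0.25), (-0.25, 0.25), (0.25, -0.25), (0.25, 0.25)]

recon = np.zeros((HIGH, HIGH))
for jy, jx in jitters:
    frame = render_low_res(jy, jx)
    # Each jitter fills one of the SCALE x SCALE sub-pixel slots.
    oy = int(round((jy + 0.25) * SCALE))  # 0 or 1
    ox = int(round((jx + 0.25) * SCALE))
    recon[oy::SCALE, ox::SCALE] = frame

# Reference: rendering directly at high resolution.
ys = (np.arange(HIGH) + 0.5) / HIGH
xs = (np.arange(HIGH) + 0.5) / HIGH
reference = scene(ys[:, None], xs[None, :])

print(np.max(np.abs(recon - reference)))  # 0.0: samples land on the same points
```

With a static scene and perfectly chosen jitters the reconstruction is exact, because the four jittered low-res sample grids together cover exactly the high-res sample grid. The hard part the neural network handles in practice is that real frames aren’t static: old samples have to be reprojected along motion vectors and rejected or blended when they no longer match, which is where the “advanced temporal blending function” earns its keep.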
DLSS 5 then runs an AI Instagram filter on top of the frame for “enhanced visuals”, because obviously we want our games to look like cheap AI slop.
✨AI ✨
But it was working fine and probably cheaper, this makes it worse. Where the fuck is QA?
✨ They replaced them with AI ✨
“Those responsible for sacking the people who have just been sacked have been sacked.”