- cross-posted to:
- [email protected]

I’m completely speechless. This looks so terrible I thought it was a joke, but apparently Nvidia released these demos to impress people. DLSS 5 runs the entire game through an AI filter, making every character look like it’s been run through an ultra-realistic beauty filter.
The photo above is used as the promo image for the official blog post by the way. It completely ignores artistic intent and makes Grace’s face look “sexier” because apparently that’s what realism looks like now.
I wouldn’t be so baffled if this was some experimental setting they were testing, but they’re advertising this as the next gen DLSS. As in, this is their image of what the future of gaming should be. A massive F U to every artist in the industry. Well done, Nvidia.



One is upscaling the image while preserving it as much as possible, the other is applying a filter to try and “enhance” it by drastically changing the image and ignoring the artist’s intent. What’s hard to get?
This isn’t applying a filter, it’s running the image through a transformer network trained on advanced lighting methods like subsurface scattering to make materials more lifelike. It seems to change artistic intent quite a lot on these existing games, but frankly I’m excited to see what creators do with a game designed from the ground up to utilize AI-enhanced lighting. The DF video also states that this is an early preview (hence the dual 5090s) that is expected to change over time.
If it was made for that, the slopifier would be able to identify the light sources. Until it can, it’s art- and environment-destroying, irrelevant bullshit. From all the slop examples, the best Nvidia can deliver, it’s clear that they ignore the lighting of the scene.
It is not. It is approximating the results of training data consisting of output images that have been rendered with subsurface scattering. It isn’t actually running the subsurface scattering algorithm.
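To make the distinction concrete: “actually running the subsurface scattering algorithm” means executing explicit, hand-authored code with artist-chosen parameters, rather than evaluating a network trained to mimic its output. Below is a deliberately crude Python sketch of a screen-space-style pass; the per-channel blur stands in for a proper diffusion-profile convolution, and the kernel shape and scatter radii are invented for illustration, nothing like a production implementation.

```python
# Crude illustrative sketch, NOT production SSS: blur the diffuse lighting per
# channel with different falloff widths (red scatters farther than blue in skin).
# All radii and kernel choices here are made-up example values.
import numpy as np

def gaussian_kernel(sigma: float, radius: int = 8) -> np.ndarray:
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def separable_blur(channel: np.ndarray, sigma: float) -> np.ndarray:
    k = gaussian_kernel(sigma)
    # Blur rows, then columns (separable convolution).
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, channel)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def subsurface_scatter(diffuse: np.ndarray) -> np.ndarray:
    """diffuse: HxWx3 linear-light diffuse pass. Returns the 'scattered' version."""
    # Per-channel scatter radii: explicit, hand-authored parameters of the algorithm.
    sigmas = (4.0, 2.0, 1.0)  # red spreads widest, blue the least (illustrative values)
    return np.stack(
        [separable_blur(diffuse[..., c], s) for c, s in enumerate(sigmas)], axis=-1
    )

frame = np.random.rand(64, 64, 3)   # placeholder for a rendered diffuse pass
scattered = subsurface_scatter(frame)
```

The point is that every step above is spelled out and tunable by the artist; a trained network only reproduces what such a pass happened to look like in its training images.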
Well that just sounds like subsurface scattering with extra steps!
… this is AI we’re talkin about, literally everything is trained. I thought that would be assumed, sorry for not being clear enough.
It’s not. It’s a completely different set of steps (at least at runtime). The Venn diagram circles don’t touch.
It’s a meme my bro
I’m well aware of the meme. You used it inappropriately.
For the joke to land it has to make sense in context. Otherwise it’s just random references.
How is “upscaling while preserving it” not the exact same philosophy as “enhance by applying a filter?”
You just don’t like the specific filter, it’s very literally the same process.
Because a pixelated circle being upscaled is still a circle, but a pixelated circle being turned into a high-definition pie is no longer a circle. That’s especially problematic if the circle was just a crosshair or some other random circle-like thing the AI decided was meant to be a pie.
Yes, both things are the same, but that’s like saying that because you were okay with a tiny spider in your house (it killed mosquitoes), you should be okay with a colony of bats, since they’re also animals that eat mosquitoes. Both are broadly “the same”, but the scale and the amount of intrusion are completely different.
If your training data has a pixelated circle as an input and a circle as output, your neural network will “upscale” your pixelated circle to a circle. If your training data has a pixelated circle as input and a high definition pie as output, your neural network will “upscale” your pixelated circle to a high definition pie. Even if it’s the same algorithm in both cases.
Yes, that’s precisely my point. The difference is in what the algorithm is trying to do: traditional DLSS uses the image rendered at resolution X as the output and the same image scaled down to X/2 as the input (for example), so it’s trained to upscale images, whereas this new thing uses who knows what as either, and clearly outputs something that is not an upscaled version of the frame.
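A toy sketch of that point, purely for illustration (the model, data, and training loop below are invented; this is not Nvidia’s pipeline, just the general supervised-learning shape of an upscaler): the exact same network and loss learn to “upscale” or to “reimagine” depending only on what you hand it as the target.

```python
# Illustrative toy only. "TinyUpscaler", the random frames, and the loss are all
# placeholders; the point is that the training target defines the behavior.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUpscaler(nn.Module):
    """Stand-in for a DLSS-like network: maps a low-res frame to a 2x larger one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * 4, 3, padding=1),  # 4 = 2x2 upscale factor
            nn.PixelShuffle(2),                  # rearrange channels into a 2x larger image
        )

    def forward(self, x):
        return self.net(x)

model = TinyUpscaler()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    # Target: the frame exactly as the renderer/artist produced it at full resolution.
    target = torch.rand(1, 3, 128, 128)              # placeholder for a real 4K render
    # Input: the same frame, merely downscaled. Nothing about its content changes.
    low_res = F.interpolate(target, scale_factor=0.5, mode="bilinear")

    loss = F.l1_loss(model(low_res), target)         # "reproduce what was already there"
    opt.zero_grad(); loss.backward(); opt.step()

# Swap `target` for a relit/"beautified" version of each frame and the identical
# loop now learns a style-changing filter instead of an upscaler.
```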
Current DLSS intent: We can only render this at like 720p with enough frames, so let’s do that and use AI anti-aliasing tricks so that when we present it at 4k, none of the jaggies are visible on-screen like they would be with raw 720p upscaling.
DLSS5 intent: Using our pile of stolen artwork neural net that we can now render at 60fps+ let’s “reimagine” the entire look of the game as we present it on-screen, even if it was already running at 4k just fine.
TL;DR: How big the neural net is and what you train it for matters.
Ideally you’d have a DLSS-like system trained specifically for one game instead of a general system. Then you could train it on 4K at the highest settings and you should get something that doesn’t mess with the style of the game.
You’re describing what DLSS 1.0 was I believe
Yeah, but they did that for like two games.
Yep. Maybe it could actually be “modules” that the individual devs submit with their game, essentially.
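For what that “modules” idea could look like in the loosest possible terms (every path, file name, and function here is hypothetical, and this says nothing about how Nvidia actually ships weights): a driver-side loader that prefers a game-specific network over a generic one.

```python
# Hypothetical sketch only: per-game upscaler "modules" submitted by developers.
# Paths, file names, and the layout are invented for illustration.
from pathlib import Path
import torch

MODULE_DIR = Path("/usr/share/upscaler-modules")   # made-up install location

def load_upscaler(game_id: str) -> torch.nn.Module:
    """Prefer a developer-submitted, game-specific network; fall back to a generic one."""
    per_game = MODULE_DIR / f"{game_id}.pt"
    weights = per_game if per_game.exists() else MODULE_DIR / "generic.pt"
    model = TinyUpscaler()                  # reusing the toy model from the earlier sketch
    model.load_state_dict(torch.load(weights))
    model.eval()
    return model

# e.g. upscaler = load_upscaler("some_game_id")
```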
… How is flying a spaceship different from driving a car? They’re both controlled applications of kinetic energy to move people or objects.
At the end of the day, it’s all a pile of transistors and the only thing that is of import is the intent behind usage.
In one case it’s saying you can use a neural net to take something rendered at resolution A/4 and make it visually indistinguishable from the same render at resolution A.
The other is rendering something and radically changing the artistic or visual style.
Upsampling can be replicated, within some margin, by lowering the frame rate and letting the GPU work longer on each frame. It strives to restore, by guessing, detail that was left out to render faster.
You cannot turn this feature off and get similar results by lowering the frame rate. It aims to add detail that was never present by guessing.
Upsampling methods have been produced that don’t use neural networks. The differences in behavior are in the realm of efficiency, and in many cases you would be hard pressed to tell which is which. The neural network is an implementation detail.
In the other case, the changes are more broad than can be captured by non AI techniques easily. The generative capabilities are central to the feature.
Process matters, but zooming out too far makes everything identical, and the intent matters too. “I want to see your art better” as opposed to “I want to make your art better”.
What…? It’s more like a chemical vs a nuclear rocket. You’re not even comparing the same kind of thing, while these are both the same thing with different aims. You don’t like this one, so suddenly it doesn’t meet your arbitrary conditions to be acceptable, and now you’re coming up with incorrect analogies to try and make a point. Great job!
And you didn’t even read past the first sentence I see.
Saying they’re the same because they both use a neural network is roughly equivalent to saying things are the same because they’re both manipulating kinetic energy.