I hate videos for information like that. I’d read an article though.
But from your description, DLSS < 5 was genAI - transformer models are the backbone of genAI. There’s certainly the possibility that DLSS 5 is a whole other bucket of crabs but idk.
It’s a very visual topic so using a visual medium to learn about it is ideal.
Again, I feel like it’s disingenuous to compare using nearby pixels to predict local pixels (accurate enough to pass for a higher resolution) with generating an entirely different image every frame. One of them sounds no different from certain filters or post-processing, the other sounds like slop ass AI.
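To make the distinction concrete: the "predict local pixels from nearby pixels" side of that comparison is the family that classic upscaling belongs to. This is a hedged sketch, not NVIDIA's actual pipeline (DLSS uses a learned model, not plain interpolation), but it shows the kind of local, neighbor-driven prediction being described, as ordinary bilinear upscaling in NumPy:

```python
import numpy as np

def upscale_bilinear(img, factor):
    """Predict each output pixel from its nearest input neighbors.

    A toy stand-in for local-pixel prediction: every output value is a
    weighted blend of the 2x2 input pixels around it, nothing more.
    """
    h, w = img.shape
    new_h, new_w = h * factor, w * factor
    # Position of each output pixel mapped back into input coordinates.
    ys = (np.arange(new_h) + 0.5) / factor - 0.5
    xs = (np.arange(new_w) + 0.5) / factor - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]  # vertical blend weights
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]  # horizontal blend weights
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

small = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
big = upscale_bilinear(small, 2)  # 4x4, every value blended from neighbors
```

Nothing in this operation can invent a wrinkle or a wheel: each output pixel is bounded by the input pixels around it. Frame generation, by contrast, synthesizes pixels that have no source sample at all, which is where the two approaches genuinely differ.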
The problem stems from the term ‘GenAI’. These systems use math to predict things, and there are plenty of valid mathematical calculations to predict out there. Rendering lighting is one of them.
Human language and imagery aren’t, which is what idiots have been trying to funnel through these models.
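The "rendering lighting is valid math to predict" point is easy to illustrate: the textbook Lambertian diffuse model is literally just a dot product. A minimal sketch (standard graphics math, not any specific engine's code):

```python
import math

def lambert_diffuse(normal, light_dir, intensity=1.0):
    """Diffuse brightness = max(0, N . L) * light intensity.

    Assumes `normal` and `light_dir` are already unit-length vectors.
    """
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, dot) * intensity

# Surface facing straight up, light from directly above: full brightness.
print(lambert_diffuse((0, 1, 0), (0, 1, 0)))  # 1.0
# Light arriving edge-on: no diffuse contribution at all.
print(lambert_diffuse((0, 1, 0), (1, 0, 0)))  # 0.0
# Light at 45 degrees: partial brightness (cos 45 ~= 0.707).
print(lambert_diffuse((0, 1, 0), (0, math.sqrt(0.5), math.sqrt(0.5))))
```

The point being: lighting has a ground-truth equation you can check answers against, which is exactly what free-form language and imagery lack.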
The effect looks like a filter or shader. I’ve seen the comparisons.
The fuck are you talking about? DLSS 5 has been adding wrinkles and entire facial features, in one demo it kept accidentally adding wheels to cars driving in the background. It doesn’t look like a filter or shader, it looks like ass slop.
I looked at the still comparisons in the NVIDIA article.
Generative AI is a name for some ways you can use AI, not for its architecture.
There’s space to discuss whether DLSS < 5 counts as generative AI or not. But your argument is baseless.
The base for it is that it is generating pixels - and entire frames.
The difference between DLSS 5 and <5 seems quantitative, not qualitative.