It’s a very visual topic, so using a visual medium to learn about it is ideal.
Again, I feel like it’s disingenuous to compare using nearby pixels to predict local pixels, with accuracy comparable to simply rendering at a higher resolution, to generating an entirely different image every frame. One of them sounds no different from applying certain filters or post-processing; the other sounds like slop-ass AI.
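To make that distinction concrete, here’s a minimal sketch of what “using nearby pixels to predict local pixels” means, using plain bilinear interpolation as the predictor. This is my own illustrative example, not anything NVIDIA actually ships; the point is that every output pixel is a deterministic weighted average of its source neighbors, closer in spirit to a filter than to image synthesis.

```python
# 2x bilinear upscaling: each new pixel is predicted from its
# four nearest source pixels. No learned model, no generation.
import numpy as np

def bilinear_upscale_2x(img: np.ndarray) -> np.ndarray:
    """Upscale a 2D grayscale image by 2x using bilinear interpolation."""
    h, w = img.shape
    out = np.empty((h * 2, w * 2), dtype=np.float64)
    for y in range(h * 2):
        for x in range(w * 2):
            # Map the output coordinate back into the source image.
            sy, sx = y / 2.0, x / 2.0
            y0, x0 = int(sy), int(sx)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = sy - y0, sx - x0
            # The output pixel is a weighted average of four neighbors.
            out[y, x] = (img[y0, x0] * (1 - fy) * (1 - fx)
                         + img[y0, x1] * (1 - fy) * fx
                         + img[y1, x0] * fy * (1 - fx)
                         + img[y1, x1] * fy * fx)
    return out

src = np.array([[0.0, 1.0], [1.0, 0.0]])
print(bilinear_upscale_2x(src))
```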
The problem stems from the term ‘GenAI’. These systems use math to predict things, and there are plenty of quantities out there that are valid to predict mathematically. Rendering lighting is one of them.
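For an example of what “rendering lighting is a valid mathematical prediction” looks like in practice, here’s a minimal sketch of Lambertian diffuse shading (again my own example, not from the thread): the brightness of a surface point is just the dot product of the surface normal and the light direction, fully deterministic and verifiable.

```python
# Lambertian diffuse shading: intensity = albedo * max(0, N . L)
# for unit vectors N (surface normal) and L (direction to the light).
import numpy as np

def lambertian(normal: np.ndarray, light_dir: np.ndarray, albedo: float = 1.0) -> float:
    """Predict diffuse brightness from geometry alone."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return albedo * max(0.0, float(np.dot(n, l)))

# Surface facing straight up, light coming in from 45 degrees overhead.
print(lambertian(np.array([0.0, 1.0, 0.0]), np.array([1.0, 1.0, 0.0])))  # ~0.707
```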
Human language and imagery aren’t among them, which is what idiots have been trying to funnel through these models.
The effect looks like a filter or shader. I’ve seen the comparisons.
The fuck are you talking about? DLSS 5 has been adding wrinkles and entire facial features; in one demo it kept accidentally adding wheels to cars driving in the background. It doesn’t look like a filter or shader, it looks like ass slop.
I looked at the still comparisons on the nVIDIA article.