• 0 Posts
  • 221 Comments
Joined 2 years ago
Cake day: February 14th, 2024




  • Yeah, I don’t know what the word is. My point is mostly that you can clearly see what it’s trying to recreate from a low-quality source. It seems to keep very close adherence to the original look. If you lay the images on top of each other, you can see the same creases around her eyes, the same nose shape, the same jawline, the same skin-colored makeup at the corner of her mouth to make her lips appear more defined. I think you’re spot on about the “training on conventionally attractive models,” which is the part I’m hoping improves with time. I don’t think applying this to existing works is appropriate at all, but apparently it will use Nvidia Streamline, so the mods will come whether we want them or not, sadly.




  • Zoom in on the before; the textures are fairly low-res, so it’s hard to see at a distance, but she clearly has eyeliner along her top eyelid. I can’t really tell if the intent was for the dark under-eyes to be eyeshadow or bags from being exhausted. But you can tell the AI is pulling from the source coloration, not just adding things willy-nilly like a beauty filter. The lip coloration, I don’t know; that could be an artifact of the early technology or a hallucination. But to me it appears she had some sort of lipstick on, because it doesn’t extend all the way to the corners of her mouth. I don’t think contouring is the right word, but maybe someone here who wears makeup could add some detail?

    edit: actually, looking even more closely, I can’t tell if that’s eyeliner or just her eyelashes being dense. Either way, you can see what the model is trying to replicate. I don’t think it looks good, but it isn’t doing a “beauty filter.” If the model is adding eyeliner, it appears to be confused by the thick black line in the source image. Again, these are really low-res textures compared to what it’s trying to output, so hopefully this improves over time like every other DLSS tech has.



  • This isn’t applying a filter; it’s running the image through a transformer network trained on advanced lighting methods like subsurface scattering to make materials more lifelike. It seems to change artistic intent quite a lot in these existing games, but frankly I’m excited to see what creators do with a game designed from the ground up to utilize AI-enhanced lighting. The DF video also states that this is an early preview (hence the dual 5090s) that is expected to change over time.





  • I’ll admit I rarely used the machine fusing; I was just talking about the weapon inventory system. I’d just pick up sticks or whatever was around, slam on the first “good enough” damage monster part I had, and keep going. It’s a lot better IMO than having to hang onto all the good stuff and constantly be underpowered because “what if I need it for a boss?”






  • FWIW, I think they did a much better job in Tears of the Kingdom. Your weapons still break, but you can carry around a basically endless supply of monster parts that you “fuse” to whatever base weapon you happen to come across, and it makes them powerful again. Sometimes all you need is a stick to make a good weapon. Still annoying, but waaaaaay less of an inventory management sim IMO.