It just kinda makes no sense to me. How can you improve the framerate by predicting the next frame, without the prediction adding more overhead than it already takes to render the scene normally? Even the simplified concept of it sounds like pure magic. And yet… it's real.

  • ericwdhs@discuss.online
    4 hours ago

    Yeah, there’s a fair bit of criticism that the tech works best on the higher-end cards that shouldn’t need it in the first place. Another way this shows up is in VRAM amounts.

    To ELI5: how effectively FG improves on the base frame rate scales with available VRAM. (Think 60 improved to 80 versus 60 improved to 120.) Some modern games regularly hit 12GB now, even at 1080p and before any fancy tech. (There’s a separate discussion about game optimization in there.) Since lower-end cards really skimp on VRAM (every tier should really ship with at least 4GB more), there’s not much headroom left for FG to work with in the first place.
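
    A toy cost model can make that "60 to 80 versus 60 to 120" point concrete. The numbers below (a ~4 ms versus ~12 ms generation pass) are made up for illustration, and real frame generation pipelines the work concurrently with rendering rather than running it serially like this; the sketch only shows why a cheaper generation pass means a bigger uplift:

    ```python
    # Toy cost model for frame generation (FG). Purely illustrative:
    # real FG overlaps generation with rendering, and the cost of the
    # generation pass depends on the GPU, the game, and VRAM headroom.

    def effective_fps(render_ms: float, generate_ms: float) -> float:
        """Output FPS when one generated frame is inserted between each
        pair of rendered frames: two frames are delivered per cycle that
        costs one full render plus one (cheaper) generation pass."""
        return 2 * 1000.0 / (render_ms + generate_ms)

    render = 1000.0 / 60  # 60 fps baseline, ~16.7 ms per rendered frame

    # Plenty of VRAM -> fast generation pass -> big uplift.
    print(effective_fps(render, 1000.0 / 240))  # -> 96.0 fps

    # VRAM-starved -> slow generation pass -> modest uplift.
    print(effective_fps(render, 1000.0 / 80))   # -> ~68.6 fps
    ```

    The takeaway matches the comment: the same feature turns 60 fps into very different numbers depending on how much room the generation pass has to work in.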