It just kinda makes no sense to me. How can you improve the framerate by predicting what the next frame should look like, without that prediction costing more than it already takes to render the scene normally? Even the simplified concept of it sounds like pure magic. And yet… it's real.


It does so at the cost of latency. It doesn't actually predict the next frame; it renders two full frames and then interpolates one frame between them. So it looks smoother, but your input also takes that much longer to show up on screen.
That is not strictly true: the added latency is roughly half a frame time at the original frame rate, not a full frame. That's because the input isn't just the frame image but also the motion vectors (which direction each pixel moved) for the current frame. Frame gen also knows a lot about the image, like which parts contain transparent pixels (which move in multiple directions at once), and when the game has finished a frame but still has to wait for the GPU (time that can be used for extra work with little impact).
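The interpolation described above can be sketched in toy form: warp the earlier frame half a step along each pixel's motion vector, then blend with the later frame. This is only a minimal illustration of the idea, not NVIDIA's actual algorithm; the function name, nearest-pixel warping, and 50/50 blend are all assumptions for clarity.

```python
def interpolate_midframe(prev, nxt, motion):
    """Toy motion-compensated interpolation (illustrative sketch only).

    prev, nxt : 2D lists of equal size (grayscale frames)
    motion    : per-pixel (dy, dx) movement from prev to nxt, in pixels

    Real frame gen is far more sophisticated: occlusion handling,
    transparency masks, learned blending, etc.
    """
    h, w = len(prev), len(prev[0])
    mid = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dy, dx = motion[y][x]
            # Backward-warp: sample the previous frame half a step back
            # along the motion vector, so the pixel lands halfway in time.
            sy = min(max(int(round(y - 0.5 * dy)), 0), h - 1)
            sx = min(max(int(round(x - 0.5 * dx)), 0), w - 1)
            # Blend with the next frame to paper over warping artifacts.
            mid[y][x] = 0.5 * prev[sy][sx] + 0.5 * nxt[y][x]
    return mid
```

With zero motion this just averages the two frames; with real vectors the warped sample follows the moving pixel. That reliance on engine-supplied motion vectors, instead of guessing motion from pixels alone, is the key difference from TV motion smoothing.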
Frame gen is much more involved than the old "motion smoothing" of televisions, the so-called "soap opera" mode, which increased latency far more and had no knowledge of how the source image was built, so it had to do all the work from the pixels alone.
Stuff like DLSS5 is supposed to use the same inputs (source images and motion vectors), now that is magic to me.
Isn’t that the thing NVIDIA was found to be lying about?
Yeah, devs apparently saw it barely used internal engine data at all
That was about the Yassify filter.
Yeah, I simplified it to keep it at ELI5 level, but you’re right
It absolutely does increase latency though. If I’ve got the option for steady frame rates without frame gen, I’ll take it over frame gen. Frame gen was just about mandatory for Borderlands 4 at launch, and it gave me a convincing 80 FPS. After a performance patch, the game can get 60 FPS on my machine for real with a few of the settings knocked down, and it feels so much better.