It just kinda makes no sense to me. How can you improve the framerate by predicting how the next frame should be rendered while reducing the overhead and not increasing it more than what it already takes to render the scene normally? Like even the simplistic concept of it sounds like pure magic. And yet… It’s real.
It does so at the cost of latency. It does not actually predict the next frame: it renders two full frames and then interpolates one frame between them. So it looks smoother, but your input also takes that much longer to show up on screen.
That is not strictly true; the actual latency increase is closer to half of one original frame interval. The input is not just the frame image but also the motion vectors (which direction each pixel moved) for the current frame. Frame gen also knows a lot about the image, like which bits have transparent pixels (which move in multiple directions at once) and when the game is done with the frame but still has to wait for the GPU (time which can be used for more work with little impact).
Frame gen is much more sophisticated than the old “motion smoothing” of televisions, the so-called “soap opera” mode, which increased latency far more and had no knowledge of how the source image was built, so it had to work much harder from raw pixels alone.
Stuff like DLSS5 is supposed to use the same inputs (source images and motion vectors), now that is magic to me.
Isn’t that the thing NVIDIA was found to be lying about?
Yeah devs apparently saw it didn’t use internal engine data much at all
That was about the Yassify filter.
Yeah, I simplified it to keep it at ELI5 level, but you’re right
It absolutely does increase latency though. If I’ve got the option for steady frame rates without frame gen, I’ll take it over frame gen. Frame gen was just about mandatory for Borderlands 4 at launch, and it gave me a convincing 80 FPS. After a performance patch, the game can get 60 FPS on my machine for real with a few of the settings knocked down, and it feels so much better.
Works the same way as any other software optimization: lower quality computation as a shortcut.
The predicted frames don’t use the same full stack of data that a true frame uses to render, they just use the previous frames’ data and the motion vectors. The rest is a very efficient neural-network guessing algorithm based on those two pieces of data instead of the full shader stack.
The ELI5 would be that frame generation skips a lot of the graphical calculations for geometry, lighting, and so on, and instead bases the generated frame on the pixel data from the real frames before it. For every real frame, the full calculations must still be done.
As I understand it, frame gen doesn’t predict frames but interpolates frames. So it doesn’t “look into the future” but inserts frames between those rendered by the game, with the selling point being that 30fps could look like 60fps. The reason this works is that “guessing” what those extra frames between existing frames look like is relatively easy. The game has already done all the hard work of figuring out the actual frames, then frame gen uses a faster algorithm to make the in-between frames. Because frame gen adds the frames in between the real frames, you can feel more input lag.
Interesting. That makes frame gen sound like “tweeners” in animation: you get the experienced animators to do the big key frames, and then have the newbies draw what goes in between them.
Since it’s ELI5 I’ll keep it very simple. It’s not like I know the exact mechanics anyway. No guarantee of pedantic correctness. I’m sure if I get anything overly wrong then someone who wouldn’t comment otherwise will correct me (please and thank you).
Let’s start from interpolation. It’s a simple maths idea: inter for between, poles for points. Let’s say you have two points. You could draw a line between them, take the middle point of that line. You’ve now introduced a new point.
This concept is used a lot in physics or maths in general. Let’s say you are writing down the speed of a car over time. You have 1 speed value per second. But you’re interested in the speed at 23.33 seconds for some reason.
Now you have a few options:
- You could take the speeds at 23 and 24 seconds and do the same as before: draw an imaginary straight line between them, and read off the speed at 23.33.
- You could also look at how the speed changed from 22 to 23 instead, especially if you didn’t have the 24 s value written down.
- You could look at more of the speed values and try to figure out how the car’s speed changes over time, since it’s unlikely to be linear. That gets you to more complex forms of interpolation. That’s what’s used to find a more descriptive equation of motion for objects.
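The first option above is just linear interpolation. A small Python sketch of it, using made-up speed values for the two recorded seconds:

```python
# Linear interpolation: estimate the speed at t = 23.33 s from the
# recorded values at t = 23 s and t = 24 s. The speed numbers here
# are invented purely for illustration.

def lerp(v0: float, v1: float, t: float) -> float:
    """Blend between v0 (at t=0) and v1 (at t=1) along a straight line."""
    return v0 + (v1 - v0) * t

speed_at_23 = 30.0  # m/s, recorded
speed_at_24 = 33.0  # m/s, recorded

# 23.33 s is 33% of the way from 23 s to 24 s.
estimated = lerp(speed_at_23, speed_at_24, 0.33)
print(estimated)  # roughly 30.99 m/s
```

The midpoint example from earlier is the same thing with t = 0.5.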
That may have been a bit of a tangent, but it does get us back to frame generation. We are interpolating where each pixel is between frames. Or perhaps even saying: okay, this visual object moved from X to Y, what happened between them?
The key part is: graphics already have this information. It would be wasteful to re-render an entire scene every frame, so you just look at what needs updating and how. But that means you know what happens one frame to the next. So now you just take that information and do some simple maths to figure out the in-between step, and show that to the user as well.
Performance-wise it’s not costly. The tough calculation is the update from frame to frame. It does take a bit of time though, introducing some tiny lag in your display.
Of course the actual frame gen algorithms can take a lot more data into account, but the simple idea is: between Point X and point Y there exists a point A which we can calculate relatively cheaply and display first.
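A toy sketch of that "point A between X and Y" idea (heavily simplified: real frame gen operates on whole images plus motion vectors and occlusion data on the GPU, and these coordinates are made up):

```python
# Toy illustration only. We track one "object" by its pixel position
# in two consecutive real frames and place it halfway between those
# positions for the generated in-between frame.

def midpoint_frame(pos_a: tuple, pos_b: tuple) -> tuple:
    """Position of an object in the generated frame between two real frames."""
    return tuple((a + b) / 2 for a, b in zip(pos_a, pos_b))

# Object pixel coordinates in two consecutive real frames.
frame1_pos = (100, 200)
frame2_pos = (110, 204)

print(midpoint_frame(frame1_pos, frame2_pos))  # (105.0, 202.0)
```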
Adding to what others are saying: having a relatively high fps to start with helps frame gen work better:
- the difference between the real frames is smaller so the interpolation result is better
- the generated fake frames are shown for less time (120fps = 8ms per frame) so errors are harder to spot
- the perceived latency is reduced since generating two real frames (to add one between) takes less time
For these reasons, frame gen works best for those who already have a high base fps. I use it to take advantage of a 4k 240hz monitor in visually rich single-player experiences. I aim for 70-90 base fps and then double it with frame gen. For competitive multiplayer I disable it.
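The arithmetic behind those bullet points, in a simplified model where the added delay is about one base frame time (this ignores the cost of the interpolation itself and any driver-level details):

```python
# Rough frame-time arithmetic for 2x interpolation-based frame gen.
# Simplified model: a real frame is held back until the next real
# frame exists, so the added delay is about one base frame time.

def frame_time_ms(fps: float) -> float:
    """Time each frame spends on screen at a given frame rate."""
    return 1000.0 / fps

for base_fps in (30, 60, 120):
    shown_fps = base_fps * 2  # counter reading with 2x frame gen
    print(f"{base_fps} fps base -> {shown_fps} fps shown, "
          f"~{frame_time_ms(base_fps):.1f} ms extra delay, "
          f"each generated frame visible for {frame_time_ms(shown_fps):.1f} ms")
```

At a 30 fps base you pay ~33 ms of extra delay and each fake frame sits on screen for ~17 ms; at a 120 fps base it is ~8 ms of delay and ~4 ms per fake frame, which is why artifacts and lag are much harder to notice.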
This kinda also explains why my experience with games that offer FG has been mostly bad. The only game where FG actually made things visually smoother was STALKER 2. Without FG I get like 40fps even with FSR; with FG it looks and almost feels like natural 60fps in other games, but the input lag kinda ruins all the visual gains by making it very icky to control. In every other game, though, where I get close to but not exactly 60fps, turning FG on makes the FPS counter say 60fps but the game is visually not fluid, even if the input lag isn’t noticeable at all. Standing still and watching trees blow in the wind or NPCs walking around looks even lower than the 40-50fps I was getting without FG, and it often starts to stutter every few seconds or minutes, depending on the game.
I figured it might just be my older GPU (GTX 1660 Super) simply not having some chip necessary for it to work properly, like DLSS or RTX ray tracing. Which is also a little ironic, given one would think FG is for when you can’t get a stable FPS on older or lower-powered hardware. 🤷♂️
Yeah, there’s a fair bit of criticism about the tech being better for the higher-end cards that shouldn’t need it in the first place. Another way this shows up is in VRAM amounts.
To ELI5, how effective FG is at improving the base frame rate scales with available VRAM. (Think 60 improved to 80 versus 60 improved to 120.) Some modern games hit 12GB regularly now even in 1080p and before any fancy tech. (There’s a separate discussion on game optimization in there.) Since lower-end cards really skimp on provided VRAM (every tier should really be at least 4GB higher), there’s not much space there for FG to work with in the first place.
Ohhh. That might actually look nice. I hadn’t thought about using it in that direction.
So I don’t think it predicts anything, but renders two frames and then smushes them together into one or more in-between frames.
Because it’s not actually reducing any overhead. What you get is fewer high-fidelity “real” frames each second, in exchange for roughly 2x (or more) low-fidelity “fake” frames.
So a 60 FPS game before may run at 100 FPS after, which is really only 50 real FPS + 50 fake FPS.
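That trade-off in numbers (the 60 and 100 figures are the example ones from the comment above):

```python
# The on-screen FPS counter counts both real and generated frames.
fps_before = 60        # no frame gen: all 60 frames are real
fps_counter_after = 100  # counter reading with 2x frame gen on

# With 2x frame gen, every other displayed frame is generated,
# so the real frame rate is only half the counter reading.
real_fps_after = fps_counter_after / 2
print(real_fps_after)  # 50.0

# You traded 10 real frames per second for 50 interpolated ones.
real_frames_lost = fps_before - real_fps_after
print(real_frames_lost)  # 10.0
```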
Also some of the frame generation algorithms are tied to upscaling, so textures and everything are loaded in lower res, and an algorithm guesses what’s missing.
The more you let the computer guess what’s supposed to be there the faster it runs but the less accurate it gets.