The short answer is It's Complicated™. :)
A lot of factors can affect frame timing (and the associated problem of animation judder, due to the game's animation delta-times not matching actual frame delivery times). Some of the factors are:

* how CPU-limited versus GPU-limited the game is,
* how the game's code measures time,
* how much variability there is in frame time on both the CPU and GPU,
* how many frames of queue-ahead the CPU is allowed to build up,
* double versus triple buffering, and
* driver behavior.
On the specific question of whether you get locked to a divisor of 60 fps (30, 20, 15, etc.), this seems to depend mainly on whether the application is GPU-limited or CPU-limited.
If you're GPU-limited, then assuming the application is running steadily (not producing any hitches or rapid performance shifts in itself), it does lock to the highest divisor of the vsync rate that it can sustain.
This timing diagram shows how it works:
Time runs from left to right and the width of each block represents the duration of one frame's work.
After the first few frames, the system settles into a state where the frames are turned out at a steady rate, one every two vsync periods (i.e. if vsync was 60 Hz, the game would be running at exactly 30 fps).
Note the empty spaces (idle time) in both CPU and GPU rows.
That indicates that the game could run faster if vsync were turned off. However, with vsync on, the swapchain puts back-pressure on the GPU—the GPU can't start rendering the next frame until vsync releases a backbuffer for it to render into.
The GPU in turn puts back-pressure on the CPU via the command queue, as the driver won't return from the present call until the GPU starts rendering the next frame and opens a queue slot for the CPU to continue issuing commands. The result is that everything runs at a steady 30 fps, and everyone's happy (except maybe the 60 fps purist gamer 😉).
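To make the divisor-locking behavior concrete, here's a toy Python model (not any real graphics API): with vsync on and a GPU-limited app, each frame effectively occupies a whole number of vsync periods, so the sustained rate is the refresh rate divided by that integer.

```python
import math

def vsync_locked_fps(gpu_frame_ms, refresh_hz=60.0):
    """Toy model: a GPU-limited frame rounds up to a whole number of
    vsync periods, so the game locks to a divisor of the refresh rate."""
    period_ms = 1000.0 / refresh_hz
    periods = math.ceil(gpu_frame_ms / period_ms)
    return refresh_hz / periods

# A 25 ms GPU frame at 60 Hz rounds up to 2 periods -> locked at 30 fps.
print(vsync_locked_fps(25.0))  # 30.0
# A 40 ms GPU frame rounds up to 3 periods -> locked at 20 fps.
print(vsync_locked_fps(40.0))  # 20.0
```

This is only the steady-state result from the diagram above; it ignores queue-ahead and startup transients.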
Compare this to what happens when the application is CPU-limited:
This time we ended up with the game running at 36 fps, the maximum speed possible given the CPU load in this case.
But this isn't a divisor of the refresh rate, so we get uneven frame delivery to the display, with some frames shown for two vsync periods and some for only one. This will cause judder.
Here the GPU goes idle because it's waiting for the CPU to give it more work—not because it's waiting for a backbuffer to be available to render into.
The swapchain thus doesn't put back-pressure on the GPU (or at least doesn't do so consistently), and the GPU in turn doesn't put back-pressure on the CPU. So the CPU just runs as fast as it can, oblivious to the irregular frame times appearing on the display.
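You can see the uneven delivery pattern with a small model of the CPU-limited case. Here, a frame appears at the first vsync tick at or after the CPU finishes it (this is a simplification that ignores GPU cost entirely), and the gap between consecutive appearances is how many refresh periods each frame stays on screen:

```python
import math
from fractions import Fraction

def display_durations(cpu_fps, refresh_hz, n_frames):
    """Toy model: CPU turns out frames every 1/cpu_fps seconds; each
    appears at the first vsync tick at or after it's ready. Returns how
    many vsync periods each frame stays on screen."""
    # Vsync tick on which frame i first appears (exact rational arithmetic).
    shown = [math.ceil(Fraction(i + 1, cpu_fps) * refresh_hz)
             for i in range(n_frames)]
    return [b - a for a, b in zip(shown, shown[1:])]

# 36 fps on a 60 Hz display: frames irregularly last 2 periods or 1.
print(display_durations(36, 60, 7))  # [2, 1, 2, 2, 1, 2]
```

That alternating 2-period/1-period pattern is exactly the judder described above.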
It's possible to "fix" this so that the game still discretizes to vsync-friendly rates even in the CPU-limited case.
The game code could try to recognize this situation and compensate by sleeping its CPU thread to artificially lock itself to 30 fps, or by increasing its swap interval to 2. Alternatively, the driver could do the same thing by inserting extra sleeps in the present call (the NVIDIA drivers do have this as a control panel option; I don't know about others).
But in any case, someone has to detect the problem and do something about it; the game won't naturally settle on a vsync-friendly rate the way it does when it's GPU-limited.
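A minimal sketch of the sleep-based fix might look like this (an assumption-laden illustration, not how any particular game or driver does it; a real implementation would need smoothing and hysteresis rather than reacting to each frame's measured time):

```python
import math
import time

REFRESH_HZ = 60.0
PERIOD = 1.0 / REFRESH_HZ  # one vsync period, in seconds

def padded_frame_time(measured_frame_s):
    """Round a measured frame time up to the next whole multiple of the
    vsync period, so the effective rate is a divisor of the refresh rate."""
    return math.ceil(measured_frame_s / PERIOD) * PERIOD

def run_frame(do_work):
    """Run one frame of (hypothetical) game work, then sleep to pad the
    frame out to a vsync-friendly duration."""
    start = time.perf_counter()
    do_work()  # game update + render submission
    elapsed = time.perf_counter() - start
    target = padded_frame_time(elapsed)
    if elapsed < target:
        time.sleep(target - elapsed)  # artificial lock to a divisor of 60

# A ~27.8 ms CPU frame (36 fps) gets padded to 2 vsync periods -> 30 fps.
```

The same effect could be had by raising the swap interval to 2, as mentioned above; the sleep approach just keeps the decision in game code.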
This was all a simplified case in which both CPU and GPU workloads are perfectly steady. In real life, of course, there's variability in the workloads and frame times, which can cause even more interesting things to happen.
The CPU render-ahead queue and triple buffering can come into play then.
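As a toy illustration of why the queue matters (again a simplified model with GPU cost ignored, and the queue-slot bookkeeping here is my own approximation): a deeper render-ahead queue lets frames buffered during fast frames cover an occasional CPU spike, while a shallow queue turns the same spike into a missed vsync.

```python
import math

PERIOD = 1000.0 / 60.0  # one vsync period at 60 Hz, in ms

def display_ticks(cpu_work_ms, queue_depth):
    """Toy model: vsync tick on which each frame appears, given per-frame
    CPU cost and how many finished frames may be queued ahead of display."""
    finishes, ticks = [], []
    for i, work in enumerate(cpu_work_ms):
        start = finishes[-1] if finishes else 0.0
        if i >= queue_depth:
            # back-pressure: can't start until frame i-queue_depth's
            # buffer is freed at its display vsync
            start = max(start, ticks[i - queue_depth] * PERIOD)
        finishes.append(start + work)
        # frame appears at the first vsync at/after it's ready,
        # and after the previous frame
        tick = math.ceil(finishes[-1] / PERIOD)
        if ticks:
            tick = max(tick, ticks[-1] + 1)
        ticks.append(tick)
    return ticks

work = [10, 10, 30, 10, 10]        # one 30 ms CPU spike among 10 ms frames
print(display_ticks(work, 1))      # [1, 2, 4, 5, 6] -> a frame lasts 2 periods (judder)
print(display_ticks(work, 3))      # [1, 2, 3, 4, 5] -> spike absorbed, smooth 60 fps
```

With a queue depth of 1 the spike shows up on screen; with a depth of 3 the frames banked during the cheap 10 ms frames hide it, at the cost of extra latency.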
answered Apr 10 '16 at 2:55