There tends to be one major difference between games and non-game applications, so toolkits designed for one are often quite unsuitable for the other.
A game generally runs its logic and repaints the whole window every frame, with at most some framerate limiting in "paused" states. This burns power, but the load is steady, and the loop often tries hard to minimize latency.
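A minimal sketch of that loop, assuming Rust; `render_frame`, the ~60 Hz cap, and the fixed iteration count are illustrative stand-ins, not any particular engine's API:

```rust
use std::time::{Duration, Instant};

/// Stand-in for "repaint the whole window": a real game would record and
/// submit a full frame of GPU work here.
fn render_frame(frame: u64) {
    println!("frame {frame}");
}

fn main() {
    let paused = true;                             // pretend we're sitting in a menu
    let frame_cap = Duration::from_micros(16_667); // ~60 Hz cap while paused

    // A real game loops until quit; a handful of iterations keeps this sketch terminating.
    for frame in 0..5u64 {
        let start = Instant::now();

        // Repaint everything, every iteration, regardless of what changed.
        render_frame(frame);

        // The only concession to power: throttle the loop in "paused" states.
        if paused {
            if let Some(remaining) = frame_cap.checked_sub(start.elapsed()) {
                std::thread::sleep(remaining);
            }
        }
    }
}
```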
An application generally tries to paint as little of the window as possible, as rarely as possible. Reducing video bandwidth means using a lot less power, but the load is variable, so latency sometimes gets demoted to "it would be nice".
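The application pattern looks more like the following sketch, where an mpsc channel stands in for the windowing system's event queue and the `Event`/`repaint` names are hypothetical:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// Hypothetical event type; a real toolkit delivers something much richer.
enum Event {
    ButtonClicked,
    Quit,
}

/// Repaint only the region that actually changed.
fn repaint(region: &str) {
    println!("repainting {region}");
}

fn main() {
    let (tx, rx) = mpsc::channel();

    // Stand-in for the user / window system producing occasional events.
    thread::spawn(move || {
        tx.send(Event::ButtonClicked).unwrap();
        thread::sleep(Duration::from_millis(50));
        tx.send(Event::Quit).unwrap();
    });

    // Block until something happens; no CPU or GPU work is spent in between.
    while let Ok(event) = rx.recv() {
        match event {
            Event::ButtonClicked => repaint("the button's rectangle"),
            Event::Quit => break,
        }
    }
}
```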
Notably, the implications of the 4-way choice between {tearing, vsync, double-buffer, triple-buffer} look very different between those two - and so does the question of "how do we use the GPU?".
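As an illustrative summary only (not any toolkit's or engine's API; real code expresses this through its swapchain configuration), the four options could be written down like this, with the comments restating the power-versus-latency trade-off above:

```rust
/// Illustrative names for the 4-way presentation choice.
#[derive(Debug)]
enum PresentStrategy {
    /// Present immediately, tearing allowed: lowest latency, at the cost of
    /// showing partially updated frames.
    Tearing,
    /// Wait for vblank: no tearing, latency tied to the refresh rate.
    Vsync,
    /// Two buffers swapped at vblank: rendering can stall waiting for the swap,
    /// which a power-conscious application may not mind.
    DoubleBuffer,
    /// Three buffers: rendering rarely stalls, at the cost of extra memory and
    /// more sustained GPU work.
    TripleBuffer,
}

fn main() {
    // A game and an application plausibly land on opposite ends of this list.
    println!("game-leaning:         {:?}", PresentStrategy::TripleBuffer);
    println!("application-leaning:  {:?}", PresentStrategy::DoubleBuffer);
    println!("latency at all costs: {:?}", PresentStrategy::Tearing);
    println!("simple vsync:         {:?}", PresentStrategy::Vsync);
}
```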