
In all of my tests I regulate the FPS using this method:

#include <SDL.h>    // SDL_GetTicks, SDL_Delay
#include <iostream>

const int fps = 30;
const int msPerFrame = 1000 / fps; // 33 ms per frame, so it's more like 30.3030 fps
while (true)
{
  const Uint32 start = SDL_GetTicks();

  // do stuff (game logic, then rendering)

  const Uint32 end = SDL_GetTicks();
  // SDL_GetTicks returns Uint32, so do the subtraction first and then convert
  const int delay = msPerFrame - static_cast<int>(end - start);
  if (delay > 0)
    SDL_Delay(delay);
  else
    std::cout <<
      "Warning, main loop took " << -delay <<
      " ms more than it was allowed." << std::endl;
}

At the beginning of the game's while loop I set start = SDL_GetTicks(), then I run the game logic followed by the rendering, and finish with end = SDL_GetTicks(). Finally, I do the regulation with delay = 1000/fps - (end - start) and call SDL_Delay(delay) if delay is greater than zero.

My question is: is this actually doing what I want, or could there be fluctuations? I started wondering about this because sometimes even the smallest drawing functions make my CPU usage go up.

  • While that is a very good read, it is actually far too intelligent for me. :p I'm not asking for another way, I'm just wondering if my code works the way your generic average programmer would do it. – thatoneguy2 Apr 25 '15 at 20:34
  • The methods described in the article I posted are what the generic average game programmer would choose from, depending on requirements (I prefer the "Free the physics" method, because it removes the need to take the delta time into account in every single calculation, which greatly simplifies the code of the update function, but that's just my personal preference). – Philipp Apr 29 '15 at 09:28
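
(For context, here is a minimal sketch of the fixed-timestep pattern Philipp is referring to, where the simulation always advances in constant steps. The update rate, the running flag and the processInput/update/render functions are illustrative placeholders, not taken from the article.)

const int updatesPerSecond = 60;                    // illustrative value
const Uint32 msPerUpdate = 1000 / updatesPerSecond; // fixed simulation step

Uint32 previous = SDL_GetTicks();
Uint32 lag = 0;

while (running) // 'running' is a hypothetical flag cleared when the game should quit
{
  const Uint32 current = SDL_GetTicks();
  lag += current - previous;
  previous = current;

  processInput(); // hypothetical input handling

  // Advance the simulation in fixed steps, so update() never needs a delta time.
  while (lag >= msPerUpdate)
  {
    update();
    lag -= msPerUpdate;
  }

  render(); // render as often as possible; the leftover 'lag' can be used for interpolation
}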

1 Answer


No. SDL_Delay has a granularity of about 10 ms (it depends on the OS scheduler), so it's not suitable for precise frame timing in a game loop. It's simple to test: use SDL_GetPerformanceCounter() and SDL_GetPerformanceFrequency() to measure the time taken by SDL_Delay(1) and you'll see it takes around 10 ms.
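
A minimal sketch of that measurement, assuming SDL2 is available; the loop count and output format are just for illustration:

#include <SDL.h>
#include <cstdio>

int main(int, char**)
{
    if (SDL_Init(SDL_INIT_TIMER) != 0) // the timer subsystem is enough for this test
    {
        std::fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }

    const Uint64 freq = SDL_GetPerformanceFrequency(); // counter ticks per second

    for (int i = 0; i < 10; ++i)
    {
        const Uint64 before = SDL_GetPerformanceCounter();
        SDL_Delay(1);                                   // ask for a 1 ms sleep
        const Uint64 after = SDL_GetPerformanceCounter();

        const double elapsedMs =
            1000.0 * static_cast<double>(after - before) / static_cast<double>(freq);
        std::printf("SDL_Delay(1) actually slept for %.3f ms\n", elapsedMs);
    }

    SDL_Quit();
    return 0;
}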

In general, get used to measuring the effects of your code.

ed4053