Saturday, 15 July 2017

Is gettimeofday() guaranteed to be of microsecond resolution?

Question: So I find myself porting a game that was originally written for the Win32 API to Linux (well, porting the OS X port of the Win32 port to Linux). I have implemented QueryPerformanceCounter by returning the microseconds elapsed since process start-up:

#include <sys/time.h>

static struct timeval startTimeVal;  /* filled in once with gettimeofday() at start-up */

BOOL QueryPerformanceCounter(LARGE_INTEGER* performanceCount)
{
    struct timeval currentTimeVal;

    gettimeofday(&currentTimeVal, NULL);

    /* Whole seconds elapsed since start-up, scaled to microseconds... */
    performanceCount->QuadPart = (currentTimeVal.tv_sec - startTimeVal.tv_sec);
    performanceCount->QuadPart *= (1000 * 1000);
    /* ...plus the sub-second remainder (may be negative; the sum is still correct). */
    performanceCount->QuadPart += (currentTimeVal.tv_usec - startTimeVal.tv_usec);

    return true;
}

This, coupled with a QueryPerformanceFrequency() that returns a constant 1000000, works well on my machine, giving me a 64-bit variable that holds the microseconds elapsed since the program's start-up. So is this portable? I don't want to discover it behaves differently if the kernel was compiled in a certain way or anything like that. I am fine with it being non-portable to anything other than Linux, however.
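For context, the matching QueryPerformanceFrequency() shim amounts to a one-liner (a minimal sketch, assuming the same BOOL/LARGE_INTEGER compatibility typedefs used by the shim above):

BOOL QueryPerformanceFrequency(LARGE_INTEGER* frequency)
{
    /* The counter above ticks in microseconds, so the frequency is fixed. */
    frequency->QuadPart = 1000 * 1000;
    return true;
}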

Solution: Maybe. But you have bigger problems. gettimeofday() can result in incorrect timings if there are processes on your system that adjust the clock (e.g., ntpd): the time it reports can consequently jump forwards and backwards, depending on what is running on your system. On a "normal" Linux, though, I believe the resolution of gettimeofday() is 10 us. This effectively makes the answer to your question no.
You should look into clock_gettime(CLOCK_MONOTONIC) for timing intervals; it suffers from far fewer issues caused by things like multi-core systems and externally adjusted clocks.
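As a rough sketch of what that looks like, here is the shim from the question rewritten on top of CLOCK_MONOTONIC (assuming a startTimeSpec baseline captured once at start-up, analogous to startTimeVal; this is an illustration, not a drop-in replacement):

#include <time.h>

static struct timespec startTimeSpec;  /* filled in once with clock_gettime() at start-up */

BOOL QueryPerformanceCounter(LARGE_INTEGER* performanceCount)
{
    struct timespec now;

    if (clock_gettime(CLOCK_MONOTONIC, &now) != 0)
        return false;

    /* CLOCK_MONOTONIC never jumps backwards, so intervals stay well-behaved. */
    performanceCount->QuadPart = (now.tv_sec - startTimeSpec.tv_sec);
    performanceCount->QuadPart *= (1000 * 1000);
    performanceCount->QuadPart += (now.tv_nsec - startTimeSpec.tv_nsec) / 1000;

    return true;
}

Note that on older glibc versions clock_gettime() lives in librt, so you may need to link with -lrt.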
Also, look into the clock_getres() function, which reports the granularity of a given clock.
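For example, a small stand-alone probe that prints the advertised granularity of the monotonic clock (granularity, mind you, not accuracy):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec res;

    /* clock_getres() fills in the smallest tick the clock can represent. */
    if (clock_getres(CLOCK_MONOTONIC, &res) == 0)
        printf("CLOCK_MONOTONIC resolution: %ld.%09ld s\n",
               (long)res.tv_sec, res.tv_nsec);

    return 0;
}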
