RE: [Boost-Users] Using date_time::microsec_clock in windows ?

Jeff Garland wrote:
It actually shouldn't be too hard. It is really a matter of swapping out the Windows API function calls to return the times and then adjusting appropriately. (follow-ups should probably be diverted to the Boost developers list)
I'm not sure if this has been discussed or not, but it may be harder than you imagine. The Windows API functions dealing with time have a resolution of approximately 10 milliseconds. Even the functions that return values in hectonanoseconds (100-nanosecond units) are limited to this resolution. To see what I mean, run this program:

#include <windows.h>
#include <cstring>
#include <iostream>
#include <map>

const int numElements = 10000;

DWORD WINAPI timeThread(void *)
{
    // Step 1: fill an array of FILETIME elements with the time returned by
    // GetSystemTimeAsFileTime. Make sure each element is unique - don't
    // increment the index until the time we just read is actually different
    // from the last time we read.
    FILETIME ft[numElements];
    memset(&ft[0], 0, sizeof(ft));
    GetSystemTimeAsFileTime(&ft[0]);
    int index = 1;
    while (index < numElements)
    {
        GetSystemTimeAsFileTime(&ft[index]);
        if (ft[index].dwLowDateTime != ft[index - 1].dwLowDateTime)
            index++;
    }

    // Step 2: calculate the delta from one timestamp to the next. Store the
    // deltas in a map<int, int> where the key is the delta and the value is
    // the number of times that delta appears. For example, with the timestamps:
    //     0, 5, 11, 16, 21, 25
    // we get the deltas:
    //     5, 6, 5, 5, 4
    // and the map will contain three entries:
    //     pair(4, 1), pair(5, 3), pair(6, 1)
    unsigned smallestDiff = (unsigned)-1, largestDiff = 0;
    ULARGE_INTEGER endTime;
    ULARGE_INTEGER startTime;
    startTime.LowPart = ft[0].dwLowDateTime;
    startTime.HighPart = ft[0].dwHighDateTime;
    std::map<int, int> distribution;
    for (index = 1; index < numElements; ++index)
    {
        endTime.LowPart = ft[index].dwLowDateTime;
        endTime.HighPart = ft[index].dwHighDateTime;
        unsigned difference = static_cast<unsigned>(endTime.QuadPart - startTime.QuadPart);
        std::cout << difference << "\n";
        if (difference < smallestDiff)
            smallestDiff = difference;
        if (difference > largestDiff)
            largestDiff = difference;
        startTime = endTime;
        distribution[difference]++;
    }

    // Step 3: dump the stats.
    endTime.LowPart = ft[numElements - 1].dwLowDateTime;
    endTime.HighPart = ft[numElements - 1].dwHighDateTime;
    startTime.LowPart = ft[0].dwLowDateTime;
    startTime.HighPart = ft[0].dwHighDateTime;
    std::cout << numElements << " samples. Smallest delta: "
              << smallestDiff / 10000.0 << "ms; Largest delta: "
              << largestDiff / 10000.0 << "ms; Average: "
              << (endTime.QuadPart - startTime.QuadPart) / (numElements * 10000.0)
              << "ms\nDistribution:" << std::endl;
    for (std::map<int, int>::iterator i = distribution.begin();
         i != distribution.end(); ++i)
    {
        std::cout << (*i).first << ": " << (*i).second << "\n";
    }
    return 0;
}

int main()
{
    std::cout << "Note: this program will run for approximately "
              << numElements / 100 << " seconds\nwith no output, at real-time priority.\n"
                 "Your system may appear to hang while this program is running.\n"
                 "Rest assured it *will* terminate." << std::endl;
    HANDLE threadHandle = CreateThread(NULL, 0, timeThread, 0, CREATE_SUSPENDED, 0);
    SetThreadPriority(threadHandle, THREAD_PRIORITY_TIME_CRITICAL);
    ResumeThread(threadHandle);
    WaitForSingleObject(threadHandle, INFINITE);
}

When I ran it on NT4 SP6, the distribution was as follows:

100136: 7614
100137: 2359
200272: 1
500680: 17
500685: 7
600822: 1

(the units are hectonanoseconds; divide by 10 to get microseconds). I'm guessing the 50 ms and 60 ms timings are related to task switching or other O/S overhead. On a machine running Windows 2000 SP3, the results were:

100144: 9999

--
Jim
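For illustration, the "adjusting appropriately" that Jeff mentions would roughly mean rebasing FILETIME's 1601-01-01 epoch onto a Boost ptime. The helper below is only a sketch, not Boost's actual implementation; the name filetime_to_ptime is invented here, and the result can never be finer than the ~10 ms granularity measured above:

#include <windows.h>
#include <boost/date_time/posix_time/posix_time.hpp>

// Hypothetical helper (not part of Boost): convert the 100ns FILETIME ticks
// returned by GetSystemTimeAsFileTime into a boost::posix_time::ptime.
boost::posix_time::ptime filetime_to_ptime(const FILETIME& ft)
{
    ULARGE_INTEGER ticks;                       // 100ns units since 1601-01-01 (UTC)
    ticks.LowPart  = ft.dwLowDateTime;
    ticks.HighPart = ft.dwHighDateTime;

    // Split into days / seconds / microseconds so no single duration
    // has to hold ~400 years' worth of microseconds.
    ULONGLONG totalSeconds = ticks.QuadPart / 10000000ULL;
    long days      = static_cast<long>(totalSeconds / 86400ULL);
    long secsInDay = static_cast<long>(totalSeconds % 86400ULL);
    long micros    = static_cast<long>((ticks.QuadPart % 10000000ULL) / 10ULL);

    boost::gregorian::date epoch(1601, 1, 1);   // FILETIME epoch
    return boost::posix_time::ptime(epoch + boost::gregorian::days(days),
                                    boost::posix_time::seconds(secsInDay)
                                    + boost::posix_time::microseconds(micros));
}

Sub-microsecond precision is simply dropped, and as Jim's measurements show, consecutive calls will still advance in roughly 10 ms steps.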

Jeff Garland wrote: It actually shouldn't be too hard. It is really a matter of swapping out the Windows API function calls to return the times and then adjusting appropriately.
Jim.Hyslop wrote:
I'm not sure if this has been discussed or not, but it may be harder than you imagine. The Windows API functions dealing with time have a resolution of approximately 10 milliseconds. Even the functions that return values in hectonanoseconds (100-nanosecond units) are limited to this resolution.
There's QueryPerformanceCounter, but this may occasionally jump by several seconds because of workarounds for some chipsets (Q274323). On uniprocessor PCs, I think this has a frequency of 1.1931817 MHz (64K cycles gives the 55 ms resolution of 16-bit Windows timers). On multi-processor (including hyper-threaded P4) systems, it is the processor clock speed, which nowadays requires a 64-bit integer unless you scale it. There's also the RDTSC instruction on most x86 CPUs, but uniprocessor Windows systems may execute a HLT instruction when idle, making this unreliable too (http://www.sysinternals.com/ntw2k/info/tips.shtml#Idle). In all cases, I suppose the workaround is to use the low-resolution clock to sanity-check the high-resolution values. This may be the sort of "appropriate adjustment" that Jeff was referring to.
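A sanity check of the kind Ken describes might look like the sketch below. It is only an illustration, not code from Boost or the Windows SDK, and the 50 ms tolerance is an arbitrary, assumed threshold:

#include <windows.h>
#include <iostream>

int main()
{
    LARGE_INTEGER freq, qpc0, qpc1;
    FILETIME ft0, ft1;

    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&qpc0);
    GetSystemTimeAsFileTime(&ft0);

    Sleep(100);   // the interval being measured

    QueryPerformanceCounter(&qpc1);
    GetSystemTimeAsFileTime(&ft1);

    // High-resolution elapsed time in milliseconds.
    double hi_ms = 1000.0 * (qpc1.QuadPart - qpc0.QuadPart) / freq.QuadPart;

    // Low-resolution elapsed time (100ns FILETIME ticks -> ms).
    ULARGE_INTEGER t0, t1;
    t0.LowPart = ft0.dwLowDateTime; t0.HighPart = ft0.dwHighDateTime;
    t1.LowPart = ft1.dwLowDateTime; t1.HighPart = ft1.dwHighDateTime;
    double lo_ms = (t1.QuadPart - t0.QuadPart) / 10000.0;

    // Sanity check: if the two clocks disagree by more than the tolerance,
    // assume the performance counter jumped and trust the system clock.
    const double tolerance_ms = 50.0;   // assumed threshold, tune as needed
    double elapsed_ms =
        (hi_ms - lo_ms > tolerance_ms || lo_ms - hi_ms > tolerance_ms) ? lo_ms : hi_ms;

    std::cout << "elapsed: " << elapsed_ms << " ms\n";
}

The idea is that the coarse clock is only used to catch gross errors: when the two elapsed times disagree badly, the sketch falls back to the ~10 ms system clock rather than trusting a counter that may have leapt forward.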
participants (2)
- Jim.Hyslop
- Ken Hagan