
On Fri, 11 Feb 2005 19:30:36 -0500, christopher diggins wrote:
I am confused. I don't see how measuring time intervals with either xtime_get() or QueryPerformanceCounter or std::clock() would differ apart from resolution and accuracy.
xtime_get retrieves the current wall clock time. This means that a difference between xtime_get values will tell you the actual number of seconds that passed; you could duplicate its results with your watch.

std::clock() and, I assume, QueryPerformanceCounter() give you a count of clock ticks. This number is only incremented when the CPU works on your task. Say you have a very highly loaded system: 100% CPU usage, lots of threads (say, 1000), you name it. The std::clock() difference would be very different from the xtime_get difference. Why? Because under those circumstances, the kernel may only get to run your task infrequently, and may not give it as much CPU time as it would get on a system with no load. Therefore, std::clock() is saying 'this is how much of the kernel's CPU timeslice I got', whereas xtime_get is telling you what the current time is, regardless of how much or how little CPU time the kernel is allocating to your process.

I don't know of any windows commandline utilities to track this, but if you, say, had an application on linux/unix and ran 'time' on it, like so:

    time tar xvfz ../mtxdrivers-rh9.0-v1.1.0-pro-beta.tar.gz

You would get the following output:

    real    0m0.414s
    user    0m0.166s
    sys     0m0.104s

The 'real' is the difference between the wall clock times at the start and end of the task. The 'user' and 'sys' are how much time in 'CPU seconds' (aka. how many CPU clock ticks) the application was given to perform its task ('user' being how much time the user-space portion of the code used, and 'sys' being how much time the kernel took to execute its part, eg. disk I/O, etc). As you can see, the real time is greater than the sum of the other two because the kernel did not give the application 1 whole CPU to use for the entire duration of the application.

So xtime_get is equivalent to 'real', and std::clock() is equivalent to 'user + sys'. Otherwise known as 'how long did this run for?' and 'how long did this take to run?', which are different questions :) Assuming the kernel gave the application 1 whole CPU to use from start to end of the application's invocation, they would be equal. This rarely happens.

--
PreZ :)
Founder. The Neuromancy Society (http://www.neuromancy.net)