gettimeofday() C++ Inconsistency



I am working on a project that involves comparing programming languages. I am computing the Ackermann function. I tested Java, Python, and Ruby and got responses between 10 and 30 milliseconds. But C++ seems to take 125 milliseconds. Is this normal, or is it a problem with gettimeofday()? gettimeofday() is in time.h.

I am testing on a (virtual) Ubuntu Natty Narwhal 32-bit. I am not short on processing power (quad-core 2.13 GHz Intel Xeon).

My code is here:

#include <iostream>
#include <sys/time.h>
using namespace std;
int a(int m,int n) {
    if (m == 0) {
        return n + 1;
    } else if (m > 0 and n == 0) {
        return a(m-1,1);
    } else if (m > 0 and n > 0) {
        return a(m-1,a(m,n-1));
    }
}
int main() {
    timeval tim;
    gettimeofday(&tim,NULL);
    double t1 = tim.tv_usec;
    int v = a(3,4);           
    gettimeofday(&tim,NULL);
    double t2 = tim.tv_usec;
    cout << v << endl << t2-t1;
    return 0;
}       

Assuming you're talking about the resolution of the returned data, the POSIX specification for gettimeofday states:

The resolution of the system clock is unspecified.

This is due to the fact that systems may have a widely varying capacity for tracking small time periods. Even the ISO-standard clock() function includes caveats like this.
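As an aside, POSIX does give you a way to ask what resolution the implementation claims: clock_getres(). The sketch below is only illustrative; it assumes CLOCK_REALTIME is the clock underlying gettimeofday(), which is typical but not guaranteed, and the claimed resolution need not match what you actually observe (on older glibc, such as Natty's, you may also need to link with -lrt):

#include <stdio.h>
#include <time.h>

int main (void) {
    struct timespec res;
    // Ask for the claimed resolution of the realtime clock. This is
    // what the implementation advertises, not necessarily the
    // granularity gettimeofday() will actually deliver.
    if (clock_getres (CLOCK_REALTIME, &res) != 0) {
        perror ("clock_getres");
        return 1;
    }
    printf ("claimed resolution: %ld s %ld ns\n",
        (long) res.tv_sec, res.tv_nsec);
    return 0;
}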

If you're talking about how long it takes to call it, the standard makes no guarantees about performance along those lines. An implementation is perfectly free to wait 125 minutes before giving you the time, although I doubt such an implementation would have much market success :-)


As an example of the limited resolution, I typed in the following code to check it on my system:

#include <stdio.h>
#include <sys/time.h>
#define NUMBER 30
int main (void) {
    struct timeval tv[NUMBER];
    int count[NUMBER], i, diff;
    gettimeofday (&tv[0], NULL);
    for (i = 1; i < NUMBER; i++) {
        gettimeofday (&tv[i], NULL);
        count[i] = 1;
        while ((tv[i].tv_sec == tv[i-1].tv_sec) &&
            (tv[i].tv_usec == tv[i-1].tv_usec))
        {
            count[i]++;
            gettimeofday (&tv[i], NULL);
        }
    }
    printf ("%2d: secs = %d, usecs = %6dn", 0, tv[0].tv_sec, tv[0].tv_usec);
    for (i = 1; i < NUMBER; i++) {
        diff = (tv[i].tv_sec - tv[i-1].tv_sec) * 1000000;
        diff += tv[i].tv_usec - tv[i-1].tv_usec;
        printf ("%2d: secs = %d, usecs = %6d, count = %5d, diff = %dn",
            i, tv[i].tv_sec, tv[i].tv_usec, count[i], diff);
    }
    return 0;
}

The code basically records the changes in the underlying time, keeping a count of how many calls to gettimeofday() it took for the time to actually change. This is on a reasonably powerful machine, so it's not short on processing power (the count indicates how often it was able to call gettimeofday() per time quantum, around the 5,800 mark, ignoring the first since we don't know when in that quantum we started the measurements).

The output was:

 0: secs = 1318554836, usecs = 990820
 1: secs = 1318554836, usecs = 991820, count =  5129, diff = 1000
 2: secs = 1318554836, usecs = 992820, count =  5807, diff = 1000
 3: secs = 1318554836, usecs = 993820, count =  5901, diff = 1000
 4: secs = 1318554836, usecs = 994820, count =  5916, diff = 1000
 5: secs = 1318554836, usecs = 995820, count =  5925, diff = 1000
 6: secs = 1318554836, usecs = 996820, count =  5814, diff = 1000
 7: secs = 1318554836, usecs = 997820, count =  5814, diff = 1000
 8: secs = 1318554836, usecs = 998820, count =  5819, diff = 1000
 9: secs = 1318554836, usecs = 999820, count =  5901, diff = 1000
10: secs = 1318554837, usecs =    820, count =  5815, diff = 1000
11: secs = 1318554837, usecs =   1820, count =  5866, diff = 1000
12: secs = 1318554837, usecs =   2820, count =  5849, diff = 1000
13: secs = 1318554837, usecs =   3820, count =  5857, diff = 1000
14: secs = 1318554837, usecs =   4820, count =  5867, diff = 1000
15: secs = 1318554837, usecs =   5820, count =  5852, diff = 1000
16: secs = 1318554837, usecs =   6820, count =  5865, diff = 1000
17: secs = 1318554837, usecs =   7820, count =  5867, diff = 1000
18: secs = 1318554837, usecs =   8820, count =  5885, diff = 1000
19: secs = 1318554837, usecs =   9820, count =  5864, diff = 1000
20: secs = 1318554837, usecs =  10820, count =  5918, diff = 1000
21: secs = 1318554837, usecs =  11820, count =  5869, diff = 1000
22: secs = 1318554837, usecs =  12820, count =  5866, diff = 1000
23: secs = 1318554837, usecs =  13820, count =  5875, diff = 1000
24: secs = 1318554837, usecs =  14820, count =  5925, diff = 1000
25: secs = 1318554837, usecs =  15820, count =  5870, diff = 1000
26: secs = 1318554837, usecs =  16820, count =  5877, diff = 1000
27: secs = 1318554837, usecs =  17820, count =  5868, diff = 1000
28: secs = 1318554837, usecs =  18820, count =  5874, diff = 1000
29: secs = 1318554837, usecs =  19820, count =  5862, diff = 1000

showing that the resolution seems to be limited to no better than one thousand microseconds. Of course, your system may be different to that; the bottom line is that it depends on your implementation and/or environment.
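Another way around it, if your environment provides one, is a finer-grained clock. This isn't part of the question's setup, just a sketch of one possibility: on Linux, clock_gettime() with CLOCK_MONOTONIC usually ticks much finer than a millisecond and doesn't jump when the wall clock is adjusted (again, older glibc needs -lrt):

#include <iostream>
#include <time.h>
using namespace std;

// Read the monotonic clock and return it as microseconds. Assumes
// CLOCK_MONOTONIC is available, which is true on any recent Linux
// but should be checked on other platforms.
double nowUsec() {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1.0e6 + ts.tv_nsec / 1.0e3;
}

int main() {
    double t1 = nowUsec();
    // ... the code being timed would go here ...
    double t2 = nowUsec();
    cout << (t2 - t1) << '\n';
    return 0;
}

Even with a better clock, the averaging approach below is still worthwhile, since a single short run is at the mercy of scheduler noise.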


One way to get around this limitation is to not do something once, but to do it N times and then divide the elapsed time by N.

For example, say you call your function and the timer says it took 125 milliseconds, something you suspect seems a little high. I would suggest then calling it a thousand times in a loop and measuring the time it takes for the entire thousand.

If that turns out to be 125 seconds then, yes, it's probably slow. However, if it takes only 27 seconds, that would indicate your timer resolution is what's causing the seemingly large times, since that would equate to 27 milliseconds per iteration, on par with what you're seeing from the other results.

Modifying your code to take this into account would be along the lines of:

int main() {
    const int count = 1000;
    timeval tim;
    gettimeofday(&tim, NULL);
    double t1 = 1.0e6 * tim.tv_sec + tim.tv_usec;
    int v;
    for (int i = 0; i < count; ++i)
        v = a(3, 4);           
    gettimeofday(&tim, NULL);
    double t2 = 1.0e6 * tim.tv_sec + tim.tv_usec;
    cout << v << '\n' << ((t2 - t1) / count) << '\n';
    return 0;
}
