C with OpenMP: what causes the performance difference between an array of pointers and a pointer to an array?



I wrote two programs in C that perform tall-skinny matrix-matrix multiplication with OpenMP. On my machine the algorithm is memory-bound. In one of the codes I use an array of pointers (aop) to store the matrices. In the other code I use just a single array per matrix, in which the rows are stored one after another; I will call this version pta from now on. I observe that pta always outperforms the aop version. In particular, when going from 6 to 12 cores, the performance of aop drops slightly while the performance of pta doubles. I cannot really explain this behavior; I can only assume that the cores somehow interfere with each other during the computation. Can someone explain this behavior?
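To make the difference concrete, the hot inner statement of the two versions differs only in how element (i, j) is addressed (excerpted from the full listings below):

// pta: one contiguous array per matrix, row-major index arithmetic
W[i*dim_nb+j] += V[i*dim_nb+k]*M[k*dim_nb+j];

// aop: array of row pointers, one extra pointer load before each element access
W[i][j] += V[i][k]*M[k][j];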

Pointer-to-array (pta) version:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(int argc, char *argv[])
{
// parallel region to verify that pinning works correctly
#pragma omp parallel
  {
    printf("OpenMP thread %d / %d runs on core %dn", omp_get_thread_num(), omp_get_num_threads(), sched_getcpu());
  }
  //define dimensions
  int dim_n=atoi(*(argv+1));
  int dim_nb=2;
  printf("n = %d, nb = %dn",dim_n,dim_nb);
  //allocate space for matrix M, V and W
  //M, V and W are stored row by row in single contiguous arrays
  //the size of double depends on the compiler and machine
  double *M = malloc((dim_nb*dim_nb) * sizeof(double));
  //Initialize Matrix M
  for(int i=0; i<dim_nb; i++)
  {
    for(int j=0; j<dim_nb; j++)
    {
      M[i*dim_nb+j]=((i+1)-1.0)*dim_nb+(j+1)-1.0;
    }
  }
  double *V = malloc((dim_n*dim_nb) * sizeof(double));
  double *W = malloc((dim_n*dim_nb) * sizeof(double));

// using parallel region to Initialize the matrix V
#pragma omp parallel for schedule(static)
  for (int i=0; i<dim_n; i++)
  {
    for (int j=0; j<dim_nb; j++)
    {
      V[i*dim_nb+j]=j+1;
    }
  }
  int max_iter=100;
  double time = omp_get_wtime();
  // calculate the matrix-matrix product VM product max_iter times
  for(int iter=0; iter<max_iter; iter++)
  {
  // calculate matrix-matrix product in parallel
#pragma omp parallel for schedule(static)
    // i < #rows of V
    for(int i=0; i<dim_n; i++)
    {
      // j < #columns of M
      for(int j=0; j<dim_nb; j++)
      {
        // Initialize W_ij with zero every time W_ij is calculated
        W[i*dim_nb+j]=0;
        // k < #columns of V = #rows of M
        for(int k=0; k<dim_nb; k++)
        {
          W[i*dim_nb+j] += V[i*dim_nb+k]*M[k*dim_nb+j];
        }
      }
    }
  }
  time=omp_get_wtime()-time;
  // report the wall-clock time of the max_iter matrix products
  printf("time: %f s\n", time);
  return 0;
}

Array-of-pointers (aop) version:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(int argc, char *argv[])
{
// parallel region to verify that pinning works correctly
#pragma omp parallel
  {
    printf("OpenMP thread %d / %d runs on core %dn", omp_get_thread_num(), omp_get_num_threads(), sched_getcpu());
  }
  //define dimensions
  int dim_n=atoi(*(argv+1));
  int dim_nb=2;
  printf("n = %d, nb = %dn",dim_n,dim_nb);
  //allocate space for matrix M, V and W
  // each element of **M is a pointer for the first element of an array
  //the sizes of double and double* depend on the compiler and machine
  double **M = malloc(dim_nb * sizeof(double *));
  for(int i = 0; i < dim_nb; i++)
  {
    M[i] = malloc(dim_nb * sizeof(double));
  }

  //Initialize Matrix M
  for(int i=0; i<dim_nb; i++)
  {
    for(int j=0; j<dim_nb; j++)
    {
      M[i][j]=((i+1)-1.0)*dim_nb+(j+1)-1.0;
    }
  }
  double **V = malloc(dim_n * sizeof(double *));
  for(int i=0; i<dim_n; i++)
  {
    V[i] = malloc(dim_nb * sizeof(double));
  }
  double **W = malloc(dim_n * sizeof(double *));
  for(int i=0; i<dim_n; i++)
  {
    W[i] = malloc(dim_nb * sizeof(double));
  }

// using parallel region to Initialize the matrix V
#pragma omp parallel for schedule(static)
  for (int i=0; i<dim_n; i++)
  {
    for (int j=0; j<dim_nb; j++)
    {
      V[i][j]=j+1;
    }
  }
  int max_iter=100;
  double time = omp_get_wtime();
  // calculate the matrix-matrix product VM product max_iter times
  for(int iter=0; iter<max_iter; iter++)
  {
  // calculate matrix-matrix product in parallel
#pragma omp parallel for schedule(static)
    // i < #rows of V
    for(int i=0; i<dim_n; i++)
    {
      // j < #columns of M
      for(int j=0; j<dim_nb; j++)
      {
        // Initialize W_ij with zero every time W_ij is calculated
        W[i][j]=0;
        // k < #columns of V = #rows of M
        for(int k=0; k<dim_nb; k++)
        {
          W[i][j] += V[i][k]*M[k][j];
        }
      }
    }
  }
  time=omp_get_wtime()-time;
  // report the wall-clock time of the max_iter matrix products
  printf("time: %f s\n", time);
  return 0;
}

This is fairly easy to explain: the array-of-pointers version has to load the row pointer first and then dereference it. These memory locations can lie far apart from one another, so it is much more likely that cache lines get evicted. The data of the plain-array version is stored in one contiguous block of memory, so fewer memory accesses are needed and the CPU is much less likely to miss the cache.

https://godbolt.org/z/c_8c7c
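As an aside that is not part of the original answer: if the convenient M[i][j] indexing of the aop version is wanted, a common compromise is to keep the array of row pointers but let all rows point into one contiguous block, so the values themselves stay in a single piece of memory. A minimal sketch under that assumption (the helper names alloc_contiguous and free_contiguous are made up for illustration):

#include <stdlib.h>

// allocate a rows x cols matrix as one contiguous block of doubles plus
// an array of row pointers into that block, so m[i][j] still works while
// the data keeps the cache-friendly layout of the pta version
double **alloc_contiguous(int rows, int cols)
{
  double *block = malloc((size_t)rows * cols * sizeof(double));
  double **m = malloc((size_t)rows * sizeof(double *));
  if (block == NULL || m == NULL) { free(block); free(m); return NULL; }
  for (int i = 0; i < rows; i++)
  {
    m[i] = block + (size_t)i * cols; // row i starts right after row i-1
  }
  return m;
}

void free_contiguous(double **m)
{
  if (m != NULL)
  {
    free(m[0]); // m[0] points at the start of the contiguous block
    free(m);
  }
}

With this layout the indexing syntax of the aop code is kept, but the inner loop streams through memory the same way as the pta code.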
