Single vs OpenMP vs MPI - Fortran



I am a newbie in MPI programming. I have to test three codes: a sequential, an OpenMP, and an MPI code. These three codes (not the actual codes, just examples) are given below, respectively.

Sequential code

 program no_parallel
 implicit none
 integer, parameter                         :: dp = selected_real_kind(15,307)  
 integer                                    :: i, j
 real(kind = dp)                            :: time1, time2
 real(kind = dp), dimension(1000000)        :: a
    !Initialisation
        do i = 1, 1000000
           a(i) = sqrt( dble(i) / 3.0d+0 ) 
        end do  
    call cpu_time( time1 )
        do j = 1, 1000 
             do i = 1, 1000000
                a(i) = a(i) + sqrt( dble(i) ) 
             end do 
        end do
    call cpu_time( time2 )
    print *, a(1000000)
    print *, 'Elapsed real time = ', time2 - time1, 'second(s)'
 end program no_parallel

OpenMP code

 program openmp
 implicit none
 integer, parameter                         :: dp = selected_real_kind(15,307)  
 integer                                    :: i, j
 real(kind = dp)                            :: time1, time2, omp_get_wtime
 real(kind = dp), dimension(1000000)        :: a
    !Initialisation
        do i = 1, 1000000
           a(i) = sqrt( dble(i) / 3.0d+0 ) 
        end do 
    time1 = omp_get_wtime()
     !$omp parallel
        do j = 1, 1000
          !$omp do schedule( runtime ) 
             do i = 1, 1000000
                a(i) = a(i) + sqrt( dble(i) ) 
             end do
          !$omp end do 
        end do
     !$omp end parallel 
    time2 = omp_get_wtime()
    print *, a(1000000)
    print *, 'Elapsed real time = ', time2 - time1, 'second(s)'
 end program openmp

MPI code

 program MPI
 implicit none
 include "mpif.h"
 integer, parameter                         :: dp = selected_real_kind(15,307)
 integer                                    :: ierr, num_procs, my_id, destination, tag, source, i, j
 integer, dimension(MPI_STATUS_SIZE)        :: stat
 real(kind = dp)                            :: time1, time2
 real(kind = dp), dimension(1000000)        :: a
    call MPI_INIT ( ierr )
    call MPI_COMM_RANK ( MPI_COMM_WORLD, my_id, ierr )
    call MPI_COMM_SIZE ( MPI_COMM_WORLD, num_procs, ierr )  
    !Initialisation
        do i = 1, 1000000
           a(i) = sqrt( dble(i) / 3.0d+0 ) 
        end do
    destination = 0
    tag = 999
    source = 3
    time1 = MPI_Wtime()
        do j = 1, 1000     
           do i = 1 + my_id, 1000000, num_procs
              a(i) = a(i) + sqrt( dble(i) ) 
           end do
        end do
    call MPI_BARRIER ( MPI_COMM_WORLD, ierr )
    if( my_id == source ) then 
        call MPI_SEND ( a(1000000), 1, MPI_DOUBLE_PRECISION, destination, tag, MPI_COMM_WORLD, ierr )
    end if
    if( my_id == destination ) then
        call MPI_RECV ( a(1000000), 1, MPI_DOUBLE_PRECISION, source, tag, MPI_COMM_WORLD, stat, ierr )
    end if
    time2 = MPI_Wtime()
    if( my_id == 0) then
        print *, a(1000000)    !, 'from ID =', my_id 
        print *, 'Elapsed real time = ', time2 - time1, 'second(s)'
    end if
    stop 
    call MPI_FINALIZE ( ierr )
 end program MPI

I compiled these codes with the Intel Fortran Compiler 17.0.3 using the -O0 optimisation flag. The OpenMP and MPI codes were both run on a 4-core Haswell desktop. I obtained CPU times of 8.08 s, 2.1 s, and 3.2 s for the sequential, OpenMP, and MPI codes, respectively. I actually expected the OpenMP and MPI results to be almost identical; however, they are not. My questions:

  1. Regarding the MPI code, if I want to print the value of a(1000000), is there a smarter way to do it without having to do call MPI_SEND and call MPI_RECV like this?

  2. Do you see any part of the MPI code that could still be optimised?

  3. Regarding source in the MPI code, can it be defined automatically? It is easy in this case because the number of processors is 4, so a(1000000) must belong to process 3.

Thanks in advance.

Finally, I found the solution to my problem. Previously, I did not realise that the way this loop from the serial code:

do i = 1, 1000000
   a(i) = a(i) + sqrt( dble(i) ) 
end do

was distributed cyclically across ranks in the MPI code:

do i = 1 + my_id, 1000000, num_procs
   a(i) = a(i) + sqrt( dble(i) ) 
end do

was the problem. I think it causes more cache misses. I therefore applied a block distribution to the MPI code instead of the cyclic distribution, which is more efficient (for this case!). The revised MPI code is now:

 program Revised_MPI
 use mpi
 implicit none
 integer, parameter                            :: dp = selected_real_kind(15,307), array_size = 1000000
 integer                                       :: ierr, num_procs, my_id, ista, iend, i, j
 integer, dimension(:), allocatable            :: ista_idx, iend_idx
 real(kind = dp)                               :: time1, time2
 real(kind = dp), dimension(:), allocatable    :: a
    call MPI_INIT ( ierr )
    call MPI_COMM_RANK ( MPI_COMM_WORLD, my_id, ierr )
    call MPI_COMM_SIZE ( MPI_COMM_WORLD, num_procs, ierr )
    !Distribute loop with block distribution
    call para_range ( 1, array_size, num_procs, my_id, ista, iend )
    allocate ( a( ista : iend ), ista_idx( num_procs ), iend_idx( num_procs ) )
    !Initialisation and saving ista and iend
        do i = ista, iend
           a(i) = sqrt( dble(i) / 3.0d+0 )
           ista_idx( my_id + 1 ) = ista
           iend_idx( my_id + 1 ) = iend  
        end do
    time1 = MPI_Wtime()
    !Performing main calculation for all processors (including master and slaves)
    do j = 1, 1000     
       do i = ista_idx( my_id + 1 ), iend_idx( my_id + 1 )
          a(i) = a(i) + sqrt( dble(i) ) 
       end do
    end do
    call MPI_BARRIER ( MPI_COMM_WORLD, ierr )
    time2 = MPI_Wtime()
    if( my_id == num_procs - 1 ) then
        print *, a( array_size )
        print *, 'Elapsed real time = ', time2 - time1, 'second(s)' 
    end if
    call MPI_FINALIZE ( ierr )
    deallocate ( a )
 end program Revised_MPI
!-----------------------------------------------------------------------------------------
 subroutine para_range ( n1, n2, num_procs, my_id, ista, iend )
 implicit none
 integer                                       :: n1, n2, num_procs, my_id, ista, iend, &
                                                  iwork1, iwork2
    iwork1 = ( n2 - n1 + 1 ) / num_procs
    iwork2 = mod( n2 - n1 + 1, num_procs )
    ista = my_id * iwork1 + n1 + min( my_id, iwork2 )
    iend = ista + iwork1 - 1
    if( iwork2 > my_id ) then
        iend = iend + 1
    end if
 end subroutine para_range
!-----------------------------------------------------------------------------------------   

Now the MPI code achieves an (almost) similar CPU time to the OpenMP one. It also works very well with the -O3 and -fast optimisation flags.

Thank you all for your help. :)

Actually, your MPI program does not make much sense to me. Why do all ranks hold the same complete array? Why do you copy it around? And why between this particular source and destination?

The program does not compute anything useful, so it is really hard to say what the correct program would be (it does not correctly compute anything useful).

In many MPI programs you never send and receive whole arrays. Not even the complete local arrays, only some boundary regions between them.

So this is what I came up with. Note the use mpi; I also removed the magic number 1000000 from everywhere.

I also removed the stop. A stop right before end is just a bad habit, but it is not harmful. Putting it before MPI_Finalize() is harmful.

Most importantly, I distributed the work differently. Each rank holds only its own part of the working array.

program Test_MPI
use mpi
 implicit none
 integer, parameter                         :: dp = selected_real_kind(15,307)
 integer                                    :: ierr, num_procs, my_id, i, j
 real(kind = dp)                            :: time1, time2
 real(kind = dp), dimension(:), allocatable :: a
 integer, parameter                         :: n = 1000000
 integer                                    :: my_n, ns
    call MPI_INIT ( ierr )
    call MPI_COMM_RANK ( MPI_COMM_WORLD, my_id, ierr )
    call MPI_COMM_SIZE ( MPI_COMM_WORLD, num_procs, ierr )  
    my_n = n / num_procs
    ns = my_id * my_n
    if (my_id == num_procs-1) my_n = n - ns
    allocate(a(my_n))
    !Initialisation
        do i = 1, my_n
           a(i) = sqrt( real(i+ns, dp) / 3.0d+0 ) 
        end do
    time1 = MPI_Wtime()
        do j = 1, 1000     
           do i = 1 , my_n 
              a(i) = a(i) + sqrt( real(i+ns, dp) ) 
           end do
        end do
    call MPI_BARRIER ( MPI_COMM_WORLD, ierr )
    time2 = MPI_Wtime()
    if( my_id == 0) then
        !!!! why???  print *, a(my_n) 
        print *, 'Elapsed real time = ', time2 - time1, 'second(s)'
    end if
    call MPI_FINALIZE ( ierr )
 end program Test_MPI

And yes, there is no communication there. I cannot think of a reason why there should be any. If there should be, you have to tell us why. It should scale almost perfectly.

Perhaps you want to gather the final array on one rank? Many people do that, but it is often not needed at all. It is not clear why it would be needed in your case.
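
If you ever do need the whole array on one rank, a collective such as MPI_Gatherv matches the block decomposition better than hand-written sends and receives. Below is a minimal sketch of that idea (my own illustration, reusing the same my_n / ns block decomposition as Test_MPI above; the program name and variable names are mine, not part of the original codes):

program gather_example
use mpi
 implicit none
 integer, parameter                         :: dp = selected_real_kind(15,307)
 integer, parameter                         :: n = 1000000
 integer                                    :: ierr, num_procs, my_id, my_n, ns, i, p, nbase
 integer, dimension(:), allocatable         :: recvcounts, displs
 real(kind = dp), dimension(:), allocatable :: a, a_full
    call MPI_INIT ( ierr )
    call MPI_COMM_RANK ( MPI_COMM_WORLD, my_id, ierr )
    call MPI_COMM_SIZE ( MPI_COMM_WORLD, num_procs, ierr )
    !Same block decomposition as in Test_MPI
    nbase = n / num_procs
    ns    = my_id * nbase
    my_n  = nbase
    if ( my_id == num_procs - 1 ) my_n = n - ns
    allocate ( a(my_n) )
    do i = 1, my_n
       a(i) = sqrt( real(i+ns, dp) / 3.0d+0 )
    end do
    !Receive counts and displacements mirror the decomposition; only the root
    !really needs them, but computing them everywhere keeps the sketch short
    allocate ( recvcounts(num_procs), displs(num_procs) )
    do p = 1, num_procs
       recvcounts(p) = nbase
       displs(p)     = (p - 1) * nbase
    end do
    recvcounts(num_procs) = n - (num_procs - 1) * nbase
    !Only the root needs the full-size receive buffer
    if ( my_id == 0 ) then
        allocate ( a_full(n) )
    else
        allocate ( a_full(1) )      !dummy buffer on non-root ranks
    end if
    call MPI_GATHERV ( a, my_n, MPI_DOUBLE_PRECISION,                     &
                       a_full, recvcounts, displs, MPI_DOUBLE_PRECISION,  &
                       0, MPI_COMM_WORLD, ierr )
    if ( my_id == 0 ) print *, a_full(n)
    call MPI_FINALIZE ( ierr )
end program gather_example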

I find that one usually needs to do more work inside a subroutine or function for parallelism to pay off, so focusing on vectorisation is your best approach for this example.

The nickname for this is "Vectorise Inner - Parallelise Outer" (VIPO).
For the second case, I would suggest the following:

MODULE MyOMP_Funcs
IMPLICIT NONE
PRIVATE
integer, parameter, PUBLIC           :: dp = selected_real_kind(15,307)  
real(kind = dp), dimension(1000000)  :: a
PUBLIC  MyOMP_Init, MyOMP_Sum
CONTAINS
!=================================
SUBROUTINE MyOMP_Init(N,A)
IMPLICIT NONE
integer                      , INTENT(IN   ) :: N  
real(kind = dp), dimension(n), INTENT(INOUT) :: A
integer                                      :: I 
!Initialisation
DO i = 1, n
  A(i) = sqrt( dble(i) / 3.0d+0 ) 
ENDDO 
RETURN
END SUBROUTINE MyOMP_Init

!=================================
SUBROUTINE MyOMP_Sum(N,A,SumA)
!$OMP DECLARE SIMD(MyOMP_Sum) UNIFORM(N,SumA) linear(ref(A))
USE OMP_LIB
IMPLICIT NONE
integer                      , INTENT(IN   ) :: N
!DIR$ ASSUME_ALIGNED A: 64
real(kind = dp), dimension(n), INTENT(IN   ) :: A
real(kind = dp)              , INTENT(  OUT) :: SumA
integer                                      :: I
SumA = 0.0
!Maybe also try...  !DIR$ VECTOR ALWAYS
!$OMP SIMD REDUCTION(+:SumA)
Sum_Loop: DO i = 1, N
  SumA = SumA + A(i) + sqrt( dble(i) ) 
ENDDO Sum_Loop
!$omp end simd   !<-- You probably do not need this
RETURN
END SUBROUTINE MyOMP_Sum
!=================================
SUBROUTINE My_NOVEC_Sum(N,A,SumA)
IMPLICIT NONE
integer                      , INTENT(IN   ) :: N
!DIR$ ASSUME_ALIGNED A: 64
real(kind = dp), dimension(n), INTENT(IN   ) :: A
real(kind = dp)              , INTENT(  OUT) :: SumA
integer                                      :: I
SumA = 0.0
!DIR$ NOVECTOR
Sum_Loop: DO i = 1, N
  SumA = SumA + A(i) + sqrt( dble(i) ) 
ENDDO Sum_Loop
RETURN
END SUBROUTINE My_NOVEC_Sum
!=================================
END MODULE MyOMP_Funcs
!=================================

!=================================
program openmp
!USE OMP_LIB 
USE MyOMP_Funcs 
implicit none
integer        , PARAMETER          :: OneM = 1000000
integer        , PARAMETER          :: OneK = 1000
integer                             :: i, j
real(kind = dp)                     :: time1, time2, omp_get_wtime
!DIR$ ATTRIBUTES ALIGN: 64          :: A, SumA
real(kind = dp), dimension(OneM)    :: A
real(kind = dp)                     :: SumA
!Initialisation
CALL MyOMP_Init(OneM, A)
time1 = omp_get_wtime()
!  !$omp parallel
!    do j = 1, OneK
CALL MyOMP_Sum(OneM, A, SumA)
!    end do
!  !$omp end parallel 
!!--> Put timing loops here
time2 = omp_get_wtime()
print *, a(1000000)
print *, 'Elapsed real time = ', time2 - time1, 'second(s)'
end program openmp

Once you have the SIMD reduction version running, you can try layering thread-level parallelism on top of it; a sketch follows below.
If the module is part of a library, the compiler settings are independent of the program.
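
One possible way to do that layering (my assumption, not something spelled out above) is to use the combined parallel do simd construct on the reduction loop, so the iterations are first split across threads and each thread's chunk is then vectorised. A hypothetical MyOMP_ParSum, written as if it were added inside MyOMP_Funcs so that dp is in scope:

!=================================
SUBROUTINE MyOMP_ParSum(N,A,SumA)
! Hypothetical threaded variant of MyOMP_Sum: the combined construct splits
! the iterations across threads, vectorises each thread's chunk, and merges
! the partial sums through the reduction clause.
IMPLICIT NONE
integer                      , INTENT(IN   ) :: N
real(kind = dp), dimension(n), INTENT(IN   ) :: A
real(kind = dp)              , INTENT(  OUT) :: SumA
integer                                      :: I
SumA = 0.0_dp
!$OMP PARALLEL DO SIMD REDUCTION(+:SumA)
DO i = 1, N
  SumA = SumA + A(i) + sqrt( dble(i) )
ENDDO
!$OMP END PARALLEL DO SIMD
RETURN
END SUBROUTINE MyOMP_ParSum
!=================================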
