C++ OpenMP for-construct parallelization performance

Asked by fhity93d on 2022-12-01

First approach (parallelizing the inner loop):

for(j=0; j<LATTICE_VW; ++j) {
    x = j*DX + LATTICE_W;
    #pragma omp parallel for ordered private(y, prob)
        for(i=0; i<LATTICE_VH; ++i) {
            y = i*DY + LATTICE_S;
            prob = psi[i][j].norm();

            #pragma omp ordered
                out << x << " " << y << " " << prob << endl;
        }
}

Second approach (parallelizing the outer loop):

#pragma omp parallel for ordered private(i, x, y, prob)
    for(j=0; j<LATTICE_VW; ++j) {
        x = j*DX + LATTICE_W;
        for(i=0; i<LATTICE_VH; ++i) {
            y = i*DY + LATTICE_S;
            prob = psi[i][j].norm();

            #pragma omp ordered
                out << x << " " << y << " " << prob << endl;
        }
    }

Third approach (parallelizing the collapsed loops):

#pragma omp parallel for collapse(2) ordered private(x, y, prob)
    for(j=0; j<LATTICE_VW; ++j) {
        for(i=0; i<LATTICE_VH; ++i) {
            x = j*DX + LATTICE_W;
            y = i*DY + LATTICE_S;
            prob = psi[i][j].norm();

            #pragma omp ordered
                out << x << " " << y << " " << prob << endl;
        }
    }

If I had to guess, I would have said method 3 should be the fastest.
However, method 1 is the fastest, while the second and third take roughly the same amount of time, as if there were no parallelization at all. Why does this happen?

Answer (by a2mppw5e):

Look at this:

for(int x = 0; x < 4; ++x)
  #pragma omp parallel for ordered
  for(int y = 0; y < 4; ++y)
    #pragma omp ordered
    cout << x << ',' << y << " (by thread " << omp_get_thread_num() << ')' << endl;

you get:

0,0 (by thread 0)
0,1 (by thread 1)
0,2 (by thread 2)
0,3 (by thread 3)
1,0 (by thread 0)
1,1 (by thread 1)
1,2 (by thread 2)
1,3 (by thread 3)

Each thread only has to wait briefly for some of the couts, so all of the work gets done in parallel. But with:

#pragma omp parallel for ordered
for(int x = 0; x < 4; ++x)
  for(int y = 0; y < 4; ++y)
    #pragma omp ordered
    cout << x << ',' << y << " (by thread " << omp_get_thread_num() << ')' << endl;

or with:

#pragma omp parallel for collapse(2) ordered
for(int x = 0; x < 4; ++x)
  for(int y = 0; y < 4; ++y)
    #pragma omp ordered
    cout << x << ',' << y << " (by thread " << omp_get_thread_num() << ')' << endl;

the situation is:

0,0 (by thread 0)
0,1 (by thread 0)
0,2 (by thread 0)
0,3 (by thread 0)
1,0 (by thread 1)
1,1 (by thread 1)
1,2 (by thread 1)
1,3 (by thread 1)
2,0 (by thread 2)
2,1 (by thread 2)
2,2 (by thread 2)
2,3 (by thread 2)
3,0 (by thread 3)
3,1 (by thread 3)
3,2 (by thread 3)
3,3 (by thread 3)

So thread 1 must wait for thread 0 to finish all of its work before it can execute its first cout, and almost nothing can be done in parallel.
Try adding schedule(static,1) to the collapse version; it should perform at least as well as the first one.
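
For illustration, here is a minimal sketch of that suggested fix applied to the toy loop above (the added schedule(static,1) clause is the only change; the includes and main() are just what the snippet needs to compile standalone):

#include <iostream>
#include <omp.h>
using namespace std;

int main() {
  // schedule(static,1) deals the 16 collapsed iterations out round-robin:
  // iteration k goes to thread k % num_threads, so every thread reaches its
  // ordered region after only a short wait, just like in the first version.
  #pragma omp parallel for collapse(2) ordered schedule(static,1)
  for(int x = 0; x < 4; ++x)
    for(int y = 0; y < 4; ++y)
      #pragma omp ordered
      cout << x << ',' << y << " (by thread " << omp_get_thread_num() << ')' << endl;
  return 0;
}

With 4 threads this should print 0,0 (by thread 0), 0,1 (by thread 1), 0,2 (by thread 2), 0,3 (by thread 3), 1,0 (by thread 0), and so on: the same interleaving as the first version.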
