Be careful to distinguish “Ctxt::multiplyBy()” from “Ctxt::*=”

For the past two days, I was frustrated by an issue that made my program crash in HElib’s Ctxt::reLinearize() function:

assert(W.toKeyID>=0);       // verify that a switching matrix exists

After tough debugging, the root cause was found: I should have used Ctxt::multiplyBy() instead of Ctxt::*=. Look at Ctxt::multiplyBy()’s implementation:

void Ctxt::multiplyBy(const Ctxt& other)
{
  // Special case: if *this is empty then do nothing
  if (this->isEmpty()) return;

  *this *= other;  // perform the multiplication
  reLinearize();   // re-linearize
}

We can see that besides calling Ctxt::*=, it also invokes reLinearize(), and that’s the point!
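As a minimal sketch of the fix in my own code (the helper name is mine; I only assume both ciphertexts were encrypted under the same public key and that HElib’s Ctxt.h header is available):

#include "Ctxt.h"   // HElib's ciphertext class

// Hedged sketch: multiply two ciphertexts encrypted under the same key.
// multiplyBy() performs the multiplication *and* the re-linearization,
// so later key-switching code finds the matrices it expects instead of
// tripping the assert shown above.
void multiplyAndRelinearize(Ctxt& accumulator, const Ctxt& other)
{
    // accumulator *= other;        // raw multiplication only -- what caused my crash
    accumulator.multiplyBy(other);  // multiplication + reLinearize()
}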

Clang may be a better choice than gcc for developing OpenMP programs

As mentioned in The first gcc bug I ever met, I upgraded gcc to the newest 7.1.0 release to overcome OpenMP build errors. Unfortunately, when I used the taskloop construct, a weird issue happened again. My application uses HElib, and I had just added the following statement in a source file:

#pragma omp taskloop

Then a strange link error was reported:

In function `EncryptedArray::EncryptedArray(EncryptedArray const&)':
/root/Project/../../HElib/src/EncryptedArray.h:539: undefined reference to `cloned_ptr<EncryptedArrayBase, deep_clone<EncryptedArrayBase> >::cloned_ptr(cloned_ptr<EncryptedArrayBase, deep_clone<EncryptedArrayBase> > const&)'
collect2: error: ld returned 1 exit status

I tried to debug it, but nothing valuable was found.

So I switched to clang. Install it on ArchLinux like this:

# pacman -S clang
resolving dependencies...
looking for conflicting packages...

Packages (2) llvm-libs-4.0.0-3  clang-4.0.0-3

Total Download Size:    53.24 MiB
Total Installed Size:  275.24 MiB

:: Proceed with installation? [Y/n] y
......
checking available disk space  [#########################################] 100%
:: Processing package changes...
(1/2) installing llvm-libs   [#########################################] 100%
(2/2) installing clang   [#########################################] 100%
Optional dependencies for clang
openmp: OpenMP support in clang with -fopenmp
python2: for scan-view and git-clang-format [installed]
:: Running post-transaction hooks...
(1/1) Arming ConditionNeedsUpdate...

Unlike gcc, to enable the OpenMP feature in clang, we need to install an additional openmp package:

# pacman -S openmp
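Before the real test, a quick sanity check (my own snippet, nothing project-specific) helps confirm that clang actually picks up the OpenMP runtime; it only prints the _OPENMP version macro and the default number of threads:

#include <omp.h>
#include <stdio.h>

int main(void) {
#ifdef _OPENMP
    /* _OPENMP expands to the yyyymm date of the supported OpenMP spec */
    printf("_OPENMP = %d, max threads = %d\n", _OPENMP, omp_get_max_threads());
#else
    printf("OpenMP is not enabled\n");
#endif
    return 0;
}

If it links with clang++ -fopenmp and prints a version number, the openmp package is wired up correctly.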

Next, write a simple program that exercises taskloop:

# cat parallel.cpp
#include <stdio.h>
#include <omp.h>

int main(void) {
    omp_set_num_threads(5);

    #pragma omp parallel for
    for (int i = 0; i < 5; i++) {

        #pragma omp taskloop
        for (int j = 0; j < 3; j++) {
            printf("%d\n", omp_get_thread_num());
        }

    }   
}

Compile and run it:

# clang++ -fopenmp parallel.cpp
# ./a.out
0
0
0
0
0
1
1
2
4
4
4
4
3
0
1

Clang’s OpenMP works as I expected: 5 outer iterations × 3 taskloop iterations give the 15 lines above. I built my project again: no weird errors! It works like a charm!

So according to my testing experience, clang may be a better choice than gcc for developing OpenMP programs, especially when using newer OpenMP features.

The first gcc bug I ever met

I have used gcc for more than 10 years, but I had never hit a bug in it before. In my mind, gcc was one of the most stable pieces of software in the world, but yesterday the myth ended.

I tried to use OpenMP to optimize my program, and all was OK until I added the taskloop construct:

#pragma omp taskloop

The build terminated with the following errors:

xxxxxxx.cpp:142:1: internal compiler error: Segmentation fault
 }
 ^
Please submit a full bug report,
with preprocessed source if appropriate.
See <https://bugs.archlinux.org/> for instructions.

Whoops! It seemed I had won the lucky draw! Since my project uses a lot of compile options:

... -g -O2 -fopenmp -fprofile-arcs -ftest-coverage ...

I had to narrow them down to find the root cause. First, I used only -fopenmp, and everything was OK; then I added -g -O2, and there was still no problem; and so on. After trying combinations, it turned out that -fopenmp -fprofile-arcs triggers the problem.

To confirm it, I wrote a simple program:

int main(void) {
    #pragma omp taskloop
    for (int i = 0; i < 2; i++) {
    }
    return 0;
}

Compile it:

# gcc -fopenmp -fprofile-arcs parallel.c
parallel.c:6:1: internal compiler error: Segmentation fault
 }
 ^
Please submit a full bug report,
with preprocessed source if appropriate.
See <https://bugs.archlinux.org/> for instructions.

Yeah, the bug was reproduced! This verified my assumption.

To bypass this issue, I decided to try the newest gcc. My OS is ArchLinux, and the gcc version is 6.3.1:

# gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-pc-linux-gnu/6.3.1/lto-wrapper
Target: x86_64-pc-linux-gnu
Configured with: /build/gcc/src/gcc/configure --prefix=/usr --libdir=/usr/lib --libexecdir=/usr/lib --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=https://bugs.archlinux.org/ --enable-languages=c,c++,ada,fortran,go,lto,objc,obj-c++ --enable-shared --enable-threads=posix --enable-libmpx --with-system-zlib --with-isl --enable-__cxa_atexit --disable-libunwind-exceptions --enable-clocale=gnu --disable-libstdcxx-pch --disable-libssp --enable-gnu-unique-object --enable-linker-build-id --enable-lto --enable-plugin --enable-install-libiberty --with-linker-hash-style=gnu --enable-gnu-indirect-function --disable-multilib --disable-werror --enable-checking=release
Thread model: posix
gcc version 6.3.1 20170306 (GCC)

Currently ArchLinux doesn’t provide a gcc 7.1 installation package, so I needed to download and build it myself:

# wget http://gcc.parentingamerica.com/releases/gcc-7.1.0/gcc-7.1.0.tar.gz
# tar xvf gcc-7.1.0.tar.gz
# cd gcc-7.1.0/
# mkdir build
# cd build

Selecting the configuration options is a headache for me, so I decided to copy the current options from 6.3.1 (please refer to the above output of gcc -v):

# ../configure --prefix=/usr --libdir=/usr/lib --libexecdir=/usr/lib --mandir=/usr/share/man --infodir=/usr/share/info .....

Since I don’t need to compile ada and lto, I removed them from --enable-languages:

--enable-languages=c,c++,fortran,go,objc,obj-c++

Besides this, I also needed to build and install the isl library, either by myself or through the ArchLinux isl package. Once the configuration succeeded, I could build and install gcc:

# make
# make install

Check the newest gcc:

# gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-pc-linux-gnu/7.1.0/lto-wrapper
Target: x86_64-pc-linux-gnu
Configured with: ../configure --prefix=/usr --libdir=/usr/lib --libexecdir=/usr/lib --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=https://bugs.archlinux.org/ --enable-languages=c,c++,fortran,go,objc,obj-c++ --enable-shared --enable-threads=posix --enable-libmpx --with-system-zlib --with-isl --enable-__cxa_atexit --disable-libunwind-exceptions --enable-clocale=gnu --disable-libstdcxx-pch --disable-libssp --enable-gnu-unique-object --enable-linker-build-id --enable-lto --enable-plugin --enable-install-libiberty --with-linker-hash-style=gnu --enable-gnu-indirect-function --disable-multilib --disable-werror --enable-checking=release
Thread model: posix
gcc version 7.1.0 (GCC)

Compile the program again:

# gcc -fopenmp -fprofile-arcs parallel.c
#

This time the compilation succeeded!

P.S. gcc may have other bugs in its support for newer OpenMP directives, so please watch out for them.

The pitfalls of using OpenMP parallel for-loops

According to this discussion:

#pragma omp parallel for
for (...)
{
}

is a shortcut of

#pragma omp parallel
{ 
#pragma omp for
    for (...)
    {
    }
}
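For completeness, here is a small hedged example of the expanded spelling (my own toy program); swapping the two directives for the combined “parallel for” gives the same behavior:

#include <omp.h>
#include <stdio.h>

int main(void) {
#pragma omp parallel
    {
        // The enclosing parallel region creates the team of threads;
        // the for construct then distributes the iterations among them.
#pragma omp for
        for (int i = 0; i < 8; i++) {
            printf("iteration %d on thread %d\n", i, omp_get_thread_num());
        }
    }
    return 0;
}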

Using “#pragma omp parallel for” therefore seems more convenient. But there are some pitfalls you should pay attention to:

(1) You can’t assume the number of threads will be equal to the for-loop’s iteration count, even when the count is very small. For example (the machine has only 2 cores):

#include <omp.h>
#include <stdio.h>

int main(void) {
#pragma omp parallel for
    for (int i = 0; i < 5; i++) {
        printf("thread is %d\n", omp_get_thread_num());
    }
    return 0;
}

Build and run this program:

# gcc -fopenmp parallel.c
# ./a.out
thread is 0
thread is 0
thread is 0
thread is 1
thread is 1

We can see that only 2 threads are generated. Run it on another 32-core machine:

# ./a.out
thread is 1
thread is 0
thread is 2
thread is 4
thread is 3

We can see that the 5 iterations are executed by 5 different threads.
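In other words, without an explicit clause the team size comes from the runtime defaults (typically the number of cores, or OMP_NUM_THREADS if set), not from the iteration count. A small sketch of my own for checking what the runtime will give you before relying on it:

#include <omp.h>
#include <stdio.h>

int main(void) {
    // omp_get_max_threads(): how many threads a parallel region opened
    // here may use by default (influenced by OMP_NUM_THREADS, etc.).
    // omp_get_num_procs(): the number of processors available.
    printf("default team size: %d, processors: %d\n",
           omp_get_max_threads(), omp_get_num_procs());
    return 0;
}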

(2) Use the num_threads clause to modify the program as follows:

#include <omp.h>
#include <stdio.h>

int main(void) {
#pragma omp parallel for num_threads(5)
    for (int i = 0; i < 5; i++) {
        printf("thread is %d\n", omp_get_thread_num());
    }
    return 0;
}

Rebuild and run it on the original 2-core machine:

# gcc -fopenmp parallel.c
# ./a.out
thread is 2
thread is 3
thread is 4
thread is 1
thread is 0

We can see that this time 5 threads are created. But you should note that the actual thread count depends on system resources. E.g., change the code like this:

#pragma omp parallel for num_threads(30000)
    for (int i = 0; i < 30000; i++) {
        printf("thread is %d\n", omp_get_thread_num());
    }

Execute it:

# ./a.out

libgomp: Thread creation failed: Resource temporarily unavailable

So we should be careful about how many threads we ask the runtime to create.
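One hedged way around this is to cap the requested team at something the machine can actually provide and let the for work-sharing construct split the 30000 iterations among those threads; the cap used below (omp_get_num_procs()) is my own choice, not a rule:

#include <omp.h>
#include <stdio.h>

int main(void) {
    // Request at most one thread per processor; the for work-sharing
    // construct still distributes all 30000 iterations among them.
    int nthreads = omp_get_num_procs();

#pragma omp parallel for num_threads(nthreads)
    for (int i = 0; i < 30000; i++) {
        if (i % 10000 == 0) {
            printf("iteration %d runs on thread %d\n", i, omp_get_thread_num());
        }
    }
    return 0;
}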

P.S. If the iteration count is smaller than the number of cores, the number of threads will still be equal to the number of cores (again on the 32-core machine):

#include <omp.h>
#include <stdio.h>

int main(void) {
#pragma omp parallel for
    for (int i = 0; i < 4; i++) {
        if (0 == omp_get_thread_num()) {
            printf("thread number is %d\n", omp_get_num_threads());
        }
    }
    return 0;
}

The output is:

thread number is 32

(3) If you use a C++ thread_local variable, you should take care:

#include <omp.h>
#include <stdio.h>

int main(void) {
    thread_local int array[5] = {0};
#pragma omp parallel for num_threads(5)
    for (int i = 0; i < 5; i++) {
        array[i] = i + 1;
    }

    for (int i = 0; i < 5; i++) {
        printf("array[%d] is %d\n", i, array[i]);
    }
    return 0;
}

Compile and run:

# g++ -fopenmp parallel.cpp
# ./a.out
array[0] is 1
array[1] is 0
array[2] is 0
array[3] is 0
array[4] is 0

We can see that only the first element is changed, so it must be thread 0’s work: each thread writes into its own thread-local copy of the array, and after the parallel region we only observe the master thread’s copy. Remove the thread_local qualifier and rebuild. This time you get the expected result:

# ./a.out
array[0] is 1
array[1] is 2
array[2] is 3
array[3] is 4
array[4] is 5
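To convince yourself that each thread really did write into its own copy, here is a hedged variant of the program (still with thread_local) that also prints from inside the parallel region:

#include <omp.h>
#include <stdio.h>

int main(void) {
    thread_local int array[5] = {0};  // every thread gets its own copy

#pragma omp parallel for num_threads(5)
    for (int i = 0; i < 5; i++) {
        array[i] = i + 1;
        // Each thread sees only its own copy of array here.
        printf("thread %d wrote array[%d] = %d\n",
               omp_get_thread_num(), i, array[i]);
    }

    // Back on the master thread we again see only its copy, which is why
    // only the element(s) written by thread 0 appear modified.
    for (int i = 0; i < 5; i++) {
        printf("master copy: array[%d] is %d\n", i, array[i]);
    }
    return 0;
}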

My 101st English post

How time flies! I have finished 100 English blog posts!

Three years ago, although I am a non-native English speaker, I decided to start blogging in English. Writing articles in my mother tongue only reaches people who understand Chinese, while using English can benefit readers all over the world.

Among the 100 posts, 95 percent are related to software technology; in other words, they are experience and lessons I have learned from daily work. I am very glad that these small essays can help other people around the world. For example, I once received an email from a student who had read my SAP HANA related posts and wanted to discuss some problems about using SAP HANA in a container environment. Another example is a Go trick: Fix “unsupported protocol scheme” issue in golang. That tip not only helps a lot of people and has become the first result in a Google search, but has also been translated into Chinese!

Besides the satisfaction it brings, blogging also enhances my English writing skills. Although there are still grammar and word-choice errors, compared to the beginning it is a really huge improvement!

I will continue blogging, and I look forward to the next 100 posts!