Use a different build system with cmake

Besides the classic make, cmake also supports other build systems (e.g., Ninja):

# cmake -h
......
Generators

The following generators are available on this platform (* marks default):
* Unix Makefiles               = Generates standard UNIX makefiles.
  Ninja                        = Generates build.ninja files.
  Watcom WMake                 = Generates Watcom WMake makefiles.
  CodeBlocks - Ninja           = Generates CodeBlocks project files.
  CodeBlocks - Unix Makefiles  = Generates CodeBlocks project files.
  CodeLite - Ninja             = Generates CodeLite project files.
  CodeLite - Unix Makefiles    = Generates CodeLite project files.
  Sublime Text 2 - Ninja       = Generates Sublime Text 2 project files.
  Sublime Text 2 - Unix Makefiles
                               = Generates Sublime Text 2 project files.
  Kate - Ninja                 = Generates Kate project files.
  Kate - Unix Makefiles        = Generates Kate project files.
  Eclipse CDT4 - Ninja         = Generates Eclipse CDT 4.0 project files.
  Eclipse CDT4 - Unix Makefiles= Generates Eclipse CDT 4.0 project files.
......

My OS is Arch Linux. To use ninja, first install it:

# pacman -Syu ninja

Then specify the Ninja generator when running the cmake command:

# cmake -G Ninja ..

Finally, use ninja to build:

# ninja

 

A performance issue caused by NUMA

The essence of NUMA is that accessing local memory is fast while accessing remote memory is slow, and I was bitten by it today.

The original code is like this:

/* Every thread creates one partition of a big vector and processes it */
#pragma omp parallel for
for (...)
{
    ......
    vector<> local_partition = create_big_vector_partition();
    /* process the vector partition */
    ......
}

I tried creating one big vector outside the OpenMP block, so that every thread just grabs a partition of it and processes it:

vector<> big_vector = create_big_vector();

#pragma omp parallel for
for (...)
{
    ......
    vector<>& local_partition = get_partition(big_vector);
    /* process the vector partition */
    ......
}

I measured the execution time of the OpenMP block:

#pragma omp parallel for
for (...)
{
    ......
}

Although in the original code every thread needs to create its own partition of the vector, it is still faster than the modified code.

After some experiments and analysis, numastat helped me pinpoint the problem:

$ numastat
                           node0           node1
numa_hit              6259740856      7850720376
numa_miss              120468683       128900132
numa_foreign           128900132       120468683
interleave_hit             32881           32290
local_node            6259609322      7850520401
other_node             120600217       129100106

In the original solution, every thread creates its vector partition in the local memory of its own CPU. In the second case, however, the threads often need to access memory on a remote node, and this overhead is bigger than the cost of creating the vector partition locally.
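
The reason the per-thread allocation stays local is Linux's first-touch policy: a page is placed on the NUMA node of the CPU that first writes it. The following is a minimal, self-contained sketch (the process() function and sizes are made up, not the original code) that contrasts the two allocation patterns:

#include <omp.h>

#include <cstddef>
#include <vector>

/* Hypothetical per-element work, standing in for the real processing. */
static double process(double x)
{
    return x * 2.0 + 1.0;
}

int main()
{
    const std::size_t total_size = std::size_t(1) << 24;
    double sink = 0.0;

    /* Pattern 1 (original): each thread allocates and first-touches its own
       partition, so its pages land on that thread's local NUMA node. */
    #pragma omp parallel reduction(+ : sink)
    {
        std::vector<double> local_partition(total_size / omp_get_num_threads(), 1.0);
        for (double v : local_partition)
            sink += process(v);
    }

    /* Pattern 2 (modified): one thread allocates and first-touches the whole
       vector, so all pages land on its node and the other threads read them
       remotely inside the parallel loop. */
    std::vector<double> big_vector(total_size, 1.0);
    const std::size_t n = big_vector.size();
    #pragma omp parallel for reduction(+ : sink)
    for (std::size_t i = 0; i < n; i++)
        sink += process(big_vector[i]);

    return sink > 0.0 ? 0 : 1;
}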

Parallelization may cause GPU utilization to become worse

Recently, I did an experiment with CUDA. The following is a simple dead-loop snippet:

......
while (1)
{
    dgt_mul<<<gDim, bDim, 0, st>>>(......);
}  
......

Running it in a single thread, the GPU utilization is ~80%; with two threads, the utilization drops to ~60%, and with three threads to ~40%. I couldn’t comprehend this phenomenon, so I posted topics on both Stack Overflow and the CUDA developer forum. Unfortunately, there was no response.

After some investigation, I found this post and learned that kernel launches first go into a launch queue. I modified the code and profiled it again (using nvprof instead of the GUI):

......
for (int i = 0; i < 10000; i++)
{
    dgt_mul<<<gDim, bDim, 0, st>>>(......);
}  
......

The following is the profiling output for one, two, and three threads:

==22209== Profiling result:
            Type  Time(%)      Time     Calls       Avg       Min       Max  Name
 GPU activities:  100.00%  17.577ms     10000  1.7570us  1.7270us  2.8800us  dgt_mul(unsigned int*, unsigned int*, unsigned int*, int, int)
      API calls:   97.83%  70.567ms     10000  7.0560us  4.4700us  13.296ms  cudaLaunchKernel
                    2.17%  1.5644ms     10000     156ns     119ns  15.779us  cudaGetLastError

==23662== Profiling result:
            Type  Time(%)      Time     Calls       Avg       Min       Max  Name
 GPU activities:  100.00%  35.288ms     20000  1.7640us  1.7270us  12.704us  dgt_mul(unsigned int*, unsigned int*, unsigned int*, int, int)
      API calls:   99.09%  473.79ms     20000  23.689us  5.0040us  13.294ms  cudaLaunchKernel
                    0.91%  4.3564ms     20000     217ns     117ns  6.4690us  cudaGetLastError

==27597== Profiling result:
            Type  Time(%)      Time     Calls       Avg       Min       Max  Name
 GPU activities:  100.00%  52.587ms     30000  1.7520us  1.7270us  2.9440us  dgt_mul(unsigned int*, unsigned int*, unsigned int*, int, int)
      API calls:   99.23%  2.10159s     30000  70.053us  13.545us  13.778ms  cudaLaunchKernel
                    0.77%  16.328ms     30000     544ns     368ns  19.316us  cudaGetLastError

We can see that the average execution time of cudaLaunchKernel scales up, which shows that the kernel launch queue becomes a bottleneck when more threads are launching kernels.
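
For reference, such a multi-threaded launch test can be structured like the sketch below (with a trivial placeholder kernel instead of dgt_mul, whose code is not shown here). Each host thread gets its own stream and issues 10000 launches; profiling it under nvprof with one, two, and three threads should reproduce the growing per-call cudaLaunchKernel times:

#include <cstdlib>
#include <thread>
#include <vector>

#include <cuda_runtime.h>

/* Trivial placeholder kernel; it only stands in for dgt_mul in this sketch. */
__global__ void dummy_kernel(int *out)
{
    out[threadIdx.x] = threadIdx.x;
}

/* Each host thread gets its own stream and issues a fixed number of launches. */
static void launch_loop(int iterations)
{
    cudaStream_t st;
    cudaStreamCreate(&st);

    int *d_out = nullptr;
    cudaMalloc(&d_out, 256 * sizeof(int));

    for (int i = 0; i < iterations; i++)
        dummy_kernel<<<1, 256, 0, st>>>(d_out);

    cudaStreamSynchronize(st);
    cudaFree(d_out);
    cudaStreamDestroy(st);
}

int main(int argc, char **argv)
{
    const int num_threads = (argc > 1) ? std::atoi(argv[1]) : 1;

    std::vector<std::thread> threads;
    for (int t = 0; t < num_threads; t++)
        threads.emplace_back(launch_loop, 10000);

    for (std::thread &th : threads)
        th.join();

    cudaDeviceSynchronize();
    return 0;
}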

The display format of Bash’s built-in time command

Check the default output format of bash's built-in time command:

# time

real    0m0.000s
user    0m0.000s
sys     0m0.000s

You can use the -p option to output in POSIX format:

# time -p
real 0.00
user 0.00
sys 0.00

From the bash source code, we know the definitions of these two formats:

#define POSIX_TIMEFORMAT "real %2R\nuser %2U\nsys %2S"
#define BASH_TIMEFORMAT  "\nreal\t%3lR\nuser\t%3lU\nsys\t%3lS"

To decipher their meanings, we need to refer to the bash manual:

TIMEFORMAT

The value of this parameter is used as a format string specifying how the timing information for pipelines prefixed with the time reserved word should be displayed. The ‘%’ character introduces an escape sequence that is expanded to a time value or other information. The escape sequences and their meanings are as follows; the braces denote optional portions.

%%
A literal ‘%’.

%[p][l]R
The elapsed time in seconds.

%[p][l]U
The number of CPU seconds spent in user mode.

%[p][l]S
The number of CPU seconds spent in system mode.

%P
The CPU percentage, computed as (%U + %S) / %R.

The optional p is a digit specifying the precision, the number of fractional digits after a decimal point. A value of 0 causes no decimal point or fraction to be output. At most three places after the decimal point may be specified; values of p greater than 3 are changed to 3. If p is not specified, the value 3 is used.

The optional l specifies a longer format, including minutes, of the form MMmSS.FFs. The value of p determines whether or not the fraction is included.
……

Take POSIX_TIMEFORMAT as an example: %2R denotes the elapsed time in seconds, with a precision of two digits after the decimal point; %2U and %2S are similar.

Now you can comprehend the output of time, can't you? Try deciphering BASH_TIMEFORMAT as an exercise.
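
To see how such a format string maps to the output, you can also set TIMEFORMAT yourself; for example (the timing number below is illustrative and will vary slightly on your machine):

# TIMEFORMAT='real %2R'
# time sleep 1
real 1.00

Unsetting TIMEFORMAT (or starting a new shell) restores the default format.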

 

Downgrade boost on Ubuntu 16.04

In the past two days, I was tortured by boost. The default boost version on Ubuntu 16.04 is 1.58, but I ran into the following compile errors:

......
/usr/include/boost/multi_index/detail/bucket_array.hpp: In static member function ‘static std::size_t boost::multi_index::detail::bucket_array_base<_>::size_index(std::size_t)’:
/usr/include/boost/multi_index/detail/bucket_array.hpp:84:62: error: invalid use of non-lvalue array
     const std::size_t *bound=std::lower_bound(sizes,sizes+sizes_length,n);
                                                              ^~~~~~~~~~~~
/usr/include/boost/multi_index/detail/bucket_array.hpp:85:25: error: invalid use of non-lvalue array
     if(bound==sizes+sizes_length)--bound;
                         ^~~~~~~~~~~~
/usr/include/boost/multi_index/detail/bucket_array.hpp:86:22: error: invalid use of non-lvalue array
     return bound-sizes;
......

I downgraded boost to 1.55 by downloading and building it myself:

$ ./bootstrap.sh --prefix=/usr/local  
$ sudo ./b2 -a -q install

This time I found that the default gcc-5 could not compile it successfully. So I followed this post to install gcc-6, and modified the /home/nan/boost_1_55_0/tools/build/v2/user-config.jam file to make boost build with gcc-6:

......
# Configure specific gcc version, giving alternative name to use.
using gcc : 6 : g++-6 ;
......

Then my project could be compiled successfully. Check gcc's header file search paths and library search paths:

$ echo | gcc-6 -E -Wp,-v -
ignoring nonexistent directory "/usr/local/include/x86_64-linux-gnu"
ignoring nonexistent directory "/usr/lib/gcc/x86_64-linux-gnu/6/../../../../x86_64-linux-gnu/include"
#include "..." search starts here:
#include <...> search starts here:
 /usr/lib/gcc/x86_64-linux-gnu/6/include
 /usr/local/include
 /usr/lib/gcc/x86_64-linux-gnu/6/include-fixed
 /usr/include/x86_64-linux-gnu
 /usr/include
......

$ ldconfig -v 2>/dev/null | grep -v ^$'\t'
/usr/local/cuda-9.0/targets/x86_64-linux/lib:
/usr/lib/x86_64-linux-gnu/libfakeroot:
/usr/local/lib:
/lib/x86_64-linux-gnu:
/usr/lib/x86_64-linux-gnu:
/usr/lib/nvidia-384:
/usr/lib32/nvidia-384:
/lib32:
/usr/lib32:
/lib:
/usr/lib:
/usr/lib/nvidia-384/tls: (hwcap: 0x8000000000000000)
/usr/lib32/nvidia-384/tls: (hwcap: 0x8000000000000000)

You will find that the freshly installed boost 1.55 (in the /usr/local directory) is always found before the default boost 1.58 (whose header files are in /usr/include/boost and libraries in /usr/lib/x86_64-linux-gnu).
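
A quick way to confirm which boost the compiler actually picks up is to print the version macros from <boost/version.hpp>. The following tiny program is my own sanity check, not part of the build steps above:

#include <boost/version.hpp>
#include <iostream>

int main()
{
    /* BOOST_LIB_VERSION is a string such as "1_55"; BOOST_VERSION is a number such as 105500. */
    std::cout << "BOOST_LIB_VERSION: " << BOOST_LIB_VERSION << '\n';
    std::cout << "BOOST_VERSION: " << BOOST_VERSION << '\n';
    return 0;
}

Compiled with g++-6, it should report the 1.55 version if the headers in /usr/local/include are picked up before the ones in /usr/include.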