Build googlemock library

When the ENABLE_TESTS option is enabled to build the tfhe project, the googlemock library is needed; otherwise, the following errors will be generated:

[ 54%] Linking CXX executable unittests-fftw
/usr/bin/ld: cannot find -lgmock
collect2: error: ld returned 1 exit status
make[2]: *** [test/CMakeFiles/unittests-fftw.dir/build.make:229: test/unittests-fftw] Error 1
make[1]: *** [CMakeFiles/Makefile2:1290: test/CMakeFiles/unittests-fftw.dir/all] Error 2
make: *** [Makefile:95: all] Error 2
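
For context, the tests are switched on at CMake configure time, roughly like this (the build-directory layout is illustrative):

$ cmake .. -DENABLE_TESTS=on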

To build googlemock, you can follow this guide:

(1) Download googletest:

$ git clone https://github.com/google/googletest.git

(2) Execute the autoreconf command:

$ cd googletest/googlemock
$ autoreconf -fvi

(3) Create libgmock:

$ cd make
$ make
$ ar -rv libgmock.a gtest-all.o gmock-all.o

(4) Copy libgmock.a into /usr/local/lib:

$ cp libgmock.a /usr/local/lib/

Now you can use libgmock.a.
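
To double-check that the library links correctly, a minimal gmock smoke test can be built against it. This is only a sketch: the file name gmock_smoke.cc is made up, and the include paths and flags depend on where googletest was cloned.

$ cat gmock_smoke.cc
#include <gmock/gmock.h>
#include <gtest/gtest.h>

/* A trivial assertion just to exercise a gmock matcher */
TEST(Smoke, LibraryLinks) {
        EXPECT_THAT(42, testing::Eq(42));
}

int main(int argc, char **argv) {
        testing::InitGoogleMock(&argc, argv);
        return RUN_ALL_TESTS();
}

$ g++ -std=c++11 gmock_smoke.cc -Igoogletest/googlemock/include -Igoogletest/googletest/include -lgmock -pthread -o gmock_smoke
$ ./gmock_smoke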

Don’t use the “-G” compile option for profiling CUDA programs

I use Nsight as an IDE to develop CUDA programs.

I use nvprof to measure the load efficiency and store efficiency of accessing global memory:

$ nvprof --devices 2 --metrics gld_efficiency,gst_efficiency ./cuHE_opt

................... CRT polynomial Terminated ...................

==1443== Profiling application: ./cuHE_opt
==1443== Profiling result:
==1443== Metric result:
Invocations    Metric Name        Metric Description                  Min        Max        Avg
Device "Tesla K80 (2)"
Kernel: gpu_cuHE_crt(unsigned int*, unsigned int*, int, int, int, int)
          1    gld_efficiency     Global Memory Load Efficiency       62.50%     62.50%     62.50%
          1    gst_efficiency     Global Memory Store Efficiency      100.00%    100.00%    100.00%
Kernel: gpu_crt(unsigned int*, unsigned int*, int, int, int, int)
          1    gld_efficiency     Global Memory Load Efficiency       39.77%     39.77%     39.77%
          1    gst_efficiency     Global Memory Store Efficiency      100.00%    100.00%    100.00%

But if I use nvcc to compile the program directly:

$ nvcc -arch=sm_37 cuHE_opt.cu -o cuHE_opt

nvprof displays different measurement results:

$ nvprof --devices 2 --metrics gld_efficiency,gst_efficiency ./cuHE_opt
......
................... CRT polynomial Terminated ...................

==1801== Profiling application: ./cuHE_opt
==1801== Profiling result:
==1801== Metric result:
Invocations    Metric Name        Metric Description                  Min        Max        Avg
Device "Tesla K80 (2)"
Kernel: gpu_cuHE_crt(unsigned int*, unsigned int*, int, int, int, int)
          1    gld_efficiency     Global Memory Load Efficiency       100.00%    100.00%    100.00%
          1    gst_efficiency     Global Memory Store Efficiency      100.00%    100.00%    100.00%
Kernel: gpu_crt(unsigned int*, unsigned int*, int, int, int, int)
          1    gld_efficiency     Global Memory Load Efficiency       50.00%     50.00%     50.00%
          1    gst_efficiency     Global Memory Store Efficiency      100.00%    100.00%    100.00%

After some investigation, the cause turns out to be the -G compile option used in the first case. As the nvcc documentation mentions:

--device-debug (-G)
    Generate debug information for device code. Turns off all optimizations.
    Don't use for profiling; use -lineinfo instead.

So don’t use the -G compile option when profiling CUDA programs.
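
If line-level information is still needed while profiling, the nvcc documentation quoted above suggests -lineinfo instead; for this program that would be something like:

$ nvcc -arch=sm_37 -lineinfo cuHE_opt.cu -o cuHE_opt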

Enable C++11 support for NVCC compiler in Nsight

When using Nsight as an IDE to develop CUDA programs, the program may sometimes require C++11 support; otherwise, errors like this will occur:

/usr/lib/gcc/x86_64-pc-linux-gnu/5.4.0/include/c++/bits/c++0x_warning.h:32:2: error: #error This file requires compiler and library support for the ISO C++ 2011 standard. This support must be enabled with the -std=c++11 or -std=gnu++11 compiler options.
 #error This file requires compiler and library support \
  ^
make: *** [src/subdir.mk:20: src/cuHE_opt.o] Error 1

To enable C++11 support, you need to do the following configuration:
(1) Right-click the project and select the last item: Properties.

(2) Check Settings->Tool Settings->Code Generation->Enable C++11 support (-std=c++11).

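For reference, when compiling outside Nsight, the same support is enabled by passing the flag to nvcc directly; for this project that would look roughly like this (the architecture flag is illustrative):

$ nvcc -std=c++11 -arch=sm_37 cuHE_opt.cu -o cuHE_opt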

Is the warp size always 32 in CUDA?

Last week, I began to read the awesome Professional CUDA C Programming and bumped into the following words in the GPU Architecture Overview section:

CUDA employs a Single Instruction Multiple Thread (SIMT) architecture to manage and execute threads in groups of 32 called warps.

Since this book was published in 2014, I wondered whether the warp size is still 32 in CUDA, regardless of the Compute Capability. To figure it out, I turned to the official CUDA C Programming Guide and found the answer in the Compute Capabilities table.

Yep, for all Compute Capabilities, the warp size is always 32.

BTW, you can also use the following program to determine the warp size:

#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
        cudaDeviceProp deviceProp;
        /* Query the properties of device 0 and read its warp size */
        if (cudaSuccess != cudaGetDeviceProperties(&deviceProp, 0)) {
                printf("Get device properties failed.\n");
                return 1;
        } else {
                printf("The warp size is %d.\n", deviceProp.warpSize);
                return 0;
        }
}

The running result on my CUDA box is:

The warp size is 32.
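
In addition, device code can read the built-in warpSize variable directly. A minimal sketch (the kernel name and launch configuration are made up for illustration):

#include <stdio.h>

/* Print the built-in warpSize variable from one thread of the kernel */
__global__ void print_warp_size(void) {
        if (threadIdx.x == 0) {
                printf("warpSize in device code is %d.\n", warpSize);
        }
}

int main(void) {
        print_warp_size<<<1, 32>>>();
        cudaDeviceSynchronize();
        return 0;
}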