CUDA Programming Notes (5): CUDA Program Structure

This note is excerpted from Professional CUDA C Programming.

A typical CUDA program structure consists of five main steps:
1. Allocate GPU memories.
2. Copy data from CPU memory to GPU memory.
3. Invoke the CUDA kernel to perform program-specific computation.
4. Copy data back from GPU memory to CPU memory.
5. Destroy GPU memories.
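
A minimal sketch of the five steps, assuming a hypothetical vector-addition kernel (the names sumArrays, h_a, d_a, and so on are illustrative, not from the book, and error checking is omitted):

#include <cuda_runtime.h>
#include <stdlib.h>

__global__ void sumArrays(float *a, float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main(void) {
    int n = 1 << 20;
    size_t nBytes = n * sizeof(float);
    float *h_a = (float *)malloc(nBytes);   /* host buffers; fill h_a/h_b before use */
    float *h_b = (float *)malloc(nBytes);
    float *h_c = (float *)malloc(nBytes);

    float *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, nBytes);                       /* 1. allocate GPU memories */
    cudaMalloc((void **)&d_b, nBytes);
    cudaMalloc((void **)&d_c, nBytes);
    cudaMemcpy(d_a, h_a, nBytes, cudaMemcpyHostToDevice);    /* 2. copy CPU -> GPU */
    cudaMemcpy(d_b, h_b, nBytes, cudaMemcpyHostToDevice);
    sumArrays<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);   /* 3. invoke the kernel */
    cudaMemcpy(h_c, d_c, nBytes, cudaMemcpyDeviceToHost);    /* 4. copy GPU -> CPU */
    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);             /* 5. destroy GPU memories */

    free(h_a); free(h_b); free(h_c);
    return 0;
}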

CUDA exposes you to the concepts of both memory hierarchy and thread hierarchy, extending your ability to control thread execution and scheduling to a greater degree, using:
➤ Memory hierarchy structure
➤ Thread hierarchy structure
For example, a special memory, called shared memory, is exposed by the CUDA programming model. Shared memory can be thought of as a software-managed cache, which provides great speedup by conserving bandwidth to main memory. With shared memory, you can control the locality of your code directly.
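
As an illustrative sketch (not from the book), a kernel can stage data in shared memory so that reordering within a block is served from on-chip storage rather than main memory. Here each block reverses its own slice of the array, assuming the kernel is launched as reverseWithinBlock<<<n / BLOCK, BLOCK>>> with n a multiple of BLOCK:

#define BLOCK 256

__global__ void reverseWithinBlock(float *in, float *out) {
    __shared__ float tile[BLOCK];   /* software-managed on-chip cache */
    int i = blockIdx.x * BLOCK + threadIdx.x;
    tile[threadIdx.x] = in[i];      /* one coalesced read from main memory */
    __syncthreads();                /* barrier: wait until tile is fully populated */
    out[i] = tile[BLOCK - 1 - threadIdx.x];   /* reordering served from shared memory */
}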

When writing a parallel program in ANSI C, you need to explicitly organize your threads with either pthreads or OpenMP, two well-known techniques to support parallel programming on most processor architectures and operating systems. When writing a program in CUDA C, you actually just write a piece of serial code to be called by only one thread. The GPU takes this kernel and makes it parallel by launching thousands of threads, all performing that same computation. The CUDA programming model provides you with a way to organize your threads hierarchically. Manipulating this organization directly affects the order in which threads are executed on the GPU. Because CUDA C is an extension of C, it is often straightforward to port C programs to CUDA C. Conceptually, peeling off the loops of your code yields the kernel code for a CUDA C implementation.
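
A sketch of that "peeling" (illustrative names): the loop body survives unchanged, while the loop itself disappears and each of the n threads computes the index it is responsible for.

/* Serial C: one thread walks the whole array. */
void sumArraysOnHost(float *a, float *b, float *c, int n) {
    for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];
}

/* CUDA C: the loop is peeled off; every thread runs the former
   loop body exactly once for its own index. */
__global__ void sumArraysOnDevice(float *a, float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}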

CUDA abstracts away the hardware details and does not require applications to be mapped to traditional graphics APIs. At its core are three key abstractions: a hierarchy of thread groups, a hierarchy of memory groups, and barrier synchronization, which are exposed to you as a minimal set of language extensions.

 

CUDA Programming Notes (4): CPU Threads vs. GPU Threads

This note is excerpted from Professional CUDA C Programming.

Threads on a CPU are generally heavyweight entities. The operating system must swap threads on and off CPU execution channels to provide multithreading capability. Context switches are slow and expensive.
Threads on GPUs are extremely lightweight. In a typical system, thousands of threads are queued up for work. If the GPU must wait on one group of threads, it simply begins executing work on another.
CPU cores are designed to minimize latency for one or two threads at a time, whereas GPU cores are designed to handle a large number of concurrent, lightweight threads in order to maximize throughput.
Today, a system with four quad-core processors can run only 16 threads concurrently, or 32 if the CPUs support hyper-threading.
Modern NVIDIA GPUs can support up to 1,536 active threads concurrently per multiprocessor. On a GPU with 16 multiprocessors, this amounts to more than 24,000 concurrently active threads (1,536 × 16 = 24,576).

CUDA Programming Notes (3): Heterogeneous Architecture

This note is excerpted from Professional CUDA C Programming.

A typical heterogeneous compute node nowadays consists of two multicore CPU sockets and two or more many-core GPUs. A GPU is currently not a standalone platform but a co-processor to a CPU. Therefore, GPUs must operate in conjunction with a CPU-based host through a PCI-Express bus. That is why, in GPU computing terms, the CPU is called the host and the GPU is called the device.

A heterogeneous application consists of two parts:
➤ Host code
➤ Device code
Host code runs on CPUs and device code runs on GPUs. An application executing on a heterogeneous platform is typically initialized by the CPU. The CPU code is responsible for managing the environment, code, and data for the device before loading compute-intensive tasks on the device.

There are two important features that describe GPU capability:
➤ Number of CUDA cores
➤ Memory size
Accordingly, there are two different metrics for describing GPU performance:
➤ Peak computational performance
➤ Memory bandwidth
Peak computational performance is a measure of computational capability, usually defined as how many single-precision or double-precision floating-point calculations can be processed per second. Peak performance is usually expressed in gflops (billion floating-point operations per second) or tflops (trillion floating-point operations per second). Memory bandwidth is a measure of the rate at which data can be read from or stored to memory. Memory bandwidth is usually expressed in gigabytes per second, GB/s.
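
As a worked example with made-up numbers: a hypothetical GPU with 2,048 CUDA cores clocked at 1 GHz, each issuing one fused multiply-add (2 floating-point operations) per cycle, has a single-precision peak of 2,048 × 1×10⁹ × 2 = 4.096 tflops; a hypothetical 256-bit memory interface at an effective 7 GHz delivers (256/8) × 7×10⁹ = 224 GB/s of memory bandwidth.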

CUDA Programming Notes (2): GPU Cores vs. CPU Cores

This note is excerpted from Professional CUDA C Programming.

Even though many-core and multicore are used to label GPU and CPU architectures, a GPU core is quite different from a CPU core.
A CPU core, relatively heavy-weight, is designed for very complex control logic, seeking to optimize the execution of sequential programs.
A GPU core, relatively light-weight, is optimized for data-parallel tasks with simpler control logic, focusing on the throughput of parallel programs.

 

CUDA Programming Notes (1): Parallelism

This note is excerpted from Professional CUDA C Programming.

There are two fundamental types of parallelism in applications:
➤ Task parallelism
➤ Data parallelism
Task parallelism arises when there are many tasks or functions that can be operated independently and largely in parallel. Task parallelism focuses on distributing functions across multiple cores.

Data parallelism arises when there are many data items that can be operated on at the same time. Data parallelism focuses on distributing the data across multiple cores.

CUDA programming is especially well-suited to address problems that can be expressed as data parallel computations. Many applications that process large data sets can use a data-parallel model to speed up the computations. Data-parallel processing maps data elements to parallel threads.

There are two basic approaches to partitioning data:
➤ Block: Each thread takes one portion of the data, usually an equal portion of the data.
➤ Cyclic: Each thread takes more than one portion of the data.

In short, block partitioning divides the data evenly among the threads: with 10 threads, the data is split into 10 chunks and each thread handles one chunk. Cyclic partitioning produces more chunks than threads; for example, 10 threads split the data into 20 chunks, the first thread handles chunks 1 and 11, the second thread handles chunks 2 and 12, and so on, cycling through the data several times.
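
A sketch of the two patterns in CUDA index arithmetic (illustrative, not from the book): block partitioning gives each thread one contiguous slice, while cyclic partitioning is a grid-stride loop in which each thread keeps striding over the data.

/* Block partitioning: thread t owns one contiguous chunk of about n/T elements. */
__global__ void blockPartition(float *data, int n) {
    int nthreads = gridDim.x * blockDim.x;
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    int chunk = (n + nthreads - 1) / nthreads;   /* ceil(n / T) */
    for (int i = t * chunk; i < n && i < (t + 1) * chunk; i++)
        data[i] *= 2.0f;
}

/* Cyclic partitioning: thread t handles elements t, t+T, t+2T, ...,
   cycling over the data several times. */
__global__ void cyclicPartition(float *data, int n) {
    int nthreads = gridDim.x * blockDim.x;
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    for (int i = t; i < n; i += nthreads)
        data[i] *= 2.0f;
}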

HP/HPE's *nix Operating Systems

HP/HPE (that is, Hewlett-Packard as it is commonly known; the company split into two independently run companies, HP and HPE, in 2015, and since HPE carried on the operating-system work after the split, "HP/HPE" is used here) has its own Unix operating system: HP-UX. There used to be teams in China involved in HP-UX work, such as feature development and Unix certification, but that work has presumably all moved to India by now. HP-UX is probably still in use in some banking and telecom systems, though it is indeed rarely seen these days. Some information about HP-UX is available on Wikipedia.

Now a word about Linux. HP/HPE actually used to have a very large Linux team, one that was even capable of producing its own Linux distribution.

Moreover, this team was once an important contributor to the Linux kernel. However, as the company's strategy shifted over the years, the vast majority of the team's engineers have left; many of them joined other companies, where they continue to contribute to Linux. Today HP/HPE's Linux work centers on cooperation with Linux vendors, for example this year's partnership with SuSE (see Sweet SUSE! HPE snags itself a Linux distro for details).

The pstack Tool on Linux

The Solaris operating system provides the pstack tool for printing the thread stacks of a running program. Red Hat's Linux distributions (RHEL, CentOS, and so on) also provide pstack; just install gdb:

# yum install gdb

and pstack will be installed along with it.

First, take a look at pstack:

# which pstack
/usr/bin/pstack
# ls -lt /usr/bin/pstack
lrwxrwxrwx. 1 root root 6 Nov 19 06:32 /usr/bin/pstack -> gstack

As you can see, pstack is actually just a symbolic link to gstack. Now look at gstack:

# cat /usr/bin/gstack
#!/bin/sh

if test $# -ne 1; then
    echo "Usage: `basename $0 .sh` <process-id>" 1>&2
    exit 1
fi

if test ! -r /proc/$1; then
    echo "Process $1 not found." 1>&2
    exit 1
fi

# GDB doesn't allow "thread apply all bt" when the process isn't
# threaded; need to peek at the process to determine if that or the
# simpler "bt" should be used.

backtrace="bt"
if test -d /proc/$1/task ; then
    # Newer kernel; has a task/ directory.
    if test `/bin/ls /proc/$1/task | /usr/bin/wc -l` -gt 1 2>/dev/null ; then
    backtrace="thread apply all bt"
    fi
elif test -f /proc/$1/maps ; then
    # Older kernel; go by it loading libpthread.
    if /bin/grep -e libpthread /proc/$1/maps > /dev/null 2>&1 ; then
    backtrace="thread apply all bt"
    fi
fi

GDB=${GDB:-/usr/bin/gdb}

# Run GDB, strip out unwanted noise.
# --readnever is no longer used since .gdb_index is now in use.
$GDB --quiet -nx $GDBARGS /proc/$1/exe $1 <<EOF 2>&1 |
set width 0
set height 0
set pagination no
$backtrace
EOF
/bin/sed -n \
    -e 's/^\((gdb) \)*//' \
    -e '/^#/p' \
    -e '/^Thread/p'

As you can see, gstack is merely a shell script. Let's walk through it briefly:

(1)

if test $# -ne 1; then
    echo "Usage: `basename $0 .sh` <process-id>" 1>&2
    exit 1
fi

The script requires exactly one argument: a process ID.

(2)

if test ! -r /proc/$1; then
    echo "Process $1 not found." 1>&2
    exit 1
fi

It determines whether the process exists by testing whether the process's subdirectory under /proc is readable.

(3)

# GDB doesn't allow "thread apply all bt" when the process isn't
# threaded; need to peek at the process to determine if that or the
# simpler "bt" should be used.

backtrace="bt"
if test -d /proc/$1/task ; then
    # Newer kernel; has a task/ directory.
    if test `/bin/ls /proc/$1/task | /usr/bin/wc -l` -gt 1 2>/dev/null ; then
    backtrace="thread apply all bt"
    fi
elif test -f /proc/$1/maps ; then
    # Older kernel; go by it loading libpthread.
    if /bin/grep -e libpthread /proc/$1/maps > /dev/null 2>&1 ; then
    backtrace="thread apply all bt"
    fi
fi

If the process has only one thread, gdb's "bt" command is used to print the thread stack; otherwise "thread apply all bt" is used.

(4)

GDB=${GDB:-/usr/bin/gdb}

# Run GDB, strip out unwanted noise.
# --readnever is no longer used since .gdb_index is now in use.
$GDB --quiet -nx $GDBARGS /proc/$1/exe $1 <<EOF 2>&1 |
set width 0
set height 0
set pagination no
$backtrace
EOF
/bin/sed -n \
    -e 's/^\((gdb) \)*//' \
    -e '/^#/p' \
    -e '/^Thread/p'

Finally, the script invokes gdb with either "bt" or "thread apply all bt" and pipes the output to sed, which filters it down to just the thread stack frames.

To finish, here is an example of pstack in use:

# pstack 707
Thread 3 (Thread 0x7f69600d8700 (LWP 713)):
#0  0x00007f6968af269d in poll () at ../sysdeps/unix/syscall-template.S:81
#1  0x00007f6969027a84 in g_main_context_iterate.isra.24 () from /lib64/libglib-2.0.so.0
#2  0x00007f6969027bac in g_main_context_iteration () from /lib64/libglib-2.0.so.0
#3  0x00007f6969027be9 in glib_worker_main () from /lib64/libglib-2.0.so.0
#4  0x00007f696904d4f5 in g_thread_proxy () from /lib64/libglib-2.0.so.0
#5  0x00007f696af9fdc5 in start_thread (arg=0x7f69600d8700) at pthread_create.c:308
#6  0x00007f6968afcced in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
Thread 2 (Thread 0x7f695eec3700 (LWP 716)):
#0  0x00007f6968af269d in poll () at ../sysdeps/unix/syscall-template.S:81
#1  0x00007f6969027a84 in g_main_context_iterate.isra.24 () from /lib64/libglib-2.0.so.0
#2  0x00007f6969027dca in g_main_loop_run () from /lib64/libglib-2.0.so.0
#3  0x00007f6969641336 in gdbus_shared_thread_func () from /lib64/libgio-2.0.so.0
#4  0x00007f696904d4f5 in g_thread_proxy () from /lib64/libglib-2.0.so.0
#5  0x00007f696af9fdc5 in start_thread (arg=0x7f695eec3700) at pthread_create.c:308
#6  0x00007f6968afcced in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
Thread 1 (Thread 0x7f696c5738c0 (LWP 707)):
#0  0x00007f6968af269d in poll () at ../sysdeps/unix/syscall-template.S:81
#1  0x00007f6969027a84 in g_main_context_iterate.isra.24 () from /lib64/libglib-2.0.so.0
#2  0x00007f6969027dca in g_main_loop_run () from /lib64/libglib-2.0.so.0
#3  0x0000560a080a80a3 in main ()

If your Linux distribution does not provide the pstack tool, consider simply copying the gstack script over.

Solving the Longest Increasing Subsequence Problem

Finding the longest increasing subsequence is a classic algorithm problem. The dynamic-programming solution is the simplest, but its time complexity is O(n²). There is a very clever O(n log n) solution on stackoverflow:

Now the improvement happens at the second loop, basically, you can improve the speed by using binary search. Besides the array dp[], let’s have another array c[], c is pretty special, c[i] means: the minimum value of the last element of the longest increasing sequence whose length is i.

sz = 1;
c[1] = array[0]; /* at this point, the minimum value of the last element of the size-1 increasing sequence must be array[0] */
dp[0] = 1;
for (int i = 1; i < len; i++) {
    if (array[i] < c[1]) {
        c[1] = array[i]; /* you have to update the minimum value right now */
        dp[i] = 1;
    } else if (array[i] > c[sz]) {
        c[sz+1] = array[i];
        dp[i] = sz + 1;
        sz++;
    } else {
        int k = binary_search(c, sz, array[i]); /* find k so that c[k-1] < array[i] < c[k] */
        c[k] = array[i];
        dp[i] = k;
    }
}

The key is to understand the array c: its index is the length of an increasing subsequence, and its value is the smallest last element among all increasing subsequences of that length. For example, given two increasing subsequences of the same length, 1,2,3 and 1,2,5, c[3] is 3: if a 4 appears later, 1,2,3 can be extended to 1,2,3,4, while 1,2,5 cannot be extended by it. The complete Go program follows (e plays the role of c, and m the role of dp):

package main

import (
    "fmt"
    "os"
)

func bSearch(array []int, start int, end int, value int) int {
    for start <= end {
        mid := start + (end-start)/2
        if array[mid] == value {
            return mid
        } else if array[mid] < value {
            if mid+1 <= end && array[mid+1] > value {
                return mid+1
            } else {
                start = mid+1
            }            
        } else {
            if mid-1 >= start && array[mid-1] < value {
                return mid
            } else {
                end = mid-1
            }
        }
    }
    return start
}

func main() {
    // Enter your code here. Read input from STDIN. Print output to STDOUT
    var num int
    _, err := fmt.Scan(&num)
    if err != nil || num == 0 {
        os.Exit(1)
    }
    s := make([]int, num)
    m := make([]int, num)
    e := make([]int, num+1)

    for i := 0; i < num; i++ {
        _, err := fmt.Scan(&s[i])
        if err != nil {
            os.Exit(1)
        }
    }

    m[0] = 1
    max := m[0]
    e[1] = s[0]
    for i := 1; i < num; i++ {
        if s[i] < e[1] {
            e[1] = s[i]
            m[i] = 1
        } else if s[i] > e[max] {
            max++
            e[max] = s[i]
            m[i] = max
        } else {
            k := bSearch(e, 1, max, s[i])
            e[k] = s[i]
            m[i] = k
        }
    }

    fmt.Println(max)
}
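
As a quick check (the input format is the count first, then the elements), feeding the program "8" followed by "10 9 2 5 3 7 101 18" prints 4, matching the longest increasing subsequences 2,5,7,101 and 2,3,7,18.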

Recursion vs. Iteration

Recursion is a very common way to solve problems, for example computing the Fibonacci sequence:

int Fibonacci(int n) {
    if (n <= 1) {
        return n;
    } else {
        return Fibonacci(n - 1) + Fibonacci(n - 2);
    }
}

However, recursion leads to a deep function call stack, and it may also do a lot of duplicated work. For example, when computing Fibonacci(4), Fibonacci(4) calls Fibonacci(3) and Fibonacci(2), while Fibonacci(3) calls Fibonacci(2) and Fibonacci(1) again, so both Fibonacci(2) and Fibonacci(1) are evaluated repeatedly. Here is the non-recursive solution:

int Fibonacci(int n) {
    if (n <= 1) {
        return n;
    } else {
        int fi = 0;
        int fj = 1;
        for (int i = 2; i <= n; i++) {
            int temp = fi + fj;
            fi = fj;
            fj = temp;
        }
        return fj;
    }
}

Compared with the recursive approach, loop iteration replaces the function calls, which reduces the running time from exponential to O(n) and needs only constant extra space.

Finally, look at the recursive and non-recursive code for searching for an element in a binary search tree, taken from the Wikipedia article on Binary search tree.
Recursive:

def search_recursively(key, node):
    if node is None or node.key == key:
        return node
    elif key < node.key:
        return search_recursively(key, node.left)
    else:  # key > node.key
        return search_recursively(key, node.right)

Non-recursive:

def search_iteratively(key, node): 
    current_node = node
    while current_node is not None:
        if key == current_node.key:
            return current_node
        elif key < current_node.key:
            current_node = current_node.left
        else:  # key > current_node.key:
            current_node = current_node.right
    return None

Analyzing the Maximum Subarray Problem

Finding the maximum subarray sum is a classic dynamic programming problem. A solution follows (see here):

public int maxSubArray(int[] A) {
    int newsum = A[0];   // max sum of a subarray ending at A[i]
    int max = A[0];      // best of all the newsum values so far
    for (int i = 1; i < A.length; i++) {
        newsum = Math.max(newsum + A[i], A[i]);
        max = Math.max(max, newsum);
    }
    return max;
}

The key to understanding this algorithm: for every element, compute the maximum sum over all subarrays that end at that element (newsum in the code above); the largest of these values is the answer (max in the code above). The analysis goes as follows. Starting from the first element A[0], both newsum and max equal A[0]. For the next element A[1], the maximum sum of a subarray ending at A[1] is either A[0]+A[1] (when A[0] is greater than 0) or A[1], whichever is larger. For A[2], the maximum sum of a subarray ending at A[2] is the largest of A[0]+A[1]+A[2], A[1]+A[2], and A[2]. And so on.
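
A short trace makes this concrete: for A = {-2, 1, -3, 4}, newsum takes the values -2, 1, -2, 4 and max takes the values -2, 1, 1, 4, so the answer is 4 (the subarray consisting of just {4}).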