The section "Coalesced Transpose Via Shared Memory" in NVIDIA's An Efficient Matrix Transpose in CUDA C/C++ explains how to use shared memory to implement an efficient matrix transpose:
const int TILE_DIM = 32;    // each block transposes one 32x32 tile
const int BLOCK_ROWS = 8;   // block has 32x8 threads, so each thread handles 4 elements

__global__ void transposeCoalesced(float *odata, const float *idata)
{
  __shared__ float tile[TILE_DIM][TILE_DIM];

  int x = blockIdx.x * TILE_DIM + threadIdx.x;
  int y = blockIdx.y * TILE_DIM + threadIdx.y;
  int width = gridDim.x * TILE_DIM;

  // load a 32x32 tile of idata into shared memory, 8 rows per iteration
  for (int j = 0; j < TILE_DIM; j += BLOCK_ROWS)
    tile[threadIdx.y+j][threadIdx.x] = idata[(y+j)*width + x];

  __syncthreads();

  x = blockIdx.y * TILE_DIM + threadIdx.x;  // transpose block offset
  y = blockIdx.x * TILE_DIM + threadIdx.y;

  // write the transposed tile to odata, again 8 rows per iteration
  for (int j = 0; j < TILE_DIM; j += BLOCK_ROWS)
    odata[(y+j)*width + x] = tile[threadIdx.x][threadIdx.y + j];
}
(1) idata and odata each point to contiguous memory holding a matrix of 1024 x 1024 float elements.
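"Contiguous" here means row-major storage: element (row, col) of the matrix lives at linear offset row * 1024 + col, which is exactly the arithmetic the kernel performs. A tiny host-side sketch of that indexing (illustrative, not from the article):

#include <cstdio>

int main() {
    const int width = 1024;
    int row = 2, col = 993;                             // an arbitrary element
    printf("linear offset = %d\n", row * width + col);  // prints 3041
    return 0;
}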
(2) For the values that blockIdx and threadIdx take on, refer to the first figure below; for the shared memory tile, refer to the figure after it. The launch configuration behind these values is sketched below as well:
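A sketch of the launch configuration implied by the article, assuming TILE_DIM = 32 and BLOCK_ROWS = 8 as above (the host buffer handling and names are illustrative):

#include <cuda_runtime.h>
#include <vector>

// transposeCoalesced, TILE_DIM and BLOCK_ROWS as defined above.

int main() {
    const int N = 1024;
    const size_t bytes = size_t(N) * N * sizeof(float);
    std::vector<float> h_idata(N * N, 1.0f), h_odata(N * N);

    float *d_idata, *d_odata;
    cudaMalloc(&d_idata, bytes);                    // contiguous device input
    cudaMalloc(&d_odata, bytes);                    // contiguous device output
    cudaMemcpy(d_idata, h_idata.data(), bytes, cudaMemcpyHostToDevice);

    dim3 dimGrid(N / TILE_DIM, N / TILE_DIM, 1);    // 32 x 32 blocks, one per tile
    dim3 dimBlock(TILE_DIM, BLOCK_ROWS, 1);         // 32 x 8 threads per block
    transposeCoalesced<<<dimGrid, dimBlock>>>(d_odata, d_idata);

    cudaMemcpy(h_odata.data(), d_odata, bytes, cudaMemcpyDeviceToHost);
    cudaFree(d_idata);
    cudaFree(d_odata);
    return 0;
}

So blockIdx.x and blockIdx.y select which 32 x 32 tile a block owns, while threadIdx.x and threadIdx.y select a position inside that tile.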
(3) In the following code:

for (int j = 0; j < TILE_DIM; j += BLOCK_ROWS)
tile[threadIdx.y+j][threadIdx.x] = idata[(y+j)*width + x];

Each block is 32 x 8 threads, so the loop runs 4 times (j = 0, 8, 16, 24) to copy a full 32 x 32 tile into the shared memory array tile. idata is read row by row: in each iteration, the 32 threads sharing one threadIdx.y value read 32 consecutive floats from a single row, so the reads are coalesced; a sketch tracing this is shown below.
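To check the coalescing claim concretely, here is a small host-side sketch (not from the article; tx stands in for threadIdx.x) that prints the offsets read by one row of 32 threads on the first iteration; the offsets come out consecutive:

#include <cstdio>

int main() {
    const int TILE_DIM = 32, width = 1024;
    int y = 0, j = 0;                        // blockIdx.y == 0, threadIdx.y == 0, first iteration
    for (int tx = 0; tx < TILE_DIM; ++tx) {  // tx plays the role of threadIdx.x
        int x = 0 * TILE_DIM + tx;           // blockIdx.x == 0
        printf("thread %2d reads idata[%2d]\n", tx, (y + j) * width + x);
    }
    return 0;                                // prints idata[0] .. idata[31]
}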
(4) The hardest part to understand is the final section:
x = blockIdx.y * TILE_DIM + threadIdx.x; // transpose block offset
y = blockIdx.x * TILE_DIM + threadIdx.y;
for (int j = 0; j < TILE_DIM; j += BLOCK_ROWS)
odata[(y+j)*width + x] = tile[threadIdx.x][threadIdx.y + j];
Compare reading from idata with writing to odata:

......
tile[threadIdx.y+j][threadIdx.x] = idata[(y+j)*width + x];
......
odata[(y+j)*width + x] = tile[threadIdx.x][threadIdx.y + j];
......

You can see that what gets written to odata is the transposed tile (rows become columns, columns become rows). The code that determines where that tile lands in odata is:

x = blockIdx.y * TILE_DIM + threadIdx.x; // transpose block offset
y = blockIdx.x * TILE_DIM + threadIdx.y;

Note that blockIdx.x and blockIdx.y are swapped relative to the read phase, which moves the whole tile to its mirrored block position.
Suppose blockIdx.x is 31, blockIdx.y is 0, threadIdx.x is 1, and threadIdx.y is 2. Plugging these into the code above, x and y become:
x = 0 * 32 + 1;
y = 31 * 32 + 2;
That is, x = 1 and y = 994. As the figure below shows, the contents of the northeast corner are copied to the southwest corner:
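To make the corner-to-corner movement concrete, here is a host-side replay (not from the article) of the index arithmetic for this one thread in both phases:

#include <cstdio>

int main() {
    const int TILE_DIM = 32, BLOCK_ROWS = 8, width = 1024;

    // Read phase for this thread: x = 31*32 + 1 = 993, y = 0*32 + 2 = 2,
    // so it reads column 993 of rows 2, 10, 18, 26 (the northeast corner).
    for (int j = 0; j < TILE_DIM; j += BLOCK_ROWS)
        printf("read  idata[%7d]  (row %4d, col %4d)\n", (2 + j) * width + 993, 2 + j, 993);

    // Write phase: x = 0*32 + 1 = 1, y = 31*32 + 2 = 994,
    // so it writes column 1 of rows 994, 1002, 1010, 1018 (the southwest corner).
    for (int j = 0; j < TILE_DIM; j += BLOCK_ROWS)
        printf("write odata[%7d]  (row %4d, col %4d)\n", (994 + j) * width + 1, 994 + j, 1);
    return 0;
}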