
Programming Massively Parallel Processors
Lecture Slides for Chapter 4: CUDA Threads
© David Kirk/NVIDIA and Wen-mei W. Hwu, 2007-2009
ECE 498 AL, University of Illinois, Urbana-Champaign

Block IDs and Thread IDs
• Each thread uses IDs to decide what data to work on
  – Block ID: 1D or 2D
  – Thread ID: 1D, 2D, or 3D
• Simplifies memory addressing when processing multidimensional data (see the sketch below)
  – Image processing
  – Solving PDEs on volumes
  – …
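
A minimal kernel sketch of this 2D mapping (the kernel name, image size parameters, and the scaling operation are illustrative assumptions, not from the slides):

// Each thread computes one pixel of a Width x Height image.
__global__ void ScalePictureKernel(float* d_Pin, float* d_Pout, int Width, int Height)
{
    // 2D block and thread IDs map directly to 2D pixel coordinates
    int Col = blockIdx.x * blockDim.x + threadIdx.x;   // x (column) index
    int Row = blockIdx.y * blockDim.y + threadIdx.y;   // y (row) index

    // Boundary check: the grid may cover slightly more pixels than the image
    if (Col < Width && Row < Height)
        d_Pout[Row * Width + Col] = 2.0f * d_Pin[Row * Width + Col];
}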

Matrix Multiplication Using Multiple Blocks
• Break up Pd into tiles
• Each block calculates one tile
  – Each thread calculates one element
  – Block size equals tile size
[Figure: Md, Nd, and Pd are WIDTH x WIDTH matrices; block (bx, by) computes the TILE_WIDTH x TILE_WIDTH sub-matrix Pdsub, with each element addressed by thread coordinates (tx, ty)]

A Small Example
• TILE_WIDTH = 2
[Figure: a 4 x 4 Pd matrix (elements P0,0 through P3,3) divided into four 2 x 2 tiles, computed by Block(0,0), Block(1,0), Block(0,1), and Block(1,1)]

A Small Example: Multiplication
[Figure: the 4 x 4 example showing Md, Nd, and Pd element by element; each Pd element is the dot product of one row of Md with one column of Nd]

Revised Matrix Multiplication Kernel using Multiple Blocks

__global__ void MatrixMulKernel(float* Md, float* Nd, float* Pd, int Width)
{
    // Calculate the row index of the Pd element and M
    int Row = blockIdx.y * TILE_WIDTH + threadIdx.y;
    // Calculate the column index of Pd and N
    int Col = blockIdx.x * TILE_WIDTH + threadIdx.x;

    float Pvalue = 0;
    // Each thread computes one element of the block sub-matrix
    for (int k = 0; k < Width; ++k)
        Pvalue += Md[Row*Width + k] * Nd[k*Width + Col];

    Pd[Row*Width + Col] = Pvalue;
}

Revised Step 5: Kernel Invocation (Host-side Code)

// Setup the execution configuration
dim3 dimGrid(Width/TILE_WIDTH, Width/TILE_WIDTH);
dim3 dimBlock(TILE_WIDTH, TILE_WIDTH);

// Launch the device computation threads!
MatrixMulKernel<<<dimGrid, dimBlock>>>(Md, Nd, Pd, Width);
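
This configuration assumes Width is an exact multiple of TILE_WIDTH. A hedged sketch of the more general launch (an assumption, not shown on the slides) rounds the block count up and guards against out-of-range threads:

// Round the grid size up so the last, partially filled tiles are still covered
dim3 dimGrid((Width + TILE_WIDTH - 1) / TILE_WIDTH,
             (Width + TILE_WIDTH - 1) / TILE_WIDTH);
dim3 dimBlock(TILE_WIDTH, TILE_WIDTH);
MatrixMulKernel<<<dimGrid, dimBlock>>>(Md, Nd, Pd, Width);

// The kernel would then need a matching boundary check before writing Pd:
//     if (Row < Width && Col < Width) { ... }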

CUDA Thread Block
• All threads in a block execute the same kernel program (SPMD)
• Programmer declares block:
  – Block size 1 to 512 concurrent threads
  – Block shape 1D, 2D, or 3D
  – Block dimensions in threads
• Threads have thread ID numbers within the block
  – The thread program uses the thread ID to select work and address shared data
• Threads in the same block share data and synchronize while doing their share of the work (__syncthreads(): all threads in a block are held at the calling location until every thread in the block reaches it); see the sketch below
• Threads in different blocks cannot cooperate
  – Each block can execute in any order relative to other blocks!
[Figure: a thread block with thread IDs 0, 1, 2, 3, …, m, all running the same thread program]
Courtesy: John Nickolls, NVIDIA
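
A minimal sketch of block-level cooperation (the kernel name, block size, and reversal operation are illustrative assumptions): the block stages data in shared memory and uses __syncthreads() so no thread reads the tile before every thread has written its element.

#define BLOCK_SIZE 256   // illustrative block size, launched as 1D blocks

__global__ void ReverseInBlockKernel(float* d_in, float* d_out)
{
    __shared__ float tile[BLOCK_SIZE];   // data shared by the whole block

    // assumes the array length is a multiple of BLOCK_SIZE
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = d_in[i];          // each thread writes one element

    __syncthreads();   // wait until every thread in the block has written

    // Now it is safe to read elements written by other threads of the block
    d_out[i] = tile[blockDim.x - 1 - threadIdx.x];
}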

Transparent Scalability
• Hardware is free to assign blocks to any resources/processors at any time
  – The ability to execute the same application program on hardware with a different number of execution resources is referred to as transparent scalability; it reduces the developer's burden and improves the usability of applications
  – A kernel scales across any number of parallel processors: e.g. a mobile device with limited resources may use the slower, power-saving schedule (two blocks at a time), while a desktop may use the faster but more power-hungry schedule (four blocks at a time), yet the code is the same without change
• Each block can execute in any order relative to other blocks
[Figure: the same 8-block kernel grid executed over time, two blocks at a time on one device and four blocks at a time on another]

G80 CUDA mode – A Review
• Processors execute computing threads
• New operating mode/HW interface for computing
[Figure: G80 block diagram showing Host, Input Assembler, Thread Execution Manager, Parallel Data Caches, Texture units, Load/Store units, and Global Memory]

G80 Example: Executing Thread Blocks
• Threads are assigned to Streaming Multiprocessors (SMs) at block granularity
  – Up to 8 blocks per SM, as resources allow
  – On a recent CUDA device, each SM can take up to 1,536 threads
    • Could be 256 (threads/block) * 6 blocks
    • Or 512 (threads/block) * 3 blocks, etc.
• Threads run concurrently
  – SM maintains thread/block ID numbers
  – SM manages/schedules thread execution
[Figure: two SMs (SM 0, SM 1), each with an MT issue unit, SPs, and Shared Memory, holding the threads t0, t1, t2, …, tm of the blocks assigned to them]

Resource Limitation
• Each device limits the number of blocks that can be assigned to each SM, and there may also be a maximum number of threads per SM (e.g. 1,536 threads per SM). With a limited number of SMs and a limited number of blocks per SM, there is a limit on the total number of blocks that can be actively executing on a CUDA device.
• Most grids contain many more blocks than this number. The runtime system maintains a list of the blocks that still need to execute and assigns new blocks to SMs as they finish the blocks previously assigned to them.

G80 Example: Thread Scheduling
• Each block is executed as 32-thread warps after it is assigned to an SM
  – An implementation decision, not part of the CUDA programming model
  – Warps are the scheduling units in an SM
• If 3 blocks are assigned to an SM and each block has 256 threads, how many warps are there in the SM?
  – Each block is divided into 256/32 = 8 warps
  – There are 8 * 3 = 24 warps
[Figure: a Streaming Multiprocessor (Instruction L1, Instruction Fetch/Dispatch, Shared Memory, SPs, SFUs) holding the warps of several blocks, each warp consisting of threads t0, t1, t2, …, t31]

G80 Example: Thread Scheduling (Cont.)
• There are usually fewer SPs than the total number of threads assigned to each SM; each SM has only enough hardware to execute instructions from a small subset of all its threads at any point in time. In earlier GPU designs, each SM could execute only one instruction for a single warp at any given instant; in more recent designs, each SM can execute instructions for a small number of warps at any given point in time.
• When a warp needs to wait for a long-latency global memory access, other warps that do not need the data can be selected for execution. (Filling the latency time with work from other threads is often called latency tolerance or latency hiding.)
• SM implements zero-overhead warp scheduling (no idle time is introduced)
  – Warps whose next instruction has its operands ready for consumption are eligible for execution
  – Eligible warps are selected for execution based on a prioritized scheduling policy
  – All threads in a warp execute the same instruction when selected

G80 Block Granularity Considerations
• For matrix multiplication using multiple blocks, should I use 8x8, 16x16, or 32x32 blocks? (The same reasoning is worked as a small calculation after this slide.)
  – For 8x8, we have 64 threads per block. Since each SM can take up to 768 threads on G80, that would allow 12 blocks. However, each SM can only take up to 8 blocks, so only 512 threads will go into each SM! SM resources are underutilized, which is not good for latency hiding.
  – For 16x16, we have 256 threads per block. Since each SM can take up to 768 threads, it can take up to 3 blocks and achieve full capacity, unless other resource considerations overrule.
  – For 32x32, we have 1,024 threads per block. Not even one block can fit into an SM!
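
A hedged host-side sketch of this calculation, using the G80 limits quoted above (768 threads and 8 blocks per SM); the function and variable names are illustrative:

#include <stdio.h>

// Threads that actually run per SM for a square block of edge*edge threads
int threadsPerSM(int edge, int maxThreadsPerSM, int maxBlocksPerSM)
{
    int threadsPerBlock = edge * edge;
    if (threadsPerBlock > maxThreadsPerSM) return 0;       // block does not fit at all
    int blocks = maxThreadsPerSM / threadsPerBlock;        // limited by the thread count
    if (blocks > maxBlocksPerSM) blocks = maxBlocksPerSM;  // limited by the block count
    return blocks * threadsPerBlock;
}

int main(void)
{
    int edges[] = {8, 16, 32};
    for (int i = 0; i < 3; ++i)
        printf("%2dx%-2d blocks -> %d threads per SM\n",
               edges[i], edges[i], threadsPerSM(edges[i], 768, 8));
    return 0;   // prints 512, 768, and 0 for the three cases above
}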

Query Device Properties
• How do we find out the amount of resources available, such as the number of SMs and the number of threads that can be assigned to each SM?
• cudaGetDeviceCount(&dev_count): how many CUDA GPUs? (There may be multiple GPUs in a single host.)

  cudaDeviceProp dev_prop;
  for (int i = 0; i < dev_count; i++)
      cudaGetDeviceProperties(&dev_prop, i);

1. dev_prop.maxThreadsPerBlock (maximal number of threads in a block; 1,024 or fewer)
2. dev_prop.multiProcessorCount (number of SMs, ranging from 2 to 30)
3. dev_prop.maxThreadsDim[0], [1], and [2] (maximal threads along the x, y, and z dimensions of a block)
4. dev_prop.clockRate (clock frequency of the device; the clock rate combined with the number of SMs gives a good indication of the hardware execution capacity of the device)
5. dev_prop.warpSize (once a block is assigned to an SM, it is further divided into 32-thread units called warps; the size is implementation specific; the warp is the unit of thread scheduling in the SM)
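
A compilable version of this query sequence might look like the sketch below (error checking omitted; the printed fields are the standard cudaDeviceProp members named on this slide):

#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int dev_count;
    cudaGetDeviceCount(&dev_count);              // how many CUDA devices?

    cudaDeviceProp dev_prop;
    for (int i = 0; i < dev_count; i++) {
        cudaGetDeviceProperties(&dev_prop, i);
        printf("Device %d:\n", i);
        printf("  maxThreadsPerBlock : %d\n", dev_prop.maxThreadsPerBlock);
        printf("  multiProcessorCount: %d\n", dev_prop.multiProcessorCount);
        printf("  maxThreadsDim      : %d x %d x %d\n",
               dev_prop.maxThreadsDim[0], dev_prop.maxThreadsDim[1],
               dev_prop.maxThreadsDim[2]);
        printf("  clockRate (kHz)    : %d\n", dev_prop.clockRate);
        printf("  warpSize           : %d\n", dev_prop.warpSize);
    }
    return 0;
}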

More Details of API Features

Application Programming Interface
• The API is an extension to the C programming language
• It consists of:
  – Language extensions
    • To target portions of the code for execution on the device
  – A runtime library split into:
    • A common component providing built-in vector types and a subset of the C runtime library in both host and device code
    • A host component to control and access one or more devices from the host
    • A device component providing device-specific functions

Language Extensions: Built-in Variables
• dim3 gridDim;
  – Dimensions of the grid in blocks (gridDim.z unused)
• dim3 blockDim;
  – Dimensions of the block in threads
• dim3 blockIdx;
  – Block index within the grid
• dim3 threadIdx;
  – Thread index within the block
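
A minimal kernel sketch (illustrative, not from the slides) showing how the four built-ins combine into a global thread index and a grid-wide stride:

__global__ void AddOneKernel(float* d_data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global index of this thread
    int stride = gridDim.x * blockDim.x;             // total number of threads in the grid

    for (; i < n; i += stride)                       // grid-stride loop over the array
        d_data[i] += 1.0f;
}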

Common Runtime Component: Mathematical Functions
• pow, sqrt, cbrt, hypot
• exp, exp2, expm1
• log, log2, log10, log1p
• sin, cos, tan, asin, acos, atan2
• sinh, cosh, tanh, asinh, acosh, atanh
• ceil, floor, trunc, round
• Etc.
  – When executed on the host, a given function uses the C runtime implementation if available
  – These functions are only supported for scalar types, not vector types
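
As a small illustration (the function name is an assumption), the same scalar math call can appear in a function usable from both host and device code:

#include <math.h>

// Callable from host code (C runtime sqrtf) and from device code (CUDA sqrtf)
__host__ __device__ float hypotenuse(float a, float b)
{
    return sqrtf(a * a + b * b);
}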

Device Runtime Component: Mathematical Functions
• Some mathematical functions (e.g. sinf(x)) have a less accurate, but faster device-only version (e.g. __sinf(x))
  – __powf
  – __logf, __log2f, __log10f
  – __expf
  – __sinf, __cosf, __tanf
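
A small illustrative kernel (an assumption, not the slides' code) contrasting the two versions:

__global__ void SineKernel(const float* d_x, float* d_accurate, float* d_fast, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        d_accurate[i] = sinf(d_x[i]);     // accurate device math function
        d_fast[i]     = __sinf(d_x[i]);   // faster, less accurate intrinsic
    }
}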

Host Runtime Component
• Provides functions to deal with:
  – Device management (including multi-device systems)
  – Memory management
  – Error handling
• Initializes the first time a runtime function is called
• A host thread can invoke device code on only one device
  – Multiple host threads required to run on multiple devices

Device Runtime Component: Synchronization Function
• void __syncthreads();
• Synchronizes all threads in a block
• Once all threads have reached this point, execution resumes normally
• Used to avoid RAW / WAR / WAW hazards when accessing shared or global memory
• Allowed in conditional constructs only if the conditional is uniform across the entire thread block
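
A minimal sketch of the uniform-conditional rule (kernel name, block size, and data layout are illustrative assumptions): the branch below depends only on blockIdx, so every thread of a block takes the same path and the barrier inside it is safe.

__global__ void SafeBarrierKernel(float* d_data)   // launched with 256-thread 1D blocks
{
    __shared__ float tile[256];
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    if (blockIdx.x > 0) {            // uniform: same outcome for every thread in the block
        tile[threadIdx.x] = d_data[i];
        __syncthreads();             // every thread in the block reaches this barrier
        d_data[i] = tile[blockDim.x - 1 - threadIdx.x];
    }
    // A condition on threadIdx (e.g. if (threadIdx.x < 16)) would not be uniform,
    // and a __syncthreads() inside it could deadlock the block.
}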