18-742 Fall 2012 Parallel Computer Architecture
Lecture 6: Exploiting Asymmetry
Prof. Onur Mutlu
Carnegie Mellon University
9/19/2012

Reminder: Review Assignments
- Due: Friday, September 21, 11:59 pm
- Smith, "Architecture and applications of the HEP multiprocessor computer system," SPIE 1981.
- Tullsen et al., "Exploiting Choice: Instruction Fetch and Issue on an Implementable Simultaneous Multithreading Processor," ISCA 1996.
- Chappell et al., "Simultaneous Subordinate Microthreading (SSMT)," ISCA 1999.
- Reinhardt and Mukherjee, "Transient Fault Detection via Simultaneous Multithreading," ISCA 2000.

Other Recommended Papers
- Ipek et al., "Core Fusion: Accommodating Software Diversity in Chip Multiprocessors," ISCA 2007.
- Ausavarungnirun et al., "Staged Memory Scheduling: Achieving High Performance and Scalability in Heterogeneous Systems," ISCA 2012.

Last Lecture
- An Early History of Multi-Core
- Homogeneous Multi-Core Evolution
- From Symmetry to Asymmetry

Today
- More on Asymmetric Multi-Core
- And, Asymmetry in General

Asymmetric Multi-Core

Review: Can We Get the Best of Both Worlds?
- Tile-Large
  + High performance on single-threaded, serial code sections (2 units)
  - Low throughput on parallel program portions (8 units)
- Tile-Small
  + High throughput on the parallel part (16 units)
  - Low performance on the serial part, single thread (1 unit); reduced single-thread performance compared to existing single-thread processors
- Idea: Have both large and small cores on the same chip -> performance asymmetry

Review: Asymmetric Chip Multiprocessor (ACMP)
[Figure: three equal-area chip organizations: "Tile-Large" (a few large cores), "Tile-Small" (many small cores), and ACMP (one large core plus many small cores)]
- Provide one large core and many small cores
  + Accelerate the serial part using the large core (2 units)
  + Execute the parallel part on all cores for high throughput (14 units)
  (a worked throughput comparison follows below)
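
To make the unit counts above concrete, here is an Amdahl-style comparison (illustrative numbers only, taking one small core = 1 unit as the baseline and a program with serial fraction $\alpha$):

\[
\text{Speedup}(\alpha) \;=\; \left(\frac{\alpha}{S_{\text{serial}}} + \frac{1-\alpha}{S_{\text{parallel}}}\right)^{-1}
\]

For $\alpha = 0.1$: Tile-Large gives $(0.1/2 + 0.9/8)^{-1} \approx 6.2$, Tile-Small gives $(0.1/1 + 0.9/16)^{-1} = 6.4$, and ACMP gives $(0.1/2 + 0.9/14)^{-1} \approx 8.7$. ACMP wins because it serves both the serial phase (2 units) and the parallel phase (14 units) reasonably well.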

Review: EPI Throttling
- Goal: Minimize execution time of parallel programs while keeping power within a fixed budget
- For best scalar and throughput performance, vary energy expended per instruction (EPI) based on available parallelism
  - P = EPI x IPS
  - P = fixed power budget; EPI = energy per instruction; IPS = aggregate instructions retired per second
- Idea: For a fixed power budget (see the worked example below)
  - Run sequential phases on a high-EPI processor
  - Run parallel phases on multiple low-EPI processors
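
A quick illustration of the P = EPI x IPS identity (hypothetical numbers, not from the Annavaram et al. study): at a fixed power budget the sustainable instruction rate is

\[
\mathrm{IPS} \;=\; \frac{P}{\mathrm{EPI}}
\]

With P = 40 W, a single high-EPI core at 4 nJ/instruction sustains about 10 GIPS, the right operating point when only one thread is runnable; eight low-EPI cores at 1 nJ/instruction sustain about 40 GIPS in aggregate, the right operating point when the phase has eight-way parallelism. EPI throttling shifts the fixed budget between these operating points as the available parallelism changes.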

Review: EPI Throttling via DVFS
- DVFS: dynamic voltage and frequency scaling
- In phases of low thread parallelism
  - Run a few cores at high supply voltage and high frequency
- In phases of high thread parallelism
  - Run many cores at low supply voltage and low frequency
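
Why DVFS changes EPI (standard first-order CMOS approximations, not figures from the lecture): dynamic power scales roughly as

\[
P_{\text{dyn}} \;\approx\; C\, V_{dd}^{2}\, f, \qquad f \propto V_{dd} \;\;\Rightarrow\;\; \mathrm{EPI} \propto V_{dd}^{2}
\]

so lowering supply voltage and frequency together reduces energy per instruction roughly quadratically, which is what lets many slow cores fit in the same power budget as a few fast ones.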

EPI Throttling (Annavaram et al., ISCA'05)
- Static AMP
  - Duty cycles set once prior to the program run
  - Parallel phases run on 3P/1.25 GHz; sequential phases run on 1P/2 GHz
  - Affinity guarantees sequential code runs on 1P and parallel code on 3P
  - Used for benchmarks that rapidly transition between sequential and parallel phases
- Dynamic AMP
  - Duty cycle changes during the program run
  - Parallel phases run on all or a subset of the four processors
  - Sequential phases execute on 1P/2 GHz
  - Used for benchmarks with long sequential and parallel phases

EPI Throttling (Annavaram et al., ISCA'05)
- Evaluation on base SMP: 4-way 2 GHz Xeon, 2 MB L3, 4 GB memory
- Hand-modified programs
  - OMP threads set to 3 for static AMP
  - Calls to set affinity in each thread for static AMP
  - Calls to change duty cycle and to set affinity in dynamic AMP

EPI Throttling (Annavaram et al., ISCA'05)
- Frequency-boosting AMP improves performance compared to the 4-way SMP for many applications

EPI Throttling
- Why does frequency-boosting (FB) AMP not always improve performance?
- Loss of throughput in static AMP (only 3 processors in the parallel portion)
  - Is this really the best way of using FB-AMP?
- Rapid transitions between serial and parallel phases
  - Data/thread migration and throttling overhead
- Boosting frequency does not help memory-bound phases

Review So Far
- Symmetric multicore
  - Evolution of Sun's and IBM's multicore systems and design choices
  - Niagara, Niagara 2, ROCK
  - IBM POWERx
- Asymmetric multicore
  - Motivation
  - Functional vs. performance asymmetry
  - Static vs. dynamic asymmetry
  - EPI throttling

Design Tradeoffs in ACMP (I)
- Hardware design effort vs. programmer effort
  - ACMP requires more design effort
  + Performance becomes less dependent on the length of the serial part
  + Can reduce programmer effort: serial portions are not as bad for performance with ACMP
- Migration overhead vs. accelerated serial bottleneck
  + Performance gain from faster execution of the serial portion
  - Performance loss when architectural state is migrated/switched in when the master changes
    - Can be alleviated with multithreading and hidden by a long serial portion
  - Serial portion incurs cache misses when it needs data generated by the parallel portion
  - Parallel portion incurs cache misses when it needs data generated by the serial portion

Design Tradeoffs in ACMP (II)
- Fewer threads vs. accelerated serial bottleneck
  + Performance gain from the accelerated serial portion
  - Performance loss due to unavailability of L threads in the parallel portion
    - This need not be the case: the large core can implement multithreading to improve parallel throughput
    - As the number of cores (threads) on chip increases, the fractional loss in parallel performance decreases

Uses of Asymmetry
- So far: improvement in serial performance (sequential bottleneck)
- What else can we do with asymmetry?
  - Energy reduction?
  - Energy/performance tradeoff?
  - Improvement in the parallel portion?

Use of Asymmetry for Energy Efficiency
- Kumar et al., "Single-ISA Heterogeneous Multi-Core Architectures: The Potential for Processor Power Reduction," MICRO 2003.
- Idea:
  - Implement multiple types of cores on chip
  - Monitor characteristics of the running thread (e.g., periodically sample energy/performance on each core)
  - Dynamically pick the core that provides the best energy/performance tradeoff for a given phase
  - The "best core" depends on the optimization metric
  (a sampling sketch follows below)
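
A minimal sketch of that sampling-and-switching policy (hypothetical structure and hook functions; the paper's mechanism is hardware/OS driven, and energy-delay product is shown here as just one possible metric):

```c
#include <stddef.h>

/* Per-core-type statistics gathered during a short sampling interval. */
typedef struct {
    double energy_joules;    /* energy consumed while sampled on this core type */
    double runtime_seconds;  /* time taken for the sampled work on this core type */
} sample_t;

/* Hypothetical platform hooks: migrate the thread and read energy/perf counters. */
extern sample_t run_and_measure_on(size_t core_type);
extern void migrate_thread_to(size_t core_type);

/* Pick the core type minimizing energy-delay product (EDP) for this phase.
 * Other metrics (pure energy, ED^2P) would only change the score function. */
static size_t pick_best_core(const sample_t *s, size_t num_core_types)
{
    size_t best = 0;
    double best_score = s[0].energy_joules * s[0].runtime_seconds;
    for (size_t i = 1; i < num_core_types; i++) {
        double score = s[i].energy_joules * s[i].runtime_seconds;
        if (score < best_score) { best_score = score; best = i; }
    }
    return best;
}

/* Periodic loop: briefly sample the thread on each core type, then keep it
 * on the best-fit core until the next sampling interval. */
void schedule_phase(void)
{
    enum { NUM_CORE_TYPES = 2 };            /* e.g., one big and one small core type */
    sample_t samples[NUM_CORE_TYPES];
    for (size_t i = 0; i < NUM_CORE_TYPES; i++)
        samples[i] = run_and_measure_on(i);
    migrate_thread_to(pick_best_core(samples, NUM_CORE_TYPES));
}
```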

Use of Asymmetry for Energy Efficiency
- Advantages
  + More flexibility in the energy-performance tradeoff
  + Can steer each computation to the core that is best suited for it (in terms of energy)
- Disadvantages/issues
  - Incorrect predictions/sampling onto the wrong core -> reduced performance or increased energy
  - Overhead of core switching
  - Disadvantages of an asymmetric CMP (e.g., designing multiple cores)
  - Need phase monitoring and matching algorithms
    - What characteristics should be monitored?
    - Once the characteristics are known, how do you pick the core?

Use of ACMP to Improve Parallel Portion Performance
- Mutual exclusion:
  - Threads are not allowed to update shared data concurrently
  - Accesses to shared data are encapsulated inside critical sections
  - Only one thread can execute a critical section at a given time
- Idea: Ship critical sections to a large core
- Suleman et al., "Accelerating Critical Section Execution with Asymmetric Multi-Core Architectures," ASPLOS 2009, IEEE Micro Top Picks 2010.

Use of ACMP to Improve Parallel Portion Performance
- Suleman et al., "Accelerating Critical Section Execution with Asymmetric Multi-Core Architectures," ASPLOS 2009, IEEE Micro Top Picks 2010.
- Joao et al., "Bottleneck Identification and Scheduling in Multithreaded Applications," ASPLOS 2012.

Asymmetry Everywhere

The Setting
- Hardware resources are shared among many threads/apps in a many-core system
  - Cores, caches, interconnects, memory, disks, power, lifetime, ...
- Management of these resources is a very difficult task
  - When optimizing parallel/multiprogrammed workloads
  - Threads interact unpredictably/unfairly in shared resources
- Power/energy consumption is arguably the most valuable shared resource
  - Main limiter to efficiency and performance

Shield the Programmer from Shared Resources
- Writing even sequential software is hard enough
  - Optimizing code for a complex shared-resource parallel system will be a nightmare for most programmers
- The programmer should not worry about (hardware) resource management
  - What should be executed where, with what resources
- Future computer architectures should be designed to
  - Minimize programmer effort to optimize (parallel) programs
  - Maximize the runtime system's effectiveness in automatic shared-resource management

Shared Resource Management: Goals
- Future many-core systems should manage power and performance automatically across threads/applications
- Minimize energy/power consumption
- While satisfying performance/SLA requirements
  - Provide predictability and quality of service
- Minimize programmer effort
  - In creating optimized parallel programs
- Asymmetry and configurability in system resources are essential to achieve these goals

Asymmetry Enables Customization
[Figure: a symmetric chip of identical cores vs. an asymmetric chip with cores of different sizes and types (C1-C5)]
- Symmetric: one size fits all
  - Energy and performance suboptimal for different phase behaviors
- Asymmetric: enables tradeoffs and customization
  - Processing requirements vary across applications and phases
  - Execute code on best-fit resources (minimal energy, adequate performance)

Thought Experiment: Asymmetry Everywhere
- Design each hardware resource with asymmetric, (re)configurable, partitionable components
  - Different power/performance/reliability characteristics
  - To fit different computation/access/communication patterns

Thought Experiment: Asymmetry Everywhere
- Design the runtime system (HW & SW) to automatically choose the best-fit components for each phase
  - Satisfy performance/SLA requirements with minimal energy
  - Dynamically stitch together the "best-fit" chip for each phase

Thought Experiment: Asymmetry Everywhere
- Morph software components to match asymmetric HW components
  - Multiple versions for different resource characteristics

Thought Experiment: Asymmetry Everywhere
- Design each hardware resource with asymmetric, (re)configurable, partitionable components
- Design the runtime system (HW & SW) to automatically choose the best-fit components for each phase
- Morph software components to match asymmetric HW components

Many Research and Design Questions
- How to design asymmetric components?
  - Fixed, partitionable, reconfigurable components?
  - What types of asymmetry? Access patterns, technologies?
- What monitoring to perform cooperatively in HW/SW?
  - To characterize a phase and match it to the best-fit components
  - Automatically discover phase/task requirements
- How to design the feedback/control loop between components and runtime system software?
- How to design the runtime to automatically manage resources?
  - Track task behavior, pick "best-fit" components for the entire workload

Summary of the Thought Experiment
- Need to minimize energy while satisfying performance requirements
  - While also minimizing programmer effort
- Asymmetry is key to energy/performance/reliability tradeoffs
- Design systems with many asymmetric/partitionable components
  - Many types of cores, memories, interconnects, ...
  - Partitionable/configurable components, customized accelerators on chip
- Provide all-automatic resource management
  - Impose structure: HW and SW cooperatively map phases to components
  - Dynamically stitch together the system that best fits the running tasks
  - The programmer does not need to worry about complex resource sharing

Outline
- How Do We Get There: Examples
- Accelerated Critical Sections (ACS)
- Bottleneck Identification and Scheduling (BIS)
- Staged Execution and Data Marshaling
- Asymmetry in Memory
  - Thread Cluster Memory Scheduling
  - Heterogeneous DRAM+NVM Main Memory

Exploiting Asymmetry: Simple Examples
- Execute critical/serial sections on high-power, high-performance cores/resources [Suleman+ ASPLOS'09, ISCA'10, Top Picks'10,'11; Joao+ ASPLOS'12]
- Programmer can write less optimized, but more likely correct, programs

Exploiting Asymmetry: Simple Examples
- Execute streaming "memory phases" on streaming-optimized cores and memory hierarchies
- More efficient and higher performance than a general-purpose hierarchy

Exploiting Asymmetry: Simple Examples
- Partition memory controller and on-chip network bandwidth asymmetrically among threads [Kim+ HPCA 2010, MICRO 2010, Top Picks 2011] [Nychis+ HotNets 2010] [Das+ MICRO 2009, ISCA 2010, Top Picks 2011]
- Higher performance and energy efficiency than symmetric/free-for-all sharing

Exploiting Asymmetry: Simple Examples
- Have multiple different memory scheduling policies; apply them to different sets of threads based on thread behavior [Kim+ MICRO 2010, Top Picks 2011] [Ausavarungnirun+ ISCA 2012]
- Higher performance and fairness than a homogeneous policy

Exploiting Asymmetry: Simple Examples
- Build main memory with different technologies with different characteristics (energy, latency, wear, bandwidth) [Meza+ IEEE CAL'12]
- Map pages/applications to the best-fit memory resource

Outline
- How Do We Get There: Examples
- Accelerated Critical Sections (ACS)
- Bottleneck Identification and Scheduling (BIS)
- Staged Execution and Data Marshaling
- Asymmetry in Memory
  - Thread Cluster Memory Scheduling
  - Heterogeneous DRAM+NVM Main Memory

Serialized Code Sections in Parallel Applications
- Multithreaded applications: programs are split into threads
- Threads execute concurrently on multiple cores
- Many programs cannot be parallelized completely
- Serialized code sections:
  - Reduce performance
  - Limit scalability
  - Waste energy

Causes of Serialized Code Sections
- Sequential portions (Amdahl's "serial part")
- Critical sections
- Barriers
- Limiter stages in pipelined programs

Bottlenecks in Multithreaded Applications
Definition: any code segment for which threads contend (i.e., wait)
Examples:
- Amdahl's serial portions
  - Only one thread exists; it is on the critical path
- Critical sections
  - Ensure mutual exclusion; likely to be on the critical path if contended
- Barriers
  - Ensure all threads reach a point before continuing; the latest-arriving thread is on the critical path
- Pipeline stages
  - Different stages of a loop iteration may execute on different threads; the slowest stage makes the other stages wait and is on the critical path

Critical Sections
- Threads are not allowed to update shared data concurrently
  - For correctness (mutual exclusion principle)
- Accesses to shared data are encapsulated inside critical sections
- Only one thread can execute a critical section at a given time
  (a minimal example follows below)
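
A minimal pthreads illustration of these three points (generic example, not from the lecture): the mutex guarantees that at most one thread at a time executes the update between lock and unlock.

```c
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_counter = 0;             /* shared data */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        /* ---- critical section: at most one thread executes this at a time ---- */
        pthread_mutex_lock(&lock);
        shared_counter++;                   /* all updates to shared data happen here */
        pthread_mutex_unlock(&lock);
        /* ----------------------------------------------------------------------- */
        /* ... parallel (non-critical) work would go here ... */
    }
    return NULL;
}

int main(void)
{
    pthread_t t[NUM_THREADS];
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(t[i], NULL);
    printf("%ld\n", shared_counter);        /* always NUM_THREADS * 1000000 */
    return 0;
}
```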

Example from MySQL
[Figure: each operation first opens the database tables by accessing the Open Tables Cache (a critical section), then performs the operations themselves (parallel part)]

Contention for Critical Sections
- 12 iterations, 33% of instructions inside the critical section
[Figure: execution timelines for P = 1, 2, 3, 4 threads, with segments marked Critical Section, Parallel, and Idle; as P grows, threads increasingly sit idle waiting for the critical section]
(a back-of-the-envelope model follows below)
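
A back-of-the-envelope reading of that timeline (illustrative model, not from the slide): if each of the N iterations spends $t_{cs}$ inside the critical section and $t_{par}$ outside it, the critical-section executions serialize, so with P threads

\[
T(P) \;\ge\; \max\!\left(\frac{N\,(t_{cs}+t_{par})}{P},\; N\, t_{cs}\right)
\]

With N = 12 and $t_{cs}$ equal to one third of an iteration (the 33% above), the second term equals 4 iterations' worth of work, which the P = 3 case already reaches; going to P = 4 therefore mostly adds idle time, exactly as the figure shows.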

Contention for Critical Sections
- 12 iterations, 33% of instructions inside the critical section
[Figure: the same timelines with the critical section accelerated by 2x]
- Accelerating critical sections increases performance and scalability

Impact of Critical Sections on Scalability
- Contention for critical sections leads to serial execution (serialization) of threads in the parallel program portion
- Contention for critical sections increases with the number of threads and limits scalability
[Figure: speedup vs. chip area (0 to 32 cores) for MySQL (oltp-1)]

A Case for Asymmetry
- Execution time of sequential kernels, critical sections, and limiter stages must be short
- It is difficult for the programmer to shorten these serialized sections
  - Insufficient domain-specific knowledge
  - Variation in hardware platforms
  - Limited resources
- Goal: a mechanism to shorten serial bottlenecks without requiring programmer effort
- Idea: accelerate serialized code sections by shipping them to powerful cores in an asymmetric multi-core (ACMP)

ACMP
[Figure: one large core and many small cores on the same chip]
- Provide one large core and many small cores
- Execute the parallel part on the small cores for high throughput
- Accelerate serialized sections using the large core
  - Baseline: Amdahl's serial part accelerated [Morad+ CAL 2006, Suleman+ UT-TR 2007]

Conventional ACMP
[Figure: cores P1-P4 connected by an on-chip interconnect; P2 runs EnterCS(), PriorityQ.insert(...), LeaveCS()]
1. P2 encounters a critical section
2. P2 sends a request for the lock
3. P2 acquires the lock
4. P2 executes the critical section
5. P2 releases the lock

Accelerated Critical Sections (ACS)
[Figure: ACMP with a Critical Section Request Buffer (CSRB) attached to the large core]
- Accelerate Amdahl's serial part and critical sections using the large core
  - Suleman et al., "Accelerating Critical Section Execution with Asymmetric Multi-Core Architectures," ASPLOS 2009, IEEE Micro Top Picks 2010.

Accelerated Critical Sections (ACS)
[Figure: large core P1 with the Critical Section Request Buffer (CSRB); small cores P2-P4 on the on-chip interconnect; P2 runs EnterCS(), PriorityQ.insert(...), LeaveCS()]
1. P2 encounters a critical section (CSCALL)
2. P2 sends a CSCALL request to the CSRB
3. P1 executes the critical section
4. P1 sends a CSDONE signal

ACS Architecture Overview
- ISA extensions
  - CSCALL LOCK_ADDR, TARGET_PC
  - CSRET LOCK_ADDR
- Compiler/library inserts CSCALL/CSRET
- On a CSCALL, the small core:
  - Sends a CSCALL request to the large core
    - Arguments: lock address, target PC, stack pointer, core ID
  - Stalls and waits for CSDONE
- Large core:
  - Critical Section Request Buffer (CSRB)
  - Executes the critical section and sends CSDONE to the requesting core

Accelerated Critical Sections (ACS)
[Figure: ACS execution example. Original code: A = compute(); LOCK X; result = CS(A); UNLOCK X; print result]
- Small core: A = compute(); PUSH A; CSCALL X, TargetPC (sends a CSCALL request with X, TPC, STACK_PTR, CORE_ID to the Critical Section Request Buffer and waits)
- Large core, at TPC: Acquire X; POP A; result = CS(A); PUSH result; Release X; CSRET X (sends the CSDONE response)
- Small core, on CSDONE: POP result; print result

False Serialization
- ACS can serialize independent critical sections
- Selective Acceleration of Critical Sections (SEL)
  - Saturating counters to track false serialization
[Figure: the CSRB keeps a saturating counter per lock (e.g., A and B); counters are updated as CSCALL(A) and CSCALL(B) requests arrive from the small cores]

ACS Performance Tradeoffs
- Pluses
  + Faster critical section execution
  + Shared locks stay in one place: better lock locality
  + Shared data stays in the large core's (large) caches: better shared data locality, less ping-ponging
- Minuses
  - Large core dedicated to critical sections: reduced parallel throughput
  - CSCALL and CSDONE control transfer overhead
  - Thread-private data needs to be transferred to the large core: worse private data locality

ACS Performance Tradeoffs
- Fewer parallel threads vs. accelerated critical sections
  - Accelerating critical sections offsets the loss in throughput
  - As the number of cores (threads) on chip increases:
    - The fractional loss in parallel performance decreases
    - Increased contention for critical sections makes acceleration more beneficial
- Overhead of CSCALL/CSDONE vs. better lock locality
  - ACS avoids "ping-ponging" of locks among caches by keeping them at the large core
- More cache misses for private data vs. fewer misses for shared data

Cache Misses for Private Data
- Example: PriorityHeap.insert(NewSubProblems) in the puzzle benchmark
  - Private data: NewSubProblems
  - Shared data: the priority heap

ACS Performance Tradeoffs
- Fewer parallel threads vs. accelerated critical sections
  - Accelerating critical sections offsets the loss in throughput
  - As the number of cores (threads) on chip increases:
    - The fractional loss in parallel performance decreases
    - Increased contention for critical sections makes acceleration more beneficial
- Overhead of CSCALL/CSDONE vs. better lock locality
  - ACS avoids "ping-ponging" of locks among caches by keeping them at the large core
- More cache misses for private data vs. fewer misses for shared data
  - Cache misses decrease if the shared data footprint > the private data footprint (we will get back to this)

ACS Comparison Points
[Figure: three equal-area chips: SCMP (all small cores), ACMP, and ACS]
- SCMP: all small cores; conventional locking
- ACMP: conventional locking; the large core executes Amdahl's serial part
- ACS: the large core executes Amdahl's serial part and critical sections

Accelerated Critical Sections: Methodology
- Workloads: 12 critical-section-intensive applications
  - Data mining kernels, sorting, database, web, networking
- Multi-core x86 simulator
  - 1 large and 28 small cores
  - Aggressive stream prefetcher employed at each core
- Details:
  - Large core: 2 GHz, out-of-order, 128-entry ROB, 4-wide, 12-stage
  - Small core: 2 GHz, in-order, 2-wide, 5-stage
  - Private 32 KB L1, private 256 KB L2, 8 MB shared L3
  - On-chip interconnect: bi-directional ring, 5-cycle hop latency

ACS Performance
- Chip area = 32 small cores; SCMP = 32 small cores; ACMP = 1 large and 28 small cores
- Equal-area comparison; number of threads = best-performing thread count for each configuration
[Figure: per-benchmark speedups, grouped into coarse-grain-lock and fine-grain-lock workloads; a few bars exceed the axis at 269, 180, and 185]

Equal-Area Comparisons
- Number of threads = number of cores; y-axis: speedup over one small core; x-axis: chip area in small cores (0 to 32)
[Figure: 12 panels comparing SCMP and ACS scalability: (a) ep, (b) is, (c) pagemine, (d) puzzle, (e) qsort, (f) tsp, (g) sqlite, (h) iplookup, (i) oltp-1, (j) oltp-2, (k) specjbb, (l) webcache]

ACS Summary
- Critical sections reduce performance and limit scalability
- Accelerate critical sections by executing them on a powerful core
- ACS reduces average execution time by:
  - 34% compared to an equal-area SCMP
  - 23% compared to an equal-area ACMP
- ACS improves the scalability of 7 of the 12 workloads
- Generalizing the idea: accelerate all bottlenecks ("critical paths") by executing them on a powerful core

Outline
- How Do We Get There: Examples
- Accelerated Critical Sections (ACS)
- Bottleneck Identification and Scheduling (BIS)
- Staged Execution and Data Marshaling
- Asymmetry in Memory
  - Thread Cluster Memory Scheduling
  - Heterogeneous DRAM+NVM Main Memory

BIS Summary
- Problem: performance and scalability of multithreaded applications are limited by serializing bottlenecks
  - Different types: critical sections, barriers, slow pipeline stages
  - The importance (criticality) of a bottleneck can change over time
- Our goal: dynamically identify the most important bottlenecks and accelerate them
  - How to identify the most critical bottlenecks
  - How to efficiently accelerate them
- Solution: Bottleneck Identification and Scheduling (BIS)
  - Software: annotate bottlenecks (BottleneckCall, BottleneckReturn) and implement waiting for bottlenecks with a special instruction (BottleneckWait)
  - Hardware: identify the bottlenecks that cause the most thread waiting and accelerate them on the large cores of an asymmetric multi-core system
- Improves multithreaded application performance and scalability, outperforms previous work, and the benefit grows with more cores

Bottlenecks in Multithreaded Applications
Definition: any code segment for which threads contend (i.e., wait)
Examples:
- Amdahl's serial portions
  - Only one thread exists; it is on the critical path
- Critical sections
  - Ensure mutual exclusion; likely to be on the critical path if contended
- Barriers
  - Ensure all threads reach a point before continuing; the latest-arriving thread is on the critical path
- Pipeline stages
  - Different stages of a loop iteration may execute on different threads; the slowest stage makes the other stages wait and is on the critical path

Observation: Limiting Bottlenecks Change Over Time
- A = full linked list; B = empty linked list; 32 threads
- Each thread repeats until A is empty:
    Lock A; Traverse list A; Remove X from A; Unlock A
    Compute on X
    Lock B; Traverse list B; Insert X into B; Unlock B
- Early in the run, Lock A is the limiter (list A is long); as items move over, Lock B becomes the limiter (list B grows long)
  (a pthreads sketch follows below)
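
A sketch of that kernel in pthreads (list type and helpers are hypothetical; the point is only that the two critical sections trade places as the limiter while A shrinks and B grows):

```c
#include <pthread.h>
#include <stddef.h>

/* Hypothetical singly linked list type and helpers. */
typedef struct node { struct node *next; int value; } node_t;
typedef struct { node_t *head; pthread_mutex_t lock; } list_t;

extern node_t *list_remove_head(list_t *l);               /* hypothetical: traverse + unlink */
extern void    list_insert_sorted(list_t *l, node_t *n);  /* hypothetical: traverse + link */
extern void    compute_on(node_t *n);                     /* hypothetical: parallel work */

list_t A, B;   /* A starts full, B starts empty */

void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        /* Critical section on A: long while A is long, short once A drains. */
        pthread_mutex_lock(&A.lock);
        node_t *x = list_remove_head(&A);
        pthread_mutex_unlock(&A.lock);
        if (x == NULL)
            break;                         /* until A is empty */

        compute_on(x);                     /* parallel portion, no lock held */

        /* Critical section on B: grows as items accumulate in B. */
        pthread_mutex_lock(&B.lock);
        list_insert_sorted(&B, x);
        pthread_mutex_unlock(&B.lock);
    }
    return NULL;
}
```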

Limiting Bottlenecks Do Change on Real Applications
[Figure: MySQL running Sysbench queries with 16 threads; the most limiting critical section changes over time]

Previous Work on Bottleneck Acceleration
- Asymmetric CMP (ACMP) proposals [Annavaram+, ISCA'05] [Morad+, Comp. Arch. Letters'06] [Suleman+, Tech. Report'07]
  - Accelerate only the Amdahl's-law bottleneck
- Accelerated Critical Sections (ACS) [Suleman+, ASPLOS'09]
  - Accelerates only critical sections
  - Does not take into account the importance of critical sections
- Feedback-Directed Pipelining (FDP) [Suleman+, PACT'10 and PhD thesis'11]
  - Accelerates only the stages with the lowest throughput
  - Slow to adapt to phase changes (software-based library)
- No previous work can accelerate all three types of bottlenecks or quickly adapt to fine-grain changes in the importance of bottlenecks
- Our goal: a general mechanism to identify performance-limiting bottlenecks of any type and accelerate them on an ACMP

Bottleneck Identification and Scheduling (BIS)
- Key insight:
  - Thread waiting reduces parallelism and is likely to reduce performance
  - Code causing the most thread waiting is likely on the critical path
- Key idea:
  - Dynamically identify the bottlenecks that cause the most thread waiting
  - Accelerate them (using powerful cores in an ACMP)

Bottleneck Identification and Scheduling (BIS)
- Compiler/Library/Programmer:
  1. Annotate bottleneck code
  2. Implement waiting for bottlenecks
  -> Binary containing BIS instructions
- Hardware:
  1. Measure thread waiting cycles (TWC) for each bottleneck
  2. Accelerate the bottleneck(s) with the highest TWC

Critical Sections: Code Modifications
- Original code:
    while cannot acquire lock
      wait loop for watch_addr
    acquire lock
    ...
    release lock
- Modified code:
    BottleneckCall bid, targetPC
    ...
    targetPC:  while cannot acquire lock
                 BottleneckWait bid, watch_addr
               acquire lock
               ...
               release lock
               BottleneckReturn bid
- BottleneckCall/BottleneckReturn are used to enable acceleration; BottleneckWait is used to keep track of waiting cycles
  (a C sketch with hypothetical intrinsics follows below)
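
The same transformation expressed in C, with hypothetical intrinsics standing in for the three BIS instructions (the instructions are real in BIS; these C wrappers, the bid value, and the GCC lock builtins used here are illustrative only):

```c
/* Hypothetical C wrappers around the BIS instructions from the slide. */
extern void BottleneckCall(int bid, void (*targetPC)(volatile int *), volatile int *lock);
extern void BottleneckWait(int bid, volatile int *watch_addr);
extern void BottleneckReturn(int bid);

#define BID_EXAMPLE 1                          /* hypothetical bottleneck id */

/* Code starting at targetPC: the (possibly accelerated) bottleneck. */
static void critical_section(volatile int *lock)
{
    while (__sync_lock_test_and_set(lock, 1))  /* while cannot acquire lock      */
        BottleneckWait(BID_EXAMPLE, lock);     /* hardware counts waiting cycles */

    /* ... update shared data ... */

    __sync_lock_release(lock);                 /* release lock */
    BottleneckReturn(BID_EXAMPLE);             /* end of the bottleneck region */
}

static void worker_iteration(volatile int *lock)
{
    /* Enables acceleration: the hardware may ship critical_section to a large core. */
    BottleneckCall(BID_EXAMPLE, critical_section, lock);
}
```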

Barriers: Code Modifications
- Modified code:
    BottleneckCall bid, targetPC
    ...
    targetPC:  code running for the barrier
               ...
               enter barrier
               while not all threads in barrier
                 BottleneckWait bid, watch_addr
               exit barrier
               BottleneckReturn bid

Pipeline Stages: Code Modifications
- Modified code:
    BottleneckCall bid, targetPC
    ...
    targetPC:  while not done
                 while empty queue
                   BottleneckWait prev_bid
                 dequeue work
                 do the work ...
                 while full queue
                   BottleneckWait next_bid
                 enqueue next work
               BottleneckReturn bid

Bottleneck Identification and Scheduling (BIS)
- Compiler/Library/Programmer:
  1. Annotate bottleneck code
  2. Implement waiting for bottlenecks
  -> Binary containing BIS instructions
- Hardware:
  1. Measure thread waiting cycles (TWC) for each bottleneck
  2. Accelerate the bottleneck(s) with the highest TWC

We did not cover the following slides in lecture. These are for your preparation for the next lecture.

BIS: Hardware Overview
- Performance-limiting bottleneck identification and acceleration are independent tasks
- Acceleration can be accomplished in multiple ways
  - Increasing core frequency/voltage
  - Prioritization in shared resources [Ebrahimi+, MICRO'11]
  - Migration to faster cores in an asymmetric CMP
[Figure: one large core plus many small cores]

Bottleneck Identification and Scheduling (BIS)
- Compiler/Library/Programmer:
  1. Annotate bottleneck code
  2. Implement waiting for bottlenecks
  -> Binary containing BIS instructions
- Hardware:
  1. Measure thread waiting cycles (TWC) for each bottleneck
  2. Accelerate the bottleneck(s) with the highest TWC

Determining Thread Waiting Cycles for Each Bottleneck
[Figure: Small Cores 1 and 2 execute BottleneckWait x4500; the Bottleneck Table (BT) near Large Core 0 keeps an entry for bid=x4500 with the current number of waiters and, each cycle, adds that waiter count to the bottleneck's accumulated thread waiting cycles (twc)]
(a software model of this bookkeeping follows below)
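
A software model of that bookkeeping (a sketch under the stated assumptions; the real Bottleneck Table is a small associative hardware structure, and the minimum-TWC replacement policy mentioned later on the Hardware Cost slide is omitted here):

```c
#include <stdint.h>

#define BT_ENTRIES 32        /* the Bottleneck Table is a 32-entry associative structure */

typedef struct {
    uint64_t bid;            /* bottleneck id (e.g., x4500) */
    int32_t  waiters;        /* threads currently executing BottleneckWait on this bid */
    uint64_t twc;            /* accumulated thread waiting cycles */
    int      valid;
} bt_entry_t;

static bt_entry_t bt[BT_ENTRIES];

/* Called when a core starts (+1) or stops (-1) waiting on a bottleneck. */
void bt_update_waiters(uint64_t bid, int delta)
{
    for (int i = 0; i < BT_ENTRIES; i++)
        if (bt[i].valid && bt[i].bid == bid) {
            bt[i].waiters += delta;
            return;
        }
    for (int i = 0; i < BT_ENTRIES; i++)       /* not found: allocate a free entry */
        if (!bt[i].valid) {
            bt[i] = (bt_entry_t){ .bid = bid, .waiters = delta > 0 ? delta : 0,
                                  .twc = 0, .valid = 1 };
            return;
        }
}

/* Called once per cycle: every current waiter contributes one thread waiting cycle. */
void bt_tick(void)
{
    for (int i = 0; i < BT_ENTRIES; i++)
        if (bt[i].valid)
            bt[i].twc += (uint64_t)bt[i].waiters;
}
```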

Bottleneck Identification and Scheduling (BIS)
- Compiler/Library/Programmer:
  1. Annotate bottleneck code
  2. Implement waiting for bottlenecks
  -> Binary containing BIS instructions
- Hardware:
  1. Measure thread waiting cycles (TWC) for each bottleneck
  2. Accelerate the bottleneck(s) with the highest TWC

Bottleneck Acceleration
[Figure: Small Core 1 executes BottleneckCall for bid=x4600 and bid=x4700. The Bottleneck Table compares each bottleneck's twc against a threshold: bid=x4600 (twc=100, below the threshold) executes locally on the small core, while bid=x4700 (twc=10000, above the threshold) is recorded in the Acceleration Index Tables (AIT) as assigned to Large Core 0; its BottleneckCall (pc, sp, core id) is then sent to Large Core 0's Scheduling Buffer (SB) and executes remotely, ending with BottleneckReturn x4700]

BIS Mechanisms
- Basic mechanisms for BIS:
  - Determining thread waiting cycles
  - Accelerating bottlenecks
- Mechanisms to improve the performance and generality of BIS:
  - Dealing with false serialization
  - Preemptive acceleration
  - Support for multiple large cores

False Serialization and Starvation
- Observation: bottlenecks are picked from the Scheduling Buffer in thread-waiting-cycles order
- Problem: an independent bottleneck that is ready to execute has to wait for another bottleneck with higher thread waiting cycles -> false serialization
  - Starvation: extreme false serialization
- Solution: the large core detects when a bottleneck is ready to execute in the Scheduling Buffer but cannot, and sends that bottleneck back to the small core

Preemptive Acceleration
- Observation: a bottleneck executing on a small core can become the bottleneck with the highest thread waiting cycles
- Problem: this bottleneck should really be accelerated (i.e., executed on the large core)
- Solution: the Bottleneck Table detects the situation and sends a preemption signal to the small core; the small core then
  - saves register state on the stack and ships the bottleneck to the large core
- This is the main acceleration mechanism for barriers and pipeline stages

Support for Multiple Large Cores
- Objective: accelerate independent bottlenecks
- Each large core has its own Scheduling Buffer (shared by all of its SMT threads)
- The Bottleneck Table assigns each bottleneck to a fixed large core context to
  - preserve cache locality
  - avoid busy waiting
- Preemptive acceleration is extended to send multiple instances of a bottleneck to different large core contexts

Hardware Cost
- Main structures:
  - Bottleneck Table (BT): global 32-entry associative cache with minimum-Thread-Waiting-Cycles replacement
  - Scheduling Buffers (SB): one table per large core, with as many entries as small cores
  - Acceleration Index Tables (AIT): one 32-entry table per small core
- Off the critical path
- Total storage cost for 56 small cores and 2 large cores < 19 KB

BIS Performance Trade-offs
- Bottleneck identification:
  - Small cost: the BottleneckWait instruction and the Bottleneck Table
- Bottleneck acceleration on an ACMP (execution migration):
  - Faster bottleneck execution vs. fewer parallel threads
    - Acceleration offsets the loss of parallel throughput at large core counts
  - Better shared data locality vs. worse private data locality
    - Shared data stays on the large core (good)
    - Private data migrates to the large core (bad, but the latency can be hidden with Data Marshaling [Suleman+, ISCA'10])
  - Benefit of acceleration vs. migration latency
    - Migration latency is usually hidden by waiting (good)
    - Unless the bottleneck is not contended (bad, but then it is likely not on the critical path)

Methodology
- Workloads: 8 critical-section-intensive, 2 barrier-intensive, and 2 pipeline-parallel applications
  - Data mining kernels, scientific, database, web, networking, specjbb
- Cycle-level multi-core x86 simulator
  - 8 to 64 small-core-equivalents of area, 0 to 3 large cores, SMT
  - 1 large core is area-equivalent to 4 small cores
- Details:
  - Large core: 4 GHz, out-of-order, 128-entry ROB, 4-wide, 12-stage
  - Small core: 4 GHz, in-order, 2-wide, 5-stage
  - Private 32 KB L1, private 256 KB L2, shared 8 MB L3
  - On-chip interconnect: bi-directional ring, 2-cycle hop latency

BIS Comparison Points (Area-Equivalent)
- SCMP (symmetric CMP)
  - All small cores
  - Results in the paper
- ACMP (asymmetric CMP)
  - Accelerates only Amdahl's serial portions
  - Our baseline
- ACS (Accelerated Critical Sections)
  - Accelerates only critical sections and Amdahl's serial portions
  - Applicable to multithreaded workloads (iplookup, mysql, specjbb, sqlite, tsp, webcache, mg, ft)
- FDP (Feedback-Directed Pipelining)
  - Accelerates only the slowest pipeline stages
  - Applicable to pipeline-parallel workloads (rank, pagemine)

BIS Performance Improvement
- Optimal number of threads; 28 small cores, 1 large core
- BIS outperforms ACS/FDP by 15% and ACMP by 32%
- BIS improves scalability on 4 of the benchmarks
[Figure: per-benchmark speedups; the largest gains come from benchmarks whose limiting bottlenecks change over time or that contain barriers, which ACS cannot accelerate]

Why Does BIS Work?
[Figure: fraction of execution time spent on predicted-important bottlenecks, split by whether they are actually critical]
- Coverage: fraction of the program critical path that is actually identified as bottlenecks
  - 39% (ACS/FDP) to 59% (BIS)
- Accuracy: identified bottlenecks on the critical path over total identified bottlenecks
  - 72% (ACS/FDP) to 73.5% (BIS)

BIS Scaling Results
[Figure: BIS speedup grows with core count (data labels: 2.4%, 6.2%, 15%, 19%)]
- Performance increases with:
  1) More small cores
     - Contention due to bottlenecks increases
     - Loss of parallel throughput due to the large core is reduced
  2) More large cores
     - Can accelerate independent bottlenecks
     - Without reducing parallel throughput (given enough cores)

BIS Summary
- Serializing bottlenecks of different types limit the performance of multithreaded applications, and their importance changes over time
- BIS is a hardware/software cooperative solution:
  - Dynamically identifies the bottlenecks that cause the most thread waiting and accelerates them on the large cores of an ACMP
  - Applicable to critical sections, barriers, and pipeline stages
- BIS improves application performance and scalability:
  - 15% speedup over ACS/FDP
  - Can accelerate multiple independent critical bottlenecks
  - Performance benefits increase with more cores
- Provides comprehensive fine-grained bottleneck acceleration for future ACMPs with little or no programmer effort