Computer Architecture Lecture 20: Heterogeneous Multi-Core Systems II

Computer Architecture Lecture 20: Heterogeneous Multi-Core Systems II
Prof. Onur Mutlu, ETH Zürich, Fall 2018
29 November 2018

Computer Architecture Research

If you want to do research in any of the covered topics, or any topic in Computer Architecture, HW/SW Interaction & related areas:
- We have many projects and a great environment to perform top-notch research, bachelor's/master's/semester projects
- So, talk with me (email, in person, WhatsApp, etc.)

Many research topics and projects:
- Memory (DRAM, NVM, Flash, software/hardware issues)
- Processing in Memory
- Hardware Security
- New Computing Paradigms
- Machine Learning for System Design
- Genomics, System Design for Bioinformatics and Health
- ...

Today

- Heterogeneous Multi-Core Systems
- Bottleneck Acceleration

Some Readings

- Suleman et al., "Accelerating Critical Section Execution with Asymmetric Multi-Core Architectures," ASPLOS 2009.
- Joao et al., "Bottleneck Identification and Scheduling in Multithreaded Applications," ASPLOS 2012.
- Joao et al., "Utility-Based Acceleration of Multithreaded Applications on Asymmetric CMPs," ISCA 2013.
- Grochowski et al., "Best of Both Latency and Throughput," ICCD 2004.

Heterogeneity (Asymmetry)

Recap: Why Asymmetry in Design? (II)

- Problem: Symmetric design is one-size-fits-all
  - It tries to fit a single-size design to all workloads and metrics
  - It is very difficult to come up with a single design
    - that satisfies all workloads, even for a single metric
    - that satisfies all design metrics at the same time
- This holds true for different system components, or resources
  - Cores, caches, memory, controllers, interconnect, disks, servers, ...
  - Algorithms, policies, ...

Recap: Asymmetry Enables Customization

[Figure: a symmetric chip of identical cores vs. an asymmetric chip with cores of several different sizes]

- Symmetric: One size fits all
  - Energy and performance suboptimal for different "workload" behaviors
- Asymmetric: Enables customization and adaptation
  - Processing requirements vary across workloads (applications and phases)
  - Execute code on best-fit resources (minimal energy, adequate performance)

Recap: Three Key Problems in Future Systems

- Memory system
  - Applications are increasingly data intensive
  - Data storage and movement limits performance & efficiency
- Efficiency (performance and energy) scalability
  - Enables scalable systems and new applications
  - Enables better user experience and new usage models
- Predictability and robustness
  - Resource sharing and unreliable hardware cause QoS issues
  - Predictable performance and QoS are first-class constraints

Asymmetric designs can help solve these problems.

Recap: Commercial Asymmetric Design Examples

- Integrated CPU-GPU systems (e.g., Intel Sandy Bridge)
- CPU + hardware accelerators (e.g., your cell phone)
- ARM big.LITTLE processor
- IBM Cell processor

Recap: Increasing Asymmetry in Modern Systems

[Figure: CPUs, a GPU, and hardware accelerators (HWAs) sharing a cache, DRAM and hybrid memory controllers, and DRAM and hybrid memories]

- Heterogeneous agents: CPUs, GPUs, and HWAs
- Heterogeneous memories: Fast vs. slow DRAM
- Heterogeneous interconnects: Control, data, synchronization

Multi-Core Design: An Asymmetric Perspective

Many Cores on Chip

- Simpler and lower power than a single large core
- Large-scale parallelism on chip

Examples: Intel Core i7 (8 cores), IBM Cell BE (8+1 cores), IBM POWER7 (8 cores), AMD Barcelona (4 cores), Sun Niagara II (8 cores), Nvidia Fermi (448 "cores"), Intel SCC (48 cores, networked), Tilera TILE Gx (100 cores, networked)

With Many Cores on Chip

- What we want:
  - N times the performance with N times the cores when we parallelize an application on N cores
- What we get:
  - Amdahl's Law (serial bottleneck)
  - Bottlenecks in the parallel portion

Caveats of Parallelism

- Amdahl's Law
  - f: Parallelizable fraction of a program
  - N: Number of processors

  Speedup = 1 / ( (1 - f) + f/N )

  - Amdahl, "Validity of the single processor approach to achieving large scale computing capabilities," AFIPS 1967.
- Maximum speedup limited by serial portion: serial bottleneck
- Parallel portion is usually not perfectly parallel
  - Synchronization overhead (e.g., updates to shared data)
  - Load imbalance overhead (imperfect parallelization)
  - Resource sharing overhead (contention among N processors)
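
A quick way to get a feel for these numbers is to evaluate the formula directly. The snippet below is a minimal sketch (Python, not part of the lecture; the function name and example values are my own) showing how quickly Amdahl's Law saturates:

```python
def amdahl_speedup(f, n):
    """Amdahl's Law: the serial fraction (1 - f) runs on one core,
    the parallel fraction f is split perfectly across n cores."""
    return 1.0 / ((1.0 - f) + f / n)

# Even a highly parallel program (f = 0.95) is capped at 1 / (1 - f) = 20x
for n in (4, 16, 64, 1_000_000):
    print(f"f=0.95, N={n}: speedup = {amdahl_speedup(0.95, n):.2f}")
```

The synchronization, load imbalance, and resource sharing overheads listed above only push real speedups further below this already-pessimistic bound.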

The Problem: Serialized Code Sections

- Many parallel programs cannot be parallelized completely
- Causes of serialized code sections
  - Sequential portions (Amdahl's "serial part")
  - Critical sections
  - Barriers
  - Limiter stages in pipelined programs
- Serialized code sections
  - Reduce performance
  - Limit scalability
  - Waste energy

Example from MySQL

[Figure: the "open database tables" step contains a critical section that accesses the Open Tables Cache; "perform the operations ..." is the parallel part. The speedup vs. chip area (cores) plot shows today's scaling flattening out far below ideal parallel scaling, while an asymmetric design keeps scaling.]

Demands in Different Code Sections

- What we want:
  - In a serialized code section: one powerful "large" core
  - In a parallel code section: many wimpy "small" cores
- These two conflict with each other:
  - If you have a single powerful core, you cannot have many cores
  - A small core is much more energy- and area-efficient than a large core

"Large" vs. "Small" Cores

Large core:
- Out-of-order
- Wide fetch, e.g., 4-wide
- Deeper pipeline
- Aggressive branch predictor (e.g., hybrid)
- Multiple functional units
- Trace cache
- Memory dependence speculation

Small core:
- In-order
- Narrow fetch, e.g., 2-wide
- Shallow pipeline
- Simple branch predictor (e.g., gshare)
- Few functional units

Large cores are power inefficient: e.g., 2x performance for 4x area (power)

Large vs. Small Cores

- Grochowski et al., "Best of Both Latency and Throughput," ICCD 2004.

Meet Large: IBM POWER4

- Tendler et al., "POWER4 system microarchitecture," IBM J R&D, 2002.
- A symmetric multi-core chip...
- Two powerful cores

IBM POWER4

- 2 cores, out-of-order execution
- 100-entry instruction window in each core
- 8-wide instruction fetch, issue, execute
- Large, local+global hybrid branch predictor
- 1.5 MB, 8-way L2 cache
- Aggressive stream-based prefetching

IBM POWER5

- Kalla et al., "IBM Power5 Chip: A Dual-Core Multithreaded Processor," IEEE Micro 2004.

Meet Small: Sun Niagara (UltraSPARC T1)

- Kongetira et al., "Niagara: A 32-Way Multithreaded SPARC Processor," IEEE Micro 2005.

Niagara Core

- 4-way fine-grain multithreaded, 6-stage, dual-issue in-order
- Round-robin thread selection (unless cache miss)
- Shared FP unit among cores

Remember the Demands

- What we want:
  - In a serialized code section: one powerful "large" core
  - In a parallel code section: many wimpy "small" cores
- These two conflict with each other:
  - If you have a single powerful core, you cannot have many cores
  - A small core is much more energy- and area-efficient than a large core
- Can we get the best of both worlds?

Performance vs. Parallelism

Assumptions:
1. Small core takes an area budget of 1 and has performance of 1
2. Large core takes an area budget of 4 and has performance of 2

Tile-Large Approach

[Figure: a "Tile-Large" chip tiled with a few large cores]

- Tile a few large cores
  - IBM POWER5, AMD Barcelona, Intel Core 2 Quad, Intel Nehalem
+ High performance on single thread, serial code sections (2 units)
- Low throughput on parallel program portions (8 units)

Tile-Small Approach

[Figure: a "Tile-Small" chip tiled with many small cores]

- Tile many small cores
  - Sun Niagara, Intel Larrabee, Tilera TILE (tile ultra-small)
+ High throughput on the parallel part (16 units)
- Low performance on the serial part, single thread (1 unit)

Can we get the best of both worlds?

- Tile-Large
  + High performance on single thread, serial code sections (2 units)
  - Low throughput on parallel program portions (8 units)
- Tile-Small
  + High throughput on the parallel part (16 units)
  - Low performance on the serial part, single thread (1 unit); reduced single-thread performance compared to existing single-thread processors
- Idea: Have both large and small cores on the same chip → performance asymmetry

Asymmetric Multi-Core

Asymmetric Chip Multiprocessor (ACMP)

[Figure: "Tile-Large" (four large cores), "Tile-Small" (sixteen small cores), and ACMP (one large core plus twelve small cores) chip layouts]

- Provide one large core and many small cores
+ Accelerate serial part using the large core (2 units)
+ Execute parallel part on small cores and large core for high throughput (12+2 units)

Accelerating Serial Bottlenecks

[Figure: the ACMP approach — the single thread of the serial portion runs on the large core while the small cores run the parallel portion]

Performance vs. Parallelism

Assumptions:
1. Small core takes an area budget of 1 and has performance of 1
2. Large core takes an area budget of 4 and has performance of 2

ACMP Performance vs. Parallelism

Area budget = 16 small cores

                       Tile-Large    Tile-Small    ACMP
  Large cores              4             0            1
  Small cores              0            16           12
  Serial performance       2             1            2
  Parallel throughput   2 x 4 = 8    1 x 16 = 16   1 x 2 + 1 x 12 = 14

Amdahl's Law Modified

- Simplified Amdahl's Law for an asymmetric multiprocessor
- Assumptions:
  - Serial portion executed on the large core
  - Parallel portion executed on both small cores and large cores
  - f: Parallelizable fraction of a program
  - L: Number of large processors
  - S: Number of small processors
  - X: Speedup of a large processor over a small one

  Speedup = 1 / ( (1 - f)/X + f/(S + X*L) )
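
To connect this formula with the area-budget table two slides back, here is a small illustrative sketch (Python; the function and the chosen f values are my own, not from the lecture) that plugs the three 16-small-core-equivalent configurations into the modified Amdahl's Law:

```python
def asym_speedup(f, large, small, x):
    """Modified Amdahl's Law for an ACMP: the serial part runs on one
    large core (x = its speedup over a small core); the parallel part
    runs on all small cores plus the large core(s)."""
    return 1.0 / ((1.0 - f) / x + f / (small + x * large))

# Area budget of 16 small-core equivalents; a large core = 4 small cores.
configs = {
    "Tile-Large": dict(large=4, small=0,  x=2.0),
    "Tile-Small": dict(large=0, small=16, x=1.0),  # serial part runs on a small core
    "ACMP":       dict(large=1, small=12, x=2.0),
}
for f in (0.9, 0.99):
    for name, c in configs.items():
        print(f"f={f}, {name}: speedup = {asym_speedup(f, **c):.1f}")
```

ACMP matches Tile-Large's serial performance while retaining most of Tile-Small's parallel throughput, so it stays at or near the best of the two across the whole range of f, which is the point of the asymmetric design.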

Caveats of Parallelism, Revisited

- Amdahl's Law
  - f: Parallelizable fraction of a program
  - N: Number of processors

  Speedup = 1 / ( (1 - f) + f/N )

  - Amdahl, "Validity of the single processor approach to achieving large scale computing capabilities," AFIPS 1967.
- Maximum speedup limited by serial portion: serial bottleneck
- Parallel portion is usually not perfectly parallel
  - Synchronization overhead (e.g., updates to shared data)
  - Load imbalance overhead (imperfect parallelization)
  - Resource sharing overhead (contention among N processors)

Accelerating Parallel Bottlenecks

- Serialized or imbalanced execution in the parallel portion can also benefit from a large core
- Examples:
  - Critical sections that are contended
  - Parallel stages that take longer than others to execute
- Idea: Dynamically identify these code portions that cause serialization and execute them on a large core

Accelerated Critical Sections

M. Aater Suleman, Onur Mutlu, Moinuddin K. Qureshi, and Yale N. Patt,
"Accelerating Critical Section Execution with Asymmetric Multi-Core Architectures,"
Proceedings of the 14th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2009.

Contention for Critical Sections

[Figure: execution timelines for P = 1, 2, 3, 4 threads running 12 iterations with 33% of the instructions inside the critical section; as P grows, more and more time is spent idle waiting for the contended critical section]

Contention for Critical Sections

[Figure: the same timelines with the critical section accelerated by 2x; idle time shrinks at every thread count]

- Accelerating critical sections increases performance and scalability

Impact of Critical Sections on Scalability

- Contention for critical sections leads to serial execution (serialization) of threads in the parallel program portion
- Contention for critical sections increases with the number of threads and limits scalability

[Figure: MySQL (oltp-1) speedup vs. chip area (cores); today's scaling flattens out while the asymmetric design keeps scaling]

A Case for Asymmetry

- Execution time of sequential kernels, critical sections, and limiter stages must be short
- It is difficult for the programmer to shorten these serialized sections
  - Insufficient domain-specific knowledge
  - Variation in hardware platforms
  - Limited resources
  - Performance-debugging tradeoff
- Goal: A mechanism to shorten serial bottlenecks without requiring programmer effort
- Idea: Accelerate serialized code sections by shipping them to powerful cores in an asymmetric multi-core (ACMP)

An Example: Accelerated Critical Sections

- Idea: HW/SW ships critical sections to a large, powerful core in an asymmetric multi-core architecture
- Benefits:
  - Reduces serialization due to contended locks
  - Reduces the performance impact of hard-to-parallelize sections
  - Programmer does not need to (heavily) optimize parallel code → fewer bugs, improved productivity
- Suleman et al., "Accelerating Critical Section Execution with Asymmetric Multi-Core Architectures," ASPLOS 2009, IEEE Micro Top Picks 2010.
- Suleman et al., "Data Marshaling for Multi-Core Architectures," ISCA 2010, IEEE Micro Top Picks 2011.

Accelerated Critical Sections

[Figure: small cores P2, P3, P4 and large core P1 connected by the on-chip interconnect; a small core runs EnterCS(); PriorityQ.insert(...); LeaveCS()]

1. P2 encounters a critical section (CSCALL)
2. P2 sends a CSCALL request to the Critical Section Request Buffer (CSRB)
3. P1 executes the critical section
4. P1 sends a CSDONE signal

Accelerated Critical Sections (ACS)

[Figure: code transformation and message flow between a small core and the large core]

Small core:
  A = compute()
  PUSH A
  CSCALL X, TargetPC    (replaces: LOCK X; result = CS(A); UNLOCK X)
  ...
  POP result
  print result

The CSCALL request sends X, TPC, STACK_PTR, and CORE_ID, and waits in the Critical Section Request Buffer (CSRB).

Large core (at TPC):
  Acquire X
  POP A
  result = CS(A)
  PUSH result
  Release X
  CSRET X

The CSDONE response returns control to the small core.

- Suleman et al., "Accelerating Critical Section Execution with Asymmetric Multi-Core Architectures," ASPLOS 2009.

False Serialization

- ACS can serialize independent critical sections
- Selective Acceleration of Critical Sections (SEL)
  - Saturating counters to track false serialization

[Figure: the Critical Section Request Buffer (CSRB) receives CSCALL(A), CSCALL(A), CSCALL(B) from the small cores; per-critical-section saturating counters (e.g., A: 2→3→4, B: 5→4) track whether accelerating each critical section on the large core is still worthwhile]
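
The slide only names the SEL mechanism, so here is a minimal sketch (Python; my own loose approximation of the ASPLOS 2009 proposal, with illustrative counter widths and thresholds) of how per-critical-section saturating counters could turn acceleration off for critical sections that suffer false serialization:

```python
class SEL:
    """Selective acceleration: one saturating counter per critical section (lock address)."""
    def __init__(self, max_val=15, threshold=8):
        self.counters = {}            # lock address -> saturating counter
        self.max_val = max_val
        self.threshold = threshold

    def on_cscall(self, lock, other_lock_waiting_in_csrb):
        c = self.counters.get(lock, self.max_val)
        if other_lock_waiting_in_csrb:
            # an independent critical section is already queued at the large
            # core: evidence of false serialization, so decay the counter
            c = max(c - 2, 0)
        else:
            c = min(c + 1, self.max_val)
        self.counters[lock] = c
        return c >= self.threshold    # True: ship to the large core; False: run locally
```

When a lock's counter falls below the threshold, its critical sections simply execute on the requesting small core, so two unrelated locks no longer serialize each other at the large core.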

ACS Performance Tradeoffs

Pluses:
+ Faster critical section execution
+ Shared locks stay in one place: better lock locality
+ Shared data stays in the large core's (large) caches: better shared data locality, less ping-ponging

Minuses:
- Large core dedicated to critical sections: reduced parallel throughput
- CSCALL and CSDONE control transfer overhead
- Thread-private data needs to be transferred to the large core: worse private data locality

ACS Performance Tradeoffs

- Fewer parallel threads vs. accelerated critical sections
  - Accelerating critical sections offsets the loss in throughput
  - As the number of cores (threads) on chip increases:
    - Fractional loss in parallel performance decreases
    - Increased contention for critical sections makes acceleration more beneficial
- Overhead of CSCALL/CSDONE vs. better lock locality
  - ACS avoids "ping-ponging" of locks among caches by keeping them at the large core
- More cache misses for private data vs. fewer misses for shared data

Cache Misses for Private Data

[Figure: Puzzle benchmark, PriorityHeap.insert(NewSubProblems)]

- Private data: NewSubProblems
- Shared data: the priority heap

ACS Performance Tradeoffs

- Fewer parallel threads vs. accelerated critical sections
  - Accelerating critical sections offsets the loss in throughput
  - As the number of cores (threads) on chip increases:
    - Fractional loss in parallel performance decreases
    - Increased contention for critical sections makes acceleration more beneficial
- Overhead of CSCALL/CSDONE vs. better lock locality
  - ACS avoids "ping-ponging" of locks among caches by keeping them at the large core
- More cache misses for private data vs. fewer misses for shared data
  - Cache misses reduce if shared data > private data
  - This problem can be solved: see Suleman et al., "Data Marshaling for Multi-Core Architectures," ISCA 2010.

ACS Comparison Points

[Figure: three equal-area chips]

- SCMP: all small cores; conventional locking
- ACMP: one large core plus small cores; conventional locking; the large core executes Amdahl's serial part
- ACS: one large core plus small cores; the large core executes Amdahl's serial part and critical sections

Accelerated Critical Sections: Methodology

- Workloads: 12 critical-section-intensive applications
  - Data mining kernels, sorting, database, web, networking
- Multi-core x86 simulator
  - 1 large and 28 small cores
  - Aggressive stream prefetcher employed at each core
- Details:
  - Large core: 2 GHz, out-of-order, 128-entry ROB, 4-wide, 12-stage
  - Small core: 2 GHz, in-order, 2-wide, 5-stage
  - Private 32 KB L1, private 256 KB L2, 8 MB shared L3
  - On-chip interconnect: bi-directional ring, 5-cycle hop latency

ACS Performance

- Chip area = 32 small cores; equal-area comparison
- Number of threads = best number of threads for each configuration
- SCMP = 32 small cores; ACMP = 1 large and 28 small cores

[Figure: ACS speedup for coarse-grain-lock and fine-grain-lock versions of the workloads; a few bars are clipped at 269, 180, and 185]

Equal-Area Comparisons

- Number of threads = number of cores
- Y-axis: speedup over a single small core; X-axis: chip area (in small cores), from 0 to 32

[Figure: per-benchmark scaling curves for SCMP and ACS: (a) ep, (b) is, (c) pagemine, (d) puzzle, (e) qsort, (f) tsp, (g) sqlite, (h) iplookup, (i) oltp-1, (j) oltp-2, (k) specjbb, (l) webcache]

ACS Summary

- Critical sections reduce performance and limit scalability
- Accelerate critical sections by executing them on a powerful core
- ACS reduces average execution time by:
  - 34% compared to an equal-area SCMP
  - 23% compared to an equal-area ACMP
- ACS improves scalability of 7 of the 12 workloads
- Generalizing the idea: Accelerate all bottlenecks ("critical paths") by executing them on a powerful core

More on Accelerated Critical Sections

M. Aater Suleman, Onur Mutlu, Moinuddin K. Qureshi, and Yale N. Patt,
"Accelerating Critical Section Execution with Asymmetric Multi-Core Architectures,"
Proceedings of the 14th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), pages 253-264, Washington, DC, March 2009. Slides (ppt)

Bottleneck Identification and Scheduling

Jose A. Joao, M. Aater Suleman, Onur Mutlu, and Yale N. Patt,
"Bottleneck Identification and Scheduling in Multithreaded Applications,"
Proceedings of the 17th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), London, UK, March 2012.

Bottlenecks in Multithreaded Applications

Definition: any code segment for which threads contend (i.e., wait)

Examples:
- Amdahl's serial portions
  - Only one thread exists → it is on the critical path
- Critical sections
  - Ensure mutual exclusion → likely to be on the critical path if contended
- Barriers
  - Ensure all threads reach a point before continuing → the latest-arriving thread is on the critical path
- Pipeline stages
  - Different stages of a loop iteration may execute on different threads; the slowest stage makes the other stages wait and is on the critical path

Observation: Limiting Bottlenecks Change Over Time

A = full linked list; B = empty linked list; 32 threads

  repeat
    Lock A
      Traverse list A
      Remove X from A
    Unlock A
    Compute on X
    Lock B
      Traverse list B
      Insert X into B
    Unlock B
  until A is empty

Early on, list A is long, so Lock A is the limiter; as A empties and B grows, Lock B becomes the limiter.

Limiting Bottlenecks Do Change on Real Applications

[Figure: MySQL running Sysbench queries with 16 threads; the bottleneck that limits performance changes over the course of execution]

Bottleneck Identification and Scheduling (BIS)

- Key insight:
  - Thread waiting reduces parallelism and is likely to reduce performance
  - Code causing the most thread waiting is likely on the critical path
- Key idea:
  - Dynamically identify bottlenecks that cause the most thread waiting
  - Accelerate them (using powerful cores in an ACMP)

Bottleneck Identification and Scheduling (BIS)

Compiler/Library/Programmer:
1. Annotate bottleneck code
2. Implement waiting for bottlenecks
→ Binary containing BIS instructions

Hardware:
1. Measure thread waiting cycles (TWC) for each bottleneck
2. Accelerate bottleneck(s) with the highest TWC

Critical Sections: Code Modifications

Original code:

  while cannot acquire lock
    wait loop for watch_addr
  acquire lock
  ...
  release lock

Modified code with BIS instructions:

  BottleneckCall bid, targetPC
  ...
  targetPC:
    while cannot acquire lock
      BottleneckWait bid, watch_addr    (used to keep track of waiting cycles)
    acquire lock
    ...
    release lock
    BottleneckReturn bid                (BottleneckCall/Return are used to enable acceleration)

Barriers: Code Modifications

  BottleneckCall bid, targetPC
  enter barrier
  while not all threads in barrier
    BottleneckWait bid, watch_addr
  exit barrier

  targetPC:
    code running for the barrier
    ...
    BottleneckReturn bid

Pipeline Stages: Code Modifications

  BottleneckCall bid, targetPC
  ...
  targetPC:
    while not done
      while empty queue
        BottleneckWait prev_bid
      dequeue work
      do the work ...
      while full queue
        BottleneckWait next_bid
      enqueue next work
    BottleneckReturn bid

Bottleneck Identification and Scheduling (BIS)

Compiler/Library/Programmer:
1. Annotate bottleneck code
2. Implement waiting for bottlenecks
→ Binary containing BIS instructions

Hardware:
1. Measure thread waiting cycles (TWC) for each bottleneck
2. Accelerate bottleneck(s) with the highest TWC

BIS: Hardware Overview

- Performance-limiting bottleneck identification and acceleration are independent tasks
- Acceleration can be accomplished in multiple ways
  - Increasing core frequency/voltage
  - Prioritization in shared resources [Ebrahimi+, MICRO'11]
  - Migration to faster cores in an asymmetric CMP

Bottleneck Identification and Scheduling (BIS)

Compiler/Library/Programmer:
1. Annotate bottleneck code
2. Implement waiting for bottlenecks
→ Binary containing BIS instructions

Hardware:
1. Measure thread waiting cycles (TWC) for each bottleneck
2. Accelerate bottleneck(s) with the highest TWC

Determining Thread Waiting Cycles for Each Bottleneck

[Figure: small cores 1 and 2 each execute BottleneckWait x4500; the Bottleneck Table (BT) next to large core 0 holds an entry for bid = x4500, incrementing its waiter count as threads start waiting and accumulating thread waiting cycles (twc) in proportion to the number of current waiters each cycle]
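
As a rough software model of what the Bottleneck Table in this figure does (my own simplification, not the hardware design from the paper), each BottleneckWait raises the waiter count for its bottleneck id, and thread waiting cycles accumulate every cycle in proportion to how many threads are waiting:

```python
class BottleneckTable:
    """Tracks, per bottleneck id (bid), how many threads are currently
    waiting and the accumulated thread waiting cycles (TWC)."""
    def __init__(self):
        self.entries = {}  # bid -> {"waiters": int, "twc": int}

    def wait_begin(self, bid):            # a thread executes BottleneckWait bid
        e = self.entries.setdefault(bid, {"waiters": 0, "twc": 0})
        e["waiters"] += 1

    def wait_end(self, bid):              # the thread stops waiting on bid
        self.entries[bid]["waiters"] -= 1

    def tick(self):                       # called once per cycle
        for e in self.entries.values():
            e["twc"] += e["waiters"]      # each waiting thread adds one waiting cycle

    def most_limiting(self):
        return max(self.entries, key=lambda b: self.entries[b]["twc"], default=None)
```

The bottleneck with the highest accumulated TWC is the one BIS chooses to accelerate.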

Bottleneck Identification and Scheduling (BIS)

Compiler/Library/Programmer:
1. Annotate bottleneck code
2. Implement waiting for bottlenecks
→ Binary containing BIS instructions

Hardware:
1. Measure thread waiting cycles (TWC) for each bottleneck
2. Accelerate bottleneck(s) with the highest TWC

Bottleneck Acceleration

[Figure: a small core executes BottleneckCall bid = x4700, targetPC; its Acceleration Index Table (AIT) maps bid x4700 to large core 0, so the call (pc, sp, core id) is sent to large core 0's Scheduling Buffer (SB) and the bottleneck executes remotely; a bottleneck whose thread waiting cycles are below the threshold (e.g., bid x4600, twc = 100) executes locally instead, while one above the threshold (bid x4700, twc = 10000) is accelerated; the Bottleneck Table (BT) updates the AITs, and BottleneckReturn x4700 hands control back]
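
A compact way to state the decision in this figure (again a sketch of my own, with an arbitrary threshold and hypothetical small_core/large_core objects) is:

```python
ACCEL_THRESHOLD = 1024  # thread-waiting-cycle threshold (illustrative value)

def on_bottleneck_call(bid, bt, ait, small_core, large_core):
    """Decide where a BottleneckCall executes.
    bt: BottleneckTable from the previous sketch; ait: this small core's
    Acceleration Index Table, mapping bid -> accelerating large core."""
    twc = bt.entries.get(bid, {"twc": 0})["twc"]
    if twc > ACCEL_THRESHOLD and bid in ait:
        # enqueue in the large core's Scheduling Buffer for remote execution
        large_core.scheduling_buffer.append(bid)
    else:
        # not worth migrating: execute the bottleneck locally
        small_core.run(bid)
```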

BIS Mechanisms

- Basic mechanisms for BIS:
  - Determining thread waiting cycles
  - Accelerating bottlenecks
- Mechanisms to improve performance and generality of BIS:
  - Dealing with false serialization
  - Preemptive acceleration
  - Support for multiple large cores

Hardware Cost

- Main structures:
  - Bottleneck Table (BT): global 32-entry associative cache, minimum-thread-waiting-cycle replacement
  - Scheduling Buffers (SB): one table per large core, with as many entries as small cores
  - Acceleration Index Tables (AIT): one 32-entry table per small core
- Off the critical path
- Total storage cost for 56 small cores and 2 large cores: < 19 KB

BIS Performance Trade-offs

- Faster bottleneck execution vs. fewer parallel threads
  - Acceleration offsets the loss of parallel throughput with large core counts
- Better shared data locality vs. worse private data locality
  - Shared data stays on the large core (good)
  - Private data migrates to the large core (bad, but the latency is hidden with Data Marshaling [Suleman+, ISCA'10])
- Benefit of acceleration vs. migration latency
  - Migration latency is usually hidden by waiting (good)
  - Unless the bottleneck is not contended (bad, but then it is likely not on the critical path)

Evaluation Methodology

- Workloads: 8 critical-section-intensive, 2 barrier-intensive, and 2 pipeline-parallel applications
  - Data mining kernels, scientific, database, web, networking, specjbb
- Cycle-level multi-core x86 simulator
  - 8 to 64 small-core-equivalent area, 0 to 3 large cores, SMT
  - 1 large core is area-equivalent to 4 small cores
- Details:
  - Large core: 4 GHz, out-of-order, 128-entry ROB, 4-wide, 12-stage
  - Small core: 4 GHz, in-order, 2-wide, 5-stage
  - Private 32 KB L1, private 256 KB L2, shared 8 MB L3
  - On-chip interconnect: bi-directional ring, 2-cycle hop latency

BIS Comparison Points (Area-Equivalent)

- SCMP (Symmetric CMP)
  - All small cores
- ACMP (Asymmetric CMP)
  - Accelerates only Amdahl's serial portions
  - Our baseline
- ACS (Accelerated Critical Sections)
  - Accelerates only critical sections and Amdahl's serial portions
  - Applicable to multithreaded workloads (iplookup, mysql, specjbb, sqlite, tsp, webcache, mg, ft)
- FDP (Feedback-Directed Pipelining)
  - Accelerates only the slowest pipeline stages
  - Applicable to pipeline-parallel workloads (rank, pagemine)

BIS Performance Improvement

Optimal number of threads, 28 small cores, 1 large core

- BIS outperforms ACS/FDP by 15% and ACMP by 32%
- BIS improves scalability on 4 of the benchmarks

[Figure notes: some benchmarks contain barriers, which ACS cannot accelerate; in others the limiting bottlenecks change over time]

Why Does BIS Work?

[Figure: fraction of execution time spent on predicted-important bottlenecks vs. the actually critical ones]

- Coverage: fraction of the program critical path that is actually identified as bottlenecks
  - 39% (ACS/FDP) to 59% (BIS)
- Accuracy: identified bottlenecks on the critical path over total identified bottlenecks
  - 72% (ACS/FDP) to 73.5% (BIS)

BIS Scaling Results

[Figure: BIS performance improvements of 15%, 2.4%, 6.2%, and 19% across the scaling configurations]

Performance increases with:
1) More small cores
   - Contention due to bottlenecks increases
   - Loss of parallel throughput due to the large core reduces
2) More large cores
   - Can accelerate independent bottlenecks
   - Without reducing parallel throughput (enough cores)

BIS Summary

- Serializing bottlenecks of different types limit the performance of multithreaded applications, and their importance changes over time
- BIS is a hardware/software cooperative solution:
  - Dynamically identifies bottlenecks that cause the most thread waiting and accelerates them on the large cores of an ACMP
  - Applicable to critical sections, barriers, pipeline stages
- BIS improves application performance and scalability:
  - Performance benefits increase with more cores
- Provides comprehensive fine-grained bottleneck acceleration with no programmer effort

More on Bottleneck Identification & Scheduling

Jose A. Joao, M. Aater Suleman, Onur Mutlu, and Yale N. Patt,
"Bottleneck Identification and Scheduling in Multithreaded Applications,"
Proceedings of the 17th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), London, UK, March 2012. Slides (ppt) (pdf)

Handling Private Data Locality: Data Marshaling

M. Aater Suleman, Onur Mutlu, Jose A. Joao, Khubaib, and Yale N. Patt,
"Data Marshaling for Multi-core Architectures,"
Proceedings of the 37th International Symposium on Computer Architecture (ISCA), pages 441-450, Saint-Malo, France, June 2010.

Staged Execution Model (I)

- Goal: speed up a program by dividing it up into pieces
- Idea:
  - Split program code into segments
  - Run each segment on the core best-suited to run it
  - Each core is assigned a work-queue, storing segments to be run
- Benefits:
  - Accelerates segments/critical-paths using specialized/heterogeneous cores
  - Exploits inter-segment parallelism
  - Improves locality of within-segment data
- Examples:
  - Accelerated critical sections, bottleneck identification and scheduling
  - Producer-consumer pipeline parallelism
  - Task parallelism (Cilk, Intel TBB, Apple Grand Central Dispatch)
  - Special-purpose cores and functional units

Staged Execution Model (II)

[Figure: a single code sequence — LOAD X; STORE Y; LOAD Y; ...; STORE Z; LOAD Z; ...]

Staged Execution Model (III)

Split code into segments:
- Segment S0: LOAD X; STORE Y
- Segment S1: LOAD Y; ...; STORE Z
- Segment S2: LOAD Z; ...

Staged Execution Model (IV)

[Figure: Core 0 runs instances of S0, Core 1 runs instances of S1, Core 2 runs instances of S2; each core has a work-queue feeding it segments]

Staged Execution Model: Segment Spawning

[Figure: S0 (LOAD X; STORE Y) on Core 0 spawns S1 (LOAD Y; ...; STORE Z) on Core 1, which spawns S2 (LOAD Z; ...) on Core 2]

Staged Execution Model: Two Examples

- Accelerated Critical Sections [Suleman et al., ASPLOS 2009]
  - Idea: Ship critical sections to a large core in an asymmetric CMP
    - Segment 0: non-critical section
    - Segment 1: critical section
  - Benefit: faster execution of the critical section, reduced serialization, improved lock and shared data locality
- Producer-Consumer Pipeline Parallelism
  - Idea: Split a loop iteration into multiple "pipeline stages" where one stage consumes data produced by the previous stage; each stage runs on a different core
    - Segment N: Stage N
  - Benefit: stage-level parallelism, better locality → faster execution

Problem: Locality of Inter-segment Data

[Figure: S0 on Core 0 stores Y; S1 on Core 1 loads Y and incurs a cache miss while Y is transferred; S1 stores Z; S2 on Core 2 loads Z and incurs another cache miss while Z is transferred]

Problem: Locality of Inter-segment Data

- Accelerated Critical Sections [Suleman et al., ASPLOS 2009]
  - Idea: Ship critical sections to a large core in an ACMP
  - Problem: The critical section incurs a cache miss when it touches data produced in the non-critical section (i.e., thread-private data)
- Producer-Consumer Pipeline Parallelism
  - Idea: Split a loop iteration into multiple "pipeline stages"; each stage runs on a different core
  - Problem: A stage incurs a cache miss when it touches data produced by the previous stage
- Performance of Staged Execution is limited by inter-segment cache misses

What if We Eliminated All Inter-segment Misses?

Terminology

[Figure: the S0/S1/S2 example across Cores 0-2, with Y transferred from S0 to S1 and Z from S1 to S2]

- Inter-segment data: a cache block written by one segment and consumed by the next segment
- Generator instruction: the last instruction to write to an inter-segment cache block in a segment

Key Observation and Idea

- Observation: The set of generator instructions is stable over execution time and across input sets
- Idea:
  - Identify the generator instructions
  - Record the cache blocks produced by generator instructions
  - Proactively send such cache blocks to the next segment's core before initiating the next segment
- Suleman et al., "Data Marshaling for Multi-Core Architectures," ISCA 2010, IEEE Micro Top Picks 2011.

Data Marshaling

Compiler/Profiler:
1. Identify generator instructions
2. Insert marshal instructions
→ Binary containing generator prefixes & marshal instructions

Hardware:
1. Record generator-produced addresses
2. Marshal recorded blocks to the next core

Profiling Algorithm

[Figure: the code sequence LOAD X; STORE Y; LOAD Y; ...; STORE Z; LOAD Z; ...; the profiler identifies the inter-segment data (Y and Z) and marks the last store to each (STORE Y, STORE Z) as a generator instruction]
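
The figure can be summarized as an offline pass over a profiled memory trace. Below is a minimal sketch (Python; my own approximation of the profiling step, with a made-up trace format) that marks, as generators, the last stores to cache blocks that a later segment reads:

```python
def find_generators(trace, block_size=64):
    """trace: list of (segment_id, pc, op, addr) in program order, op in {'LD', 'ST'}.
    Returns the set of generator PCs: the most recent writers of cache blocks
    that are later read by a different segment."""
    last_writer = {}    # cache block -> (segment_id, pc) of the most recent store
    generators = set()
    for seg, pc, op, addr in trace:
        blk = addr // block_size
        if op == 'ST':
            last_writer[blk] = (seg, pc)
        elif op == 'LD' and blk in last_writer:
            w_seg, w_pc = last_writer[blk]
            if w_seg != seg:            # the block crosses a segment boundary
                generators.add(w_pc)    # mark the producing store as a generator
    return generators
```

The compiler then tags these PCs with the generator prefix and inserts a marshal instruction at the end of each segment, as the next slide shows.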

Marshal Instructions

[Figure: the three segments with generator prefixes and marshal instructions inserted]

Segment S0:
  LOAD X
  G: STORE Y      (generator prefix)
  MARSHAL C1      (when to send: at the marshal instruction; where to send: core C1)

Segment S1:
  LOAD Y
  ...
  G: STORE Z
  MARSHAL C2

Segment S2 (at 0x5):
  LOAD Z
  ...

DM Support/Cost

- Profiler/Compiler: generators, marshal instructions
- ISA: generator prefix, marshal instructions
- Library/Hardware: bind the next segment ID to a physical core
- Hardware
  - Marshal Buffer
    - Stores physical addresses of cache blocks to be marshaled
    - 16 entries are enough for almost all workloads
    - 96 bytes per core
  - Ability to execute generator prefixes and marshal instructions
  - Ability to push data to another cache
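
A rough sketch of the Marshal Buffer's role (my own simplification, not the hardware design from the paper; push_block stands in for whatever cache-injection mechanism the interconnect provides): generator-prefixed stores record their cache-block address, and the marshal instruction pushes every recorded block to the next segment's core.

```python
class MarshalBuffer:
    """Per-core buffer of cache-block addresses produced by generator instructions."""
    def __init__(self, entries=16, block_size=64):
        self.entries = entries
        self.block_size = block_size
        self.addrs = []

    def on_generator_store(self, addr):
        blk = (addr // self.block_size) * self.block_size
        if blk not in self.addrs and len(self.addrs) < self.entries:
            self.addrs.append(blk)

    def on_marshal(self, push_block, dest_core):
        # push_block(dest_core, addr) models injecting a block into the destination cache
        for blk in self.addrs:
            push_block(dest_core, blk)
        self.addrs.clear()
```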

DM: Advantages, Disadvantages

- Advantages
  - Timely data transfer: pushes data to a core before it is needed
  - Can marshal any arbitrary sequence of lines: identifies generators, not patterns
  - Low hardware cost: the profiler marks generators, no need for hardware to find them
- Disadvantages
  - Requires profiler and ISA support
  - Not always accurate (the generator set is conservative): pollution at the remote core, wasted bandwidth on the interconnect
    - Not a large problem, as the number of inter-segment blocks is small

Accelerated Critical Sections with DM

[Figure: Small Core 0 runs the non-critical code (LOAD X; G: STORE Y; CSCALL); its marshal buffer records the address of Y, and DM pushes Y's cache block into the large core's L2 before the critical section (LOAD Y; ...; G: STORE Z; CSRET) runs there, turning the would-be miss into a cache hit]

Accelerated Critical Sections: Methodology

- Workloads: 12 critical-section-intensive applications
  - Data mining kernels, sorting, database, web, networking
  - Different training and simulation input sets
- Multi-core x86 simulator
  - 1 large and 28 small cores
  - Aggressive stream prefetcher employed at each core
- Details:
  - Large core: 2 GHz, out-of-order, 128-entry ROB, 4-wide, 12-stage
  - Small core: 2 GHz, in-order, 2-wide, 5-stage
  - Private 32 KB L1, private 256 KB L2, 8 MB shared L3
  - On-chip interconnect: bi-directional ring, 5-cycle hop latency

DM on Accelerated Critical Sections: Results

[Figure: per-benchmark speedups; DM improves ACS performance by 8.7% on average (two bars are clipped at 168 and 170)]

Pipeline Parallelism

[Figure: Core 0 runs S0 (LOAD X; G: STORE Y; MARSHAL C1); its marshal buffer records Y's address and pushes the block into Core 1's L2, so S1 (LOAD Y; ...; G: STORE Z; MARSHAL C2) hits in the cache on Core 1, and likewise for S2 (0x5: LOAD Z; ...) on the next core]

Pipeline Parallelism: Methodology

- Workloads: 9 applications with pipeline parallelism
  - Financial, compression, multimedia, encoding/decoding
  - Different training and simulation input sets
- Multi-core x86 simulator
  - 32-core CMP: 2 GHz, in-order, 2-wide, 5-stage
  - Aggressive stream prefetcher employed at each core
  - Private 32 KB L1, private 256 KB L2, 8 MB shared L3
  - On-chip interconnect: bi-directional ring, 5-cycle hop latency

DM on Pipeline Parallelism: Results

[Figure: per-benchmark speedups; DM improves pipeline-parallel performance by 16% on average]

DM Coverage, Accuracy, Timeliness

- High coverage of inter-segment misses in a timely manner
- Medium accuracy does not impact performance
  - Only 5.0 and 6.8 cache blocks are marshaled for the average segment

Scaling Results

- DM performance improvement increases with:
  - More cores
  - Higher interconnect latency
  - Larger private L2 caches
- Why? Inter-segment data misses become a larger bottleneck
  - More cores → more communication
  - Higher latency → longer stalls due to communication
  - Larger L2 cache → communication misses remain

Other Applications of Data Marshaling

- Can be applied to other Staged Execution models
  - Task parallelism models
    - Cilk, Intel TBB, Apple Grand Central Dispatch
  - Special-purpose remote functional units
  - Computation spreading [Chakraborty et al., ASPLOS'06]
  - Thread motion/migration [e.g., Rangan et al., ISCA'09]
- Can be an enabler for more aggressive SE models
  - Lowers the cost of data migration
    - an important overhead in remote execution of code segments
  - Remote execution of finer-grained tasks becomes more feasible → finer-grained parallelization in multi-cores

Data Marshaling Summary

- Inter-segment data transfers between cores limit the benefit of promising Staged Execution (SE) models
- Data Marshaling is a hardware/software cooperative solution: detect inter-segment data generator instructions and push their data to the next segment's core
  - Significantly reduces cache misses for inter-segment data
  - Low cost, high coverage, timely for arbitrary address sequences
  - Achieves most of the potential of eliminating such misses
- Applicable to several existing Staged Execution models
  - Accelerated Critical Sections: 9% performance benefit
  - Pipeline Parallelism: 16% performance benefit
- Can enable new models → very fine-grained remote execution

More on Data Marshaling

M. Aater Suleman, Onur Mutlu, Jose A. Joao, Khubaib, and Yale N. Patt,
"Data Marshaling for Multi-core Architectures,"
Proceedings of the 37th International Symposium on Computer Architecture (ISCA), pages 441-450, Saint-Malo, France, June 2010. Slides (ppt)

Other Uses of Asymmetry

Use of Asymmetry for Energy Efficiency

- Kumar et al., "Single-ISA Heterogeneous Multi-Core Architectures: The Potential for Processor Power Reduction," MICRO 2003.
- Idea:
  - Implement multiple types of cores on chip
  - Monitor characteristics of the running thread (e.g., sample energy/performance on each core periodically)
  - Dynamically pick the core that provides the best energy/performance tradeoff for a given phase
    - The "best core" depends on the optimization metric
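
To make the sampling idea concrete, here is a small sketch (Python; my own illustration rather than the MICRO 2003 mechanism, with the metric, the interval, and the thread.sample_on/run_on methods all assumed for the example) of a periodic policy that samples each core type and then runs the phase on whichever core minimizes the chosen metric:

```python
def pick_core(core_types, sample_phase, metric="edp"):
    """Run a short sample of the current phase on each core type and
    return the type that minimizes the chosen metric."""
    def score(stats):                      # stats: {"energy": joules, "delay": seconds}
        if metric == "energy":
            return stats["energy"]
        return stats["energy"] * stats["delay"]   # energy-delay product
    samples = {c: sample_phase(c) for c in core_types}
    return min(samples, key=lambda c: score(samples[c]))

def schedule(thread, core_types, interval_instructions=10_000_000):
    # Sketch of the runtime loop: every interval, re-sample and (possibly) migrate.
    while not thread.done():
        best = pick_core(core_types, thread.sample_on)
        thread.run_on(best, interval_instructions)
```

The disadvantages discussed a couple of slides later (sampling error, core-switching overhead, the need for phase monitoring) all come directly out of this loop.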

Use of Asymmetry for Energy Efficiency

Use of Asymmetry for Energy Efficiency

Advantages:
+ More flexibility in the energy-performance tradeoff
+ Can steer computation to the core that is best suited for it (in terms of energy)

Disadvantages/issues:
- Incorrect predictions/sampling → wrong core → reduced performance or increased energy
- Overhead of core switching
- Disadvantages of an asymmetric CMP (e.g., designing multiple cores)
- Need phase monitoring and matching algorithms
  - What characteristics should be monitored?
  - Once the characteristics are known, how do you pick the core?

Asymmetric vs. Symmetric Cores

- Advantages of asymmetric
  + Can provide better performance when thread parallelism is limited
  + Can be more energy efficient
  + Schedule computation to the core type that can best execute it
- Disadvantages
  - Need to design more than one type of core. Always?
  - Scheduling becomes more complicated
    - What computation should be scheduled on the large core?
    - Who should decide? HW vs. SW?
  - Managing locality and load balancing can become difficult if threads move between cores (transparently to software)
  - Cores have different demands from shared resources

How to Achieve Asymmetry

- Static
  - Type and power of cores fixed at design time
  - Two approaches to design "faster cores":
    - High frequency
    - Build a more complex, powerful core with an entirely different microarchitecture
  - Is static asymmetry natural? (chip-wide variations in frequency)
- Dynamic
  - Type and power of cores change dynamically
  - Two approaches to dynamically create "faster cores":
    - Boost frequency dynamically (limited power budget)
    - Combine small cores to enable a more complex, powerful core
    - Is there a third, fourth, fifth approach?

Asymmetry via Frequency Boosting

Asymmetry via Boosting of Frequency

- Static
  - Due to process variations, cores might have different frequencies
  - Simply hardwire/design cores to have different frequencies
- Dynamic
  - Annavaram et al., "Mitigating Amdahl's Law Through EPI Throttling," ISCA 2005.
  - Dynamic voltage and frequency scaling

EPI Throttling

- Goal: Minimize execution time of parallel programs while keeping power within a fixed budget
- For best scalar and throughput performance, vary the energy expended per instruction (EPI) based on the available parallelism
  - P = EPI * IPS
  - P = fixed power budget; EPI = energy per instruction; IPS = aggregate instructions retired per second
- Idea: For a fixed power budget
  - Run sequential phases on a high-EPI processor
  - Run parallel phases on multiple low-EPI processors
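
As a worked illustration of the P = EPI * IPS identity (all numbers below are made up for the example, not taken from the lecture), a fixed power budget can buy either one fast, high-EPI core or several slower, low-EPI cores:

```python
P_BUDGET = 40.0  # fixed power budget in watts (illustrative)

def cores_within_budget(epi_nj, ips_per_core):
    """How many cores with a given EPI (nJ/instruction) and per-core throughput
    (instructions/s) fit in the power budget, and the aggregate IPS they deliver.
    Uses P = EPI * IPS per core."""
    power_per_core = epi_nj * 1e-9 * ips_per_core
    n = int(P_BUDGET // power_per_core)
    return n, n * ips_per_core

# Sequential phase: one big, high-EPI core with high single-thread IPS
print(cores_within_budget(epi_nj=10.0, ips_per_core=4e9))  # -> (1, 4e9)
# Parallel phase: many small, low-EPI cores with lower per-core IPS
print(cores_within_budget(epi_nj=2.0, ips_per_core=2e9))   # -> (10, 2e10)
```

Within the same 40 W budget, the low-EPI configuration retires five times as many instructions per second in aggregate, which is exactly why parallel phases should be throttled to low EPI.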

EPI Throttling via DVFS

- DVFS: dynamic voltage and frequency scaling
- In phases of low thread parallelism
  - Run a few cores at high supply voltage and high frequency
- In phases of high thread parallelism
  - Run many cores at low supply voltage and low frequency
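
A minimal sketch of such a policy follows (my own illustration; the voltage/frequency operating points and the thread-count threshold are assumptions, not values from the lecture):

```python
# Illustrative operating points: (supply voltage in V, frequency in GHz)
HIGH_VF = (1.2, 3.5)
LOW_VF = (0.8, 1.5)

def dvfs_setting(active_threads, total_cores, low_tlp_threshold=2):
    """Pick a chip-wide V/f operating point based on available thread parallelism."""
    if active_threads <= low_tlp_threshold:
        # low TLP: spend the power budget on a few fast cores
        return {"cores_on": active_threads, "vf": HIGH_VF}
    # high TLP: spread the budget over many slower, more efficient cores
    return {"cores_on": min(active_threads, total_cores), "vf": LOW_VF}

print(dvfs_setting(active_threads=1, total_cores=16))   # few cores, high V/f
print(dvfs_setting(active_threads=16, total_cores=16))  # many cores, low V/f
```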

Possible EPI Throttling Techniques

- Grochowski et al., "Best of Both Latency and Throughput," ICCD 2004.

Boosting Frequency of a Small Core vs. Large Core

- Frequency boosting is implemented on Intel Nehalem and IBM POWER7
- Advantages of boosting frequency
  + Very simple to implement; no need to design a new core
  + Parallel throughput does not degrade when TLP is high
  + Preserves the locality of the boosted thread
- Disadvantages
  - Does not improve performance if the thread is memory bound
  - Does not reduce Cycles per Instruction (remember the performance equation?)
  - Changing frequency/voltage can take longer than switching to a large core

Computer Architecture Lecture 20: Heterogeneous Multi-Core Systems II
Prof. Onur Mutlu, ETH Zürich, Fall 2018
29 November 2018

We did not cover the following slides in lecture. These are for your preparation for the next lecture.

A Case for Asymmetry Everywhere

Onur Mutlu,
"Asymmetry Everywhere (with Automatic Resource Management),"
CRA Workshop on Advancing Computer Architecture Research: Popular Parallel Programming, San Diego, CA, February 2010. Position paper.

Asymmetry Enables Customization

[Figure: a symmetric chip of identical cores vs. an asymmetric chip with cores of several different sizes]

- Symmetric: One size fits all
  - Energy and performance suboptimal for different phase behaviors
- Asymmetric: Enables tradeoffs and customization
  - Processing requirements vary across applications and phases
  - Execute code on best-fit resources (minimal energy, adequate performance)

Thought Experiment: Asymmetry Everywhere

- Design each hardware resource with asymmetric, (re)configurable, partitionable components
  - Different power/performance/reliability characteristics
  - To fit different computation/access/communication patterns

Thought Experiment: Asymmetry Everywhere

- Design the runtime system (HW & SW) to automatically choose the best-fit components for each phase
  - Satisfy performance/SLA with minimal energy
  - Dynamically stitch together the "best-fit" chip for each phase

[Figure: different subsets of components are activated for Phase 1, Phase 2, and Phase 3]

Thought Experiment: Asymmetry Everywhere

- Morph software components to match asymmetric HW components
  - Multiple versions for different resource characteristics

[Figure: Version 1, Version 2, and Version 3 of the same software component]

Many Research and Design Questions

- How to design asymmetric components?
  - Fixed, partitionable, reconfigurable components?
  - What types of asymmetry? Access patterns, technologies?
- What monitoring to perform cooperatively in HW/SW?
  - Automatically discover phase/task requirements
- How to design the feedback/control loop between the components and the runtime system software?
- How to design the runtime to automatically manage resources?
  - Track task behavior, pick "best-fit" components for the entire workload

Exploiting Asymmetry: Simple Examples

[Figure: the serial portion runs on a large core, the parallel portion on many small cores]

- Execute critical/serial sections on high-power, high-performance cores/resources [Suleman+ ASPLOS'09, ISCA'10, Top Picks'10'11, Joao+ ASPLOS'12, ISCA'13]
- Programmer can write less optimized, but more likely correct, programs

Exploiting Asymmetry: Simple Examples

[Figure: a core with both a VLIW backend and an OoO backend]

- Execute each code block on the most efficient execution backend for that block [Fallin+ ICCD'14]
- Enables a much more efficient and still high-performance core design

Exploiting Asymmetry: Simple Examples

[Figure: streaming vs. random access]

- Execute streaming "memory phases" on streaming-optimized cores and memory hierarchies
- More efficient and higher performance than a general-purpose hierarchy

Exploiting Asymmetry: Simple Examples

[Figure: a latency-optimized NoC and a bandwidth-optimized NoC]

- Execute bandwidth-sensitive threads on a bandwidth-optimized network, latency-sensitive ones on a latency-optimized network [Das+ DAC'13]
- Higher performance and energy-efficiency than a single network

Exploiting Asymmetry: Simple Examples

[Figure: latency-sensitive vs. bandwidth-sensitive threads]

- Partition memory controller and on-chip network bandwidth asymmetrically among threads [Kim+ HPCA 2010, MICRO 2010, Top Picks 2011] [Nychis+ HotNets 2010] [Das+ MICRO 2009, ISCA 2010, Top Picks 2011]
- Higher performance and energy-efficiency than symmetric/free-for-all allocation

Exploiting Asymmetry: Simple Examples

[Figure: compute-intensive vs. memory-intensive threads]

- Have multiple different memory scheduling policies; apply them to different sets of threads based on thread behavior [Kim+ MICRO 2010, Top Picks 2011] [Ausavarungnirun+ ISCA 2012]
- Higher performance and fairness than a homogeneous policy

Exploiting Asymmetry: Simple Examples

[Figure: CPU with a DRAM controller and a PCM controller. DRAM: fast, durable, but small, leaky, volatile, high-cost. Phase Change Memory (or technology X): large, non-volatile, low-cost, but slow, wears out, high active energy.]

- Build main memory with different technologies with different characteristics (e.g., latency, bandwidth, cost, energy, reliability) [Meza+ IEEE CAL'12, Yoon+ ICCD'12, Luo+ DSN'14]
- Higher performance and energy-efficiency than homogeneous memory

Exploiting Asymmetry: Simple Examples

[Figure: reliable DRAM alongside less-reliable DRAM]

- Build main memory with different technologies with different characteristics (e.g., latency, bandwidth, cost, energy, reliability) [Meza+ IEEE CAL'12, Yoon+ ICCD'12, Luo+ DSN'14]
- Lower cost than homogeneous-reliability memory at the same availability

Exploiting Asymmetry: Simple Examples

[Figure: heterogeneous-latency DRAM and heterogeneous-refresh-rate DRAM]

- Design each memory chip to be heterogeneous to achieve low latency and low energy at reasonably low cost [Lee+ HPCA'13, Liu+ ISCA'12]
- Higher performance and energy-efficiency than single-level memory