Computer Architecture, Lecture 22: Interconnects (Prof. Onur Mutlu)

































































































Computer Architecture Lecture 22: Interconnects
Prof. Onur Mutlu, ETH Zürich, Fall 2020
27 December 2020
Summary of Last Week
- Multiprocessing Fundamentals
- Memory Ordering (Consistency)
- Cache Coherence
Interconnection Networks
Readings
Required:
- Moscibroda and Mutlu, "A Case for Bufferless Routing in On-Chip Networks," ISCA 2009.
Recommended:
- Das et al., "Application-Aware Prioritization Mechanisms for On-Chip Networks," MICRO 2009.
- Janak H. Patel, "Processor-Memory Interconnections for Multiprocessors," ISCA 1979.
- Gottlieb et al., "The NYU Ultracomputer - Designing an MIMD Shared Memory Parallel Computer," IEEE Trans. on Computers, 1983.
- Das et al., "Aergia: Exploiting Packet Latency Slack in On-Chip Networks," ISCA 2010.
Interconnection Network Basics
Where Is Interconnect Used?
- To connect components
- Many examples: processors and processors, processors and memories (banks), processors and caches (banks), caches and caches, I/O devices
Why Is It Important?
- Affects the scalability and cost of the system: How large a system can you build? How easily can you add more processors?
- Affects performance and energy efficiency: How fast can processors, caches, and memory communicate? How long are the latencies to memory? How much energy is spent on communication?
- Affects reliability and security: Can you guarantee that messages are delivered or that your protocol works?
Interconnection Network Basics
- Topology: specifies the way switches are wired; affects routing, reliability, throughput, latency, and ease of building
- Routing (algorithm): how a message gets from source to destination; static or adaptive
- Buffering and flow control: what do we store within the network (entire packets, parts of packets, etc.)? How do we throttle during oversubscription? Tightly coupled with the routing strategy
Terminology
- Network interface: module that connects endpoints (e.g., processors) to the network; decouples computation from communication
- Link: bundle of wires that carries a signal
- Switch/router: connects a fixed number of input channels to a fixed number of output channels
- Channel: a single logical connection between routers/switches
More Terminology
- Node: a router/switch within a network
- Message: unit of transfer for the network's clients (processors, memory)
- Packet: unit of transfer for the network
- Flit: flow control digit; unit of flow control within the network
Some More Terminology
- Direct or indirect networks: endpoints sit "inside" (direct) or "outside" (indirect) the network; e.g., a mesh is direct, since every node is both endpoint and switch
- Router (switch) radix: number of inputs and outputs; a router with 2 inputs and 2 outputs has a radix of 2 (abbreviated "2-ary")
(Figure: 16-terminal indirect network vs. 16-node direct network, both built from 2-ary routers)
Interconnection Network Topology
Properties of a Topology/Network
- Regular or irregular: regular if the topology is a regular graph (e.g., ring, mesh)
- Routing distance: number of links/hops along a route
- Diameter: maximum routing distance within the network
- Average distance: average number of hops across all valid routes
Properties of a Topology/Network
- Bisection bandwidth: often used to describe network performance; cut the network in half and sum the bandwidth of the severed links, i.e., (minimum number of channels spanning the two halves) x (bandwidth of each channel). Meaningful only for recursive topologies. Can be misleading, because it does not account for switch and routing efficiency (and certainly not execution time)
- Blocking vs. non-blocking: if connecting any permutation of sources and destinations is possible, the network is non-blocking; otherwise it is blocking. Rearrangeably non-blocking: same as non-blocking, but may require rearranging existing connections when switching from one permutation to another
(A small worked example of these metrics for a 2D mesh follows below.)
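As a concrete illustration of these metrics, here is a minimal sketch (not from the slides; the mesh size and variable names are my own) that computes the hop-count diameter, average distance, and bisection channel count of a k x k 2D mesh, using Manhattan distances as the routing distance:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hop-count metrics for a k x k 2D mesh (illustrative sketch). */
int main(void) {
    int k = 8;                      /* assumed mesh dimension: 64 nodes */
    int n = k * k;
    long total_hops = 0, pairs = 0, diameter = 0;

    for (int s = 0; s < n; s++) {
        for (int d = 0; d < n; d++) {
            if (s == d) continue;
            /* Manhattan (dimension-order) distance between nodes s and d */
            int hops = abs(s % k - d % k) + abs(s / k - d / k);
            total_hops += hops;
            pairs++;
            if (hops > diameter) diameter = hops;
        }
    }

    printf("diameter           = %ld hops\n", diameter);          /* 2*(k-1) */
    printf("average distance   = %.2f hops\n", (double)total_hops / pairs);
    printf("bisection channels = %d per direction\n", k);  /* a vertical cut
                                                severs k links in each direction */
    return 0;
}
```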
Topology
- Bus (simplest)
- Point-to-point connections (ideal and most costly)
- Crossbar (less costly)
- Ring
- Tree
- Omega
- Hypercube
- Mesh
- Torus
- Butterfly
- ...
Metrics to Evaluate Interconnect Topology
- Cost
- Latency (in hops, in nanoseconds)
- Contention
- Many others exist that you should think about: energy, bandwidth, overall system performance
Bus
All nodes connected to a single link
+ Simple
+ Cost-effective for a small number of nodes
+ Easy to implement coherence (snooping and serialization)
- Not scalable to a large number of nodes (limited bandwidth; electrical loading reduces frequency)
- High contention, fast saturation
Point-to-Point
Every node connected to every other node with direct/isolated links
+ Lowest contention
+ Potentially lowest latency
+ Ideal, if cost is no issue
-- Highest cost: O(N) connections/ports per node, O(N^2) links
-- Not scalable
-- How to lay out on chip?
Crossbar
- Every node connected to every other node, with a shared link for each destination
- Enables concurrent transfers to non-conflicting destinations
- Could be cost-effective for a small number of nodes
+ Low latency and high throughput
- Expensive: O(N^2) cost
- Not scalable
- Difficult to arbitrate as N increases
Used in core-to-cache-bank networks in IBM POWER5 and Sun Niagara I/II
Another Crossbar Design
(Figure: an alternative 8x8 crossbar organization)
Sun UltraSPARC T2 Core-to-Cache Crossbar
- High-bandwidth interface between 8 cores and 8 L2 banks & NCU
- 4-stage pipeline: request, arbitration, selection, transmission
- 2-deep queue for each source/destination pair to hold data transfer requests
Bufferless and Buffered Crossbars
Buffered crossbar (vs. bufferless):
+ Simpler arbitration/scheduling
+ Efficient support for variable-size packets
- Requires N^2 buffers
(Figure: bufferless vs. buffered crossbar, with flow control to the network interfaces and an output arbiter)
Can We Get Lower Cost than a Crossbar?
- Yet still have low contention compared to a bus?
- Idea: multistage networks
Multistage Logarithmic Networks
- Idea: indirect networks with multiple layers of switches between terminals/nodes
- Cost: O(N log N); latency: O(log N)
- Many variations (Omega, Butterfly, Benes, Banyan, ...)
- Omega network: blocking or non-blocking? (The figure highlights a conflict: two routes need the same link, so the Omega network is blocking.)
(A minimal routing sketch for the Omega network follows below.)
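The following sketch (my own illustration, not from the slides) shows the classic destination-tag routing used in an Omega network of N = 2^n terminals: each stage performs a perfect shuffle of the address bits and then a 2x2 exchange that sets the low bit to the next destination bit, MSB first.

```c
#include <stdio.h>

/* Destination-tag routing in an Omega network (illustrative sketch). */
unsigned omega_route(unsigned src, unsigned dst, int n) {
    unsigned addr = src, mask = (1u << n) - 1;
    for (int stage = 0; stage < n; stage++) {
        /* perfect shuffle: rotate the n-bit address left by one */
        addr = ((addr << 1) | (addr >> (n - 1))) & mask;
        /* 2x2 switch: go straight or exchange, so that the low bit
           matches destination bit (n - 1 - stage) */
        unsigned dst_bit = (dst >> (n - 1 - stage)) & 1u;
        addr = (addr & ~1u) | dst_bit;
        printf("after stage %d: at node label %u\n", stage, addr);
    }
    return addr;  /* equals dst after n stages */
}

int main(void) {
    omega_route(5, 2, 3);  /* 8-terminal Omega network: route 5 -> 2 */
    return 0;
}
```

Because the switch settings are fully determined by the destination bits, two routes that need the same switch output in the same stage conflict, which is exactly why the Omega network is blocking.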
Multistage Networks (Circuit Switched)
- A multistage network has more restrictions on feasible concurrent Tx-Rx pairs than a crossbar
- But it is more scalable than a crossbar in cost, e.g., O(N log N) for a Butterfly
(Figure: 8x8 multistage network built from 2-by-2 crossbars)
Multistage Networks (Packet Switched)
- Packets "hop" from router to router, pending availability of the next required switch and buffer
(Figure: 8x8 multistage network built from 2-by-2 routers)
Aside: Circuit vs. Packet Switching
- Circuit switching sets up the full path before transmission: establish the route, then send data; no one else can use those links while the "circuit" is set up
  + faster arbitration
  + no buffering
  -- setting up and bringing down the "path" takes time
- Packet switching routes each packet individually in each router (possibly via different paths); if a link is free, any packet can use it
  -- potentially slower: must switch dynamically
  -- needs buffering
  + no setup/bring-down time
  + more flexible; does not underutilize links
Switching vs. Topology
- The circuit/packet switching choice is independent of topology
- It is a higher-level protocol for how a message gets sent to a destination
- However, some topologies are more amenable to circuit vs. packet switching
Another Example: Delta Network
- Single path from source to destination
- Each stage has different routers
- Proposed to replace costly crossbars as the processor-memory interconnect
- Janak H. Patel, "Processor-Memory Interconnections for Multiprocessors," ISCA 1979.
(Figure: 8x8 Delta network)
Another Example: Omega Network
- Single path from source to destination
- All stages are the same
- Used in the NYU Ultracomputer
- Gottlieb et al., "The NYU Ultracomputer - Designing an MIMD Shared Memory Parallel Computer," IEEE Trans. on Computers, 1983.
Combining Operations in the Network
- Idea: combine multiple operations on a shared memory location
- Example: Omega network switches combine fetch-and-add operations in the NYU Ultracomputer
- Fetch-and-add(M, I): return M, replace M with M+I; common when parallel processors modify a shared variable, e.g., to obtain a chunk of an array
- Combining reduces synchronization latency
(A minimal combining sketch follows below.)
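To make the combining idea concrete, here is a minimal sketch (my own illustration; the serialization order, with request a logically first, is an assumption): a switch that sees two fetch-and-add requests to the same location forwards a single combined request toward memory and reconstructs both replies from the single returned value.

```c
#include <stdio.h>

/* Fetch-and-add semantics: return the old value of M, store M + I. */
int fetch_and_add(int *M, int I) {
    int old = *M;
    *M = old + I;
    return old;
}

/* Combining sketch: requests F&A(M, a) and F&A(M, b) meeting at a switch
   are forwarded as one F&A(M, a + b); both replies are derived locally. */
void combine_two(int *M, int a, int b, int *reply_a, int *reply_b) {
    int base = fetch_and_add(M, a + b);  /* one memory access instead of two */
    *reply_a = base;                     /* as if request a executed first */
    *reply_b = base + a;                 /* request b observes a's update */
}

int main(void) {
    int M = 100, ra, rb;
    combine_two(&M, 4, 8, &ra, &rb);
    printf("M=%d reply_a=%d reply_b=%d\n", M, ra, rb);  /* 112, 100, 104 */
    return 0;
}
```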
Butterfly
- Equivalent to the Omega network
- Indirect
- Used in the BBN Butterfly
- Conflicts can cause "tree saturation"; randomization of route selection helps
(Figure: 16-terminal butterfly network)
Review: Topologies

Topology                 Crossbar       Multistage Logarithmic   Mesh
Direct/Indirect          Indirect       Indirect                 Direct
Blocking/Non-blocking    Non-blocking   Blocking                 Blocking
Cost                     O(N^2)         O(N log N)               O(N)
Latency                  O(1)           O(log N)                 O(sqrt(N))
Ring
Each node connected to exactly two other nodes; nodes form a continuous pathway such that packets can reach any node
+ Cheap: O(N) cost
- High latency: O(N)
- Not easy to scale
- Bisection bandwidth remains constant
Used in Intel Haswell, Intel Larrabee, IBM Cell, and many commercial systems today
Unidirectional Ring
- Single directional pathway
- Simple topology and implementation: reasonable performance if N and the performance needs (bandwidth and latency) are still moderately low
- O(N) cost
- N/2 average hops; latency depends on utilization
(Figure: unidirectional ring of N nodes built from 2x2 routers)
Bidirectional Rings
Multi-directional pathways, or multiple rings
+ Reduces latency
+ Improves scalability
- Slightly more complex injection policy (need to select which ring to inject a packet into)
Hierarchical Rings
+ More scalable
+ Lower latency
- More complex
More on Hierarchical Rings
- Rachata Ausavarungnirun, Chris Fallin, Xiangyao Yu, Kevin Chang, Greg Nazario, Reetuparna Das, Gabriel Loh, and Onur Mutlu, "Design and Evaluation of Hierarchical Rings with Deflection Routing," Proceedings of the 26th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD), Paris, France, October 2014. [Slides (pptx) (pdf)] [Source Code]
- Describes the design and implementation of a mostly-bufferless hierarchical ring
More on Hierarchical Rings (II)
- Rachata Ausavarungnirun, Chris Fallin, Xiangyao Yu, Kevin Chang, Greg Nazario, Reetuparna Das, Gabriel Loh, and Onur Mutlu, "A Case for Hierarchical Rings with Deflection Routing: An Energy-Efficient On-Chip Communication Substrate," Parallel Computing (PARCO), to appear in 2016. arXiv.org version, February 2016.
Mesh
- Each node connected to 4 neighbors (N, E, S, W)
- O(N) cost
- Average latency: O(sqrt(N))
- Easy to lay out on-chip: regular and equal-length links
- Path diversity: many ways to get from one node to another
- Used in the Tilera 100-core chip and many on-chip network prototypes
Torus
- A mesh is not symmetric at its edges: performance is very sensitive to whether a task is placed on an edge node vs. a middle node
- A torus avoids this problem
+ Higher path diversity (and bisection bandwidth) than a mesh
- Higher cost
- Harder to lay out on-chip
- Unequal link lengths
Torus, continued
- Weave the nodes to make inter-node latencies ~constant
Trees
- Planar, hierarchical topology
- Latency: O(log N)
- Good for local traffic
+ Cheap: O(N) cost
+ Easy to lay out
- Root can become a bottleneck
- Fat trees avoid this problem (CM-5)
(Figure: fat tree)
CM-5 Fat Tree
- Fat tree based on 4x2 switches
- Randomized routing on the way up
- Combining, multicast, and reduction operators supported in hardware
- Thinking Machines Corp., "The Connection Machine CM-5 Technical Summary," Jan. 1992.
Hypercube
- "N-dimensional cube" or "N-cube"
- Latency: O(log N)
- Radix: O(log N)
- Number of links: O(N log N)
+ Low latency
- Hard to lay out in 2D/3D
(Figure: hypercube with binary node labels; a minimal routing sketch follows below.)
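A minimal sketch of hypercube routing (my own illustration, not from the slides): nodes are numbered so that neighbors differ in exactly one bit, a minimal route flips the differing bits one dimension at a time, and the hop count equals the Hamming distance between source and destination.

```c
#include <stdio.h>

/* Dimension-order routing on an n-dimensional hypercube (illustrative sketch). */
void hypercube_route(unsigned src, unsigned dst, int n) {
    unsigned cur = src;
    printf("route %u -> %u:", src, dst);
    for (int dim = 0; dim < n; dim++) {
        if (((cur ^ dst) >> dim) & 1u) {   /* addresses differ in this dimension */
            cur ^= (1u << dim);            /* traverse the link in dimension dim */
            printf(" %u", cur);
        }
    }
    /* __builtin_popcount is a GCC/Clang builtin: number of set bits */
    printf("  (%d hops)\n", __builtin_popcount(src ^ dst));
}

int main(void) {
    hypercube_route(0x5, 0xE, 4);  /* 4-cube: 0101 -> 1110, 3 differing bits */
    return 0;
}
```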
Caltech Cosmic Cube
- 64-node message passing machine
- Seitz, "The Cosmic Cube," CACM 1985.
Routing
Routing Mechanism
- Arithmetic: simple arithmetic to determine the route in regular topologies, e.g., dimension-order routing in meshes/tori
- Source-based: the source specifies the output port for each switch on the route
  + simple switches: no control state, just strip the output port off the header
  - large header
- Table-lookup based: index into a table for the output port
  + small header
  - more complex switches
Routing Algorithm
- Three types:
  - Deterministic: always chooses the same path for a communicating source-destination pair
  - Oblivious: chooses different paths, without considering network state
  - Adaptive: can choose different paths, adapting to the state of the network
- How to adapt: local/global feedback; minimal or non-minimal paths
Deterministic Routing
- All packets between the same (source, destination) pair take the same path
- Dimension-order routing: first traverse dimension X, then traverse dimension Y; e.g., XY routing (used in the Cray T3D and many on-chip networks)
+ Simple
+ Deadlock freedom (no cycles in resource allocation)
- Could lead to high contention
- Does not exploit path diversity
(A minimal XY-routing sketch follows below.)
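The sketch below (my own illustration; the port names, node numbering, and row orientation are assumptions) shows XY dimension-order routing on a k x k mesh: each router moves the packet along X until the column matches the destination, then along Y.

```c
#include <stdio.h>

typedef enum { EAST, WEST, NORTH, SOUTH, LOCAL } port_t;

/* XY dimension-order routing on a k x k mesh: deterministic, deadlock-free,
   oblivious to congestion. Nodes are numbered row-major, rows top to bottom. */
port_t xy_next_port(int cur, int dst, int k) {
    int cx = cur % k, cy = cur / k;
    int dx = dst % k, dy = dst / k;
    if (dx > cx) return EAST;
    if (dx < cx) return WEST;
    if (dy > cy) return SOUTH;
    if (dy < cy) return NORTH;
    return LOCAL;               /* arrived: eject to the local node */
}

int main(void) {
    const char *name[] = { "EAST", "WEST", "NORTH", "SOUTH", "LOCAL" };
    int k = 4, cur = 1, dst = 14;   /* route node 1 -> node 14 on a 4x4 mesh */
    while (cur != dst) {
        port_t p = xy_next_port(cur, dst, k);
        printf("at %2d: go %s\n", cur, name[p]);
        if (p == EAST)  cur += 1;
        if (p == WEST)  cur -= 1;
        if (p == SOUTH) cur += k;
        if (p == NORTH) cur -= k;
    }
    return 0;
}
```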
Deadlock
- No forward progress
- Caused by circular dependencies on resources
- Each packet waits for a buffer occupied by another packet downstream
Handling Deadlock
- Avoid cycles in routing: dimension-order routing cannot build a circular dependency; or restrict the "turns" each packet can take
- Avoid deadlock by adding more buffering (escape paths)
- Detect and break deadlock: preemption of buffers
Turn Model to Avoid Deadlock
- Idea: analyze the directions in which packets can turn in the network, determine the cycles that such turns can form, and prohibit just enough turns to break the possible cycles
- Glass and Ni, "The Turn Model for Adaptive Routing," ISCA 1992.
Oblivious Routing: Valiant's Algorithm
- Goal: balance the network load
- Idea: randomly choose an intermediate destination, route to it first, then route from there to the destination; dimension-order routing can be used between source and intermediate and between intermediate and destination
+ Randomizes/balances the network load
- Non-minimal (packet latency can increase)
- Optimizations: do this only under high load; restrict the intermediate node to be close (in the same quadrant)
- Valiant, "A Scheme for Fast Parallel Communication," SIAM Journal on Computing, 1982.
(A minimal sketch follows below.)
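A minimal sketch of the idea (my own illustration, assuming a k x k mesh with XY routing in both phases): pick a random intermediate node, so the total hop count is the sum of the two Manhattan distances, which can exceed the minimal distance, the price paid for load balance.

```c
#include <stdio.h>
#include <stdlib.h>

/* Manhattan (XY) distance between two nodes of a k x k mesh. */
int manhattan(int a, int b, int k) {
    return abs(a % k - b % k) + abs(a / k - b / k);
}

/* Valiant routing: route src -> random intermediate -> dst. */
int valiant_hops(int src, int dst, int k) {
    int mid = rand() % (k * k);                   /* random intermediate node */
    return manhattan(src, mid, k) + manhattan(mid, dst, k);
}

int main(void) {
    int k = 8, src = 0, dst = 63;
    printf("minimal hops : %d\n", manhattan(src, dst, k));
    printf("valiant hops : %d\n", valiant_hops(src, dst, k));
    return 0;
}
```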
More on Valiant's Algorithm
- Valiant and Brebner, "Universal Schemes for Parallel Communication," STOC 1981.
- Valiant, "A Scheme for Fast Parallel Communication," SIAM Journal on Computing, 1982.
Adaptive Routing
- Minimal adaptive: the router uses network state (e.g., downstream buffer occupancy) to pick which "productive" output port to send a packet to; a productive output port is one that gets the packet closer to its destination
  + aware of local congestion
  - minimality restricts the achievable link utilization (load balance)
- Non-minimal (fully) adaptive: "misroute" packets to non-productive output ports based on network state
  + can achieve better network utilization and load balance
  - need to guarantee livelock freedom
More on Adaptive Routing
- Can avoid faulty links/routers
- Idea: route around faults
+ Deterministic routing cannot handle faulty components
- Need to change the routing tables to disable faulty routes
- Assumes the faulty link/router is detected
One recent example: Fattah et al., "A Low-Overhead, Fully-Distributed, Guaranteed-Delivery Routing Algorithm for Faulty Network-on-Chips," NOCS 2015.
Recent Example: Fault Tolerance
- Mohammad Fattah, Antti Airola, Rachata Ausavarungnirun, Nima Mirzaei, Pasi Liljeberg, Juha Plosila, Siamak Mohammadi, Tapio Pahikkala, Onur Mutlu, and Hannu Tenhunen, "A Low-Overhead, Fully-Distributed, Guaranteed-Delivery Routing Algorithm for Faulty Network-on-Chips," Proceedings of the 9th ACM/IEEE International Symposium on Networks on Chip (NOCS), Vancouver, BC, Canada, September 2015. [Slides (pptx) (pdf)] [Source Code]
- One of the three papers nominated for the Best Paper Award by the Program Committee.
Buffering and Flow Control
Recall: Circuit vs. Packet Switching
- Circuit switching sets up the full path before transmission: establish the route, then send data; no one else can use those links while the "circuit" is set up
  + faster arbitration
  -- setting up and bringing down the "path" takes time
- Packet switching routes each packet individually in each router (possibly via different paths); if a link is free, any packet can use it
  -- potentially slower: must switch dynamically
  -- needs buffering
  + no setup/bring-down time
  + more flexible; does not underutilize links
Packet Switched Networks: Packet Format
- Header: routing and control information
- Payload: carries data (non-hardware-specific information); can be further divided (framing, protocol stacks, ...)
- Error code: generally at the tail of the packet so it can be generated on the way out
(Packet layout: Header | Payload | Error Code; a minimal struct sketch follows below.)
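A minimal sketch of this layout (my own illustration; the field names and sizes are assumptions, not a real NoC specification) that mirrors the Header | Payload | Error Code ordering:

```c
#include <stdint.h>

#define PAYLOAD_BYTES 64

typedef struct {
    uint8_t src;            /* source node id (routing information) */
    uint8_t dst;            /* destination node id */
    uint8_t vc;             /* virtual channel / traffic class (control info) */
    uint8_t length;         /* number of valid payload bytes */
} packet_header_t;

typedef struct {
    packet_header_t header;                  /* examined by every router */
    uint8_t         payload[PAYLOAD_BYTES];  /* opaque to the network */
    uint16_t        error_code;              /* e.g., a CRC; placed at the tail
                                                so it can be computed while the
                                                packet streams out */
} packet_t;
```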
Handling Contention
- Two packets trying to use the same link at the same time
- What do you do? Buffer one, drop one, or misroute one (deflection)
- Tradeoffs?
Flow Control Methods
- Circuit switching
- Bufferless (packet/flit based)
- Store and forward (packet based)
- Virtual cut-through (packet based)
- Wormhole (flit based)
Circuit Switching Revisited
- Resource allocation granularity is coarse (large)
- Idea: pre-allocate resources across multiple switches for a given "flow"
- Need to send a probe to set up the path for pre-allocation
+ No need for buffering
+ No contention (a flow's performance is isolated)
+ Can handle arbitrary message sizes
- Lower link utilization: two flows cannot use the same link
- Handshake overhead to set up a "circuit"
Bufferless Deflection Routing
- Key idea: packets are never buffered in the network; when two packets contend for the same link, one is deflected [1]
- New traffic can be injected whenever there is a free output link
[1] Baran, "On Distributed Communication Networks," RAND Tech. Report, 1962 / IEEE Trans. Comm., 1964.
Bufferless Deflection Routing
- Input buffers are eliminated: packets are buffered in pipeline latches and on network links
- Moscibroda and Mutlu, "A Case for Bufferless Routing in On-Chip Networks," ISCA 2009.
(Figure: router with the North/South/East/West/Local input buffers replaced by deflection routing logic; a minimal arbitration sketch follows below.)
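A minimal deflection-arbitration sketch (my own illustration; the port count, oldest-first ranking, and names are assumptions, not the exact BLESS/CHIPPER arbitration): packets are ranked, each takes its preferred "productive" output if still free, otherwise it is deflected to any free output. With as many outputs as inputs, every arriving packet always leaves, so no input buffers are needed.

```c
#include <stdio.h>

#define PORTS 4

typedef struct { int valid; int age; int preferred_port; } flit_t;

/* Assign each arriving flit an output port; out_assignment[] must be
   initialized to -1 by the caller (meaning "not yet assigned"). */
void deflection_arbitrate(flit_t in[PORTS], int out_assignment[PORTS]) {
    int out_taken[PORTS] = {0};
    for (int rank = 0; rank < PORTS; rank++) {
        int sel = -1;  /* pick the oldest still-unassigned flit */
        for (int i = 0; i < PORTS; i++)
            if (in[i].valid && out_assignment[i] < 0 &&
                (sel < 0 || in[i].age > in[sel].age))
                sel = i;
        if (sel < 0) break;
        int port = in[sel].preferred_port;
        if (out_taken[port])                       /* contention: deflect to */
            for (port = 0; out_taken[port]; port++) ;  /* any free output   */
        out_taken[port] = 1;
        out_assignment[sel] = port;
    }
}

int main(void) {
    flit_t in[PORTS] = { {1, 5, 2}, {1, 9, 2}, {0, 0, 0}, {1, 1, 0} };
    int out[PORTS] = { -1, -1, -1, -1 };
    deflection_arbitrate(in, out);
    for (int i = 0; i < PORTS; i++)
        if (in[i].valid) printf("input %d -> output %d\n", i, out[i]);
    return 0;
}
```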
Issues in Bufferless Deflection Routing
- Livelock
- Resulting router complexity
- Performance and congestion at high loads
- Chris Fallin, Greg Nazario, Xiangyao Yu, Kevin Chang, Rachata Ausavarungnirun, and Onur Mutlu, "Bufferless and Minimally-Buffered Deflection Routing," invited book chapter in Routing Algorithms in Networks-on-Chip, pp. 241-275, Springer, 2014.
Bufferless Deflection Routing in NoCs
- Moscibroda and Mutlu, "A Case for Bufferless Routing in On-Chip Networks," ISCA 2009.
- https://users.ece.cmu.edu/~omutlu/pub/bless_isca09.pdf
Low-Complexity Bufferless Routing
- Chris Fallin, Chris Craik, and Onur Mutlu, "CHIPPER: A Low-Complexity Bufferless Deflection Router," Proceedings of the 17th International Symposium on High-Performance Computer Architecture (HPCA), pages 144-155, San Antonio, TX, February 2011. Slides (pptx)
- Extended version: SAFARI Technical Report TR-SAFARI-2010-001, Carnegie Mellon University, December 2010.
Minimally-Buffered Deflection Routing
- Chris Fallin, Greg Nazario, Xiangyao Yu, Kevin Chang, Rachata Ausavarungnirun, and Onur Mutlu, "MinBD: Minimally-Buffered Deflection Routing for Energy-Efficient Interconnect," Proceedings of the 6th ACM/IEEE International Symposium on Networks on Chip (NOCS), Lyngby, Denmark, May 2012. Slides (pptx) (pdf)
"Bufferless" Hierarchical Rings
- Ausavarungnirun et al., "Design and Evaluation of Hierarchical Rings with Deflection Routing," SBAC-PAD 2014.
- http://users.ece.cmu.edu/~omutlu/pub/hierarchical-rings-with-deflection_sbacpad14.pdf
- Discusses the design and implementation of a mostly-bufferless hierarchical ring
"Bufferless" Hierarchical Rings (II)
- Rachata Ausavarungnirun, Chris Fallin, Xiangyao Yu, Kevin Chang, Greg Nazario, Reetuparna Das, Gabriel Loh, and Onur Mutlu, "A Case for Hierarchical Rings with Deflection Routing: An Energy-Efficient On-Chip Communication Substrate," Parallel Computing (PARCO), to appear in 2016. arXiv.org version, February 2016.
Summary of Six Years of Research
- Chris Fallin, Greg Nazario, Xiangyao Yu, Kevin Chang, Rachata Ausavarungnirun, and Onur Mutlu, "Bufferless and Minimally-Buffered Deflection Routing," invited book chapter in Routing Algorithms in Networks-on-Chip, pp. 241-275, Springer, 2014.
Store and Forward Flow Control
- Packet-based flow control
- Store and forward: the packet is copied entirely into a network router before moving to the next node; the flow control unit is the entire packet
- Leads to high per-packet latency
- Requires buffering for the entire packet in each node
- Can we do better?
Cut-Through Flow Control
- Another form of packet-based flow control
- Start forwarding as soon as the header is received and resources (buffer, channel, etc.) are allocated; dramatic reduction in latency
- Still allocates buffers and channel bandwidth for full packets
- What if packets are large?
Cut-Through Flow Control
- What to do if the output port is blocked?
- Lets the tail continue when the head is blocked, absorbing the whole message into a single switch; requires a buffer large enough to hold the largest packet
- Degenerates to store-and-forward under high contention
- Can we do better?
Wormhole Flow Control
- Packets broken into (potentially) smaller flits (the buffer/bandwidth allocation unit)
- Flits are sent across the fabric in a wormhole fashion: body follows head, tail follows body; pipelined
- If the head is blocked, the rest of the packet stops
- Routing (source/destination) information is only in the head; how does the body/tail know where to go?
- Latency is almost independent of distance for long messages
(A latency comparison with store-and-forward is sketched below.)
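The intuition behind "latency almost independent of distance" can be seen in the standard textbook-style zero-load latency estimates (my own addition, not from the slides), where H is the hop count, t_r the per-hop router delay, L the packet length, L_h the header/flit length, and b the channel bandwidth:

```latex
T_{\text{store-and-forward}} \approx H\left(t_r + \frac{L}{b}\right)
\qquad
T_{\text{cut-through / wormhole}} \approx H\left(t_r + \frac{L_h}{b}\right) + \frac{L}{b}
```

Store-and-forward pays the full serialization delay L/b at every hop, whereas cut-through and wormhole pay it only once, so for long packets the distance-dependent term becomes small relative to L/b.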
Wormhole Flow Control
- Advantages over store-and-forward flow control:
  + lower latency
  + more efficient buffer utilization
- Limitations:
  - suffers from head-of-line blocking: if the head flit cannot move due to contention, another worm cannot proceed even though links may be idle
(Figure: two input queues feeding a switching fabric; a packet destined for output 2 is stuck behind a packet blocked on output 1, so output 2 sits idle: HOL blocking)
Head-of-Line Blocking
- One worm can be ahead of another in the router input buffer
- Due to the FIFO nature of the buffer, the second worm cannot be scheduled even though it may need to access a different output port
Head-of-Line Blocking
(Figure: the red packet holds a channel, which remains idle until red proceeds; another channel is idle but the red packet is blocked behind the blue one; the blue packet cannot proceed because a downstream buffer is full, blocked by other packets)
Virtual Channel Flow Control
- Idea: multiplex multiple channels over one physical channel
- Divide up the input buffer into multiple buffers sharing a single physical channel
- Dally, "Virtual Channel Flow Control," ISCA 1990.
Virtual Channel Flow Control
(Figure: the head-of-line blocking scenario revisited with virtual channels; labels: "Buffer full: blue cannot proceed," "Blocked by other packets")
A Modern Virtual Channel Based Router
Other Uses of Virtual Channels
- Deadlock avoidance:
  - Enforcing a switch to a different set of virtual channels on some "turns" can break the cyclic dependency of resources (enforce an order on VCs)
  - Escape VCs: have at least one VC that uses deadlock-free routing; ensure each flit has fair access to that VC
  - Protocol-level deadlock: ensure address and data packets use different VCs, preventing cycles due to intermixing of different packet classes
- Prioritization of traffic classes: some virtual channels can have higher priority than others
(A minimal per-VC scheduling sketch follows below.)
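A minimal sketch of per-input-port virtual channels (my own illustration; the VC count, depth, and priority scheme are assumptions): each physical input has several small FIFOs, and the physical channel is time-multiplexed among them, so a blocked packet in one VC no longer stalls packets in the others, and higher-priority traffic classes can be favored.

```c
#include <stdio.h>

#define NUM_VCS  4
#define VC_DEPTH 4

typedef struct {
    int flits[VC_DEPTH];
    int head, count;
    int blocked;      /* e.g., downstream buffer for this VC is full */
    int priority;     /* higher value = more important traffic class */
} vc_t;

/* Pick which VC gets the physical channel this cycle: the highest-priority
   VC that has a flit and is not blocked downstream. */
int vc_select(vc_t vcs[NUM_VCS]) {
    int best = -1;
    for (int v = 0; v < NUM_VCS; v++) {
        if (vcs[v].count == 0 || vcs[v].blocked) continue;
        if (best < 0 || vcs[v].priority > vcs[best].priority) best = v;
    }
    return best;  /* -1 means the physical channel stays idle this cycle */
}

int main(void) {
    vc_t vcs[NUM_VCS] = {0};
    vcs[0].count = 2; vcs[0].blocked = 1; vcs[0].priority = 1;  /* stuck worm */
    vcs[1].count = 1; vcs[1].priority = 0;                      /* can go     */
    printf("winner: VC %d\n", vc_select(vcs));  /* VC 1 wins; link is not idle */
    return 0;
}
```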
Review: Flow Control
- Store and forward -> cut-through / wormhole: shrink the buffers, reduce latency
- Any other issues? Head-of-line blocking -> use virtual channels
(Figure: the red packet holds a channel that remains idle until red proceeds; a channel is idle but the red packet is blocked behind the blue one; buffer full, blue cannot proceed, blocked by other packets)
Communicating Buffer Availability
- Credit-based flow control: the upstream node knows how many buffers are available downstream; the downstream node passes credits back upstream; significant upstream signaling (especially for small flits)
- On/Off (XON/XOFF) flow control: the downstream node sends an on/off signal to the upstream node
- Ack/Nack flow control: the upstream node optimistically sends to the downstream node; a buffer cannot be deallocated until an ACK/NACK is received; inefficiently utilizes buffer space
Credit-Based Flow Control
- Round-trip credit delay: the time between when a buffer empties and when the next flit can be processed from that buffer entry
- Significant throughput degradation if there are few buffers
- Important to size buffers to tolerate the credit turn-around time
(Timeline figure: a flit departs Node 1, Node 2 processes it and returns a credit, and Node 1 processes the credit before the next flit can use that buffer slot; this interval is the credit round-trip delay. A minimal credit-counter sketch follows below.)
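A minimal credit-counter sketch (my own illustration; the buffer count and the assumed credit return rate are arbitrary stand-ins): the upstream node keeps a counter equal to the free downstream buffer slots, sending a flit consumes a credit, and a returned credit restores one, so few buffers plus a long credit round trip directly throttle throughput.

```c
#include <stdio.h>

#define DOWNSTREAM_BUFFERS 4

typedef struct { int credits; } upstream_t;

int try_send_flit(upstream_t *u) {
    if (u->credits == 0) return 0;  /* downstream full: stall this flit */
    u->credits--;                   /* one downstream slot now reserved */
    return 1;
}

void receive_credit(upstream_t *u) {
    u->credits++;                   /* downstream freed a buffer slot */
}

int main(void) {
    upstream_t up = { DOWNSTREAM_BUFFERS };
    int sent = 0, stalled = 0;
    for (int cycle = 0; cycle < 10; cycle++) {
        if (try_send_flit(&up)) sent++; else stalled++;
        /* assume one credit returns every other cycle, a stand-in for the
           downstream drain rate plus the credit round-trip delay */
        if (cycle % 2 == 1) receive_credit(&up);
    }
    printf("sent=%d stalled=%d\n", sent, stalled);
    return 0;
}
```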
On/Off (XON/XOFF) Flow Control
- The downstream node sends an on/off signal to the upstream node
- The "off" threshold (Foff) is set to prevent flits already in flight from overflowing the downstream buffer
- The "on" threshold (Fon) is set so that the downstream node does not run out of flits between turning the signal back on and the next flit arriving
(Timeline figure: Node 1 sends flits until Node 2 reaches the Foff threshold and signals Off; after Node 2 drains to the Fon threshold, it signals On and Node 1 resumes.)
Interconnection Network Performance
Interconnection Network Performance
(Figure: latency vs. injection rate into the network. Labels: zero-load latency (topology + routing + flow control); min latency given by topology; min latency given by routing algorithm; throughput given by topology; throughput given by routing; throughput given by flow control.)
Ideal Latency
- Ideal latency: solely due to wire delay between source and destination
- D = Manhattan distance (the distance between two points measured along axes at right angles)
- v = propagation velocity
- L = packet size
- b = channel bandwidth
(The formula is reconstructed below.)
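The equation on the original slide did not survive extraction; using the symbols defined above, the standard form it describes is:

```latex
T_{\text{ideal}} = \frac{D}{v} + \frac{L}{b}
```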
Actual Latency
- Dedicated wiring is impractical: long wires are segmented with the insertion of routers
- D = Manhattan distance, v = propagation velocity, L = packet size, b = channel bandwidth, H = hops, T_router = router latency, T_c = latency due to contention
(The formula is reconstructed below.)
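Again reconstructing the slide's missing equation from the listed symbols, the standard form adds the per-hop router delay and contention to the ideal latency:

```latex
T_{\text{actual}} = \frac{D}{v} + \frac{L}{b} + H \cdot T_{\text{router}} + T_c
```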
Latency and Throughput Curve
(Figure: latency in cycles (0-60) vs. injected load as a fraction of capacity (0.1-0.9), comparing an ideal network with an on-chip network.)
Network Performance Metrics
- Packet latency
- Round-trip latency
- Saturation throughput
- Application-level performance (system performance): affected by interference among threads/applications
Computer Architecture Lecture 22: Interconnects
Prof. Onur Mutlu, ETH Zürich, Fall 2020
27 December 2020