CS 258 Parallel Computer Architecture
Lecture 8: Network Interface Design
February 20, 2008
Prof. John D. Kubiatowicz
http://www.cs.berkeley.edu/~kubitron/cs258

Network Transaction Primitive
• one-way transfer of information from a source output buffer to a destination input buffer
  – causes some action at the destination
  – occurrence is not directly visible at the source
• deposit data, state change, reply
2/20/08 Kubiatowicz CS 258 ©UCB Spring 2008
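The primitive can be sketched as a transaction that is interpreted entirely at the destination. This is a minimal illustration, not any real machine's interface; the names (`net_txn`, `handle_txn`) and the three action codes are assumptions drawn from the bullet above.

```c
#include <assert.h>

/* A network transaction: one-way transfer into a destination input
 * buffer, causing some action there; the source sees nothing. */
enum action { DEPOSIT_DATA, STATE_CHANGE, REPLY };

typedef struct {
    enum action act;
    int         addr;    /* interpreted by the destination */
    int         value;
} net_txn;

/* Destination node state touched by arriving transactions. */
typedef struct {
    int mem[16];
    int running;
    int replies_seen;
} node;

/* Perform the action the arriving transaction requests. Which action
 * runs is decided entirely at the destination; the source only finds
 * out if a separate REPLY transaction comes back the other way. */
static void handle_txn(node *n, const net_txn *t) {
    switch (t->act) {
    case DEPOSIT_DATA:  n->mem[t->addr] = t->value; break;
    case STATE_CHANGE:  n->running = t->value;      break;
    case REPLY:         n->replies_seen++;          break;
    }
}
```

Note that a request/response protocol is then simply two such one-way transactions, one in each direction.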

Programming Models Realized by Protocols
[Figure: layered view of a parallel machine]
• Parallel applications: CAD, Database, Scientific modeling, Multiprogramming
• Programming models: Shared address, Message passing, Data parallel
• Communication abstraction (user/system boundary): compilation or library; operating systems support
• Communication hardware (hardware/software boundary): physical communication medium
• All realized by network transactions

Shared Address Space Abstraction
• Fundamentally a two-way request/response protocol
  – writes have an acknowledgement
• Issues
  – fixed or variable length (bulk) transfers
  – remote virtual or physical address: where is the action performed?
  – deadlock avoidance and input buffer full
• coherent? consistent?

Consistency
[Figure: P1 writes A and then Flag; P2 and P3 observe]
• write atomicity is violated without caching
  – no way to enforce serialization
• Solution? Acknowledge the write of A before writing Flag…
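The fix above can be made concrete with a small simulation. This is an illustrative sketch, not the lecture's example code: the two locations are assumed to live on different home nodes, so their writes travel independent network paths and may be applied in either order unless the producer waits for the acknowledgement of the first write.

```c
#include <assert.h>

/* Two memory locations on different home nodes: writes to them
 * traverse independent network paths, so without caching there is
 * no way to enforce serialization of when they are applied. */
static int mem_A, mem_Flag;

/* An in-flight write the "network" has not yet applied. */
typedef struct { int *loc; int val; int pending; } in_flight;

static void apply(in_flight *w) {
    if (w->pending) { *w->loc = w->val; w->pending = 0; }
}

/* Unsafe producer: issue write(A) and write(Flag) back to back.
 * The network may legally apply the Flag write first, so a consumer
 * polling Flag can read stale A. */
static int stale_read_unsafe(void) {
    mem_A = 0; mem_Flag = 0;
    in_flight wA = { &mem_A,    5, 1 };
    in_flight wF = { &mem_Flag, 1, 1 };
    apply(&wF);                      /* Flag lands before A */
    int stale = (mem_Flag == 1 && mem_A == 0);
    apply(&wA);
    return stale;
}

/* Safe producer: wait for the acknowledgement of write(A) before
 * even issuing write(Flag). */
static int stale_read_safe(void) {
    mem_A = 0; mem_Flag = 0;
    in_flight wA = { &mem_A, 5, 1 };
    apply(&wA);                      /* ack of A received ... */
    in_flight wF = { &mem_Flag, 1, 1 };
    apply(&wF);                      /* ... only then write Flag */
    return (mem_Flag == 1 && mem_A == 0);
}
```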

Properties of Shared Address Abstraction
• Source and destination data addresses are specified by the source of the request
  – a degree of logical coupling and trust
• no storage logically “outside the address space”
  » may employ temporary buffers for transport
• Operations are fundamentally request/response
• Remote operation can be performed on remote memory
  – logically does not require intervention of the remote processor

Message Passing
• Bulk transfers
• Complex synchronization semantics
  – more complex protocols
  – more complex action
• Synchronous
  – send completes after matching recv and source data sent
  – receive completes after data transfer complete from matching send
• Asynchronous
  – send completes after send buffer may be reused
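The asynchronous completion rule can be sketched as a copy into message-layer storage: once the copy is done the send is "complete" and the user buffer is reusable, even though no matching receive exists yet. The single-slot buffer and names here are illustrative assumptions, not a real message-layer API.

```c
#include <assert.h>
#include <string.h>

#define MSG_MAX 64

/* Message-layer storage: once the payload is copied here, the
 * asynchronous send is complete and the send buffer may be reused,
 * even though no matching receive has been posted yet. */
static char msg_layer_buf[MSG_MAX];
static int  msg_layer_len;

/* Asynchronous send: completes after the send buffer may be reused. */
static void async_send(const char *user_buf, int len) {
    memcpy(msg_layer_buf, user_buf, (size_t)len);   /* copy out */
    msg_layer_len = len;
    /* returns immediately; actual delivery happens later */
}

/* Later, a matching receive drains message-layer storage. */
static int recv_msg(char *user_buf, int max) {
    int n = msg_layer_len < max ? msg_layer_len : max;
    memcpy(user_buf, msg_layer_buf, (size_t)n);
    return n;
}
```

A synchronous send, by contrast, could not return until a matching `recv_msg` had been posted, which is what makes the synchronous model deterministic but more constrained.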

Synchronous Message Passing
[Figure: processor actions during a matched send/receive]
• Constrained programming model. Deterministic!
• What happens when threads are added?
• Destination contention very limited
• User/system boundary?

Asynch. Message Passing: Optimistic
• More powerful programming model
• Wildcard receive => non-deterministic
• Storage required within the message layer?

Asynch. Msg Passing: Conservative
• Where is the buffering?
• Contention control? Receiver-initiated protocol?
• Short message optimizations

Features of Msg Passing Abstraction
• Source knows the send data address, destination knows the receive data address
  – after the handshake they both know both
• Arbitrary storage “outside the local address spaces”
  – may post many sends before any receives
  – non-blocking asynchronous sends reduce the requirement to an arbitrary number of descriptors
    » fine print says these are limited too
• Optimistically, can be a 1-phase transaction
  – compare to 2-phase for shared address space
  – need some sort of flow control
    » credit scheme?
• More conservative: 3-phase transaction
  – includes a request/response
• Essential point: combined synchronization and communication in a single package!
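One way to realize the credit scheme hinted at above: each sender holds a fixed number of credits per destination, one per input-buffer slot the receiver has pre-reserved for it; a send consumes a credit, the acknowledgement returns it, and a sender with zero credits must stall. The constants and names are illustrative assumptions.

```c
#include <assert.h>

#define CREDITS_PER_DEST 4   /* input-buffer slots pre-reserved for us */

typedef struct { int credits; } credit_link;

static void link_init(credit_link *l) { l->credits = CREDITS_PER_DEST; }

/* Try to send one packet: consumes a credit, i.e. one slot the
 * receiver reserved for this sender. Returns 0 on success, -1 if
 * the sender must stall until an ack returns a credit. */
static int credited_send(credit_link *l) {
    if (l->credits == 0)
        return -1;           /* no reserved space left: back off */
    l->credits--;
    return 0;
}

/* The receiver drained one of our packets and freed its slot;
 * the ack carries the credit back. */
static void ack_received(credit_link *l) { l->credits++; }
```

Because the receiver never accepts more than it reserved, input buffers cannot overflow, which is exactly what makes the optimistic 1-phase transaction safe.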

Active Messages
[Figure: request invokes a handler at the destination; the handler may send a reply]
• User-level analog of a network transaction
  – transfer data packet and invoke handler to extract it from the network and integrate it with the on-going computation
• Request/Reply
• Event notification: interrupts, polling, events?
• May also perform memory-to-memory transfer
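The "invoke handler" idea can be sketched with a packet that names the handler to run on arrival; the destination NI dispatches to it directly so the data is folded into the on-going computation rather than buffered. This is a minimal sketch of the mechanism, with assumed names, not the CM-5 or GAM interface.

```c
#include <assert.h>

/* An active message: the packet names the handler to run on arrival. */
typedef void (*am_handler)(void *state, int arg);

typedef struct {
    am_handler handler;
    int        arg;
} am_packet;

/* Example handler: integrates the incoming value into an on-going
 * computation (here, a running sum) instead of buffering it. */
static void sum_handler(void *state, int arg) {
    *(int *)state += arg;
}

/* Destination network interface: on arrival, vector immediately to
 * the user-level handler named in the packet. */
static void am_deliver(const am_packet *p, void *state) {
    p->handler(state, p->arg);
}
```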

Common Challenges
• Input buffer overflow
  – N-to-1 queue over-commitment => must slow sources
• Options:
  – reserve space per source (credit)
    » when is it available for reuse? Ack or higher level
  – refuse input when full
    » backpressure in reliable network
    » tree saturation
    » deadlock free
    » what happens to traffic not bound for the congested destination?
  – reserve ack back channel
  – drop packets
  – utilize higher-level semantics of the programming model

The Fetch Deadlock Problem
• Even if a node cannot issue a request, it must sink network transactions!
  – an incoming transaction may be a request that generates a response
  – closed system (finite buffering)
• Deadlock can occur even if the network is deadlock-free!

Solutions to Fetch Deadlock?
• logically independent request/reply networks
  – physical networks
  – virtual channels with separate input/output queues
• bound requests and reserve input buffer space
  – K(P-1) requests + K responses per node
  – service discipline to avoid fetch deadlock?
• NACK on input buffer full
  – NACK delivery?
• Alewife solution:
  – dynamically increase buffer space to memory when necessary
  – argument: this is an uncommon case, so use software to fix it
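A worked instance of the reservation bound: with P nodes, each allowed K outstanding requests, a node can receive at most K requests from each of the other P-1 nodes plus K responses to its own requests, so reserving K(P-1) + K input slots guarantees it can always sink every incoming transaction. The concrete numbers below are illustrative.

```c
#include <assert.h>

/* Input-buffer slots a node must reserve so it can always sink every
 * incoming transaction: K requests from each of the other P-1 nodes,
 * plus K responses to its own outstanding requests. */
static long reserved_slots(long P, long K) {
    return K * (P - 1) + K;
}
```

The reservation grows linearly with P (e.g. 256 slots for P = 64, K = 4), which is why the NACK option or Alewife's software extension into memory may be preferred at scale.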

Network Transaction Processing
[Figure: scalable network; each node is a processor P and memory M joined by a Communication Assist (CA)]
• Output processing: checks, translation, formatting, scheduling
• Input processing: checks, translation, buffering, action
• Key design issues:
  – How much interpretation of the message?
  – How much dedicated processing in the Communication Assist?

Spectrum of Designs
• None: physical bit stream
  – blind, physical DMA: nCUBE, iPSC, ...
• User/System
  – user-level port: CM-5, *T, Alewife
  – user-level handler: J-Machine, Monsoon, ...
• Remote virtual address
  – processing, translation: Paragon, Meiko CS-2
• Global physical address
  – proc + memory controller: RP3, BBN, T3D
• Cache-to-cache
  – cache controller: Dash, Alewife, KSR, Flash
Increasing hardware support, specialization, intrusiveness, performance (???)

Net Transactions: Physical DMA
[Figure: DMA into and out of physical memory; kernel checks sender auth and dest addr]
• DMA controlled by registers, generates interrupts
• Physical => OS initiates transfers
• Send-side
  – construct system “envelope” around user data in kernel area
• Receive
  – must receive into system buffer, since no interpretation in CA

nCUBE Network Interface
• independent DMA channel per link direction
  – leave input buffers always open
  – segmented messages
• routing interprets envelope
  – dimension-order routing on hypercube
  – bit-serial with 36-bit cut-through
• Costs: Os = 16 instructions, 260 cycles (13 µs); Or = 200 cycles (15 µs; 18 µs including interrupt)

Conventional LAN NI
[Figure: NIC on the I/O bus with controller, TX/RX FIFOs, and descriptor lists (Addr, Len, Status, Next); DMA between NIC and host memory]
• Costs: marshalling, OS calls, interrupts

User Level Ports
• initiate transaction at user level
• deliver to user without OS intervention
• network port in user space
  – may use virtual memory to map physical I/O to user mode
• user/system flag in envelope
  – protection check, translation, routing, media access in source CA
  – user/system check in destination CA, interrupt on system

Example: CM-5
• input and output FIFO for each network
• 2 data networks
• tag per message
  – indexes NI mapping table
• context switching?
• Alewife integrated the NI on chip
• *T and iWARP also
• Costs: Os = 50 cycles (1.5 µs); Or = 53 cycles (1.6 µs); interrupt: 10 µs

User Level Handlers
[Figure: message carries user/system flag, data, address, destination; delivered directly into the processor]
• Hardware support to vector to the address specified in the message
  – on arrival, hardware fetches the handler address and starts execution
• Active Messages: two options
  – Computation in background threads
    » handler never blocks: it integrates the message into the computation
  – Computation in handlers (Message Driven Processing)
    » handler does the work, may need to send messages or block

J-Machine
• Each node is a small message-driven processor
• Hardware support to queue messages and dispatch to a message handler task

Alewife Messaging
• Send message
  – write words to special network interface registers
  – execute atomic launch instruction
• Receive
  – generate interrupt / launch user-level thread context
  – examine message by reading from special network interface registers
  – execute dispose message
  – exit atomic section

iWARP
[Figure: host interface unit connecting iWARP cells]
• Nodes integrate communication with computation on a systolic basis
• Message data goes directly to a register of the neighbor
• Stream into memory

Sharing of Network Interface
• What if the user is in the middle of constructing a message and must context switch???
  – Need an atomic send operation!
    » message is either completely in the network or not at all
    » can save/restore the user’s work if necessary (think about a single set of network interface registers)
  – J-Machine mistake: after starting to send a message, must let the sender finish
    » flits start entering the network with the first SEND instruction
    » only a SENDE instruction constructs the tail of the message
• Receive atomicity
  – If we want to allow user-level interrupts or polling, we must give the user control over network reception
    » the closer the user is to the network, the easier it is for him/her to screw it up: refuse to empty the network, etc.
    » however, must allow atomicity: a way for a well-behaved user to select when their message handlers get interrupted
  – Polling: ultimate receive atomicity (never interrupted)
    » fine as long as the user keeps absorbing messages
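The atomic-launch discipline can be sketched by composing a message in a staging area (standing in for the NI registers) and committing it to the network only on an explicit launch, so a context switch mid-composition can save and restore the registers without anything leaking into the network. This is an illustrative model in the spirit of Alewife's launch instruction, not its actual interface; all names are assumed.

```c
#include <assert.h>
#include <string.h>

#define NI_REGS 8

/* Staging area standing in for the network interface registers. */
static int ni_regs[NI_REGS];
static int ni_count;

/* The "network": sees only complete, atomically committed messages. */
static int network[NI_REGS];
static int network_len;

static void ni_write(int word) {         /* compose, word by word */
    ni_regs[ni_count++] = word;
}

/* Atomic launch: the whole message enters the network, or none of it.
 * Contrast the J-Machine, where flits entered the network with the
 * first SEND and the sender had to be allowed to finish. */
static void ni_launch(void) {
    memcpy(network, ni_regs, sizeof(int) * (size_t)ni_count);
    network_len = ni_count;
    ni_count = 0;
}

/* Context switch mid-composition: save, and later restore, the single
 * set of NI registers; nothing has touched the network yet. */
static void ni_save(int *save, int *save_count) {
    memcpy(save, ni_regs, sizeof ni_regs);
    *save_count = ni_count;
    ni_count = 0;
}
static void ni_restore(const int *save, int save_count) {
    memcpy(ni_regs, save, sizeof ni_regs);
    ni_count = save_count;
}
```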

Dedicated processing without dedicated hardware design

Dedicated Message Processor
[Figure: network connects nodes; each node has memory, NI, a user processor P, and a system message processor MP]
• General-purpose processor performs arbitrary output processing (at system level)
• General-purpose processor interprets incoming network transactions (at system level)
• User processor <–> msg processor: share memory
• Msg processor <–> msg processor: via system network transaction

Levels of Network Transaction
[Figure: user processor P and message processor MP share memory and an NI on each node]
• User processor stores cmd / msg / data into shared output queue
  – must still check for output queue full (or make it elastic)
• Communication assists make the transaction happen
  – checking, translation, scheduling, transport, interpretation
• Effect observed on destination address space and/or events
• Protocol divided between the two layers
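The output-queue-full check can be sketched as a bounded ring buffer shared between the compute processor (producer) and the message processor (consumer). The sizes and names below are illustrative assumptions.

```c
#include <assert.h>

#define OQ_SLOTS 4

/* Shared output queue between the compute processor (producer)
 * and the message processor (consumer). */
typedef struct {
    long cmds[OQ_SLOTS];
    int head, tail, count;
} output_queue;

/* Compute processor: store a command descriptor, but check for queue
 * full first -- otherwise it would overwrite work the message
 * processor has not yet drained. */
static int oq_post(output_queue *q, long cmd) {
    if (q->count == OQ_SLOTS)
        return -1;               /* full: spin, or grow elastically */
    q->cmds[q->tail] = cmd;
    q->tail = (q->tail + 1) % OQ_SLOTS;
    q->count++;
    return 0;
}

/* Message processor: drain one command and carry out the transaction
 * (checking, translation, scheduling, transport). */
static int oq_drain(output_queue *q, long *cmd) {
    if (q->count == 0)
        return -1;
    *cmd = q->cmds[q->head];
    q->head = (q->head + 1) % OQ_SLOTS;
    q->count--;
    return 0;
}
```

Making the queue "elastic" would replace the -1 return in `oq_post` with an allocation of additional backing store, trading the full check for memory pressure.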

Example: Intel Paragon
[Figure: node organization. Service network, I/O nodes and devices; 175 MB/s duplex network links; NI with 2048 B buffering; i860xp processors at 50 MHz with 16 KB 4-way caches, 32 B blocks, MESI; memory shared over a 400 MB/s bus; sDMA and rDMA engines; one processor acts as message processor (MP) running the handler]

User Level Abstraction (Lok Liu)
[Figure: per-process input and output queues (IQ, OQ) in each virtual address space (VAS)]
• Any user process can post a transaction for any other in the protection domain
  – communication layer moves OQsrc –> IQdest
  – may involve indirection: VASsrc –> VASdest
• See, for instance:
  – “Remote Queues: Exposing Message Queues for Optimization and Atomicity,” Eric A. Brewer, Frederic T. Chong, Lok T. Liu, Shamik D. Sharma, John D. Kubiatowicz, SPAA 1995

Msg Processor Events
[Figure: the dispatcher on the message processor reacts to events from the user output queues, kernel/system events from the compute processor, Rcv FIFO ~full, DMA done, send DMA, Rcv DMA, and Send FIFO ~empty]

Basic Implementation Costs: Scalar
[Figure: scalar transfer path (CP registers, 7 words -> cache -> net FIFO -> net -> MP) with per-stage costs of roughly 1.5-2 µs each through user OQ and user IQ; ~10.5 µs end to end, 4.4 µs and 5.4 µs components, network time 250 ns + H*40 ns]
• Cache-to-cache transfer (two 32 B lines, quad-word ops)
  – producer: read(miss, S), chk, write(S, WT), write(I, WT), write(S, WT)
  – consumer: read(miss, S), chk, read(H), read(miss, S), read(H), write(S, WT)
• to NI FIFO: read status, chk, write, ...
• from NI FIFO: read status, chk, dispatch, read, ...

Virtual DMA -> Virtual DMA
[Figure: sDMA/rDMA path through memory, CP registers (7 words), cache, and net FIFOs; user OQ and user IQ of 2048 entries; per-stage MP costs as before; 400 MB/s node buses, 175 MB/s network]
• Send MP segments transfers into 8 KB pages and does VA –> PA translation
• Recv MP reassembles, does dispatch and VA –> PA per page

Single Page Transfer Rate
[Figure: measured single-page transfer rate. Effective buffer size: 3232; actual buffer size: 2048]

Msg Processor Assessment
[Figure: VAS with user output and input queues; dispatcher events as on the earlier slide]
• Concurrency intensive
  – need to keep inbound flows moving while outbound flows are stalled
  – large transfers segmented
• Reduces overhead but adds latency

Conclusion
• Shared address space
  – request/response protocol
  – global names for memory locations specify nodes
• Many different message-passing styles
  – global address space: 2-way
  – optimistic message passing: 1-way
  – conservative transfer: 3-way
• “Fetch deadlock”
  – request/response introduces a cycle through the network
  – fix with:
    » 2 networks
    » dynamic increase in buffer space
• Network interfaces
  – user-level access
  – DMA
  – atomicity