SHiP Data Acquisition

Giovanna Lehmann Miotto, CERN EP/DT-DI, on behalf of the DAQ team
SHIP Collaboration Meeting, 08.06.17

Slide 2: Outline
• Architecture overview
• Design options for back-end data flow
• Design options for detector FE interface
• Summary and outlook

Slide 3: SHiP TDAQ Architecture
Main components:
• Front End (FE) electronics producing data
• Timing controller (TFC)
• Front End Host processes (FEH)
• Event Filter processes (EFF)
• Switched network, PCs, storage

Notes:
• The FEs interface directly with a dedicated host computer (no network switch).
• The FEHs pack the data frames for the EFFs.
• EFF and FEH processes may share the CPU.
• SHiP data is processed on an 'elected' node on a per-SHiP-cycle basis; an SPS extraction spill is always fully contained in a SHiP cycle (see the sketch below).

[Diagram: FEs connected over custom optical links (GBT); the TFC distributes the SPS Start-of-Spill clock; FEH/EFF nodes exchange data frames over the network, steered by PC availability, destination address, and throttle signals.]
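As a reading aid, a minimal, purely illustrative sketch of this topology in Python: the partition names, card counts, farm size, and the round-robin election are all assumptions, not SHiP parameters (the real election policy is discussed on slide 6).

```python
# Illustrative model of the logical topology; all names and numbers
# are invented for the example, not taken from the SHiP design.
from dataclasses import dataclass, field

@dataclass
class FECard:
    card_id: int            # front-end card producing data frames

@dataclass
class FEH:
    """Front End Host: one dedicated computer per detector partition,
    directly connected to its FE cards (no network switch in between)."""
    partition: str
    cards: list = field(default_factory=list)

@dataclass
class EFF:
    """Event Filter process: receives all the data of one SHiP cycle."""
    node: str

# Example wiring: two partitions, three FE cards each, a small EFF farm.
fehs = [FEH(p, [FECard(i) for i in range(3)]) for p in ("tracker", "muon")]
effs = [EFF(f"eff{n:02d}") for n in range(4)]

def elect(cycle_number: int) -> EFF:
    """One EFF is 'elected' per SHiP cycle; since a spill is fully
    contained in a cycle, the elected node sees the whole spill."""
    return effs[cycle_number % len(effs)]   # trivial round-robin stand-in

print(elect(7).node)  # -> eff03
```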

Slide 4: Data Flow (logical view)
1. Data collected by the FE cards is sent to a Front End Host process.
2. Data is either sent in small packets (send and forget) or packed into larger packets.
3. The Front End Host process merges the data from a partition into data frames (see the sketch below).
4. The data from all partitions is sent over the network to an 'elected' EFF process.
5. Each EFF process digests the data for one SHiP cycle, localizes trigger candidates, applies the trigger selection, and produces the physics output stream.

[Diagram: per detector partition, the FE cards feed a data frame merger (data frame builder); all partitions connect over the network to the spill merger and event filter processes.]
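A minimal sketch of steps 1-3 on the FEH side; the 8-byte packet header, field widths, and payloads are invented for illustration, not a SHiP data format.

```python
# Sketch of the FEH-side frame merging (assumed, illustrative format).
import struct
from collections import defaultdict

class FrameBuilder:
    """Merges per-FE-card packets of one partition into partition data frames."""
    def __init__(self, partition):
        self.partition = partition
        self.pending = defaultdict(bytearray)   # cycle -> accumulated payload

    def on_packet(self, cycle, card_id, payload):
        # Step 2: small packets arrive 'send and forget'; just accumulate them.
        header = struct.pack("<IHH", cycle, card_id, len(payload))
        self.pending[cycle] += header + payload

    def build_frame(self, cycle):
        # Step 3: emit one merged data frame for the cycle and drop the buffer.
        return bytes(self.pending.pop(cycle, b""))

fb = FrameBuilder("tracker")
fb.on_packet(cycle=1, card_id=0, payload=b"\x01\x02")
fb.on_packet(cycle=1, card_id=1, payload=b"\x03")
print(len(fb.build_frame(1)))  # 2 packets with 8-byte headers -> 19 bytes
```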

Slide 5: From Architecture to Design
• In order to design the TDAQ system, requirements still need to be specified in many areas:
  - number of FE cards and links
  - expected data sizes (mean and peak)
  - …
• Another important ingredient to the design is the definition of the interface to the FE cards.
• We started evaluating design options in two areas:
  - back-end data flow
  - interfaces to the FE

Slide 6: Assignment of EFF for cycle data
• EFF processes receive the data corresponding to a full cycle:
  - processing done on the fly, or
  - temporary data storage, for multi-level or staged processing.
• The FEHs need to know which EFF is best suited to receive the data for a cycle; either
  - a data flow manager (possibly integrated with the TFC) assigns cycles to EFF nodes and notifies the decision to the FEHs, or
  - a data flow manager (possibly integrated with the TFC) assigns cycles to EFF nodes and notifies the chosen EFF node, which then pulls the data from the FEHs.
• The algorithm to select the best EFF will be a function of the nodes' available processing and storage capacity (see the sketch below).
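A minimal sketch of such a capacity-based selection, assuming the data flow manager tracks spare cores and disk per node; the names, fields, and figures are invented for the example.

```python
# Illustrative cycle -> EFF assignment based on spare capacity.
from dataclasses import dataclass

@dataclass
class EFFNode:
    name: str
    free_cores: int      # available processing capacity
    free_disk_gb: float  # available temporary storage

def select_eff(nodes):
    """Pick the node with the most spare capacity (cores first, then disk)."""
    return max(nodes, key=lambda n: (n.free_cores, n.free_disk_gb))

nodes = [EFFNode("eff01", 4, 800.0), EFFNode("eff02", 12, 200.0)]
chosen = select_eff(nodes)
# Option A: notify all FEHs of the decision (they push to `chosen`).
# Option B: notify only `chosen`, which then pulls the data from the FEHs.
print(chosen.name)  # -> eff02
```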

Slide 7: EFF internal organization
Two possible layouts:
• Cycle builder feeding a queue of cycle data: one process deals with the FEHs, and many identical, independent analysis processes consume from the queue (sketched below).
• Combined 'cycle builder & analysis' processes, backed by temporary storage.

[Diagram: a cycle builder filling a queue drained by analysis processes, versus combined cycle builder & analysis processes with temporary storage.]
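A minimal sketch of the first layout, with a multiprocessing queue standing in for the real transport; the worker count, payloads, and cycle count are placeholders.

```python
# Illustrative cycle-builder / analysis-pool layout on one EFF node.
import multiprocessing as mp

N_WORKERS = 3  # hypothetical; the real count would match the node's CPUs

def cycle_builder(queue, n_cycles):
    # Stand-in for the process talking to the FEHs: one blob per cycle.
    for cycle in range(n_cycles):
        queue.put((cycle, b"cycle-data"))   # placeholder payload
    for _ in range(N_WORKERS):
        queue.put(None)                     # poison pills to stop the workers

def analysis(queue):
    # Identical, independent workers: trigger search and selection go here.
    while (item := queue.get()) is not None:
        cycle, data = item
        print(f"analyzed cycle {cycle} ({len(data)} bytes)")

if __name__ == "__main__":
    q = mp.Queue()
    workers = [mp.Process(target=analysis, args=(q,)) for _ in range(N_WORKERS)]
    for w in workers:
        w.start()
    cycle_builder(q, n_cycles=8)
    for w in workers:
        w.join()
```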

Slide 8: Defining the FE interfaces
• Define and agree on interfaces early on, allowing the detector communities and DAQ to develop independently towards a compatible solution.
• Keep the FE as simple as possible and move complexity off-detector.
• Make clear interfaces that do not pre-empt the freedom to profit from the latest technologies in the DAQ.
• If possible, limit the number of different physical interfaces to the FE.
• Two approaches are presented in the next slides; feedback from the detector electronics experts is appreciated.

Slide 9: Interfaces to the Front End
Define the interfaces early:
• Readout (FEH)
• Timing, fast control (TFC)
• Slow control

Choose as late as possible:
• Computing architecture
• Storage architecture
• Network topology and technology

[Diagram: FE cards connected to the Readout (FEH), TFC, and Slow Control blocks; the EFF side is still open ('?').]

Slide 10: Option 1, "Traditional Approach"
• Three physical interfaces to the FE, potentially with three different technologies and protocols.

[Diagram: each FE card with separate links to Readout (FEH), TFC, and Slow Control.]

Slide 11: Option 2, "Integrated Approach"
• One physical interface to the FE (not necessarily only one protocol).
• Different technologies remain possible on the TDAQ / slow control side.

[Diagram: each FE card with a single link into a common layer that fans out to Readout (FEH), TFC, and Slow Control.]

Slide 12: What is the "box", and what is the link?
• The box could be a set of servers hosting PCIe cards and a switching network, similar to the architecture being chosen by 3 out of 4 LHC experiments for their upgrades.
• The bidirectional protocol to/from the FE may be GBT.
  - Alternatives may be considered, but GBT ensures synchronicity for the timing-signal distribution and has Altera and Xilinx implementations that are supported long-term (by the LHC experiments).

[Diagram: ATLAS and LHCb upgrade readout architectures as examples.]

Slide 13: Comparing Options

Option 1:
• No coherence required between timing distribution, data readout, and slow control.
• Possible to choose the simplest technology for each.
• No need for an intermediate HW layer.

Option 2:
• Agreement on a single physical interface/protocol(s) with the FE developers.
• Allows maximum flexibility in the technologies and topology of the DAQ.
• Though not mandatory for SHiP, it is a solution that can be applied at CERN to many experiments in radiation environments.

Are both options feasible from a FE point of view?

Slide 14: Summary and outlook
• The overall TDAQ architecture has been defined and documented in the note http://cds.cern.ch/record/2162870
• Major input is still needed from the detectors to be able to start designing the TDAQ system: number of links, data sizes, expected data rates, …
• In the meantime, TDAQ design options are being evaluated:
  - back-end data flow and EFF internal organization
  - interfaces to the FE
• In parallel, effort is being put into simulating the TDAQ.
• Feedback on the options for defining the FE interfaces is very welcome.