A Level-2 trigger algorithm for the identification of muons in the ATLAS Muon Spectrometer

The ATLAS High Level Trigger Group. Presented by Alessandro Di Mattia, INFN and University "La Sapienza", Roma. Computing in High Energy Physics, Interlaken, September 26-30, 2004.


Outline:
• The ATLAS trigger
• The μFast algorithm
• Relevant physics performance
• Implementation in the Online framework
• Latency of the algorithm
• Conclusions


The ATLAS trigger
• Level-1: hardware trigger; latency 2.5 μs; output rate 75 kHz.
• High Level Triggers (HLT) = Level-2 + Event Filter: software triggers; target processing times ~10 ms (Level-2) and ~1 s (Event Filter); final output rate ~200 Hz.


Standalone muon reconstruction at Level-2
Tasks of the Level-2 muon trigger:
• Confirm the Level-1 trigger with a more precise pT estimate within a "Region of Interest" (RoI).
• Contribute to the global Level-2 decision.
To perform the muon reconstruction, the RoI data are gathered together and processed in three steps:
1) "Global pattern recognition", involving the trigger chambers and the positions of the MDT tubes (no use of drift time);
2) "Track fit", involving the drift-time measurements, performed for each MDT chamber;
3) Fast "pT estimate" via a look-up table (LUT), with no use of time-consuming fit methods.
Result: η, φ, direction of flight in the spectrometer, and pT at the interaction vertex.
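The three steps above can be sketched as a minimal pipeline. All names, units and data shapes here are illustrative assumptions, not the actual μFast interfaces:

```python
# Minimal sketch of the three-step Level-2 reconstruction described
# above. All names, units and coefficient values are illustrative.

def global_pattern_recognition(trigger_hits, mdt_tube_positions):
    """Step 1: rough trajectory from trigger-chamber hits and MDT
    tube positions only (drift times are not used at this stage)."""
    return {"trajectory": sorted(trigger_hits + mdt_tube_positions)}

def track_fit(pattern, drift_times):
    """Step 2: per-chamber straight-line fit using drift times;
    here reduced to attaching a placeholder sagitta in mm."""
    pattern["sagitta_mm"] = 150.0
    return pattern

def pt_estimate(track, lut_a=3000.0, lut_b=0.0):
    """Step 3: LUT-based estimate, linear in 1/s (no iterative fit)."""
    return lut_a / track["sagitta_mm"] + lut_b

track = track_fit(global_pattern_recognition([1.0], [2.0, 3.0]), [])
pt = pt_estimate(track)  # 20.0 with the placeholder coefficients
```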


Global pattern recognition
[Figure: approximated muon trajectory, available after the Level-1 emulation.]
• Use the Level-1 simulation code to select the RPC trigger pattern: a valid coincidence in the low-pT CMA.
• One hit from each trigger station is required to start the pattern recognition on the MDT data.


Muon roads and the "contiguity algorithm"
• Define "μ-roads" around this trajectory in each chamber.
• Collect the hit tubes within the roads, using the residual of the tube with respect to the trajectory.
• Apply a contiguity algorithm to further remove background hits inside the roads.
ε(muon hits) = 96%; background hits ~ 3%.
[Figures: low-pT (~6 GeV) and high-pT (~20 GeV) examples.]
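A toy version of the road selection and contiguity cut might look as follows; the road width, the residual definition and the contiguity window are assumptions for illustration, not the actual μFast values:

```python
# Toy road-based hit selection with a contiguity cut, as described
# above. Positions are 1-D residuals; all cut values are invented.

def select_road_hits(tube_residuals, road_center, road_half_width):
    """Keep tubes whose residual w.r.t. the approximate trajectory
    lies inside the muon road."""
    return [t for t in tube_residuals
            if abs(t - road_center) <= road_half_width]

def contiguity_filter(hits, max_gap):
    """Drop isolated hits: keep a hit only if another hit lies
    within max_gap of it (a simple contiguity requirement)."""
    return [h for h in hits
            if any(other is not h and abs(other - h) <= max_gap
                   for other in hits)]
```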


Track fit
Use the drift-time measurements to fit the best straight line crossing all points. Compute the track bending using the sagitta method: three points are required. For a given chamber the muon sagitta sμ is:
sμ ~ 150 mm for muon pT = 20 GeV
sμ ~ 500 mm for muon pT = 6 GeV
Other effects are small with respect to sμ.
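The sagitta of three measured points is simply the distance of the middle point from the chord through the outer two; a minimal sketch (coordinates and units are illustrative):

```python
import math

def sagitta(p_inner, p_middle, p_outer):
    """Distance of the middle point from the straight line through
    the inner and outer points: the track sagitta."""
    (x1, y1), (x2, y2), (x3, y3) = p_inner, p_middle, p_outer
    # point-to-line distance via the cross product of the two chords
    num = abs((x3 - x1) * (y1 - y2) - (x1 - x2) * (y3 - y1))
    den = math.hypot(x3 - x1, y3 - y1)
    return num / den
```

Note that the quoted values are consistent with the sagitta scaling as 1/pT: 150 mm x 20 GeV = 500 mm x 6 GeV = 3000 mm GeV.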


pT estimate
Prepare look-up tables (LUTs) as a set of relations between values of s and pT for different (η, φ) regions (s = f(η, φ, pT)): 30 x 60 (η, φ) tables for each detector octant. Use a linear relation between 1/s and pT to estimate pT.
Performance, including background simulation for the high-luminosity environment: resolution comparable with the ATLAS reconstruction program (within a factor of about 2); track-finding efficiency of about 97% for muons.
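A minimal sketch of the LUT lookup: the 30 x 60 (η, φ) binning per octant follows the slide, while the coefficient values and the exact linear form pT = A/s + B are assumptions for illustration:

```python
# LUT-based pT estimate, linear in 1/s within each (eta, phi) bin.
# Binning follows the slide; coefficients A, B are placeholders.

N_ETA, N_PHI = 30, 60   # bins per detector octant

def lut_index(eta_bin, phi_bin):
    """Flatten the (eta, phi) bin pair into a flat table index."""
    return eta_bin * N_PHI + phi_bin

def estimate_pt(lut, eta_bin, phi_bin, sagitta_mm):
    """No iterative fit: just one table lookup and one division."""
    a, b = lut[lut_index(eta_bin, phi_bin)]
    return a / sagitta_mm + b

# One octant filled with a single dummy coefficient pair:
octant_lut = [(3000.0, 0.0)] * (N_ETA * N_PHI)
```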


Trigger rates (barrel)

Low pT:
  Source        L1 rate (kHz)   L2 rate (kHz)
  K/π decays    7.9             3.1
  b decays      1.7             1.0
  c decays      1.0             0.5
  Fake L1       1.0             negligible
  Total         10.6            4.6

High pT:
  Source        L1 rate (kHz)   L2 rate (kHz)
  K/π decays    0.68            0.04
  b decays      0.50            0.06
  c decays      0.21            0.02
  W decays      0.03            0.02
  Fake L1       negligible      negligible
  Total         1.42            0.15


HLT Event Selection Software
• The HLT Selection Software (HLTSSW) is based on the ATHENA/GAUDI framework: it reuses offline components and is common to Level-2 and the Event Filter.
• Main components: Steering, HLT Algorithms, Event Data Model, Data Manager (ROB Data Collector), MetaDataService, MonitoringService; event data are held in StoreGate.
• The L2PU application hosts the Level-2 processing task; the offline reconstruction algorithms are used in the Event Filter.
[Figure: package diagram showing the dependencies between the HLT Data Flow Software, the HLT Selection Software and the Offline Core Software.]


The RPC bytestream reflects the organization of the trigger logic:
– ROD -> Rx -> PAD -> Coincidence Matrix (CMA) -> CMA channel;
– 1 ROD = 2 Sector Logic = 2 Rx; the RPC detector is read out by 64 Logic Sectors;
– up to 7 PADs in one Rx; up to 8 CMAs in one PAD (4 per view);
– CMA channels = 32/64 depending on the CMA side (pivot/confirm);
– 1 CMA makes coincidences between RPC planes in a 3-dimensional area.
The Level-1 RoI is the intersection of a CMA processing the RPC η view with a CMA processing the RPC φ view inside one PAD.
[Figure: confirm plane (high pT), pivot plane, confirm plane (low pT). Only odd-numbered CMAs are shown; CMAs overlap in the confirm planes, but not in the pivot plane.]
There is no way to fit the RPC bytestream into the RPC detector modules!


RPC RDO definition
• Different types are needed:
– "bare" RDOs: the persistent representation of the bytestream; they contain the raw data from Level-1 and are used by μFast to run the Level-1 emulation on one RoI;
– "prepared" RDOs (or RIOs, Reconstruction Input Objects): obtained from the RDOs by manipulating the data to resolve the overlap regions and to associate space positions to the hits; used by the offline reconstruction.
BARE: a convenient way of organizing the RDOs in the IDC is according to the PAD (PAD -> Coincidence Matrix -> fired CMA channel, with up to 8 CMs per PAD). Data requests are simplified thanks to the close correspondence between PAD and RoI, and the data are strictly limited to the needed ones: no overhead is introduced in the data decoding.
PREPARED: stored in StoreGate in a hierarchical structure, as defined by the offline identifiers, up to the RPC chamber modules.
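The PAD-keyed container could be sketched like this; the class and method names are invented for illustration (the real code uses the ATHENA IDC):

```python
# Sketch of an IDC-like container for "bare" RPC RDOs, keyed by PAD
# id as described above. Up to 8 CMs per PAD (from the slide); the
# API is hypothetical.

from collections import defaultdict

class RpcPadContainer:
    def __init__(self):
        self._pads = defaultdict(list)   # pad_id -> list of CM payloads

    def add_cm(self, pad_id, cm_fired_channels):
        assert len(self._pads[pad_id]) < 8   # up to 8 CMs per PAD
        self._pads[pad_id].append(cm_fired_channels)

    def pads_for_roi(self, pad_ids):
        """The close PAD <-> RoI correspondence makes an RoI data
        request a plain lookup of a few PAD ids."""
        return {p: self._pads[p] for p in pad_ids if p in self._pads}
```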


MDT bytestream organization: ROD -> Chamber Service Module (CSM) -> TDC -> TDC channel
– 1 ROD = 1 trigger tower (φ x η x r = 1 x 2 x 3);
– 1 CSM reads 1 MDT chamber; one CSM can have up to 18 TDCs;
– 1 AMT (ATLAS Muon TDC) can have up to 24 channels (= "tubes").
Fitting the MDT bytestream into the MDT detector modules is trivial.
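The readout hierarchy above maps naturally onto plain data classes; the limits (18 TDCs per CSM, 24 channels per AMT) come from the slide, while the class and field names are illustrative:

```python
# Sketch of the MDT readout hierarchy as plain data classes.
# Names are invented; the multiplicity limits follow the slide.

from dataclasses import dataclass, field

MAX_TDC_PER_CSM = 18
MAX_CHANNELS_PER_AMT = 24

@dataclass
class AmtHit:
    channel: int      # tube number within the AMT, 0..23
    drift_time: int   # raw TDC count

@dataclass
class Csm:
    chamber_id: str   # 1 CSM reads exactly 1 MDT chamber
    hits: list = field(default_factory=list)

    def add_hit(self, tdc: int, hit: AmtHit):
        assert tdc < MAX_TDC_PER_CSM
        assert hit.channel < MAX_CHANNELS_PER_AMT
        self.hits.append((tdc, hit))
```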


MDT RDO definition
• Different types are needed:
– "bare" RDOs: the persistent representation of the bytestream; they contain the MDT raw data and are used by μFast to confirm the Level-1 RoI;
– "prepared" RDOs: contain refined information (drift time, calibrated time, radius, error).
BARE: a convenient way of organizing the RDOs in the IDC is according to the CSM (CSM -> AMT hit, i.e. the MdtAmtHit data word), because a CSM can be closely matched both to a detector element and to the trigger tower read-out. No ordering is foreseen for the AMT data words, so data access according to chambers is not efficient: optimization is needed.
PREPARED: stored in StoreGate with the same structure as the RDOs, but containing a list of offline MDT digits.


Standard MDT data access scheme: use the Level-1 muon RoI information plus the Region Selector.
[Figure: ~7 MDT chambers have to be accessed per Level-1 RoI; the tail of the accessed-chambers distribution is critical for the MDT converter timing.]


Improved MDT data access scheme: use the muon roads (the approximated muon trajectory, available after the Level-1 emulation) plus a new Region Selector.
[Figure: the roads (widths < 50 cm, ~5 cm and < 40 cm in the three stations) are much narrower than an RoI; 3 MDT chambers have to be accessed, up to 6 when the roads overlap two chambers. Only three MDT chambers are accessed in most cases.]


Bytestream dataflow
[Figure: the ROD emulation/simulation produces the bytestream; the RDO converter feeds μFast via the L2 detector description (readout cabling, detector geometry, online vs offline map), while the RIO converter feeds the EF and offline reconstruction via the offline detector description.]
μFast uses a dedicated detector-description code to reconstruct the RDOs:
– standalone implementation, to ease the integration into the L2 environment;
– detector geometry organized according to the readout hierarchy;
– minimal use of STL containers.


μFast implementation
• The processing tasks are implemented by "process" classes:
– they act on "C style" data structures; no use of EDM classes;
– process versioning is implemented through inheritance: ProcessBase (pure virtual interface) -> ProcessStd (concrete implementation of the data structure, I/O and printouts) -> ProcessTYP (concrete implementation of the task type).
• The "sequence" classes manage the execution of the processes and publish the data structures towards the processes and the other sequences:
– they provide the interfaces to the framework components: MessageSvc, TimerSvc, etc.
Minimal use of STL containers. No memory allocation on demand.
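The process/sequence split above can be transcribed as a small sketch; the real μFast code is C++ acting on C-style structs, so the names and the Python form here are purely illustrative:

```python
# Illustrative transcription of the "process"/"sequence" pattern
# described above. All class names are invented for the sketch.

class ProcessBase:
    """Pure-virtual-style interface: concrete processes implement run()."""
    def run(self, data):
        raise NotImplementedError

class PatternRecognition(ProcessBase):
    """A concrete process: reads and updates the shared data structure."""
    def run(self, data):
        data["pattern"] = [h for h in data["hits"] if h > 0]

class Sequence:
    """Owns the shared data structure and executes its processes in
    order; the framework glue (MessageSvc, TimerSvc) is omitted."""
    def __init__(self, processes):
        self.processes = processes
        self.data = {"hits": []}   # plays the role of the C-style struct

    def execute(self, hits):
        self.data["hits"] = hits
        for p in self.processes:
            p.run(self.data)
        return self.data
```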


μFast sequence diagram
[Figure: sequence diagram of the μFast sequences against the framework infrastructure: RoI reconstruction (RPC data access through the IDC for the RPC PADs, each PAD holding up to 8 CMs), Level-1 emulation (producing the trigger pattern), RPC pattern recognition (producing the muon roads), MDT data access (through the IDC for the CSMs, producing the prepared digits), MDT pattern recognition, feature extraction (producing the muon features) and monitoring (filling histograms).]


μFast and total latency
Optimized code run on a Pentium III @ 2.3 GHz. Physics: single muons, pT = 100 GeV; cavern background: high luminosity x 2.
• The μFast latency is the CPU time taken by the algorithm, without the data access/conversion time: the presence of the cavern background does not increase the μFast processing time.
• The total latency shows timings made on the same event sample before (first implementation) and after optimizing the MDT data access. Optimized version: total data access time ~800 μs; the data access takes about the same CPU time as μFast itself.


Conclusions
• μFast is suitable to perform the muon trigger selection in the ATLAS L2 (barrel results):
– μFast reconstructs muon tracks in the Muon Spectrometer and measures the pT at the interaction vertex with a resolution of 5.5% at 6 GeV and 4% at 20 GeV;
– μFast reduces the LVL1 trigger rate from 10.6 kHz to 4.6 kHz (6 GeV threshold), and from 2.4 kHz to 0.24 kHz (20 GeV threshold).
• The algorithm is fully implemented in the Online framework: multithreading tested successfully.
• The algorithm and data access times match the L2 trigger latency: the software is now ready for a further optimization phase, more devoted to standardizing the software components.


Requirements for the online implementation
Trigger architecture:
• L2 latency budget set to 10 ms;
• thread safety;
• data access in a restricted geometrical region (RoI seeding).
Software design:
• hide the details of the data access behind the offline StoreGate interfaces;
• use the RDO (Raw Data Object) as the atomic data component: translate the bytestream raw data into RDOs, with the conversion mechanism integrated into the data access;
• standardize the data access for every subdetector: a general region lookup to implement the RoI mechanism; common interfaces for the detector-specific code, e.g. the RDO converters; a common structure for the RDOs, as far as possible (fit them into detector modules);
• ROB access and data preparation/conversion on demand.


Optimization of the MDT data access
• The standard implementation of the MDT data access was not efficient:
– ~7 chambers requested per RoI, but typically only 3 chambers have muon hits;
– direct impact on the timing performance, because the MDT occupancy is dominated by the cavern background and the MDT converter time scales linearly with the chamber occupancy.
• A more efficient access scheme has been implemented using:
– the muon roads: a refinement of the RoI region, available after the Level-1 emulation; the widths of the muon roads are smaller than the chamber size;
– a new Region Selector interface: it selects the detector elements according to the station (innermost, middle, outermost), to the sector and to the track path.
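The road-based region lookup amounts to an interval-overlap test: only chambers whose extent overlaps the narrow road are requested. A toy sketch, with a chamber layout and widths invented for illustration:

```python
# Toy road-based region lookup, as described above. Chambers are
# modelled as 1-D extents along the road axis; all values invented.

def chambers_in_road(chambers, road_center, road_half_width):
    """chambers: list of (name, lo, hi) extents along the road axis.
    Returns the names of the chambers the road overlaps."""
    lo = road_center - road_half_width
    hi = road_center + road_half_width
    return [name for name, c_lo, c_hi in chambers
            if c_lo <= hi and c_hi >= lo]
```

A narrow road usually selects one chamber per station; when it straddles a chamber boundary, both neighbours are returned, matching the "3 chambers, up to 6 when the roads overlap two chambers" behaviour.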


Physics performance (barrel)
Performance including background simulation for the high-luminosity environment:
• Rejection of fake Level-1 triggers: ~10^3 at nominal high luminosity, ~10^2 at high luminosity x 5; the fake trigger rate is reduced to an acceptable level.
• Resolution comparable with the ATLAS reconstruction program (within a factor of about 2): a = 300 MeV, b = 3 x 10^-4 GeV^-1, c = 0.04.
• Track-finding efficiency of about 97% for high-pT muons.
[Figures: resolution and efficiency at 6 GeV and 20 GeV.]
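The quoted parameters plausibly enter the standard quadrature parametrisation σ(pT)/pT = (a/pT) ⊕ (b·pT) ⊕ c; the exact functional form used on the slide is an assumption here, so the sketch below is illustrative only:

```python
import math

# Assumed quadrature form of the relative pT resolution, using the
# parameter values quoted above; the functional form itself is an
# assumption, not taken from the slide.

A = 0.300   # GeV   (a = 300 MeV, multiple-scattering-like term)
B = 3e-4    # GeV^-1 (b, measurement term growing with pT)
C = 0.04    # constant term

def rel_pt_resolution(pt_gev):
    """sigma(pT)/pT as the quadrature sum of the three terms."""
    return math.hypot(math.hypot(A / pt_gev, B * pt_gev), C)
```

With these values the resolution decreases from low to high pT and is a few percent at 20 GeV, in line with the numbers quoted in the conclusions.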


RPC bytestream
• The RPC bytestream reflects the organization of the trigger logic:
– ROD -> Rx -> PAD -> Coincidence Matrix (CMA) -> CMA channel;
– 1 ROD = 2 Sector Logic = 2 Rx; the RPC detector is read out by 64 Logic Sectors;
– up to 7 PADs in one Rx; up to 8 CMAs in one PAD (4 per view);
– CMA channels = 32/64 depending on the CMA side (pivot/confirm);
– each CMA side reads two RPC planes.
• The basic chunk of data for the reconstruction is the CMA.
• Since the read-out is made by the same hardware components that implement the trigger algorithm, the whole read-out structure reflects the organization of the trigger logic.
The definition of the RPC RDO is not trivial, because the read-out is geared towards the trigger needs.


1 CMA makes coincidences between RPC planes in a 3-dimensional area.
[Figure: confirm plane (high pT), pivot plane, confirm plane (low pT); projection of the PADs onto the pivot plane. Only odd-numbered CMAs are shown; CMAs overlap in the confirm planes, but not in the pivot plane.]
There is no way to fit the CMAs into the RPC modules: this is a complication for the RPC RDO definition. 1 PAD holds the information of (up to) 8 CMs: 2 CM η low-pT + 2 CM φ low-pT + 2 CM η high-pT + 2 CM φ high-pT.