ALICE – First paper


ALICE set-up
Size: 16 × 26 meters; Weight: 10,000 tons
Subdetectors: TOF, TRD, HMPID, ITS, PMD, Muon Arm, PHOS, TPC

ALICE TPC
• Large-volume gas detector: drift volume and MWPCs at the end caps
• 3-dimensional "continuous" tracking device for charged particles
  • x, y from the pad position; z derived from the drift time
• Designed to record up to 20,000 tracks
• Event rate: about 1 kHz
• Typical event size for a central Pb+Pb collision: about 75 MByte
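The z coordinate is not measured directly: it is reconstructed from how long the ionisation electrons drifted before reaching the end-cap readout. A minimal sketch of that mapping, where the drift velocity and drift length are illustrative assumptions, not ALICE calibration constants:

```python
# Reconstruct the z coordinate of a TPC cluster from its drift time.
# The pad plane gives (x, y) directly; z follows from the drift time.

DRIFT_VELOCITY_CM_PER_US = 2.7   # assumed drift velocity (illustrative)
DRIFT_LENGTH_CM = 250.0          # assumed length of the drift volume (illustrative)

def z_from_drift_time(drift_time_us: float) -> float:
    """Map a measured drift time (microseconds) to a z position (cm),
    measured from the central electrode towards the end cap."""
    drift_distance = DRIFT_VELOCITY_CM_PER_US * drift_time_us
    return DRIFT_LENGTH_CM - drift_distance

# A cluster arriving after 40 us drifted 108 cm, so it sits at z = 142 cm.
print(z_from_drift_time(40.0))
```

In a real TPC the drift velocity itself must be continuously calibrated (it depends on gas composition, temperature and pressure), which is why the slide puts "continuous" in quotes.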

ALICE TPC: 5 years of construction


Trigger system
• Minimal requirements (trigger detector)
  • Detect collisions
  • Initialise readout of the detectors
  • Initialise data transfer to the data acquisition (DAQ)
  • Protection against pile-up
• High-level requirements (trigger system)
  • Select interesting events
  • Needs real-time processing of raw data and extraction of physics observables
Data flow: Detectors → readout electronics → raw data → high-level trigger → processed data → DAQ
Why? interaction rate (e.g. 8 kHz for Pb+Pb) > detector readout rate (e.g. 1 kHz for TPC) > DAQ archiving rate (50-100 Hz)
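The rate mismatch on the last line is what forces a multi-level trigger: each stage must reject enough events that the next stage can keep up. A quick arithmetic sketch using the example rates from the slide:

```python
# Required rejection factors for the example rates quoted above:
# 8 kHz interactions -> 1 kHz TPC readout -> ~100 Hz DAQ archiving.

interaction_rate_hz = 8000   # Pb+Pb interaction rate (example from the slide)
readout_rate_hz = 1000       # TPC readout rate (example from the slide)
archiving_rate_hz = 100      # upper end of the DAQ archiving rate

trigger_rejection = interaction_rate_hz / readout_rate_hz
hlt_rejection = readout_rate_hz / archiving_rate_hz

print(trigger_rejection)  # the trigger must discard 7 of every 8 interactions
print(hlt_rejection)      # the HLT must discard 9 of every 10 read-out events
```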

What to trigger on?
• Every central Pb+Pb collision produces a QGP - no need for a QGP trigger
• But hard probes are (still) rare at high momentum
• In addition, the reconstruction efficiency of heavy-quark probes is very low
• E.g. detection of hadronic charm decays: D0 → K–π+
  • about 1 D0 per event (central Pb+Pb) in the ALICE acceptance
  • after cuts:
    • signal/event = 0.001
    • background/event = 0.01
→ trigger
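Those per-event yields make the case for triggering: only about one in a thousand archived events would contain a usable signal candidate. A sketch of the arithmetic, where the archiving rate and run duration are illustrative assumptions:

```python
# How many usable D0 candidates survive without a dedicated trigger?
# Per-event yields are taken from the slide; the rest is assumed.

signal_per_event = 0.001       # signal candidates per archived event (slide)
background_per_event = 0.01    # background candidates per archived event (slide)

archiving_rate_hz = 100        # assumed DAQ archiving rate
run_seconds = 3600             # assumed one hour of data taking

events_recorded = archiving_rate_hz * run_seconds
signal_candidates = signal_per_event * events_recorded
signal_to_background = signal_per_event / background_per_event

print(events_recorded)       # 360000 events archived in one hour
print(signal_candidates)     # only ~360 signal candidates among them
print(signal_to_background)  # signal-to-background ratio of 0.1
```

A trigger that enriches the archived sample in such decays multiplies the usable yield without increasing the archiving rate.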

PHOS L0 trigger
• PbWO4 crystal calorimeter for photons and neutral mesons, 1 to >100 GeV
• Array of crystals + APD + preamplifier + trigger logic + readout → DAQ / L0 trigger
• Tasks
  • shower finder (L0/L1 trigger)
  • energy sum
• Implementation
  • FPGA
  • VHDL firmware
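The shower finder is essentially a sliding-window energy sum over neighbouring crystals compared against a threshold, implemented in FPGA firmware. A software model of that logic, where the 2×2 window size and threshold are illustrative assumptions, not the actual firmware parameters:

```python
# Sliding 2x2-window energy sum over a grid of crystal amplitudes,
# as a software model of an FPGA shower-finder / energy-sum trigger.

def fires_l0(crystal_energies, threshold_gev):
    """Return True if any 2x2 patch of crystals sums above threshold."""
    rows = len(crystal_energies)
    cols = len(crystal_energies[0])
    for r in range(rows - 1):
        for c in range(cols - 1):
            window_sum = (crystal_energies[r][c] + crystal_energies[r][c + 1]
                          + crystal_energies[r + 1][c] + crystal_energies[r + 1][c + 1])
            if window_sum > threshold_gev:
                return True
    return False

# A shower sharing 5.5 GeV across neighbouring crystals fires a
# 4 GeV trigger even though no single crystal exceeds the threshold.
grid = [[0.1, 0.2, 0.1],
        [0.3, 2.5, 1.8],
        [0.1, 0.7, 0.5]]
print(fires_l0(grid, 4.0))  # True
```

Summing over a window rather than single crystals matters because an electromagnetic shower spreads its energy over several crystals; the firmware evaluates all windows in parallel to meet the L0 latency.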

PHOS – muon tracks

D0 trigger
• Detection of hadronic charm decays: D0 → K–π+ (6.75%), cτ = 124 μm
• HLT code: TPC tracker → TPC+ITS track fitter → displaced decay-vertex finder (ITS, TPC)
• D0 finder: cut on d0(K) · d0(π)
• Preliminary result: invariant mass resolution is within a factor of two of the offline reconstruction
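The discriminating variable is the product of the daughters' impact parameters d0: for a genuinely displaced D0 decay vertex, the kaon and pion tend to miss the primary vertex on opposite sides, so the product is negative and large in magnitude, while prompt background tracks point back to the primary vertex. A minimal sketch of such a selection, where the cut value is an illustrative assumption:

```python
# Select D0 -> K pi candidates by cutting on the product of the
# daughters' impact parameters d0 (signed distance of closest
# approach of each track to the primary vertex).

D0_PRODUCT_CUT_CM2 = -2.0e-6   # assumed cut value, illustrative only

def passes_d0_cut(d0_kaon_cm: float, d0_pion_cm: float) -> bool:
    """True if d0(K) * d0(pi) is below the (negative) cut value.

    Daughters of a displaced decay typically have impact parameters
    of opposite sign, so a real D0 gives a large negative product."""
    return d0_kaon_cm * d0_pion_cm < D0_PRODUCT_CUT_CM2

# Displaced candidate: opposite-sign impact parameters of ~30 um each.
print(passes_d0_cut(+0.003, -0.003))    # True  (product = -9e-6 cm^2)
# Prompt background: both tracks point back to the primary vertex.
print(passes_d0_cut(+0.0005, +0.0004))  # False
```

This only works if the vertex finder resolves displacements on the scale of the cτ quoted above, which is why the ITS precision enters the chain.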

Introducing the High Level Trigger
ALICE data rates (example: TPC)
• Central Pb+Pb collisions
  • event rate: ~200 Hz (past/future protected)
  • event size: ~75 MByte (after zero suppression)
  • data rate: ~15 GByte/sec
• The TPC is the largest data source, with 570,132 channels, 512 timebins and 10-bit ADC values
• The TPC data rate alone exceeds by far the total DAQ bandwidth of 1.25 GByte/sec
HLT tasks
• Event selection based on a software trigger
• Efficient data compression
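The quoted numbers are consistent with a back-of-the-envelope calculation from the channel count and event rate:

```python
# Back-of-the-envelope check of the TPC data rates quoted above.

channels = 570_132
timebins = 512
adc_bits = 10

# Raw (un-suppressed) event size: every channel, every timebin.
raw_event_bytes = channels * timebins * adc_bits / 8
print(raw_event_bytes / 1e6)   # ~365 MByte per event before zero suppression

# Zero suppression leaves ~75 MByte per event; at ~200 Hz that is
# still an order of magnitude above the 1.25 GByte/sec DAQ bandwidth.
suppressed_event_bytes = 75e6
event_rate_hz = 200
data_rate = suppressed_event_bytes * event_rate_hz
print(data_rate / 1e9)         # ~15 GByte/sec
```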

HLT requirements
• Full event reconstruction in real time
• Main task: reconstruction of up to 10,000 charged-particle trajectories
• Method: pattern recognition in the TPC
  • cluster finder
  • track fit
• Global track fit (ITS-TPC-TRD)
• Vertex finder
• Event analysis
• Trigger decision
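The first pattern-recognition step, the cluster finder, groups neighbouring above-threshold ADC samples into space points whose position is the charge-weighted centroid. A one-dimensional illustrative model of that step (not the actual HLT implementation, where clustering runs over pads and timebins in parallel):

```python
# One-dimensional cluster finder: consecutive ADC samples above
# threshold are grouped into a cluster; each cluster is reduced to
# its charge-weighted centroid position and its total charge.

def find_clusters(adc_samples, threshold=5):
    clusters = []
    current = []            # (index, amplitude) pairs of the open cluster
    for i, amp in enumerate(adc_samples):
        if amp > threshold:
            current.append((i, amp))
        elif current:
            clusters.append(current)
            current = []
    if current:
        clusters.append(current)
    # Charge-weighted centroid and total charge for each cluster.
    return [
        (sum(i * a for i, a in cl) / sum(a for _, a in cl),
         sum(a for _, a in cl))
        for cl in clusters
    ]

# Two separated charge deposits along one pad row:
samples = [0, 0, 8, 20, 8, 0, 0, 0, 12, 12, 0]
print(find_clusters(samples))  # [(3.0, 36), (8.5, 24)]
```

The track fit then runs over these space points rather than raw samples, which is also where most of the data reduction for compression comes from.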

HLT architecture
• The HLT is a generic high-performance cluster
• Data flow: Detectors → HLT → DAQ → mass storage

HLT building blocks (1)
• Hardware
  • Nodes: sufficient computing power for p+p
    • 121 front-end PCs: 968 CPU cores, 1.935 TB RAM, equipped with a custom PCI card for receiving detector data
    • 51 computing PCs: 408 CPU cores, 1.104 TB RAM
  • Network: InfiniBand backbone, Gigabit Ethernet
  • Infrastructure: 20 redundant servers for all critical systems

HLT building blocks (2)
• Software
  • Cluster management and monitoring
  • Data transport and process-synchronisation framework
  • Interfaces to online systems: experiment control system, detector control system, offline DB, ...
  • Event reconstruction and trigger applications

First paper

Planning the pp run
• November: 200 collisions @ 900 GeV
• December: 10⁶ collisions @ 900 GeV
• Some collisions @ 2.4 TeV
• From February: collisions @ 7 TeV