LTC Timing, AB/CO/HT (Julian Lewis)
Agenda: Central timing hardware layout; Telegrams and events; Postmortem, XPOC; LHC Central Timing API; Fill the LHC use case
Master/Slave configuration
UTC Time and GPS
[Diagram] CERN UTC time is derived from GPS and checked against a Symmetricom CS 4000 portable atomic clock. A Symmetricom XLi synchronization module in each timing generator crate delivers one pulse per second plus phase-locked 10 MHz and 40 MHz clocks; the 40 MHz clock is the event encoding clock, giving timing receivers 25 ns steps. A synchronized 1 kHz slow timing clock and the basic period (1200/900/600 ms) drive the event tables, which are advanced by 100 us; the one pulse per second is phase-locked to the same source. The MTT (Multitask Timing Generator) sends the events, UTC time (from NTP or GPS) and external events over the RS 485 timing cable. The time is set once on startup and on leap seconds.
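Since the 40 MHz event encoding clock gives one tick every 25 ns, a receiver can timestamp an event from the last 1 PPS pulse plus a tick count. A minimal sketch of that arithmetic (the function name and interface are illustrative, not the real receiver API):

```python
# The phase-locked 40 MHz encoding clock yields 25 ns per tick.
EVENT_CLOCK_HZ = 40_000_000
TICK_NS = 1_000_000_000 // EVENT_CLOCK_HZ  # 25 ns

def timestamp_ns(utc_second: int, ticks_since_pps: int) -> int:
    """UTC time of an event in nanoseconds, at 25 ns resolution,
    counted from the last one-pulse-per-second edge."""
    return utc_second * 1_000_000_000 + ticks_since_pps * TICK_NS
```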
Multi-tasking module (MTT)
16 virtual CPUs produce the event output stream. The host front-end system accesses MTT registers across the VME bus (e.g. beam energy). The VME P2 connector permits hardware triggering of tasks, such as sending the PM event. There are two types of tasks: system tasks (millisecond clock, external events, telegrams...) and event table tasks. Event tables are controlled across a FESA API that allows you to: load/unload an event table; run a table N times or forever; synchronize it with an event; stop a table; abort a table.
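The event-table task controls listed above (load/unload, run N times or forever, stop, abort) can be modelled as a small state machine. This is a hedged sketch; the class and method names are assumptions for illustration, not the real MTT register interface:

```python
# Illustrative model of one MTT event-table task.
class EventTableTask:
    def __init__(self):
        self.table = None
        self.runs_left = 0        # -1 means "run forever"
        self.running = False

    def load(self, table):
        self.table = table

    def run(self, count=-1):      # count=-1: run the table forever
        if self.table is None:
            raise RuntimeError("no table loaded")
        self.runs_left = count
        self.running = True

    def pass_done(self):
        """Called after each complete pass of the table; a finite
        run count is decremented and the task halts when exhausted."""
        if self.running and self.runs_left > 0:
            self.runs_left -= 1
            if self.runs_left == 0:
                self.running = False

    def stop(self):               # halt at the end of the current pass
        self.running = False

    def abort(self):              # halt immediately and unload the table
        self.running = False
        self.table = None
```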
Safe Machine Parameters distribution
[Diagram: the Safe Machine Parameters Controller for the LHC gathers the beam energy from two Beam Energy Meters (BEM A and B, 24-bit at 1 kHz), the beam intensities I_beam 1 & 2 from BCTs "A" and "B", and the Beam Permit Flags from the BIS. Thresholds are read through the Management of Critical Settings (LSA). The SMP is sent to the LHC timing generator at 10 Hz as 16-bit frames (flags, energy and intensity) and distributed with the events, UTC and telegrams to CTRx (CTRV) receivers; the flags are also available as TTL hardware outputs, through a line driver if the cable length > 5 m, e.g. to the experiments.]
Current hardware status
The SMP "mark-1" will be installed by week 15 (90% confidence). There are as yet no Beam Permit Flags wired to the timing, so we cannot test PM/XPOC. Auto re-enable of postmortem is awaiting installation. Transmission delay calibration will start next week.
What is distributed on the LHC timing cable
The LHC telegram: its main function is to continuously retransmit (shadow) information that has already been transmitted by events. It is sent out each second, on the second.
LHC machine events: an event is sent punctually when something happens that affects the machine state. Some are asynchronous, coming from external processes (e.g. postmortem, energy), while others are produced from timing tables corresponding to running machine processes. Some are sent directly, such as dump and commit transaction.
The UTC time of day: resolution is 25 ns, jitter is less than 1 ns peak to peak, and wander is estimated to be around 10 ns.
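The telegram's shadowing role described above can be sketched as a snapshot that events update and that is rebroadcast every second, so a receiver that joins late still sees the current machine state. The group names below are illustrative, not the real telegram layout:

```python
# Hypothetical sketch of telegram "shadowing": values transmitted once
# as events are repeated in the per-second telegram broadcast.
class TelegramShadow:
    def __init__(self):
        self.state = {}

    def on_event(self, group: str, value):
        # an event updates the machine state once, when something changes
        self.state[group] = value

    def telegram(self) -> dict:
        # sent each second, on the second: a snapshot of the shadowed state
        return dict(self.state)
```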
Some web addresses
http://ab-dep-co-ht.web.cern.ch/ab-dep-co-ht/timing/Seq/tgm.htm
This link shows the current telegram configuration. It also has information about the CTR hardware and other useful material.
http://ab-dep-co-ht.web.cern.ch/ab-dep-co-ht/timing/Seq/mtg.Config.htm
This link shows all defined timing events for the timing cables, and other useful material.
Postmortem event generation
Two Beam Permit Flags, one per LHC ring, arrive at the LHC central timing inputs from the Beam Interlock System. Beam dump events may be sent from the LHC central timing to the control system to dump the beam in one ring or the other. The specification requires only one PM event for both rings. In some LHC machine modes, such as "Inject & Dump", sending the PM event will be inhibited; however, the beam dumped events always go out. When both rings are dumped, the postmortem event is sent twice within 1 ms.
Postmortem event suppression
Two counters are used in the CTR, one per Beam Permit Flag (BPF). Each counter clock is connected to one of the BPF flags. The "Disable Post-Mortem Ring 1" event disables the counter connected to BPF-1; the "Enable Post-Mortem Ring 1" event enables it. When the counter is disabled and the BPF goes down, nothing happens. When it's enabled, the counter makes an output that triggers the PM event. The PM event will be sent twice if both counters are enabled and both rings are dumped.
[Diagram: BPF 1 and BPF 2 clock the two CTR counters (delay = 1); the CTG-MTT PM suppress table loads the LSEQ Disable-1/Enable-1 events onto the LHC GMT; a dump on ring 1/2 with its counter enabled produces one PM event.]
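The gating behavior above (a disabled counter ignores BPF transitions; an enabled one fires the PM event, so two enabled counters give two PM events when both rings dump) can be modelled in a few lines. This is a behavioral sketch only, not the CTR firmware:

```python
# Model of one CTR counter whose clock input is a Beam-Permit-Flag.
class PMCounter:
    def __init__(self):
        self.enabled = False
        self.fired = 0

    def enable(self):
        self.enabled = True

    def disable(self):
        self.enabled = False

    def bpf_down(self):
        """Called when the Beam-Permit-Flag goes down."""
        if self.enabled:
            self.fired += 1   # output pulse triggers the PM event
        # when disabled, nothing happens

ring1, ring2 = PMCounter(), PMCounter()
ring1.enable(); ring2.enable()
ring1.bpf_down(); ring2.bpf_down()   # both rings dumped: PM sent twice
```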
Postmortem event auto re-enable
The Disable PM event also triggers a counter in a CTR. In this case, 2 ms later, an output pulse triggers the MTT to send the PM enable event, so the next BPF transition will trigger the PM event to be sent.
[Diagram: Disable 1/2 on the LHC GMT clocks a CTR counter (delay = 2) whose output makes the CTG-MTT PM suppress table send the re-enable.]
LHC Central Timing API
[Diagram: the LSA high-level sequencer supplies energy/ring and intensity/ring, and the BIS supplies the Beam Permit Flags, to the slave/master LHC timing generators. The LSA core talks through a CMW server to the FESA LHC API, which accesses the LHC MTG (safe parameters, event tables, external events) over 64 MB reflective memories on a 2.2 Gbit/s optical link. The GMT distributes the LHC clocks: the 40.00 MHz clock, the GPS 1 PPS (1 Hz) clock, and the basic period clock.]
LSA and FESA
The FESA API is implemented on the LHC timing gateway and accesses the timing generators across reflective memory. It implements:
- Load or unload an event table
- Get the list of running tables
- Set an event table's run count and synchronization event
- Stop or abort an event table
- Set telegram parameters
- Send an event
- Read the status of tasks and of the MTT module
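The operations above can be sketched as a single facade object. The method and field names here are assumptions for illustration; the real FESA device/property interface differs:

```python
# Illustrative stub of the LHC timing gateway API surface.
class LhcTimingApi:
    def __init__(self):
        self.tables = {}   # table name -> settings
        self.sent = []     # events sent directly

    def load_table(self, name, events):
        self.tables[name] = {"events": events, "runs": 1, "sync": None}

    def unload_table(self, name):
        self.tables.pop(name, None)

    def running_tables(self):
        return sorted(self.tables)

    def set_run_count(self, name, count, sync_event=None):
        # run the table `count` times, started by `sync_event` if given
        self.tables[name]["runs"] = count
        self.tables[name]["sync"] = sync_event

    def send_event(self, event):
        self.sent.append(event)   # immediate event onto the LHC GMT
```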
LHC beam request
BTNI, next injection beam type: obviously the next injected beam type is determined by the settings in the injector chain and by nothing else. The LSEQ may request a certain type of beam to be injected, but if the requested value does not correspond to the actual beam type being provided by the injector chain, then the request cannot be fulfilled and no injection can take place. This value is thus inherited from the injector chain.
BKNI, next injection RF bucket: there are 35640 RF buckets around the LHC ring. It is essential that this parameter is established before RF re-synchronization starts between the CPS and the SPS RF systems, namely 450 ms before CPS extraction towards the SPS.
RNGI, next injection ring: this parameter determines the value of the SPS beam destination in the DEST group of the telegram. Various ways to do this are possible; it's an OP decision.
BCNT, number of CPS batches.
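A minimal sanity check on the request parameters follows from the constraints above: 35640 RF buckets, two rings, and (per the fill use cases later in this deck) one to four CPS batches. The function name and interface are illustrative:

```python
# Hedged sketch: validate an LHC beam request (BKNI, RNGI, BCNT ranges).
TOTAL_RF_BUCKETS = 35640   # RF buckets around the LHC ring

def validate_request(bucket: int, ring: int, batches: int) -> bool:
    return (1 <= bucket <= TOTAL_RF_BUCKETS
            and ring in (1, 2)
            and 1 <= batches <= 4)
```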
LHC - LIC signal exchange
[Diagram: the CBCM sequence manager (controlling the injector chain) and the LHC central timing generation (LSA master) exchange signals through normal and spare gateways, with a FESA API over a reflective memory link. The LSA timing service passes the LHC fill requests (bucket, ring, batches), TI8/TI2 dump requests and the dump master ON/OFF to the LIC side; the LIC side returns sequence inhibits, requests and interlocks. The SIS provides the TI8/TI2 and SPS dump inhibits. The CPS batch requests 1, 2, 3, 4 and the SPS destination request (R1, R2) travel on the LIC timing to CTRs (events SEX.FW1K, HIX.FW1K), and the SPS telegram bit SPS.COMLN.LSA_ALW signals when LSA changes are allowed.]
Beam destinations
[Diagram: the beam flows from the Linac through the PSB, CPS (with the TCLP tail clipper and D3 dump) and LEIR/SPS, towards TI2/TI8, the TI2/TI8 dumps, the SPS dump and CNGS. Annotations: default when there is no LSEQ request and LSEQ is master; default when there is no TI8/2 dump request and LSEQ is not master; TI8/TI2 dumps are only possible when the SIS dump status is "IN" and a dump is requested; TI8/TI2 are only possible when LSEQ is master and there are no TI8/2 SIS inhibits; default when no batches were requested and LSEQ is master.]
CBCM Sequence Manager
The LHC beam
The LHC timing is only coupled to the injectors by the extraction/start-ramp event.
[Diagram: the LSA beam request (RF bucket, ring, CPS batches) selects an SPS cycle for the LHC; up to four CPS batches, each fed by a PSB cycle, are accumulated on the SPS injection plateau, then an extraction forewarning precedes the transfer to the LHC injection plateau.]
The LHC beam
Operators mark the SPS cycle as "TOLHC". An inheritance mechanism propagates "TOLHC" to all cycles in the beam. LSEQ control affects the way "TOLHC" beams are played. The SPS telegram contains a new "DYNAMIC" destination calculated on the fly: TI8/TI2/TI8_DMP/TI2_DMP/SPS_DMP/CNGS/FT.
The CBCM evaluates LHC beam requests 1.2 seconds before the first PSB cycle in the CBCM time domain. The CBCM time domain is 2.4 seconds ahead of the accelerator complex time domain, so the request must be 3.6 seconds ahead. Any bad condition will provoke the spare response as usual.
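The 3.6 s figure above is just the sum of the two lead times, which is easiest to keep straight in milliseconds:

```python
# The CBCM evaluates a request 1.2 s before the first PSB cycle in its own
# time domain, and that domain runs 2.4 s ahead of the accelerator complex,
# so a beam request must arrive this far in advance:
CBCM_EVALUATION_LEAD_MS = 1200
CBCM_DOMAIN_AHEAD_MS = 2400

def request_deadline_ms() -> int:
    return CBCM_EVALUATION_LEAD_MS + CBCM_DOMAIN_AHEAD_MS  # 3600 ms = 3.6 s
```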
Killing a "TOLHC" batch (TCLP, D3 dump)
The Linac tail-clipper (TCLP) timing cuts 99% of the beam at the Linac, so the PSB plays the cycle with little or no beam. The CPS destination is forced from the SPS to D3. The SPS cycle continues as usual, but no CPS beam is injected; the destination is the internal dump. The SPS injection timing for the suppressed batch fires anyway.
Basic behavior of TOLHC beams
Any abnormal interlock drives the beam into spare. This may result in the SPS going into economy mode.
N.B. a TI8 or TI2 dump is only possible when the dump status (from the SIS) indicates that the dumps are in place.
- TI8 or TI2 destinations are only possible when LSEQ is master, there is a valid LSEQ beam request, and there are no TI8/2 SIS inhibits.
- When LSEQ is not the master, the default destination is the SPS dump. A TI8 or TI2 dump can be requested. The number of CPS batches delivered is controllable.
- When LSEQ is master and there is no request, the beam is killed, but the magnetic cycle takes place.
- Mastership can only be changed in the absence of the LHC User request.
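The destination rules above can be condensed into one decision function. This is a hedged sketch of the logic as stated on this slide, not the actual CBCM code, and the destination strings are illustrative:

```python
# Resolve the dynamic SPS destination for a TOLHC beam.
def sps_destination(lseq_master: bool, beam_request: bool,
                    sis_inhibit: bool, dump_request: bool,
                    dump_in_place: bool) -> str:
    if lseq_master:
        if beam_request and not sis_inhibit:
            return "TI8/TI2"       # extraction towards the LHC
        return "KILL"              # no request: beam killed (TCLP/D3),
                                   # but the magnetic cycle still plays
    if dump_request and dump_in_place:
        return "TI8/TI2_DUMP"      # transfer-line dump, only when in place
    return "SPS_DUMP"              # default when LSEQ is not master
```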
Nominal fill, use case 1: prepare the injector chain
The LHC operator asks the SPS operator to prepare to fill the LHC. The SPS operator removes the SPS LHC cycle request if we don't want to deliver the beam to the SPS dump straight away.
- LSEQ mastership cannot be changed while the LHC beam is playing!
The SPS operator loads and runs the LHC fill sequence. The BCD starts up with the LHC beams in spare, and the SPS may be in economy mode.
Nominal fill, use case 2: beam to the TI8/TI2 dumps
The SPS operator wants to send the beam to a TI8/TI2 dump; LSEQ is not the master.
- OP sets the dump targets to move into place and waits (minutes).
- OP selects the TI8/TI2 dump request external conditions on the central timing.
- OP sets the LHC user request on.
- OP sets the CPS batch count to N.
N CPS batches are now sent to a TI8/TI2 dump.
Nominal fill, use case 3: beam to the SPS dump
OP wants to send the beam to the SPS dump; LSEQ is not the master.
- The TI8/TI2 dump requests must be removed.
- The LHC user request must be present.
N CPS batches are now delivered to the SPS dump.
Nominal fill, use case 4: LSEQ takes mastership
LSEQ now wants to become master.
- The LHC user request must be removed.
- The current SPS super-cycle will finish, and then the LHC beam is spared (economy mode).
The SPS telegram bit SPS.COMLN.LSEQ_ALW gets set by the CBCM. LSEQ calls the API to request mastership; if SPS.COMLN.LSEQ_ALW isn't ready, an error is returned. LSEQ becomes master, and the LHC user request is turned back on by SPS operations. All beams are played normally, but the TCLP and D3 ensure there is no beam injected into the SPS. The SPS destination is the SPS dump, and there is no extraction timing.
Nominal fill, use case 5: LSEQ sends beam to the LHC
LSEQ wants to send beam to the LHC.
- LSEQ must be the master.
- There must be no TI8/2 SIS inhibits.
- The LHC user request must be asserted.
LSEQ makes a request for 1/2/3/4 batches to ring 1/2, bucket N. On the next SPS super-cycle the request is executed, then cleared.
LIC - LHC filling
It would be a good idea to test out the fill use case in a dry run once all cabling and hardware installation has been completed:
- LSEQ takes mastership
- LSEQ makes a beam request
- PM enable/disable with BPF transitions