{"title":"High rate packet transmission via IP-over-InfiniBand using commodity hardware","authors":"D. Bortolotti, A. Carbone, D. Galli, I. Lax, U. Marconi, G. Peco, S. Perazzini, V. Vagnoni, M. Zangoli","doi":"10.1109/RTC.2010.5750409","DOIUrl":"https://doi.org/10.1109/RTC.2010.5750409","url":null,"abstract":"Amongst link technologies, InfiniBand has gained wide acceptance in the framework of High Performance Computing (HPC), due to its high bandwidth and in particular to its low latency. Since InfiniBand is very flexible, supporting several kinds of messages, it is suitable, in principle, not only for HPC, but also for the data acquisition systems of High Energy Physics (HEP) Experiments.","PeriodicalId":345878,"journal":{"name":"2010 17th IEEE-NPSS Real Time Conference","volume":"2006 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130900984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Triggers, data flow and the synchronization between the Auger surface detector and the AMIGA underground muon counters","authors":"Z. Szadkowski","doi":"10.1109/TNS.2011.2142194","DOIUrl":"https://doi.org/10.1109/TNS.2011.2142194","url":null,"abstract":"The aim of the AMIGA project (Auger Muons and Infill for the Ground Array) is an investigation of Extensive Air Showers at energies lower than those covered by the standard Auger array, where the transition from galactic to extragalactic sources is expected. The Auger array is enlarged by a relatively small dedicated area of surface detectors with nearby buried underground muon counters at half or less of the standard 1.5 km grid spacing. Lowering the Auger energy threshold by more than one order of magnitude allows a precise measurement of the cosmic ray spectrum in the very interesting regions of the second knee and the ankle. The paper describes the working principle of the Master/Slave (standard Auger surface detector/underground muon counters) synchronous data acquisition, general triggering, and the extraction of data corresponding to real events from underground storage buffers, as applied in two prototypes: A) with 12.5 ns resolution (80 MHz), built from 4 segments: the standard Auger Front End Board (FEB) and Surface Single Board Computer (SSBC) on the surface, and the Digital Board with the FPGA and the Microcontroller Board underground; B) with 4-times higher resolution of 3.125 ns (320 MHz), built with two segments only: a new surface Front End Board supported by the NIOS® processor and a CycloneIII™ Starter Kit board underground, also working with a NIOS® virtual processor, which replaces the external TI µC that has in the meantime become obsolete. 
The system with the NIOS® processors can remotely modify and update: the AHDL firmware creating the hardware FPGA net structure responsible for the fast DAQ, the internal structure of the NIOS® (resources and peripherals), and the NIOS® firmware (C code) responsible for software data management. With the standard µC, the µC firmware was fixed and could not be updated remotely. The 80 MHz prototype passed laboratory tests with real scintillators. The 320 MHz prototype (still being optimized) is considered the ultimate AMIGA design.","PeriodicalId":345878,"journal":{"name":"2010 17th IEEE-NPSS Real Time Conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129246362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Upgrades for the PHENIX data acquisition system","authors":"M. Purschke","doi":"10.1109/RTC.2010.5750356","DOIUrl":"https://doi.org/10.1109/RTC.2010.5750356","url":null,"abstract":"PHENIX [1] is one of two large experiments at Brookhaven National Laboratory's Relativistic Heavy Ion Collider (RHIC). At the time of this conference, Run 10 of RHIC is in progress and has generated about a petabyte of raw data. The following summer shutdown marks the beginning of the installation of the PHENIX upgrade detectors, the first of which will be commissioned for the upcoming Run 11. In order to accommodate the new detectors in the PHENIX data acquisition, we will start to implement significant changes to the system, such as the switch to a new generation of readout electronics and the move to 10 Gigabit Ethernet for the components with the highest data volume. Once fully installed, the new detectors will roughly triple the current maximum data rate from about 600 MB/s to 1.8 GB/s.","PeriodicalId":345878,"journal":{"name":"2010 17th IEEE-NPSS Real Time Conference","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125448175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Development of an optical link card for the upgrade phase II of TileCal experiment","authors":"F. Carrió, V. Castillo, A. Ferrer, V. González, E. Higón, C. Marin, P. Moreno, E. Sanchis, C. Solans, A. Valero, J. Valls","doi":"10.1109/RTC.2010.5750449","DOIUrl":"https://doi.org/10.1109/RTC.2010.5750449","url":null,"abstract":"This work presents the design of an optical link card developed within the R&D activities for the phase 2 upgrade of the TileCal experiment, as part of the evaluation of different technologies for the final choice in the next two years. The board is designed as a mezzanine which can work independently or plugged into the Optical Multiplexer Board of the TileCal backend electronics. It includes two SNAP 12 optical connectors able to transmit and receive up to 75 Gbps, and one SFP optical connector for lower speeds and compatibility with existing hardware such as the Read Out Driver. All processing is done in a Stratix II GX FPGA. Details are given on the hardware design, including the signal and power integrity analysis needed when working with such high data rates, and on the firmware development to obtain the best performance from the FPGA signal transceivers and to use a soft-core processor as the system controller.","PeriodicalId":345878,"journal":{"name":"2010 17th IEEE-NPSS Real Time Conference","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122201130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Commissioning of the ATLAS High Level Trigger with proton collisions at the LHC","authors":"B. Petersen","doi":"10.1109/RTC.2010.5750350","DOIUrl":"https://doi.org/10.1109/RTC.2010.5750350","url":null,"abstract":"ATLAS is one of two general-purpose detectors at the Large Hadron Collider (LHC). The ATLAS trigger system uses fast reconstruction algorithms to efficiently reject a large rate of background events and still select potentially interesting signal events with good efficiency. After a first processing level (Level 1) using custom electronics, the trigger selection is made by software running on two processor farms, containing a total of around two thousand multi-core machines. This system is known as the High Level Trigger (HLT). To reduce the network data traffic and the processing time to manageable levels, the HLT uses seeded, step-wise reconstruction, aiming at the earliest possible rejection of background events.","PeriodicalId":345878,"journal":{"name":"2010 17th IEEE-NPSS Real Time Conference","volume":"14 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114126204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hard Real-Time wireless communication in the northern Pierre Auger Observatory","authors":"R. Kieckhafer","doi":"10.1109/RTC.2010.5750355","DOIUrl":"https://doi.org/10.1109/RTC.2010.5750355","url":null,"abstract":"The Pierre Auger Cosmic Ray Observatory employs a large array of Surface Detector stations to detect the secondary particle showers generated by the arrivals of Ultra High Energy Cosmic Rays. The operational Auger South site uses a tower-based wireless network for communication between the stations and observatory campus. Plans for a larger Auger North array call for a similar system. However, a variety of factors have rendered direct station-to-tower routing infeasible in Auger North. Thus, it will employ a new paradigm, the Wireless Architecture for Hard Real-Time Embedded Networks (WAHREN) designed specifically for highly reliable message delivery over a fixed network, under hard real-time deadlines. This paper describes the WAHREN topology and protocols, as well as real-time performance evaluation, formal verification, testbed operation, and Markov reliability modeling. The status of system hardware development and an on-site Research and Development Array are also discussed.","PeriodicalId":345878,"journal":{"name":"2010 17th IEEE-NPSS Real Time Conference","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115861619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DAQ architecture design of Daya Bay Reactor Neutrino Experiment","authors":"Fei Li, X. Ji, Xiao-nan Li, K. Zhu","doi":"10.1109/RTC.2010.5750404","DOIUrl":"https://doi.org/10.1109/RTC.2010.5750404","url":null,"abstract":"The main task of the data acquisition (DAQ) system in the Daya Bay Reactor Neutrino Experiment is to record antineutrino candidate events and other background events. There are seventeen detectors in three sites. Each detector will have a separate VME readout crate that contains the trigger and DAQ electronics modules. The DAQ system reads event data from the front end electronics modules, concatenates the data fragments from the modules, packs them into a subsystem event, and transmits it to the backend system for data stream merging, monitoring and recording.","PeriodicalId":345878,"journal":{"name":"2010 17th IEEE-NPSS Real Time Conference","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122594757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Commissioning of the ATLAS High Level muon trigger with beam collisions","authors":"M. Owen","doi":"10.1109/RTC.2010.5750408","DOIUrl":"https://doi.org/10.1109/RTC.2010.5750408","url":null,"abstract":"The ATLAS experiment is a multipurpose experiment at the Large Hadron Collider (LHC) designed to study the interactions of the fundamental particles. The interaction rate of the LHC is such that a three-level trigger system is needed to select, in real time, the interesting events to be recorded by ATLAS. The LHC has recently provided the first pp collisions at √s = 7 TeV, and the first data are used to study the performance of the ATLAS High Level muon trigger. Good performance of the algorithms is observed.","PeriodicalId":345878,"journal":{"name":"2010 17th IEEE-NPSS Real Time Conference","volume":"269 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132242242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Network resiliency implementation in the ATLAS TDAQ system","authors":"S. Stancu, A. Al-Shabibi, S. Batraneanu, S. Ballestrero, C. Caramarcu, B. Martin, D. Savu, R. Sjoen, L. Valsan","doi":"10.1109/RTC.2010.5750373","DOIUrl":"https://doi.org/10.1109/RTC.2010.5750373","url":null,"abstract":"The ATLAS TDAQ (Trigger and Data Acquisition) system performs the real-time selection of events produced by the detector. For this purpose approximately 2000 computers are deployed and interconnected through various high speed networks, whose architecture has already been described. This article focuses on the implementation and validation of network connectivity resiliency (previously presented at a conceptual level). Redundancy and, where applicable, load balancing are achieved through the synergy of various protocols: link aggregation, OSPF (Open Shortest Path First), VRRP (Virtual Router Redundancy Protocol), and MST (Multiple Spanning Trees). An innovative method for cost-effective redundant connectivity of high-throughput high-availability servers is presented. Furthermore, real-life examples are presented showing how redundancy works and, more importantly, how it might fail despite careful planning.","PeriodicalId":345878,"journal":{"name":"2010 17th IEEE-NPSS Real Time Conference","volume":"56 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134187850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time configuration changes of the ATLAS High Level Trigger","authors":"F. Winklmeier","doi":"10.1109/RTC.2010.5750407","DOIUrl":"https://doi.org/10.1109/RTC.2010.5750407","url":null,"abstract":"The ATLAS High Level Trigger (HLT) is a distributed real-time software system that performs the final online selection of events produced during proton-proton collisions at the Large Hadron Collider (LHC). It is designed as a two-stage trigger and event filter running on a farm of commodity PC hardware. Currently the system consists of about 850 processing nodes and will be extended incrementally, following the expected increase in luminosity of the LHC, to about 2300 nodes. The event selection within the HLT applications is carried out by specialized reconstruction algorithms. The selection can be controlled via properties that are stored in a central database and are retrieved at the startup of the HLT processes, which then usually run continuously for many hours. To be able to respond to changes in the LHC beam conditions, it is essential that the algorithms can be re-configured without disrupting data taking, while ensuring a consistent and reproducible configuration across the entire HLT farm. The techniques developed to allow these real-time configuration changes are exemplified by two applications: trigger prescales and beamspot measurement. The prescale value determines the fraction of events on which an HLT algorithm is executed, including whether it is deactivated. This feature is essential both during the commissioning phase of the HLT and for adjusting the mixture of recorded physics events during an LHC run. The primary event vertex distribution, from which the beam spot position and size can be extracted, is measured by a dedicated HLT algorithm on each node and periodically aggregated across the HLT farm; its parameters are published and stored in the conditions database. 
The result can be fed back to the HLT algorithms to maintain selection efficiency and rejection rates. Finally, the technologies employed to allow simultaneous database access by thousands of applications in an online environment are shown.","PeriodicalId":345878,"journal":{"name":"2010 17th IEEE-NPSS Real Time Conference","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116878279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}