{"title":"An automated bandwidth division for the LHCb upgrade trigger.","authors":"T Evans, C Fitzpatrick, J Horswill","doi":"10.1007/s41781-025-00139-2","DOIUrl":"10.1007/s41781-025-00139-2","url":null,"abstract":"<p><p>The upgraded Large Hadron Collider beauty (LHCb) experiment is the first detector based at a hadron collider using a fully software-based trigger. The first 'High Level Trigger' stage (HLT1) reduces the event rate from 30 MHz to approximately 1 MHz based on reconstruction criteria from the tracking system, and consists of <math><mrow><mi>O</mi> <mo>(</mo> <mn>100</mn> <mo>)</mo></mrow> </math> trigger selections implemented on Graphics Processing Units (GPUs). These selections are further refined following the full offline-quality reconstruction at the second stage (HLT2) prior to saving for analysis. An automated bandwidth division has been performed to equitably divide this 1 MHz HLT1 Output Rate (OR) between the signals of interest to the LHCb physics program. This was achieved by optimizing a set of trigger selections that maximize efficiency for signals of interest to LHCb while keeping the total HLT1 readout capped to a maximum. The bandwidth division tool has been used to determine the optimal selection for 35 selection algorithms over 80 characteristic physics channels.</p>","PeriodicalId":36026,"journal":{"name":"Computing and Software for Big Science","volume":"9 1","pages":"7"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12095408/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144143763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The LHCb Sprucing and Analysis Productions.","authors":"Ahmed Abdelmotteleb, Alessandro Bertolin, Chris Burr, Ben Couturier, Ellinor Eckstein, Davide Fazzini, Nathan Grieser, Christophe Haen, Ryunosuke O'Neil, Eduardo Rodrigues, Nicole Skidmore, Mark Smith, Aidan R Wiederhold, Shunan Zhang","doi":"10.1007/s41781-025-00144-5","DOIUrl":"10.1007/s41781-025-00144-5","url":null,"abstract":"<p><p>The LHCb detector underwent a comprehensive upgrade in preparation for the third data-taking run of the Large Hadron Collider (LHC), known as LHCb Upgrade I. With its increased data rate, Run 3 introduced considerable challenges in both data acquisition (online) and data processing and analysis (offline). The offline processing and analysis model was upgraded to handle the factor 30 increase in data volume and the associated demands of ever-growing datasets for analysis, led by the LHCb Data Processing and Analysis (DPA) project. This paper documents the LHCb \"Sprucing\" - the centralised offline data processing and selections - and \"Analysis Productions\" - the centralised and highly automated declarative nTuple production system. The DaVinci application used by Analysis Productions for tupling spruced data is described, as well as the apd and lbconda tools for data retrieval and analysis environment configuration. These tools allow for greatly improved analyst workflows and analysis preservation. Finally, the approach to data processing and analysis in the High-Luminosity Large Hadron Collider (HL-LHC) era - LHCb Upgrade II - is discussed.</p>","PeriodicalId":36026,"journal":{"name":"Computing and Software for Big Science","volume":"9 1","pages":"15"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12321665/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144795721","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Soft Margin Spectral Normalization for GANs","authors":"Alexander Rogachev, Fedor Ratnikov","doi":"10.1007/s41781-024-00120-5","DOIUrl":"https://doi.org/10.1007/s41781-024-00120-5","url":null,"abstract":"","PeriodicalId":36026,"journal":{"name":"Computing and Software for Big Science","volume":"25 9","pages":"1-13"},"PeriodicalIF":0.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141684631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PanDA: Production and Distributed Analysis System","authors":"T. Maeno, A. Alekseev, F. H. Barreiro Megino, Kaushik De, Wen Guan, E. Karavakis, A. Klimentov, T. Korchuganova, Fahui Lin, P. Nilsson, T. Wenaus, Zhaoyu Yang, Xin Zhao","doi":"10.1007/s41781-024-00114-3","DOIUrl":"https://doi.org/10.1007/s41781-024-00114-3","url":null,"abstract":"","PeriodicalId":36026,"journal":{"name":"Computing and Software for Big Science","volume":"47 20","pages":"1-21"},"PeriodicalIF":0.0,"publicationDate":"2024-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139603343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"KinFit: A Kinematic Fitting Package for Hadron Physics Experiments","authors":"Waleed Esmail, Jana Rieger, Jenny Taylor, Malin Bohman, Karin Schönning","doi":"10.1007/s41781-023-00112-x","DOIUrl":"https://doi.org/10.1007/s41781-023-00112-x","url":null,"abstract":"","PeriodicalId":36026,"journal":{"name":"Computing and Software for Big Science","volume":"29 3","pages":"1-21"},"PeriodicalIF":0.0,"publicationDate":"2024-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139448706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast Simulation for the Super Charm-Tau Factory Detector","authors":"Alexander Barnyakov, M. Belozyorova, V. Bobrovnikov, Sergey Kononov, D. Kyshtymov, Dmitry Maksimov, Georgiy Razuvaev, A. Sukharev, Korneliy Todyshev, Vitaliy Vorobyev, Anastasiia Zhadan, D. Zhadan","doi":"10.1007/s41781-023-00108-7","DOIUrl":"https://doi.org/10.1007/s41781-023-00108-7","url":null,"abstract":"","PeriodicalId":36026,"journal":{"name":"Computing and Software for Big Science","volume":"15 11","pages":"1-13"},"PeriodicalIF":0.0,"publicationDate":"2024-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139124787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Flexible and Efficient Approach to Missing Transverse Momentum Reconstruction.","authors":"William Balunas, Donatella Cavalli, Teng Jian Khoo, Matthew Klein, Peter Loch, Federica Piazza, Caterina Pizio, Silvia Resconi, Douglas Schaefer, Russell Smith, Sarah Williams","doi":"10.1007/s41781-023-00110-z","DOIUrl":"10.1007/s41781-023-00110-z","url":null,"abstract":"<p><p>Missing transverse momentum is a crucial observable for physics at hadron colliders, being the only constraint on the kinematics of \"invisible\" objects such as neutrinos and hypothetical dark matter particles. Computing missing transverse momentum at the highest possible precision, particularly in experiments at the energy frontier, can be a challenging procedure due to ambiguities in the distribution of energy and momentum between many reconstructed particle candidates. This paper describes a novel solution for efficiently encoding information required for the computation of missing transverse momentum given arbitrary selection criteria for the constituent reconstructed objects. Pileup suppression using information from both the calorimeter and the inner detector is an integral component of the reconstruction procedure. Energy calibration and systematic variations are naturally supported. Following this strategy, the ATLAS Collaboration has been able to optimise the use of missing transverse momentum in diverse analyses throughout Runs 2 and 3 of the Large Hadron Collider and for future analyses.</p>","PeriodicalId":36026,"journal":{"name":"Computing and Software for Big Science","volume":"8 1","pages":"2"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10761467/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139098887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FunTuple: A New N-tuple Component for Offline Data Processing at the LHCb Experiment.","authors":"Abhijit Mathad, Martina Ferrillo, Sacha Barré, Patrick Koppenburg, Patrick Owen, Gerhard Raven, Eduardo Rodrigues, Nicola Serra","doi":"10.1007/s41781-024-00116-1","DOIUrl":"10.1007/s41781-024-00116-1","url":null,"abstract":"<p><p>The offline software framework of the LHCb experiment has undergone a significant overhaul to tackle the data processing challenges that will arise in the upcoming Run 3 and Run 4 of the Large Hadron Collider. This paper introduces FunTuple, a novel component developed for offline data processing within the LHCb experiment. This component enables the computation and storage of a diverse range of observables for both reconstructed and simulated events by leveraging the tools initially developed for the trigger system. This feature is crucial for ensuring consistency between trigger-computed and offline-analysed observables. The component and its tool suite offer users flexibility to customise stored observables, and its reliability is validated through a full-coverage set of rigorous unit tests. This paper comprehensively explores FunTuple's design, interface, interaction with other algorithms, and its role in facilitating offline data processing for the LHCb experiment for the next decade and beyond.</p>","PeriodicalId":36026,"journal":{"name":"Computing and Software for Big Science","volume":"8 1","pages":"6"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11358189/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142112969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimal Operation of Cryogenic Calorimeters Through Deep Reinforcement Learning.","authors":"G Angloher, S Banik, G Benato, A Bento, A Bertolini, R Breier, C Bucci, J Burkhart, L Canonica, A D'Addabbo, S Di Lorenzo, L Einfalt, A Erb, F V Feilitzsch, S Fichtinger, D Fuchs, A Garai, V M Ghete, P Gorla, P V Guillaumon, S Gupta, D Hauff, M Ješkovský, J Jochum, M Kaznacheeva, A Kinast, S Kuckuk, H Kluck, H Kraus, A Langenkämper, M Mancuso, L Marini, B Mauri, L Meyer, V Mokina, K Niedermayer, M Olmi, T Ortmann, C Pagliarone, L Pattavina, F Petricca, W Potzel, P Povinec, F Pröbst, F Pucci, F Reindl, J Rothe, K Schäffner, J Schieck, S Schönert, C Schwertner, M Stahlberg, L Stodolsky, C Strandhagen, R Strauss, I Usherov, F Wagner, V Wagner, M Willers, V Zema, C Heitzinger, W Waltenberger","doi":"10.1007/s41781-024-00119-y","DOIUrl":"10.1007/s41781-024-00119-y","url":null,"abstract":"<p><p>Cryogenic phonon detectors with transition-edge sensors achieve the best sensitivity to sub-GeV/c <math><mmultiscripts><mrow></mrow> <mrow></mrow> <mn>2</mn></mmultiscripts> </math> dark matter interactions with nuclei in current direct detection experiments. In such devices, the temperature of the thermometer and the bias current in its readout circuit need careful optimization to achieve optimal detector performance. This task is not trivial and is typically done manually by an expert. In our work, we automated the procedure with reinforcement learning in two settings. First, we trained on a simulation of the response of three Cryogenic Rare Event Search with Superconducting Thermometers (CRESST) detectors used as a virtual reinforcement learning environment. Second, we trained live on the same detectors operated in the CRESST underground setup. In both cases, we were able to optimize a standard detector as fast as and with results comparable to human experts. Our method enables the tuning of large-scale cryogenic detector setups with minimal manual interventions.</p>","PeriodicalId":36026,"journal":{"name":"Computing and Software for Big Science","volume":"8 1","pages":"10"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11557640/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142629776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"One Flow to Correct Them all: Improving Simulations in High-Energy Physics with a Single Normalising Flow and a Switch.","authors":"Caio Daumann, Mauro Donega, Johannes Erdmann, Massimiliano Galli, Jan Lukas Späh, Davide Valsecchi","doi":"10.1007/s41781-024-00125-0","DOIUrl":"10.1007/s41781-024-00125-0","url":null,"abstract":"<p><p>Simulated events are key ingredients in almost all high-energy physics analyses. However, imperfections in the simulation can lead to sizeable differences between the observed data and simulated events. The effects of such mismodelling on relevant observables must be corrected either effectively via scale factors, with weights or by modifying the distributions of the observables and their correlations. We introduce a correction method that transforms one multidimensional distribution (simulation) into another one (data) using a simple architecture based on a single normalising flow with a boolean condition. We demonstrate the effectiveness of the method on a physics-inspired toy dataset with non-trivial mismodelling of several observables and their correlations.</p>","PeriodicalId":36026,"journal":{"name":"Computing and Software for Big Science","volume":"8 1","pages":"15"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11316724/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141971963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}