Speaker
Dr Dagmar Adamova (NPI ASCR Prague/Rez)
Description
The performance of the Large Hadron Collider (LHC) during the ongoing Run 2 has been above expectations, both in the delivered luminosity and in the LHC live time. As a result, the volume of recorded data is much larger than originally anticipated. Based on current data production levels and the structure of the LHC experiment computing models, the estimates of data production rates and resource needs have been re-evaluated for the era leading into the High Luminosity LHC (HL-LHC), i.e. the Run 3 and Run 4 phases of LHC operation.
It turns out that the raw data volume will grow by a factor of ~10 by the HL-LHC era, while the processing capacity needs will grow by more than a factor of 60. Whereas the growth in storage requirements might in principle be satisfied with a 20% budget increase and technology advancements, there remains a gap of a factor of 6 to 10 between the needed and the available computing resources.
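As a rough illustration of where a gap of this order can come from (the specific figures below are an assumption for the sake of the example, not part of the estimates above): if the capacity affordable at a given budget grows by about 20% per year through technology improvements, then over the roughly ten years to the HL-LHC it increases by about 1.2^10 ≈ 6, while the processing needs grow by a factor of more than 60, leaving a shortfall of roughly 60 / 6 ≈ 10; more favourable technology or budget assumptions bring the gap down towards a factor of 6.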
The threat of a shortage of computing and storage resources was already present at the beginning of Run 2, but could still be mitigated, e.g. by improvements in the experiment computing models and data processing software, or by the utilization of various types of external computing resources. For the years to come, however, new strategies will be necessary to meet the huge increase in resource requirements.
In contrast with the early days of the Worldwide LHC Computing Grid (WLCG), the field of High Energy Physics (HEP) is no longer among the biggest producers of data. Currently, HEP data volumes and processing needs are about 1% of the size of the largest industry problems. HEP is also no longer the only science with large computing requirements.
In this contribution, we will present the new strategies of the LHC experiments towards the era of the HL-LHC, which aim to reconcile the requirements of the experiments with the capacities available for delivering physics results.
Primary authors
Dr Dagmar Adamova (NPI ASCR Prague/Rez)
Dr Maarten Litmaath (CERN)