Run 1 and Run 2 of the Large Hadron Collider (LHC) at CERN produced huge volumes of data whose analysis will continue to deliver physics results for years to come. To make this possible, the LHC experiments have relied on the services of the Worldwide LHC Computing Grid (WLCG).
At present, the LHC experiments are preparing for Run 3 of the LHC, which will bring significantly higher luminosity and therefore even larger volumes of data; these volumes will be surpassed again in the era of the High Luminosity LHC (HL-LHC), starting with Run 4.
While during Run 1 and Run 2 the LHC experiments were the only ones producing and analyzing scientific data at the scale of hundreds of petabytes, the situation is gradually changing. Projects like DUNE in the USA, Belle II in Japan, and SKA in Australia and South Africa will also produce huge volumes of data and plan to make significant use of WLCG services to store and process them. The LHC experiments will not necessarily remain the biggest scientific data producers in the future.
In this contribution we will present an overview of the trends and strategies the LHC experiments are using to adapt their data processing models to the future compute and storage resources available within WLCG, as well as to commercial clouds and High Performance Computing (HPC) facilities. At the same time, the experiments are building collaborations with related big-data projects in order to share and evolve WLCG services together.
Topic: Detectors and new facilities