Dr Dagmar Adamova (Nuclear Physics Institute AS CR)
After the LHC community successfully completed Run 1, the capacity of the Worldwide LHC Computing Grid (WLCG) became the limiting factor in the processing of the ever-growing volumes of data produced by LHC collisions. During the last five years the LHC community has launched a number of activities to increase computing performance and optimize the usage of available resources. These activities are particularly important in preparation for Run 3 (2021-2023) and Run 4 (2026-2029), the era of the High Luminosity LHC, because mainstream technology evolution might fall short of WLCG needs by up to a factor of 10. These endeavours must constantly adapt to new technologies; they concentrate on redesigning the computing models of the LHC experiments, improving the efficiency of the data processing chains, adapting software to fast and/or cheap CPU architectures, and increasing the use of diverse resources such as private and public clouds or high performance computing (HPC) facilities. WLCG was built at a time when there was no experience with, or example of, such an infrastructure in industry or elsewhere. This situation changed during Run 1, when the global internet and computing industry began to provide on-demand services and developed tools and solutions that are also of interest to WLCG. The latest strategies to increase WLCG performance are therefore also concerned with utilizing tools from outside the LHC community: the use of commercial cloud services for LHC data simulation and processing; the formation and exploitation of data lakes; and the use of popular data mining and analytics pipelines. The HEP Software Foundation was formed to orchestrate the transformation of software toward higher efficiency and its adaptation to new computing technologies.
The need for essential changes and external expertise has been recognized: examples include the use of Python-based notebooks for high-level interactive analysis and the integration of machine learning into reconstruction. There are opportunities for more commonality across experiments, and collaboration with other big-data projects such as the Square Kilometre Array (SKA) is increasing. Important steps forward are also expected from the continued collaboration with the IT industry through CERN openlab. In this contribution we present the latest strategies of the LHC community to prepare for the computing challenges of the next ten years, and especially for the demands of the High Luminosity LHC.