26 February 2018 to 9 March 2018
Europe/Berlin timezone

Scientific Programme


The Large Hadron Collider (LHC) was primarily built to explore the fundamental forces in the TeV range and to verify or falsify the Standard Model (SM) of particle physics in this energy range. Specifically, the self-consistency of the SM and precision tests from the pre-LHC era suggested that the mechanism of ElectroWeak Symmetry Breaking (EWSB) had to show distinct signatures in the detectors at energies between the electroweak scale and roughly 1 TeV. In 2012 this expectation was confirmed with the discovery of a Higgs boson in the predicted mass range, specifically at 125 GeV. Since then, all measured properties of this new particle have been well compatible with those of the SM Higgs boson. Nevertheless, the SM cannot be the ultimate theory describing all fundamental interactions, since several puzzles remain: What is the origin of Dark Matter? Is the gauge structure of the Standard Model valid beyond our experimental reach? Where (if at all) do the strong and electroweak interactions unify? Does the Higgs mechanism, which is a simple non-dynamical model, really describe EWSB properly? Are there more new particles with masses up to the few-TeV range as traces of new structures? What is the origin of the matter-antimatter asymmetry in the universe?

The great success of the SM in describing (so far) all phenomena observed at the LHC suggests that the key to a potential discovery of New Physics is precision, particularly in light of the planned high-luminosity upgrade of the LHC. Recent years have seen tremendous progress at the precision frontier in producing ever more accurate predictions for LHC physics, both conceptually and technically, though mostly within the SM. Major achievements include:

  • Automated next-to-leading-order (NLO) calculations for multi-particle production at hadron colliders. Automated amplitude generators such as aMC@NLO, BlackHat, FeynArts/FormCalc, GoSam, HELAC-NLO, MadLoop, OpenLoops, or Recola, in combination with numerical one-loop libraries such as Collier, FF, Golem95C, LoopTools, OneLOop, PJFry, or QCDLoop, have made it possible to produce NLO predictions with four particles in the final state in a standard fashion, and even with up to seven in exceptional cases. This progress is no longer restricted to QCD; the automation of electroweak corrections is also in full swing.
  • At the next-to-next-to-leading-order (NNLO) frontier, many complete QCD predictions for 2→2 particle reactions have become available (e.g. for ttbar, WW, and ZZ production), enabling experimental analyses at the level of several percent. For 2→1 processes (such as single-Higgs production), even NNNLO calculations have become feasible.
  • The inclusion of higher-order corrections in multi-purpose Monte Carlo event generators is by now standard. The matching of fixed-order NLO corrections with parton showers, as well as the consistent merging of event samples with different jet multiplicities, is widely automated. First steps towards matching at NNLO have even been taken.
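Schematically, the NLO calculations listed above combine Born, virtual, and real-emission contributions; in a subtraction scheme such as Catani–Seymour dipoles, a local counterterm renders the infrared-divergent pieces separately integrable (the notation below is generic and not tied to any specific tool mentioned in the list):

```latex
\sigma^{\mathrm{NLO}}
  = \int_n \mathrm{d}\sigma^{B}
  + \int_n \Big[ \mathrm{d}\sigma^{V} + \int_1 \mathrm{d}\sigma^{A} \Big]_{\epsilon=0}
  + \int_{n+1} \Big[ \mathrm{d}\sigma^{R} - \mathrm{d}\sigma^{A} \Big]_{\epsilon=0}
```

Here $\mathrm{d}\sigma^{B}$, $\mathrm{d}\sigma^{V}$, and $\mathrm{d}\sigma^{R}$ denote the Born, virtual, and real-emission contributions over $n$- and $(n{+}1)$-parton phase space, and $\mathrm{d}\sigma^{A}$ is the subtraction term, integrable over the one-parton emission phase space.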

This progress carries over to important models of Beyond-the-SM (BSM) physics in a straightforward way, and the automation of NLO corrections in arbitrary BSM theories is within reach. However, New Physics models, with their significantly increased number of fundamental particles and their complex production and decay kinematics, typically involve additional issues and subtleties when pushed to NLO precision and beyond:

  • Specific BSM models involve new free parameters, not all of which are fixed by particle properties (such as masses) that are directly accessible in experiment. Such parameters (e.g. mixing angles or new coupling constants) are often fixed by MSbar renormalization conditions at some energy scale or by other unphysical renormalization conditions. As experience with concrete models shows (e.g. the Two-Higgs-Doublet Model or supersymmetric SM extensions), not all possible renormalization conditions are theoretically consistent or phenomenologically useful: some are perturbatively unstable and thus prone to large intrinsic uncertainties. A proper choice of renormalization conditions may be nontrivial and model specific.
  • In the absence of new light degrees of freedom, deviations from the SM can be generically described within an effective field theory (EFT), for example one based on the SM particle content supplemented by higher-dimensional operators. This defines the so-called Standard Model Effective Field Theory (SMEFT). A well-motivated and popular framework involves the complete set of dimension-6 operators. While the SMEFT, or restricted subsets such as an EFT of the top-quark sector, is not renormalizable, it still allows for systematic precision tests. Owing to the wide variety of higher-dimensional operators, precision calculations in the SMEFT require extensions of the existing tools and a thorough understanding of the corresponding renormalization. For example, systematic and automated computations of higher-order QCD corrections in the EFT framework are partly under way, but the link between electroweak corrections and an EFT Lagrangian is largely unexplored and not publicly available to the experimental collaborations.
  • Simplified models are a particularly successful new approach to Dark Matter searches at the LHC and elsewhere. Some of these models are theoretically better defined than others. For well-defined simplified models the link to full, renormalizable ultraviolet completions as well as to effective theories is crucial for an optimal application in particle and astro-particle physics phenomenology.
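The SMEFT framework discussed above can be summarized by the schematic Lagrangian in which the SM is extended by dimension-6 operators $\mathcal{O}_i^{(6)}$ suppressed by a New Physics scale $\Lambda$ and weighted by Wilson coefficients $C_i$ (normalization conventions vary between references, and the lepton-number-violating dimension-5 operator is omitted here):

```latex
\mathcal{L}_{\mathrm{SMEFT}}
  = \mathcal{L}_{\mathrm{SM}}
  + \sum_i \frac{C_i}{\Lambda^2}\, \mathcal{O}_i^{(6)}
  + \mathcal{O}\!\left(\Lambda^{-4}\right)
```

Precision tests then proceed by computing observables to fixed order in $1/\Lambda^2$, which is systematic even though the theory is not renormalizable in the traditional sense.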

The intention of the proposed Scientific Programme is to boost precision in the most prominent BSM models, including the SMEFT. In particular, strategies for the automation of higher-order calculations in general BSM theories are to be developed, and useful standards defined. Furthermore, an efficient interface between BSM models, precision calculations, and Monte Carlo simulation tools is crucial for a proper understanding of their experimental impact. Specific discussion topics will include:

  • precise definitions of BSM models (renormalization, input procedures, etc.),
  • higher-order automation (amplitude generation and inclusion in Monte Carlo generators),
  • higher-order corrections in EFTs,
  • strategies for phenomenological analyses (model independent and model specific),
  • implementation of the results.

The goal of the proposed Scientific Programme is to bring together leading experts in the field of higher-order calculations with experts in the phenomenology of theories Beyond the Standard Model and effective field theories. The workshop programme is theory driven, but to make contact with current key analyses at the LHC, we will invite selected experimentalists from ATLAS and CMS. In addition to a sizeable set of more experienced participants, we plan to invite senior postdocs and very junior faculty, giving everybody the opportunity to discuss new ideas.
