While the LHC is still running, preparation is starting for the High-Luminosity LHC. As part of this preparation, the CPU efficiency of our Monte-Carlo generators is crucial due to the sheer number of events that need to be generated. Given the current constraints on the LHC IT budget, this will not be possible without significant software improvements. While event generation does not dominate the full CPU time of the LHC experiments, it is still estimated to represent between ten and twenty percent of it. Consequently, we have received a specific request to speed up that step by at least 20% and ideally by a factor of two. In addition to the High-Luminosity LHC, work is also starting for future high-energy accelerators. Going to the high-energy regime will amplify multi-scale issues, which in turn can reduce the efficiency of event generation. This issue is particularly striking within MG5aMC for VBF-like processes, where the current phase-space integration algorithm either fails to generate the requested number of events or takes an unreasonable time to do so.

The different avenues for speeding up Monte-Carlo integration are a well-covered topic. Such efforts can be classified into four categories. First, one can optimise the evaluation of the function being integrated, which in our case is the matrix element. Second, one can optimise the integration method to minimise the number of times such functions need to be evaluated. Third, one can make more efficient use of the various types of hardware (e.g. GPU, MPI, vectorization). Finally, one can play with the weights of the sample to optimise/re-use information.
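To illustrate the second category, here is a minimal sketch (not MG5aMC code; the toy integrand and helper names are assumptions for illustration) of why the choice of integration method matters for multi-scale integrands. A narrow Breit-Wigner-like peak is integrated twice with the same number of evaluations: once with plain uniform sampling, and once with importance sampling from a density shaped like the peak, which yields a far smaller statistical error per evaluation.

```python
import math
import random

def integrand(x, width=0.01):
    # Toy multi-scale integrand: a narrow Breit-Wigner-like peak at x = 0.5
    # times a smooth factor, mimicking the resonance structures that make
    # phase-space integration hard.
    return width / ((x - 0.5) ** 2 + width ** 2) * (1.0 + x)

def mc_uniform(f, n, rng):
    # Plain Monte-Carlo: sample x uniformly on [0, 1].
    samples = [f(rng.random()) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, math.sqrt(var / n)  # estimate and its statistical error

def mc_importance(f, n, rng, width=0.01):
    # Importance sampling: draw x from a density proportional to the peak
    # (a truncated Cauchy shape, sampled via its inverse CDF) and reweight
    # each point by f(x) / pdf(x).
    lo = math.atan(-0.5 / width)
    hi = math.atan(0.5 / width)
    norm = hi - lo  # normalisation of the sampling density on [0, 1]
    samples = []
    for _ in range(n):
        x = 0.5 + width * math.tan(lo + (hi - lo) * rng.random())
        pdf = (width / ((x - 0.5) ** 2 + width ** 2)) / norm
        samples.append(f(x) / pdf)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, math.sqrt(var / n)

rng = random.Random(1)
est_u, err_u = mc_uniform(integrand, 2000, rng)
est_i, err_i = mc_importance(integrand, 2000, rng)
```

With the sampling density matched to the peak, the weights `f(x)/pdf(x)` are nearly flat, so the importance-sampled error is orders of magnitude below the uniform one at equal cost. Adaptive algorithms such as VEGAS automate the construction of such a density, which is essentially what a phase-space integrator tries to do for the matrix element.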