Data Sampling methods in the ALICE O2 distributed processing system

https://doi.org/10.1016/j.cpc.2020.107581
Open access under a Creative Commons license

Abstract

The ALICE experiment at the CERN LHC focuses on studying the quark-gluon plasma produced in heavy-ion collisions. Starting in 2021, its input data throughput will increase a hundredfold, up to 3.5 TB/s. To cope with such a large amount of data, a new online-offline computing system, called O2, will be deployed. It will synchronously compress the data stream by a factor of 35, down to 100 GB/s, before storing it permanently.

One of the key software components of the system will be the data Quality Control (QC). This framework and infrastructure are responsible for all aspects of the analysis software aimed at identifying possible issues with the data itself and, indirectly, with the underlying processing performed both synchronously and asynchronously. Since analyzing the full data stream online would exceed the available computational resources, reliable and efficient sampling is needed. It should provide a few percent of the data, selected randomly in a statistically sound manner and with minimal impact on the main dataflow. Additional requirements include, for example, the option to select data corresponding to the same collisions across a group of computing nodes.
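The requirement that several nodes independently pick out the data of the same collisions can be met by making each sampling decision a deterministic function of a shared identifier. The following C++ sketch only illustrates this idea and is not the O2 Data Sampling interface; the function name, the 64-bit timeframe identifier and the policy seed are assumptions introduced for the example.

// A minimal sketch, assuming a hypothetical 64-bit timeframe identifier:
// a seed-based Bernoulli decision that independent nodes can reproduce.
#include <cstdint>
#include <random>

// Decide whether a given timeframe should be sampled at the requested
// fraction (e.g. 0.02 for "a few percent"). Seeding the generator with the
// timeframe ID, combined with a policy-specific seed, makes the decision
// identical on every node that processes data of the same timeframe.
bool shouldSample(uint64_t timeframeId, double fraction, uint64_t policySeed = 0)
{
  std::mt19937_64 generator(timeframeId ^ policySeed);
  std::uniform_real_distribution<double> uniform(0.0, 1.0);
  return uniform(generator) < fraction;
}

Because all nodes seed the generator identically for a given timeframe, they accept or reject it together, while the overall acceptance rate still converges to the requested fraction.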

In this paper, the design of the O2 Data Sampling software is presented. In particular, the requirements for the pseudo-random number generators used for sampling decisions are highlighted, along with the results of benchmarks performed to evaluate the different candidates. Finally, a large-scale test of the O2 Data Sampling is reported.

Keywords

CERN
ALICE
O2
Data Sampling
Message-based system
Distributed processing system
Data quality control
Pseudo-random number generators


The review of this paper was arranged by Prof. David W. Walker.