1 Introduction

Ultra-relativistic heavy-ion collisions (e.g. gold–gold (Au–Au) at the Relativistic Heavy-Ion Collider (RHIC) at BNL, with a top collision energy of \(\sqrt{s_{\mathrm{{NN}}}} = 200\) GeV, and lead–lead (Pb–Pb) at the Large Hadron Collider (LHC) at CERN, with a current top collision energy of \(\sqrt{s_{\mathrm{{NN}}}} = 5.02\) TeV for Pb–Pb collisions and \(\sqrt{s} = 13\) TeV for pp collisions) are usually seen as the experimental tool for the creation of the quark-gluon plasma. The quark-gluon plasma is a droplet of deconfined matter formed at the very high temperatures reached in these collisions. A similar phase transition is one of the stages in the time evolution of the early universe. The main aim of ultra-relativistic heavy-ion physics is to create the quark-gluon plasma and to study its properties. To understand these properties, control measurements in proton–proton (pp) and proton–nucleus (p–Au at RHIC and p–Pb at the LHC) collisions are needed. These control measurements have turned out to be very interesting on their own, as we will see in the following. The traditional idea of p–Pb and d–Au collisions is to provide a reference for the Pb–Pb and Au–Au collisions, in order to investigate so-called cold nuclear matter (initial state) effects.

The investigation of the production of nuclei has been an active part of the studies of heavy-ion collisions since their beginning. This is also connected to the fact that low-energy heavy-ion physics is more closely connected to nuclear structure physics, since for instance the studied heavy ions can break up into smaller nuclei that can be investigated. In ultra-relativistic heavy-ion collisions, by contrast, the colliding ions mainly create a zone of large energy density and temperature that is nearly baryon number free.Footnote 1 This fact can be seen from the baryochemical potential \(\mu _{\mathrm{{B}}}\), which is about one GeV at low energies and close to zero at the LHC. This means that at the LHC anti-protons and anti-neutrons are produced in equal amounts as their matter counterparts. At RHIC \(\mu _{\mathrm{{B}}}\) still amounts to a few tens of MeV, which leads to a slight difference in the measured production yields of protons and anti-protons and, in turn, to a small difference between the production of nuclei and anti-nuclei.

Nevertheless, the small values of the baryochemical potential still allow for the measurement of (sufficiently) high production yields of anti-nuclei and led to two discoveries by the STAR Collaboration at RHIC: the observation of the anti-alpha [3] and the observation of the anti-hypertriton [4]. The latter is a bound state of an anti-proton, an anti-neutron and an anti-\({\mathrm {\Lambda }}\) hyperon. With this observation they extended the nuclear chart by adding a third axis, in this case in the positive strangeness direction. It was the first discovered anti-hypernucleus and there is hope to find more in the next data-taking campaigns.

The formation of (anti-)(hyper-)nuclei in high-energy collisions is often discussed using two different approaches.Footnote 2 On one hand, the coalescence model, where the objects are formed from their constituents, and on the other hand the thermal model, assuming (in the simplest case) that all particles are formed at a single temperature, with a corresponding volume and baryochemical potential.

Thermodynamic concepts are typically used to describe systems with a large number of particles (\(> 10^{4}\)) in local thermal equilibrium. Typical numbers of produced charged particles per pseudo-rapidity unit at mid-rapidity in collisions at the LHC are:

  • in centralFootnote 3 (0–5%) Pb–Pb collisions: d\(N_\text {ch}\)/d\(\eta \approx \) 1600 [9],

  • in high multiplicity (0–5%) p–Pb collisions: d\(N_\text {ch}\)/d\(\eta \approx \) 45 [10], and

  • in minimum bias pp collisions: d\(N_\text {ch}\)/d\(\eta \approx \) 6 [11].

In fact, integrating over the full pseudo-rapidity region, one obtains more than 17000 charged particles (\(\approx \)26000 particles including neutrals) produced in central collisions [6]. The lifetime of the created system (medium) must be long enough and the mean free path of the particles in the medium must be short enough, so that equilibrium can be established by several interactions (simulations show about 3–6 are necessary) between its constituents [12].

In the following we focus on recent results by the ALICE and STAR Collaborations, which are the main contributors to the physics discussed in this review, and on the current model developments to describe these results. The review is organized as follows: first a short discussion of the available models, then the focus is put on the (anti-)nuclei data, and finally the peculiarities of the (anti-)hypertriton are highlighted. A brief summary and a short outlook conclude this manuscript.

2 Models

2.1 Collectivity and general concepts

The evolution of an ultra-relativistic heavy-ion collision is usually connected to three characteristic temperatures: the (pseudo-)critical temperature \(T_{\mathrm{{c}}}\), the chemical freeze-out temperature \(T_{\mathrm{{ch}}}\) and the kinetic freeze-out temperature \(T_{\mathrm{{kin}}}\). When the temperature in the collision is higher than \(T_{\mathrm{{c}}}\), a quark-gluon plasma is formed, which cools down within a very short time until \(T_{\mathrm{{c}}}\) is reached again and hadronization starts. The particle yields are then fixed at \(T_{\mathrm{{ch}}}\), where the inelastic collisions cease. After further cooling \(T_{\mathrm{{kin}}}\) is reached, where also elastic processes stop and the spectra of the particles are frozen (see for instance [6, 13,14,15] for a deeper discussion).

During this evolution of the temperatures the system is expanding, mainly in the radial direction. This expansion can be described in hydrodynamic models by one collective flow field. This implies that all particles flow with one velocity, the radial flow velocity. A simple (hydro-like) model that is widely used in the description of transverse momentum spectra of particles is the blast-wave model [17]. The development of the radial "push" the particles get is shown in Fig. 1, where deuteron transverse momentum spectra are displayed for different centralities in Pb–Pb collisions, compared with pp collisions. One clearly sees that the spectrum in pp collisions is a straight line in a log plot, i.e. an exponential spectrum. Going from peripheral to more central events, however, the spectra become less exponential and develop a shoulder that leads to a higher average transverse momentum, which is a clear sign of radial flow [6]. Of particular interest for the heavy-ion community is also the anisotropic flow, which is formulated through a Fourier decomposition of the azimuthal-angle dependent production spectra of the investigated particles. It is caused mainly by the spatial eccentricity of non-central collisions, which leads to a momentum eccentricity, i.e. stronger flow in one direction compared to another.
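
To illustrate how radial flow generates such a shoulder, the following minimal sketch evaluates the blast-wave shape of [17], \(\mathrm{d}N/(m_\mathrm {T}\,\mathrm{d}m_\mathrm {T}) \propto \int r\,\mathrm{d}r\, m_\mathrm {T}\, I_0(p_\mathrm {T}\sinh \rho /T_{\mathrm{kin}})\, K_1(m_\mathrm {T}\cosh \rho /T_{\mathrm{kin}})\), for a deuteron. All parameter values are purely illustrative assumptions, not fits to the data of Fig. 1:

```python
import numpy as np
from scipy.special import i0, k1

def blast_wave(pt, m, T_kin, beta_s, n=1.0, R=1.0, steps=200):
    """Blast-wave spectrum shape dN/(m_T dm_T), up to normalisation [17]."""
    mt = np.sqrt(pt**2 + m**2)
    r = np.linspace(1e-3, R, steps)
    rho = np.arctanh(beta_s * (r / R)**n)  # transverse rapidity profile
    return np.trapz(r * mt * i0(pt * np.sinh(rho) / T_kin)
                    * k1(mt * np.cosh(rho) / T_kin), r)

pts = np.linspace(0.5, 4.0, 8)  # GeV/c
for beta_s in (0.1, 0.7):       # weak (pp-like) vs strong (central Pb-Pb-like) flow
    spectrum = [blast_wave(pt, m=1.876, T_kin=0.1, beta_s=beta_s) for pt in pts]
    print(beta_s, np.round(np.log(spectrum), 1))  # strong flow flattens the low-p_T part
```

With the strong-flow parameters the logarithmic spectrum visibly deviates from a straight line at low \(p_\mathrm {T}\), i.e. it develops the shoulder discussed above.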

Fig. 1

Transverse momentum spectra of deuterons for different centralities in Pb–Pb collisions, shown together with the one from minimum bias pp collisions. The percentages indicate the overlap of the collisions, where 0–10% are the most central and 60–80% the most peripheral collisions. Figure from [16], where also the details about the measurement are given

2.2 Thermal models

As discussed in Sect. 1 the number of particles produced in central heavy-ion collisions is quite large (more than 20000), therefore it makes sense to apply statistical methods. In fact, hadron yields measured in heavy-ion collisions at various energies are known to be described surprisingly well by such a statistical approach, i.e. the thermal model [18,19,20,21,22,23], which in the simplest case represents a non-interacting gas of known hadrons and resonances in the grand-canonical ensemble (see, for instance, Ref. [24] for an overview). This concept has also been successfully applied for many years to production yields of light nuclei [25,26,27,28,29,30,31], and, even more surprisingly, a very good description of the various light (anti-)(hyper-)nuclei yields measured in heavy-ion collisions is obtained [16, 32,33,34,35,36]. Actually, as pointed out in [8] and visible from Fig. 2 (the black lines indicate the freeze-out curve for three cases), there are significant differences at low energies when the thermal model is used to map the extracted temperature \(T_{\mathrm{{ch}}}\) (in the figure only named T) and baryochemical potential \(\mu _{\mathrm{{B}}}\) onto the QCD phase diagram, depending on whether nuclei production yields are included or neglected.

Often it is argued (see for instance [38, 39]) that the chemical freeze-out temperature \(T_{\mathrm{{ch}}}\), where the particles are supposed to be produced, is much higher than the low binding energies of the nuclei, e.g. 2.2 MeV for the deuteron compared to the chemical freeze-out temperature \(T_\mathrm {ch} = 156\) MeV. Nevertheless, the analysis of the production yields of all particles from heavy-ion collisions at RHIC and the LHC, including light nuclei, gives a similar temperature, and if only the light particles are used the prediction for the light nuclei is in good agreement with the measured production yields. In fact, if only the nuclei are used to extract a temperature, the result is very close to the 156 MeV, namely \(T_\mathrm {ch} \approx 160\) MeV [5, 6, 40]. The circumstance that objects with binding energies of a few MeV are described by their production at 160 MeV in thermal models is often called "snowballs in hell" [20, 39, 41,42,43,44].

From the partition function Z of the hadron resonance gas all thermodynamical quantities for hadrons and light (anti-)(hyper-)nuclei can be computed. Specifically, one can compute, for each hadron, its density \(n(T_\mathrm {ch},\mu ,V)\).Footnote 4 If all hadrons are produced from a state of thermodynamical equilibrium then, for a given data set, e.g. one beam or centre-of-mass energy, the measured hadron yield for hadron j, \(\mathrm{d}N_j/\mathrm{d}y\), at a given rapidity y but integrated over transverse momentum, should be reproduced as \(\mathrm{d}N_j/\mathrm{d}y = V \cdot n(T_\mathrm {ch},\mu ,V)\). In practice, a fit is performed for each data set to the measured yields to determine the three parameters \(T_\mathrm {ch}, \mu _B, V\). The potentials \(\mu _Q\) and \(\mu _S\) are fixed by charge and strangeness conservation.
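
A minimal numerical sketch of this recipe, in the Boltzmann approximation and neglecting quantum statistics, resonance feed-down and excluded-volume corrections, could look as follows; the volume is an illustrative assumption of the order of the values fitted at the LHC:

```python
import numpy as np
from scipy.special import kn  # modified Bessel function K_n

HBARC = 0.19733  # GeV fm

def density(m, g, T, mu_B=0.0, B=0):
    """Primordial density in fm^-3 in the Boltzmann approximation:
    n = g/(2 pi^2) * m^2 * T * K_2(m/T) * exp(B*mu_B/T)."""
    return g / (2 * np.pi**2) * m**2 * T * kn(2, m / T) \
        * np.exp(B * mu_B / T) / HBARC**3

T_ch, V = 0.156, 5000.0  # GeV, fm^3 (V: illustrative fireball volume)
for name, m, g in [("p", 0.938, 2), ("d", 1.876, 3), ("3He", 2.809, 2)]:
    print(name, V * density(m, g, T_ch, B=1))  # dN/dy = V * n at mu_B = 0
```

Each additional unit of mass number costs roughly a factor \(\exp (-m_{\mathrm{N}}/T_\mathrm {ch})\) in yield. Note that such a sketch gives primary yields only, whereas measured proton yields contain a sizeable feed-down from resonance decays.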

Since the beginning of the 1990s a very large body of data on hadron yields produced in ultra-relativistic nuclear collisions has been collected. From an analysis of these data in the spirit of the above approach, convincing evidence has been obtained [5, 45,46,47,48,49,50] that the yields of all hadrons produced in central collisions can indeed be very well described, yielding the complete energy dependence of the parameters \(T_\mathrm {ch}, \mu _B, V\) [47, 48], see in particular also the recent fit to the high precision LHC data [5]. For recent reviews see [5, 24].

The description of light nuclei in this approach assumes that the nuclei are handled as normal hadrons, i.e. mesons or baryons, characterised mainly by their mass and quantum numbers (isospin I, total angular momentum J, baryon number B, strangeness S, and charge Q). As a matter of fact, the feed-down that is very important in the baryon and meson sector seems to be negligible for the nuclei (at top RHIC and LHC energies). The population of the light nuclei states is mainly driven by their mass (or mass number A). The weakly decaying hypernuclei leading to additional yields for the light nuclei (of the same mass number A) are suppressed in the analysis, since they are usually removed by a selection on topological quantities. In addition, these decays have branching ratios and reconstruction efficiencies on the order of 10% each, leading to a suppression factor of about 100 compared to the pure nuclei yields. The same holds true for hypernuclei decays into daughters of mass number A-1 or lower. These facts lead to a strong suppression (factor 330 at the LHC) of feed-down from higher mass states, as seen for instance in the ALICE data [5, 6, 51]. It should be noted that at low energies excited nuclei are expected to contribute more strongly to the measured yields of stable nuclei, as discussed in [37].

Fig. 2

Black lines: constant energy per particle ratio \(E/N = 1\) GeV for the case of no nuclei (dashed), ground state nuclei (dash-dotted) and excited nuclei (solid) being included in the statistical model calculation. The colored lines correspond to constant entropy per baryon trajectories: \(S/A = 5\) (green), \(S/A = 10\) (orange), and \(S/A = 30\) (blue). Figure from [37], where also the details about the calculations are given

From the beginning to the middle of the 1980s, important work was done to describe the data on light nuclei production from the Bevalac. Back then, the whole entropy production was connected to baryons and nuclei, since only a small number of pions is created in comparison to higher energies.Footnote 5 In fact, the entropy per nucleon was found to be directly extractable from the measured d/p ratio through the formula \(S/A = 3.95-\ln {(\mathrm {d/p})}\) [28], and the data at that time gave values of about 5 to 6. This was understood by hydrodynamic calculations which included decays from particle-unstable excited nuclei [52]. Light nuclei yields at these energies have strong contributions from the decay of intermediate mass fragments which are produced in excited states or even only exist as a typically broad resonance state, as pointed out and used by Hahn & Stöcker [30, 53]. This work has been re-discovered recently and some work is being done to describe the data at SIS18 energies taken with the HADES experiment and the data from STAR from the beam-energy scan at RHIC [37]. These excited nuclei are also important in a recent pre-clustering approach [54,55,56].
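
As a small worked example of this relation (the input d/p value is purely illustrative):

```python
import math

def entropy_per_nucleon(d_over_p):
    """Bevalac-era estimate S/A = 3.95 - ln(d/p) from [28]."""
    return 3.95 - math.log(d_over_p)

print(entropy_per_nucleon(0.3))  # ~5.2, within the range 5-6 quoted above
```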

Fig. 3

The entropy per net-baryon ratio S/A as a function of the collision energy for the case of no nuclei (dashed), ground state nuclei (dash-dotted) and excited nuclei (solid) being included in the statistical model calculation. Extracted using the thermal model description as in [37]

The entropy per net-baryon S/A extracted from the thermal model description of Fig. 2 is displayed in Fig. 3 as a function of the centre-of-mass energy \(\sqrt{s_{\mathrm{{NN}}}}\). It shows a nice agreement between the three cases from 10 GeV onwards, but some difference at low energies depending on whether nuclei are included or not. Basically no difference in the S/A ratio is observed between the cases where only stable nuclei or, in addition, excited nuclei are included (to be precise: the difference is less than 1%).

The partition function of the hadron resonance gas used in the thermal model is, in this low temperature, low density regime, usually evaluated in the non-interacting limit [5]. Sometimes, repulsive interactions are modeled with an 'excluded volume' prescription. As long as all hadrons have the same excluded volume this correction leads to a reduction of the total particle density but does not change the relative densities of individual hadrons. In [57] the excluded volume correction has been used exclusively in this spirit, whereas in [58,59,60] it has been applied to the baryon sector only, also using a uniform parameter for all baryon species, leading to a suppression of various baryon-to-meson ratios. While each hadron species can, in principle, have its own individual eigenvolume, a justification of a particular scheme is problematic due to the lack of reliable empirical input.

An exploratory study of excluded volume corrections for light nuclei has been presented in [61], where a large sensitivity of light nuclei ratios to the choice of eigenvolumes was reported.

An extreme and very recent example of the application of the excluded volume approach is given in [62, 63] and developed in [62, 64,65,66,67,68], which leads to a rather good description of the data but uses a rather "complicated" set of hard-core radii for hadrons and nuclei, connected with a treatment through the induced surface tension and allowing for two separate freeze-out temperatures. From a simple perspective these are additional parameters, which always lead to a better description of data. Looking in more detail, it seems even more unnatural that the freeze-out temperature of the nuclei is supposed to be higher than the one for the hadrons (about 185 MeV for nuclei compared to 165 MeV for the hadrons in their approach). The nuclei yields are thus frozen even before the hadron yields (in the cooling phase of the fireball evolution), making the "snowballs in hell" argument even stronger.

In fact, as seen from the aforementioned values for the charged particle multiplicity, it is also clear that a grand-canonical description of pp and p–Pb collisions will be rather difficult. For such values it is more appropriate to use a canonical approach, either only strangeness-canonical or a full canonical description, i.e. an explicit conservation of the quantum numbers. This approach was developed in the past in [23, 69,70,71,72,73,74,75] and applied to elementary collisions, such as pp, p–Pb, p\(\bar{\mathrm {p}}\) and e\({}^{+}\)e\({}^{-}\) collisions [76,77,78,79,80,81,82,83,84,85,86,87,88]. This concept was applied in [89] to predict the dependence of various (anti-)(hyper-)nuclei production yields and their ratios on the charged particle multiplicity, using a fully canonical treatment.

There are other models connected to a thermal description, such as partial chemical equilibration approaches that are able to describe the data, e.g. the treatment of the temperature evolution of the chemical potentials [90] or the usage of the Saha equation [91], similar to the one used in cosmology to describe Big Bang nucleosynthesis [92]. Furthermore, there is the Hagedorn resonance model, which can also describe the data well [93]. Some more ideas are reviewed in [8].

2.3 Coalescence models

A more microscopic approach for the production of composite objects such as deuterons and light nuclei in nuclear and hadronic collisions is the coalescence model. It was first established for the description of data collected at the Proton Synchrotron at CERN, when for the first time a 25 GeV proton beam was used to study particle production in collisions with a variety of different targets [94]. In view of the surprisingly large cross sections observed for deuteron production in these p-nucleus collisions, a mechanism was proposed [95,96,97] in which deuterons are formed from protons and neutrons which are close in phase-space. This idea was further developed to describe the yields of clusters in heavy-ion collisions at different energies. The first time it was used in heavy-ion collisions was at the Bevalac at Lawrence Berkeley Laboratory, starting in the 1970s [98,99,100,101,102,103]. It was further applied to data obtained at the Alternating Gradient Synchrotron (AGS) at BNL, where several different experiments (E802/E866, E814, E864, E877, E878) have results on the production of light nuclei [104,105,106]. Furthermore, at the CERN SPS it was used for the description of heavy-ion data at three different experiments (NA44, NA49, NA52) [107,108,109,110,111,112,113,114,115,116,117,118,119]. The model was also successfully applied to describe the yields of nuclei at RHIC [4, 121,122,123,124].

In the following, we briefly summarise important aspects of this approach. A deeper discussion of the variants can be found in [6]. An empirical coalescence model based on the above pioneering publications was developed for the analysis of light nucleus production data from relativistic nuclear collisions at the Berkeley Bevalac, see, e.g., the review in [31] and references given there. Such collisions typically lead to the complete disintegration of the overlap zone of the colliding nuclei into their constituent nucleons. In such a situation, the production cross-section of a light nucleus with mass number A is given by the probability that A of the 'produced' nucleons have relative momenta less than an empirical parameter \(p_0\), to be determined by comparison with measured yields. This model relates the production cross-section of the (light) nucleus, having a momentum \(p_A\), to a scaled power of the production cross-section for nucleons (in practice protons, since neutrons are typically not measured)Footnote 6 which have a momentum \(p_{\mathrm {p}}\):

$$\begin{aligned} E_A \frac{\mathrm {d}^3N_A}{\mathrm {d}^3p_A} = B_A \left( E_{\mathrm {p}} \frac{\mathrm {d}^3N_{\mathrm {p}}}{\mathrm {d}^3p_{\mathrm {p}}}\right) ^A, \end{aligned}$$
(1)

where \(p_A = Ap_{\mathrm {p}}\). This leads to the interesting fact that, for a given nucleus, the coalescence parameter \(B_A\) should not depend on momentum or centrality of the collision but only on the cluster parameters:

$$\begin{aligned} B_A = \left( \frac{4 \pi }{3} p_0^3 \right) ^{(A-1)} \frac{M}{m^A} \end{aligned}$$
(2)

where M and m are the nucleus and the proton mass, respectively, and \(\frac{4 \pi }{3} p_0^3\) is the coalescence volume in momentum space. With this approach a reasonable description of the Bevalac data was obtained, see for instance [31]. In fact, already in the Bevalac measurements it was observed that using this formalism one gets different coalescence radii \(p_0\) for different nuclei (d, t, \(^3\)He); they differ by about 20–30% for the different species. Equation 2 actually implies an independence of the momentum of the particles, and indeed at the Bevalac and in elementary collisions such a behaviour is observed, namely \(B_A\) depends only weakly on the collision energy and the event multiplicity.
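
To get a feeling for the magnitudes involved, Eq. (2) can be evaluated in a few lines; the value of \(p_0\) below is an illustrative choice in the range found at the Bevalac:

```python
import math

M_P = 0.938  # GeV/c^2, proton mass

def coalescence_B_A(p0, A):
    """Simple momentum-space coalescence parameter, Eq. (2):
    B_A = (4 pi/3 * p0^3)^(A-1) * M / m^A, with M = A * m."""
    return (4 * math.pi / 3 * p0**3)**(A - 1) * (A * M_P) / M_P**A

print(coalescence_B_A(0.13, 2))  # B_2 ~ 2e-2 GeV^2/c^3
print(coalescence_B_A(0.13, 3))  # B_3 ~ 3e-4 GeV^4/c^6
```

These numbers are of the order of the values measured in elementary collisions and at the Bevalac (cf. the dashed lines in Fig. 7).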

The coalescence model approach can also be connected with a thermodynamic treatment, as for instance discussed in [25, 125,126,127]. From this approach one can derive the proportionality

$$\begin{aligned} B_A \propto \left( \frac{1}{V} \right) ^{(A-1)} \end{aligned}$$
(3)

where V is now the volume in coordinate space. Thus the first approaches often used either a coordinate-space or a momentum-space picture.

In the 1990s, data obtained at the Brookhaven AGS and CERN SPS accelerators at much higher energies provided, however, clear evidence for a momentum and centrality/multiplicity dependence of \(B_A\). Furthermore, it was realised that the production of bound objects from their free constituents violates energy and momentum conservation. To address these issues and to provide a more systematic theoretical description of the coalescence process, new approaches were developed which also took into account the temporal evolution of the fireball formed in the collision, see, e.g., [128,129,130,131,132].

Nevertheless, for a full model description one has to calculate the coalescence process itself, which has several drawbacks. The transverse kinetic energies of particles produced in ultra-relativistic heavy-ion collisions lie, at some hundreds of MeV to several GeV, significantly above the relevant binding energies of the multi-baryon objects (2.2 MeV for deuterons, 8.48 MeV for tritons and 7.72 MeV for \(^3\)He nuclei). This fact is used in transport models (e.g. UrQMD) to argue that one can neglect the structure and intrinsic dynamics of such nuclei altogether. To describe the production of nuclei one usually uses different classes of coalescence models, e.g. momentum-space coalescence or phase-space coalescence with and without potential forces.

Clearly, a full treatment would need a detailed knowledge of the wave function of the nuclei under consideration. A recent discussion is for instance given in [133]. In this approach, the coalescence yield is proportional to the square of the n-body wave function of the state formed by coalescence, which is usually approximated by a Gaussian function, far away from the true distribution (although a recent study showed that the usage of a more realistic function, i.e. the Hulthén wave function, leads to similar results [134]). In practice this n-body wave function is adjusted such that the corresponding rms radius (\(\sqrt{\langle r^2\rangle }\)) agrees with the size of the nucleus under consideration. This is still a crude approximation but at least takes account of global parameters such as the reduced mass and the binding energy. The different binding energies are also reflected in the rms radii, onto which one projects in the more sophisticated models.

These considerations notwithstanding, most actual data analyses are still based on the simple momentum-space coalescence picture. In [135], e.g., the authors use this approach to describe the apparent thermal ordering observed for the production of light nuclei at different RHIC energies. In this approach, the exponential mass dependence with a parameter p (usually named penalty factor) is introduced indirectly through the coalescence parameter \(B_A\). To reach such an exponential behaviour, which one could call thermal-like, one has to put in by hand a thermal distribution of the nuclei (which is already the case if one assumes a blast-wave distribution for the baryons used in the coalescence calculation). If this is not done one cannot easily reproduce the described observations, while the exponential behaviour comes out naturally in a thermal model, as discussed in Sect. 2.2.
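
For orientation, the size of such a penalty factor at LHC energies can be estimated in one line; this back-of-the-envelope estimate ignores the transverse-momentum integration and feed-down, but lands in the right ballpark:

```python
import math

m_N, T_ch = 0.938, 0.156  # GeV: nucleon mass, chemical freeze-out temperature
# thermal "penalty" per additional nucleon: exp(m_N / T_ch)
print(math.exp(m_N / T_ch))  # ~4e2, of the order of the ~300 observed at the LHC
```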

In Fig. 4 the coalescence parameter \(B_2\) is shown for the data displayed in Fig. 1, using Eq. (1), in the five centrality classes available in Pb–Pb collisions at \(\sqrt{s_{\mathrm{{NN}}}} = 2.76\) TeV [16]. The most peripheral data is rather flat compared with the more central ones. This is in line with the data in more elementary collisions, i.e. pp and p–Pb, where the coalescence parameter \(B_2\) is nearly constant. In [136] the observed increase in the minimum bias data in pp collisions at \(\sqrt{s} = 7\) TeV is explained as the superposition of the single-multiplicity \(B_2\) spectra, which are in fact rather flat. The large differences between the centralities in Fig. 4 are explained if one takes into account the more sophisticated approaches discussed in the following, i.e. the connection between the source size and the phase-space correlations.

Fig. 4

The coalescence parameters \(B_2\) as a function of the transverse momentum per nucleon for various centrality classes in Pb-Pb collisions at \(\sqrt{s_{\mathrm{{NN}}}} = 2.76\) TeV. Figure from [16]

A slightly different and rather new approach [39, 131, 137,138,139,140] uses the size of the fireball to cope with the above-mentioned centrality dependence of \(B_A\), which was observed first at the AGS and the SPS. The size of the fireball is typically measured in high-energy collisions by the technique proposed by Hanbury Brown and Twiss in 1956 [141,142,143] to estimate the size of stars. This technique was applied to elementary collisions by Goldhaber et al. in 1960 [144] and is nowadays one of the first physics measurements in heavy-ion collisions, since it can be done with small data samples and using only charged pions, which are abundantly produced. The measurement uses the fact that one can construct a correlation function of two bosons, in this case pions, by calculating only their relative momenta. Quantum mechanics encodes the spatial information in this quantity through interference effects. The measurement is widely called HBT interferometry (from Hanbury Brown and Twiss) or, more precisely, two-particle intensity interferometry (nowadays even femtoscopy), and with some model assumptions one can extract the spatial extension and the time evolution of the fireball. The spatial extension is usually identified with the volume of homogeneity. For informative reviews see [145, 146].

These facts are used as an input for the model by Scheibl and Heinz [131], where they develop a coalescence approach from phase-space and quantum mechanical aspects of nuclei formation, based on [130]. They arrive at a formalism which takes into account the probability of formation depending on the size of the fireball, whose expansion is modeled in a semi-realistic way. This allows one to predict \(B_A\) as a function of transverse momentum, or, as done in the paper, as a function of the transverse mass \(m_\mathrm {T} = \sqrt{p_\mathrm {T}^2 + m^2}\).
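
Schematically, the result of [131] for the deuteron can be written as \(B_2 \approx 3\pi ^{3/2}\langle C_\mathrm {d}\rangle /(2\,m_\mathrm {T}\,R_\perp ^2(m_\mathrm {T})\,R_\parallel (m_\mathrm {T}))\), with homogeneity radii \(R_\perp , R_\parallel \) and a quantum-mechanical correction factor \(\langle C_\mathrm {d}\rangle \). The following sketch, with purely illustrative radii and correction factor, shows why \(B_2\) differs by orders of magnitude between small and large systems:

```python
import math

HBARC = 0.19733  # GeV fm

def b2_scheibl_heinz(m_t, r_perp, r_long, c_d=0.8):
    """Schematic B_2 in GeV^2/c^3 following [131]:
    B_2 = 3 pi^(3/2) <C_d> / (2 m_T R_perp^2 R_long); radii in fm."""
    volume = (r_perp / HBARC)**2 * (r_long / HBARC)  # GeV^-3
    return 3 * math.pi**1.5 * c_d / (2 * m_t * volume)

m_t = 1.9  # GeV, deuteron transverse mass at low p_T
print(b2_scheibl_heinz(m_t, 1.2, 1.2))  # small (pp-like) source:     ~2e-2
print(b2_scheibl_heinz(m_t, 5.0, 5.0))  # large (central Pb-Pb-like): ~2e-4
```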

For the first studies in Ref. [131] the pion HBT radii were used, mainly in the absence of anything else and because of the expected scaling with the transverse mass. However, it is more appropriate to use the radii measured by proton–proton correlations (first suggested by Koonin [147]), since the radius of homogeneity might be different for the two cases.Footnote 7

The approach by Scheibl and Heinz [131] was used by Blum et al. [149, 150] to estimate the production probability of anti-nuclei (anti-deuterons and anti-\(^3\)He) by cosmic-ray interactions. They calculate \(B_A\) from all existing data and from that the production probability of anti-matter in the universe by standard processes. This production mechanism constitutes a background for anti-matter production from exotic processes, such as the decay of heavy dark matter particles constructed to explain the anti-matter candidates observed in the AMS-02 experiment [151, 152]; for more details see [153,154,155,156,157,158,159,160,161].

Connected to the work of Blum et al., a model using a parameterisation of measured HBT radii is derived in [162, 163] to predict coalescence parameters for nuclei up to mass number \(A = 4\). A similar approach is discussed in [164], where the focus is more on the description of the multiplicity dependence of the ratios of the different nuclei to protons.

In a more recent work, Blum and Takimoto [165] derive a direct connection between the two-particle correlation function of two nucleons (in the pair rest frame) and the coalescence parameter \(B_A\), following a similar path as in [131]. This is then applied in [166] to the ALICE data, which is rather well described in this framework. A small tension of about 2\(\sigma \) is only observed for the \(S_3\) ratio, in which the production yields of hypertriton, helium, proton and \({\mathrm {\Lambda }}\) are combined such that some uncertainties cancel.

There are other recent developments in the description of light (anti-)nuclei production that should be mentioned here and that are somehow close to the coalescence model. A model so far mainly applied to low-energy data is FRIGA [167], a method to identify clusters in transport model calculations. Connected to that, with a partial overlap, is the new Parton-Hadron Quantum-Molecular Dynamics (PHQMD) model [168,169,170], a newly created transport model which already incorporates the features to find clusters and nuclei in the simulated evolution.

A completely different approach, albeit based on the transport code SMASH [171], uses the \(\pi \)d cross-sections to establish a kind of detailed balance in the transport simulation [42, 43] and can thereby describe the deuteron production at the LHC very well. The same calculations for higher-mass nuclei are therefore eagerly awaited. The main author has recently given an overview talk at QM2019 [44], where he intensively discusses another special ratio, the yield of tritons times the one of protons divided by the squared deuteron yield, which is suggested as a direct measure of the neutron fluctuations [172,173,174,175] to help find the critical point in the beam energy scan at RHIC. Neutrons are not accessible in most experiments, in particular ALICE and STAR, and therefore their fluctuations cannot be measured directly but only through nuclei.

3 (Anti-)nuclei

The ALICE Collaboration has taken data at several energies in pp (900 GeV, 2.76 TeV, 5.02 TeV, 7 TeV, 8 TeV, 13 TeV), p–Pb (5.02 TeV and 8.16 TeV) and Pb–Pb (2.76 and 5.02 TeV) collisions in the last 10 years, and results on (anti-)(hyper-)nuclei have been pushed forward in many of those data sets. Unfortunately, not all results are published by now; the situation is similar for the STAR Collaboration, where data has been taken even in two dedicated beam energy scans that are supposed to help finding the expected critical point in the QCD phase diagram. These data allow for differential studies as a function of the centre-of-mass energy of the collisions or even of the charged particle multiplicity of the different events, where overlaps between pp and p–Pb, and p–Pb and Pb–Pb exist. In this sense a system-size dependent analysis is possible, which is very interesting in view of the aforementioned connection between the correlations and the coalescence parameter.

Fig. 5

Thermal model predictions for the production of several nuclei and hypernuclei, together with experimental results of the ALICE Collaboration [16, 35, 40] and the STAR Collaboration [176]. Figure derived and updated from [6]

From the correctedFootnote 8 transverse momentum spectra the production yields (dN/dy) are extracted.

A comparison of results from the ALICE and STAR Collaborations for different (anti-)(hyper-)nuclei with predictions from the thermal model is shown in Fig. 5; the two agree well with each other. The plot nicely illustrates several facts that were only briefly touched upon before:

  • The suppression of the particle production is strongly dependent on the particle mass, i.e. dN/dy is proportional to \(\exp {(-m/T_\mathrm {ch})}\), clearly visible in this plot since it uses a logarithmic y-axis.

  • The production yields of anti-nuclei and nuclei are approaching each other at higher energies and become basically equal at the LHC.

  • For hypernuclei there is a strong enhancement visible at low energies, which can be understood as an interplay of the temperature dependence, the baryochemical potential and canonical effects. This makes the upcoming CBM experiment at FAIR [177] and MPD at NICA [178,179,180] hypernuclei factories, with large potential for the discovery of new (multistrange-)hypernuclei, since they both will take data in this energy region.

The fact that dN/d\(y \propto \exp {(-m/T_\mathrm {ch})}\) is nicely visible in Fig. 6, which shows dN/dy of nuclei (p, d, \(^{3}\)He) as a function of the mass number A in pp collisions at \(\sqrt{s} = 7\) TeV, p–Pb collisions at \(\sqrt{s_{\mathrm{{NN}}}} = 5.02\) TeV and central Pb–Pb collisions at \(\sqrt{s_{\mathrm{{NN}}}} = 2.76\) TeV. The slopes are directly connected to the chemical freeze-out temperature, whereas for pp and p–Pb canonical effects play an additional role, which is not the case for Pb–Pb.

Fig. 6

Production yield dN/dy normalised by the total angular momentum degeneracy as a function of the mass number A for inelastic pp collisions, minimum bias p–Pb and central Pb–Pb collisions [10, 40, 181,182,183,184]. The empty boxes represent the total systematic uncertainty while the statistical errors are shown by the vertical bars. The lines represent fits with exponential functions. Figure from [184]

Nevertheless, also the coalescence models describe the data rather well. More qualitatively, this is visible from Fig. 7, where the coalescence parameters from many different experiments [111,112,113,114, 176, 185,186,187,188,189,190,191] are shown as a function of \(\sqrt{s_{\mathrm{{NN}}}}\), together with the expectation using a parameterisation of the HBT volume from STAR data from the beam energy scan at different energies [192], based on Eq. (3). In fact, very recently models investigated this behaviour in slightly more detail and can describe the shape nicely [193, 194].

Fig. 7

Coalescence parameters \(B_2\) and \(B_3\) from different heavy-ion collision experiments [111,112,113,114, 176, 185,186,187,188,189,190,191] as a function of \(\sqrt{s_{\mathrm {NN}}}\). The data are from heavy-ion collisions; open symbols represent the anti-nucleus measurements. The horizontal dashed lines at low energies indicate the \(B_2\) and \(B_3\) values in elementary collisions, such as pp, p\(\bar{\mathrm {p}}\), p–A and \(\gamma \)A, to which also the Bevalac heavy-ion data is close. The dashed-dotted lines show a simple model assuming \(B_A\propto 1/V^{A-1}\), where the volume V is taken from HBT radius measurements by STAR at their beam energy scan [192]. Please note that the ALICE \(B_3\) measurement from \(^3\)He nuclei is in a broader centrality interval (0–20%) than the corresponding \(B_2\) (0–10%). Figure updated from [6]

To make a more quantitative comparison it makes sense to check the production through ratios, e.g. the d/p and \({}^{3}\)He/p ratios, as a function of the mean number of charged particles \(\langle \mathrm {d}N_\mathrm {ch}/\mathrm {d}\eta \rangle \), as depicted in Fig. 8. Here the predictions from a thermal model calculation using exact conservation of all quantum numbers through a canonical treatment [89] and a coalescence calculation from a more sophisticated model [164] are compared to ALICE data [16, 136, 184, 195]. Both models describe the data rather well, although it seems that not one single correlation volume \(V_C\) is preferred; rather, its value changes with multiplicity. Also [196] can describe the d/p trend reasonably well. Figure 9 shows predictions from the same models for the \({}^{3}\)He/p ratio compared to the ALICE data [16, 181, 197]. The data trend is again described rather well by both models, although there is some tension with the p–Pb data.

Fig. 8

Ratio between the production yields of deuterons and protons for different multiplicities in pp collisions at \(\sqrt{s} = 7\) TeV [136] and \(\sqrt{s} = 13\) TeV [195], p–Pb collisions at \(\sqrt{s_{\mathrm {NN}}} = 5.02\) TeV [184] and Pb–Pb collisions at \(\sqrt{s_{\mathrm {NN}}} = 2.76\) TeV [16]. The two black lines (connected by the shaded area) are the theoretical predictions of the canonical statistical model for two sizes of the correlation volume \(V_C\) [89], while the magenta line represents the expectation from a coalescence model [164]. Figure taken from [195]

Fig. 9

Ratio between the production yields of \({}^{3}\)He and protons in pp, p–Pb, and Pb–Pb collisions as a function of the average charged-particle multiplicity [16, 181, 197]. The expectations for the canonical statistical hadronization model (Thermal-FIST [198]) [89] and two coalescence approaches are shown [164]. For the thermal model, two different values of the correlation volume are displayed and connected through the grey band. The uncertainties of the coalescence calculations, which are due to the theoretical uncertainties on the emission source radius, are denoted as shaded bands. Figure from [197]

Similarly, one can check the measured coalescence parameters \(B_2\) and \(B_3\) in slices of \(p_{\mathrm{{T}}}/A\) as a function of the charged particle multiplicity against the models. This is shown in Fig. 10 for \(B_2\) compared to the coalescence model from [162], and in Fig. 11 with the same model and, in addition, an estimate based on the thermal model [5, 89].

Fig. 10

Coalescence parameter \(B_2\) at \(p_{\mathrm{{T}}}/A = 0.75\) GeV/c as a function of charged particle multiplicity in pp collisions at \(\sqrt{s} = 7\) TeV [136] and \(\sqrt{s} = 13\) TeV [195], p–Pb collisions at \(\sqrt{s_{\mathrm {NN}}} = 5.02\) TeV [184] and Pb–Pb collisions at \(\sqrt{s_{\mathrm {NN}}} = 2.76\) TeV [16]. The two lines are theoretical predictions based on two different parameterisations of the HBT radius, see [195]

The coalescence model here uses two different constraints on the HBT radii, and it seems that each works only in a limited multiplicity region of the data (at lower multiplicities the description from the HBT radii works well, whereas at higher multiplicities the constraint from the \(B_2\) is preferred by the data). This is less clear for \(B_3\), where the heavy-ion data lie rather between the two coalescence calculations. Here the thermal model does a very good job, since for the heavy-ion data the grand-canonical model [5] describes the measurements, and at lower multiplicities its continuation, the canonical statistical model [89], works rather well.

Fig. 11

The coalescence parameter as a function of the mean charged-particle multiplicity density for pp, p–Pb, and Pb–Pb collisions [16, 181, 197]. In addition, the expectations from the coalescence [162] and the thermal models (grand-canonical SHM [5] and the canonical statistical model CSM [89] using a Blast-Wave model description for the transverse momentum dependence) are shown. Figure from [197]

4 (Anti-)hypernuclei

(Anti-)hypernuclei provide a unique access to the hyperon-nucleon (YN) and the hyperon-hyperon (YY) interaction that is otherwise not easily accessible, since neither hyperon targets nor hyperon beams are feasible to allow for the precision measurements needed to get a deeper understanding of their interaction.Footnote 9 At the same time, the understanding of the aforementioned interaction is of exceptional importance in the discussion of neutron stars, see e.g. [205,206,207,208,209,210,211,212,213,214].

In the following we will focus on the (anti-)hypertriton. The hypertriton is a bound state of a \({\mathrm {\Lambda }}\) hyperon, a proton and a neutron, and as such the lightest hypernucleus. Even though it was discovered in 1954 [215,216,217], only shortly after the discovery of hypernuclei by Danysz and Pniewski [218] in 1953, its properties are still not well constrained.

The best measured quantity is the \({\mathrm {\Lambda }}\)-separation energy, i.e. the energy needed to remove the \({\mathrm {\Lambda }}\) from the deuteron core. The currently accepted value is only \(B_{{\mathrm {\Lambda }}} = \)(130 ± 50) keV [207, 219,220,221]. Like all hypernuclei, the hypertriton decays weakly, and the small separation energy already suggests that the lifetime of the object should be close to the lifetime of the free \({\mathrm {\Lambda }}\) hyperon. A visualization of the radial wave function of the hypertriton, assuming a \({\mathrm {\Lambda }}\)–deuteron bound state, is shown in Fig. 12; for comparison a triton-like wave function is also shown. One clearly sees that the hypertriton is a huge object: the rms radius extracted from the wave function is about 10 fm. This means that also in a simple quantum mechanical picture the \({\mathrm {\Lambda }}\) should basically decay freely, being so far away from the deuteron core [6]. Measurements in the 1970s using emulsions and bubble chambers led to a value close to the free \({\mathrm {\Lambda }}\) lifetime, mainly because of the large uncertainties. Starting with the STAR measurement [4], however, heavy-ion experiments performed a series of measurements which changed the picture, since the uncertainties became smaller due to the much larger statistics in these experiments. All heavy-ion experiments measured a significantly lower lifetime, and the so-called "hypertriton lifetime puzzle" was born.
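
The quoted size can be estimated with a minimal zero-range (halo) calculation: for a weakly bound s-wave state, \(\psi (r) \propto e^{-\kappa r}/r\) with \(\kappa = \sqrt{2\mu B_{{\mathrm {\Lambda }}}}\), giving \(\sqrt{\langle r^2\rangle } = 1/(\sqrt{2}\,\kappa )\). The following sketch reproduces the order of magnitude of the radius quoted in Fig. 12:

```python
import math

HBARC = 0.19733              # GeV fm
m_L, m_d = 1.11568, 1.87561  # GeV: Lambda and deuteron masses
B_L = 130e-6                 # GeV: Lambda separation energy (130 keV)

mu = m_L * m_d / (m_L + m_d)     # reduced mass of the Lambda-d system
kappa = math.sqrt(2 * mu * B_L)  # GeV, bound-state wave number
rms = HBARC / (math.sqrt(2) * kappa)
print(rms)  # ~10.3 fm, consistent with the 10.6 fm of the full calculation
```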

Fig. 12

Wave function (red) of the hypertriton assuming an s-wave interaction for the bound state of a \({\mathrm {\Lambda }}\) and a deuteron. The root mean square value of the radius of this function is \(\sqrt{\langle r^2 \rangle } = 10.6\) fm. In blue the corresponding square well potential is shown. In addition, the magenta curve shows a triton-like object using a similar calculation as for the hypertriton, namely a deuteron and an added nucleon, resulting in a much narrower object than the hypertriton. As shown in [6]

It is worth mentioning that the hypertriton branching ratios have so far not been experimentally determined. Therefore, a theoretical branching ratio [222] is usually used when the measured spectra are corrected.Footnote 10 The four main decay modes are \({}^{3}_{{\mathrm {\Lambda }}}\mathrm{{H}} \rightarrow {}^{3}{{\mathrm{He}}} + \pi ^{-}\), \({}^{3}_{{\mathrm {\Lambda }}}\mathrm{{H}} \rightarrow {}^{3}\mathrm{{H}} + \pi ^{0}\), \({}^{3}_{{\mathrm {\Lambda }}}\mathrm{{H}} \rightarrow \mathrm{{d}} + \mathrm{{p}} + \pi ^{-}\) and \({}^{3}_{{\mathrm {\Lambda }}}\mathrm{{H}} \rightarrow \mathrm{{d}} + \mathrm{{n}} + \pi ^{0}\). It is clear from the main techniques contributing to the hypertriton measurements mentioned above, i.e. emulsions and bubble chambers, that the problem is connected to the neutral particles (n and \(\pi ^{0}\)), which cannot be detected with these methods.

The most important fact connected to the production of this object is the very small \({\mathrm {\Lambda }}\)-separation energy of only 130 keV, which should lead to a suppression of its production probability because of the violent environment in a heavy-ion collision. This is easily visible from the comparison of the \({\mathrm {\Lambda }}\)-separation energy with the temperature of the fireball of the order of 150 MeV. These three orders of magnitude between the two values and the connected "impossibility" to form such loosely bound objects go under the name "snowballs in hell" [32, 41, 223].

Nevertheless, the hypertriton production has been measured and its production yields are well described by the thermal model [5, 8, 35]. There are also calculations in the coalescence approach that describe the data rather well [7, 133, 166, 175, 224,225,226,227,228,229,230,231]. More on this topic can be found in [6].

Fig. 13

Hypertriton lifetime measured by the ALICE Collaboration [35, 232], the HypHI Collaboration [233] and the STAR Collaboration [4, 234] in heavy-ion collisions, compared with previously published results [235,236,237,238,239,240]. The blue dashed line with the shaded band represents the world average of the shown hypertriton lifetime measurements. The magenta full horizontal line indicates the lifetime of the free \({\mathrm {\Lambda }}\) hyperon as reported by the Particle Data Group [241]. The dashed-dotted lines of different colours are theoretical model expectations of the hypertriton lifetime

In Fig. 13 the current experimental and theoretical status of the "hypertriton lifetime puzzle" is shown. The data points (red) without boxes are from emulsions and bubble chambers [235,236,237,238,239,240], while the ones with boxes (corresponding to the systematic uncertainties) have all been measured in heavy-ion collisions [4, 35, 232,233,234]; they are depicted together with the lifetime of the free \({\mathrm {\Lambda }}\) hyperon as reported by the Particle Data Group [241], shown as the magenta full horizontal line. One clearly sees that the uncertainties shrink significantly over time, since large data sets have been collected by the ALICE and STAR Collaborations and utilised in the measurements. In fact, there is a new preliminary measurement by the ALICE Collaboration with even smaller uncertainties than the last published data point [242]. The blue dashed line in Fig. 13 represents the world average (the shaded blue area corresponds to the associated uncertainty) using the displayed data (in red) and the prescription given in [243], which is in line with the one used by the Particle Data Group (PDG) [241].

In any case, the uncertainties, in particular the systematic ones, have to be taken seriously; they lead to a deviation of the world average of about 4\(\sigma \) from the free \({\mathrm {\Lambda }}\) lifetime.
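
For illustration, such a PDG-style world average (a weighted mean with a scale factor if the measurements scatter more than their uncertainties suggest [243]) can be sketched as follows; the input lifetimes below are purely illustrative placeholders, not the actual data of Fig. 13:

```python
import numpy as np

def world_average(values, errors):
    """Weighted mean with a PDG-like scale factor on the uncertainty."""
    v, e = np.asarray(values, float), np.asarray(errors, float)
    w = 1.0 / e**2
    mean = np.sum(w * v) / np.sum(w)
    err = 1.0 / np.sqrt(np.sum(w))
    scale = np.sqrt(np.sum(w * (v - mean)**2) / (len(v) - 1))
    return mean, err * max(1.0, scale)

tau, sigma = world_average([183, 181, 142, 242], [30, 40, 50, 40])  # ps, illustrative
tau_lambda = 263.2  # ps, free Lambda lifetime (PDG)
print(tau, sigma, (tau_lambda - tau) / sigma)  # pull w.r.t. the free Lambda
```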

In addition, several theoretical model expectations are displayed in Fig. 13 as dashed-dotted lines (of different colours), with a focus on the most reliable model calculations of the hypertriton lifetime. The most recent ones lead to much smaller tension with, or even agreement with, the world average.

In fact, the value by Gal and Garcilazo [244] is the first calculation which also includes the final-state interaction of the pion. This leads to a reduction of the expected hypertriton lifetime down to 81% of the free \({\mathrm {\Lambda }}\) value, due to the additional attraction from the pion final-state interaction. A more recent approach [245] incorporates in addition a hypertriton wave function based on chiral effective field theory and distorted waves for the pions, and thereby also p-wave interactions, whereas in [244] only s-waves were included. Moreover, the authors of [245] show for the first time that the off-shell \(\Sigma \rightarrow \mathrm{{N}} + \pi \) weak decay contribution to the hypertriton decay rate brings its lifetime down by about 10%. In Fig. 13 we show the values of both calculations with uncertainties, where the one in grey corresponds to \(\tau ({{} ^{3}_{{\mathrm {\Lambda }}}\mathrm{{H}}}) = 191 ({}^{+32}_{-15}) (\pm 22)\) ps, matching the \(B_{{\mathrm {\Lambda }}} = \)(130 ± 50) keV, provided by Avraham Gal [246] based on [245].

An exception is the most recent calculation, by Hildenbrand and Hammer [247], which uses a pionless effective field theory (EFT) approach with \({\mathrm {\Lambda }}\)d degrees of freedom and finds a hypertriton lifetime very close (about 0.98\(\tau _{{\mathrm {\Lambda }}}\)) to the one of the free \({\mathrm {\Lambda }}\), as long as the commonly accepted separation energy of \(B_{{\mathrm {\Lambda }}} = \)(130 ± 50) keV is used. They also studied the dependence of the lifetime on the separation energy: when increasing it up to 2 MeV, they obtain about 0.9\(\tau _{{\mathrm {\Lambda }}}\). The same authors obtain a similar rms radius as the one discussed above (in connection with Fig. 12) in their model [248].

An interesting twist comes from the STAR Collaboration, who first managed to measure the three-body decay mode \({}^{3}_{{\mathrm {\Lambda }}}\mathrm{{H}} \rightarrow \mathrm{{d}} + \mathrm{{p}} + \pi ^{-}\) experimentally and could thereby calculate a ratio of the yields obtained in the two decay channels. To be precise, they calculated \(\mathrm{{\Gamma }}({}^{3}_{{\mathrm {\Lambda }}}\mathrm{{H}} \rightarrow {}^{3}{{\mathrm{He}}} + \pi ^{-}) / (\mathrm{{\Gamma }}({}^{3}_{{\mathrm {\Lambda }}}\mathrm{{H}} \rightarrow {}^{3}{{\mathrm{He}}} + \pi ^{-})+\mathrm{{\Gamma }}({}^{3}_{{\mathrm {\Lambda }}}\mathrm{{H}} \rightarrow \mathrm{{d}} + \mathrm{{p}} + \pi ^{-}))\), which is very close to \(\hbox {R}_{3}\) defined as:

$$\begin{aligned} \mathrm{{R}}_{3} = \mathrm{{\Gamma }}({}^{3}_{{\mathrm {\Lambda }}}\mathrm{{H}} \rightarrow {}^{3}{{\mathrm{He}}} + \pi ^{-}) / \mathrm{{\Gamma }}({}^{3}_{{\mathrm {\Lambda }}}\mathrm{{H}} \rightarrow \mathrm{{all}}~\pi ^{-}~{\mathrm {channels}}). \end{aligned}$$
(4)

It can be connected to the assignment of the total angular momentum of the hypertriton, as discussed in [240, 249, 250]. The current experimental situation together with these two models is depicted in Fig. 14. The experimental data [234, 236, 240, 251,252,253] agree nicely with the assumption of a \(J=1/2\) system.Footnote 11

Fig. 14

Compilation of the available experimental measurements [234, 236, 240, 251,252,253] and two theoretical calculations [249, 250] for \(\hbox {R}_{3}\). Figure taken from [234]

Fig. 15

Comparison of the STAR results [254] (separated into hypertriton in dark red, anti-hypertriton in blue and the average in red) with earlier measurements [239, 258,259,260] (in black) for \(B_{{\mathrm {\Lambda }}}\). The magenta lines indicate the recalibrated earlier measurements. The STAR measurement plotted here is based on a combination of hypertriton and anti-hypertriton assuming CPT invariance. Figure from [254]

Fig. 16

Measurements of the relative mass-to-charge ratio differences between nuclei and anti-nuclei, \(\mathrm{{\Delta (m/|q|)/(m/|q|)}}\). The ALICE [255] and STAR [254] results are shown together, separated by their mass on the y-axis. The dotted vertical line at zero on the horizontal axis is the expectation from CPT invariance. The horizontal error bars represent the sum in quadrature of the statistical and systematic uncertainties. Figure from [254]

In addition, the STAR Collaboration recently published a new paper [254] showing the \({\mathrm {\Lambda }}\) separation energy \(B_{{\mathrm {\Lambda }}}\), extracted from their mass measurement via the invariant mass technique, together with a CPT test via the mass difference between nuclei and the corresponding anti-nuclei. They find slightly different values of \(B_{{\mathrm {\Lambda }}}\) for hypertriton and anti-hypertriton, which are still in agreement within uncertainties, as shown in Fig. 15. Assuming CPT invariance, the two values are also averaged in the same figure. The CPT test itself is shown in Fig. 16, comparing ALICE results [255] and STAR results [254] through the difference of the mass per charge of particle and anti-particle normalised to the mass per charge, i.e. \(\mathrm{{\Delta (m/|q|)/(m/|q|)}}\). No deviation from CPT invariance is observed, since all differences agree with zero. One can actually already see from Fig. 15 that the mass difference between hypertriton and anti-hypertriton is about 300 keV, and thus the value plotted in Fig. 16 can only be about 0.0001, since the charge |q| is unity and the mass of the hypertriton is 2.991 GeV/\(c^{2}\).
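
This little consistency check of the quoted numbers can be done in two lines:

```python
dm = 300e-6    # GeV, approximate hypertriton/anti-hypertriton mass difference
m = 2.991      # GeV, hypertriton mass; |q| = 1 for both states
print(dm / m)  # ~1.0e-4, the order of magnitude plotted in Fig. 16
```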

The averaged \({\mathrm {\Lambda }}\) separation energy is measured to be \(B_{{\mathrm {\Lambda }}} = (0.41 \pm 0.12 \pm 0.11)\) MeV, which is actually close to a recalibration of the old data performed in [256] and displayed in Fig. 15 as magenta horizontal lines. This recalibration is done mainly because the official PDG averaged masses have changed since the 1960s and 70s, when the old data were established. An increase of \(B_{{\mathrm {\Lambda }}}\) from 130 to 410 keV is intensively discussed in the community and seen as a strong reason to remeasure its value in other experiments. Nevertheless, even the \(B_{{\mathrm {\Lambda }}}\) value increased by a factor of three still corresponds to a very large object (large rms radius), which should strongly influence production yield calculations for the hypertriton in a coalescence approach. This is not the case for the thermal model, where the size of the object plays no role.
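
Reusing the zero-range halo estimate sketched earlier in this section, one can quickly check how much the rms radius shrinks with the larger separation energy (again a rough sketch, not a full calculation):

```python
import math

HBARC, mu = 0.19733, 0.6996   # GeV fm; GeV (Lambda-d reduced mass)
for B_L in (130e-6, 410e-6):  # GeV
    kappa = math.sqrt(2 * mu * B_L)
    print(B_L * 1e6, "keV ->", HBARC / (math.sqrt(2) * kappa), "fm")
# 130 keV -> ~10.3 fm; 410 keV -> ~5.8 fm: still a very extended object
```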

The implications of an increased \({\mathrm {\Lambda }}\)-separation energy of the hypertriton in a chiral effective field theory approach are studied in [257]. The authors find no strong evidence against the new experimental result in the description of hypernuclei within their approach.

Since there is a direct connection between \(B_{{\mathrm {\Lambda }}}\) and \(\hbox {R}_{3}\), it is interesting to study their consistency through models, as done in [245, 247]. In fact, [245] show in a calculation that the extracted \({\mathrm {\Lambda }}\) separation energy and the \(\hbox {R}_{3}\) value by the STAR Collaboration [234, 254] are in agreement within the given uncertainties. In contrast, in [247] the test reveals a discrepancy of about 1.8\(\sigma \) between the calculation (\(\hbox {R}_3 = 0.55 \pm 0.09\) [247]) and the value by the STAR Collaboration (\(\hbox {R}_3 = 0.32 \pm 0.05\) (stat.) \(\pm 0.08\) (syst.) [234]).
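
The quoted significance follows directly from combining all uncertainties in quadrature:

```python
import math

r3_theory, err_theory = 0.55, 0.09      # Hildenbrand & Hammer [247]
r3_star, stat, syst = 0.32, 0.05, 0.08  # STAR [234]
sigma = math.sqrt(err_theory**2 + stat**2 + syst**2)
print((r3_theory - r3_star) / sigma)  # ~1.8
```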

5 Summary and outlook

The presented selected highlights show an active field of research in a special environment, i.e. ultra-relativistic heavy-ion collisions. The production of (anti-)(hyper-)nuclei is rather well described by thermal and coalescence models, depending on their implementation. The most sophisticated models do a very good job; thus more differential tests of the models have been started, some of which have been discussed here. In particular, the ratios of production yields and the coalescence parameters as a function of charged particle multiplicity have shown some tension and might help to identify the right approach to understand the production mechanism of loosely bound objects, such as light (anti-)(hyper-)nuclei, in high-energy collisions.

The hypertriton production is nicely described by thermal models, and also coalescence models do a rather good job. This holds even though the object itself can be considered rather large (it is sometimes even called the ultimate halo nucleus), a property which enters the coalescence calculations strongly.

The lifetime of the hypertriton poses some riddles, since it is naïvely expected to be close to the lifetime of the free \({\mathrm {\Lambda }}\), whereas the current world average is about 4\(\sigma \) away from it. Recent models incorporating final-state interactions agree very well with the world average. New measurements of the \({\mathrm {\Lambda }}\) separation energy and of \(\hbox {R}_{3}\) allow one to further constrain these models.Footnote 12 New measurements of the lifetime with much higher precision than before are also expected in the future. Even the measurement of the production and lifetime of the hypertriton in pp collisions is expected soon, since the signal was shown recently [264].

So far no evidence for any violation of CPT invariance has been found in the nuclei sector, while the precision of the experimental tests has increased and will get even better in the future.

The ultimate test of the different production models will come through the measurement of flow observables, in particular the anisotropic flow of all (hyper-)nuclei described at once in one model, which requires large amounts of data, as expected in the upcoming runs at the LHC [6, 265]. First measurements have been done [266, 267], but many more are eagerly awaited.

A completely new way to study the YN and YY interactions was only briefly touched upon here, namely femtoscopy. This method has started helping to improve the scarce YN and YY scattering data. In fact, by combining it with the information from hypernuclei, one can further constrain and improve chiral effective field theory [265], and there is hope to even study three-body forces in this way, e.g. by studying \({\mathrm {\Lambda }}\)d correlations [268].

In conclusion, the knowledge of the production of light (anti-)(hyper-)nuclei has increased significantly, but there is still no complete understanding of the underlying mechanism. Clearly, more data is needed, but also the models need improvement to cope with all the indicated issues.