1 Introduction

Modern developments in laser technologies have kick-started the attosecond revolution, which formed the field of attoscience, dealing with dynamics on the attosecond (\(10^{-18}~\hbox {s}\)) timescale [1,2,3]. Attosecond science was born with the study of above-threshold ionisation (ATI) and high-order harmonic generation (HHG) driven by strong laser pulses. As it has matured over the past three decades, attoscience has given us access to phenomena which were previously thought to be inaccessible—including the motion of valence electrons in atoms [4], charge oscillations in molecules [5], as well as the direct observation of the electric-field oscillations of a laser pulse [6]—and it has also spurred advances in ultrafast pulse generation which have opened a completely new window into the dynamics of matter.

The meteoric progress of attoscience has been fuelled, on the one hand, by formidable experimental efforts, and, on the other hand, it has been supported by a matching leap in our theoretical capabilities. These theoretical advances have come in a wide variety of forms, which group into two opposing families of analytical and numerical approaches. While these two families generally work together, the dichotomy between analytical and numerical methods is sometimes perceived as a source of tension within the attoscience community.

In this Topical Review, we present an exploration of this dichotomy, which collects the arguments presented in the panel discussion ‘Quantum Battle 3—Numerical versus Analytical Methods’ held during the online conference ‘Quantum Battles in Attoscience’ [7]. Our main purpose is to resolve the tension caused by this dichotomy, by identifying the critical tension points, developing the different viewpoints involved, and finding a common ground between them.

This process forms a natural dialogue between the analytical and numerical perspectives. We delegate this dialogue to two hypothetical ‘combatants’—

Analycia:

Hi, I’m Analycia Formuloff, and I am an attoscience theorist working with analytical approaches.

Numerio:

Hello, my name is Numerio Codeman, and I’m a computational scientist working on ab initio methods.

—who will voice the different views expressed during the panel discussion.

We follow the dialogue between Analycia and Numerio through three main questions. First, in Sect. 2, we explore the scope and nature of analytical and numerical methods, including the interchangeability of the terms ‘numerical’ and ‘ab initio’. We then analyse, in Sect. 3, the relative advantages and disadvantages of the two approaches, using non-sequential double ionisation (NSDI) as a case study. Finally, in Sect. 4, we examine their roles in scientific discovery, via the case study of resonant HHG. In addition, in Sect. 5, we present some extra discussion points, as well as our combatants’ responses to the questions raised by audience members, and a summary of the responses to several polls taken during the live session.

2 ‘Ab initio’ and analytical methods

A constructive discussion is always based on a good knowledge of the subject. To this end, in this section we tackle the subtleties in the definitions of ‘ab initio’, ‘numerical’ and ‘analytical’ methods: we detail their differences, and we present a rough classification of the various theoretical methods used in attosecond science. We first concentrate on Numerio’s speciality, ab initio methods, then we move to Analycia’s forte, analytical theories. For each combatant, we first introduce their theoretical approach and then list the main methods in the corresponding toolset. After these presentations, Numerio and Analycia discuss the friction points they have with each other’s methods.

2.1 Ab initio and numerical methods

In its dictionary sense, ab initio is Latin for ‘from the beginning’. Thus, a theoretical method can be defined to be ab initio when it tackles the description of a certain physical process starting from first principles, i.e. using the most fundamental laws of nature that—according to our best understanding—govern the physics of the phenomena that we aim to describe.

Within an ab initio framework, the inputs of the theoretical calculation should be limited to only well-known physical constants, with any interactions kept as fundamental as possible. This means that no additional simplifications or assumptions may be made on top of what we believe are the established laws of nature. In other words, the specific aspects of the physical process of interest need to be approached without using specially tailored models.

We now bring our combatants to the stage, to discuss the consequences of this definition.

Analycia :

This is an extremely stringent definition, which will substantially limit the number of methods that can be classified in the ab initio category. But, more importantly, it just delays the real question: what does ‘fundamental’ mean in this context?

Numerio :

The answer to this question is, in essence, a choice of ‘reference frame’, within theory-space, which will frame our work. This choice is tightly connected to the physical regime that we want to describe. We know that attoscience, as part of atomic, molecular and optical physics, is ultimately grounded on the Standard Model of elementary interactions in particle physics, which gives—in principle—the ‘true’ fundamental laws. However, much of this framework is largely irrelevant at the energies that concern us. Instead, we are only interested in the quantum mechanics of electrons and atomic nuclei interacting with each other and with light, and this gives us the freedom to restrict ourselves to quantum electrodynamics (QED), or to its ‘friendlier’ face, light-matter interaction [8].

Analycia :

What does this mean, precisely? If QED is the right framework, that means we must retain a fully relativistic approach as well as a full quantisation of the electromagnetic field.

Numerio :

For most problems in attoscience, this would be overkill, as relativistic effects are rarely relevant. Instead, it is generally acceptable to work in the context of non-relativistic quantum mechanics, and to introduce relativistic terms into the Hamiltonian at the required level of approximation. These are the basic laws responsible for ‘a large part of physics and the whole of chemistry’, as recognised by Dirac as early as 1929 [9].

Analycia :

I can see how this is appropriate, so long as spin-orbit and inner-core effects are correctly accounted for. However, what about field quantisation?

Numerio :

We normally deal with strong-field settings where laser pulses are in coherent states comprising many trillions of photons, which means that a classical description for electromagnetic radiation is suitable.

Analycia :

That typically works well, yes, but it is also important to keep in mind that it can blind you to deep questions that lie outside of that framework [10]. In any case, though: as ‘fundamental’, would you be satisfied with a single-electron solution of the Schrödinger equation?

Numerio :

No, this would not be appropriate—cutting down to a single electron is generally going too far. While this can be very convenient for numerical reasons, restricting the dynamics to a single particle invariably requires adjusting the interactions to account for the effect of the other electrons in the system, via the introduction of a model potential. This approximation can be validated using a number of techniques which can make it very solid, but it always entails a semi-empirical step and, as such, it rules out the ‘ab initio’ label in its strict sense.

Analycia :

That leaves a many-body Hamiltonian of formidable complexity.

Numerio :

It does! Let me show you how it can be handled.

The ‘ab initio’ toolset The complexity of simulating the time-dependent Schrödinger equation (TDSE) with the multi-electron Hamiltonian of atomic and molecular systems can be tamed using a wide variety of approaches. Most of these are inherited from the field of quantum chemistry, and they are differentiated from each other by the level of approximation they employ and by the ranges of applicability over which they remain accurate.

Fig. 1 Schematic representation of the hierarchy of methods to describe electron correlation

However, every approach in this space must face a challenging trade-off between accuracy in capturing the relevant many-body effects, on the one hand, and the computational cost that it requires, on the other. The key challenge here is the handling of electron-correlation effects, which are hard to treat at full rigour. Because of this, many methods adopt an ‘intermediate’ approach, which lowers the computational expense at the price of limiting the accuracy of the physical description.

Numerical methods thus form a hierarchy, schematised in Fig. 1, with rising accuracy as more electron correlation effects are included:

  • Single-Active-Electron (SAE) approaches are the simplest numerical approaches to the TDSE [11], though they are only ab initio for atomic hydrogen, and require model potentials to mimic larger systems. Nevertheless, they can be used effectively to tackle problems where electron correlation effects do not play a role, and their relative simplicity has allowed the development of multiple user-ready software packages that offer this functionality in strong-field settings [12,13,14,15] (a minimal propagation sketch in this spirit appears at the end of this overview).

  • Density functional theory (DFT) allows an effective single-particle description [16], widely considered as ab initio, which still includes electron correlation effects through the use of a suitable ‘exchange-correlation functional’ [17]. Within attoscience, examples of approaches which specifically target attosecond molecular ionisation dynamics include real-time time-dependent (TD)-DFT [18,19,20] and time-dependent first-order perturbation theory static-exchange DFT [5, 21]. More broadly, TD-DFT approaches are robust enough that they appear in several user-ready software packages [22,23,24,25] suitable for attoscience.Footnote 1

  • Non-equilibrium Green’s function theory also allows one to describe the many-body problem from first principles by using effectively single-particle approaches [27, 28].

  • Quantum-chemistry approaches go beyond the SAE approximation and DFT to include, directly, the effects of electron correlation [29]. The starting point for this is generally the Hartree–Fock (HF) mean-field approach, though this is rarely sufficient on its own. Because of this, quantum-chemistry methods climb the ladder all the way to the full Configuration Interaction (CI) limit, a complete description of electron correlation (which is generally so computationally intensive that it is out of reach in practice).

    Most of the standard approaches of quantum chemistry were developed to describe bound states of molecular systems [29, 30], and they have also proven to be highly successful for modelling band structures in solid-state systems [31]. Nevertheless, they often require significant extensions to work well in attoscience, particularly regarding how the ionisation continuum is handled. Recent examples of these extensions include ab initio methods based on the algebraic diagrammatic construction (ADC) [32,33,34,35,36] and its restricted-correlation-space extension (RCS-ADC) [37,38,39,40], multi-reference configuration interaction (MRCI) [41, 42], and multi-configuration time-dependent Hartree [43] and Hartree–Fock [44] methods, as well as restricted-active-space self-consistent-field (RAS-SCF) [45,46,47] approaches.

  • Basis-set development is another crucial element of the numerical implementation work for ab initio methods in attoscience, since the physics accessible to the method, as well as its computational cost, are often determined by the basis set in use. Recent work on basis sets includes the development of B-spline functions, both on their own [32,33,34,35,36,37,38,39,40, 48], and in hybrid combinations with Gaussian-type orbitals (GTOs) [45,46,47], as well as finite-difference approaches [49,50,51], finite-element discrete-variable-representation functions [52, 53], grid-based methods [54, 55], and simple plane-waves [56].

A more extensive (though still non-exhaustive) list of methods is shown in Table 1. Here, we focus on methods for the description of ultrafast electron dynamics, happening on the attosecond time-scale. For numerical methods tackling the (slower) nuclear motion in attoscience, we refer the reader to Ref. [57].
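
As a concrete illustration of the simplest rung of this hierarchy, the sketch below propagates a one-dimensional single-active-electron model atom through a short laser pulse with the split-operator method. This is only a minimal sketch under illustrative assumptions (a soft-core model potential, length-gauge coupling, a small grid, no absorbing boundaries), not one of the production codes cited above, but it indicates how little input beyond the chosen Hamiltonian such a calculation requires.

```python
# Minimal sketch (illustrative, not a production code): 1D single-active-electron
# TDSE for a soft-core model atom, propagated with the split-operator method
# through a short 800 nm pulse. All parameters are arbitrary choices.
import numpy as np

N, L = 4096, 400.0                        # grid points, box size (a.u.)
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)   # momentum grid
V0 = -1.0 / np.sqrt(x**2 + 2.0)           # soft-core 'atomic' potential

dt = 0.05
psi = np.exp(-x**2)                       # initial guess for the ground state
expK_im = np.exp(-0.5 * k**2 * dt)        # kinetic factor, imaginary time
for _ in range(2000):                     # imaginary-time relaxation
    psi = np.exp(-0.5 * V0 * dt) * psi
    psi = np.fft.ifft(expK_im * np.fft.fft(psi))
    psi = np.exp(-0.5 * V0 * dt) * psi
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

E0, omega, ncyc = 0.053, 0.057, 4         # ~1e14 W/cm2, 800 nm, 4 cycles
T = 2 * np.pi * ncyc / omega
times = np.arange(0.0, T, dt)
field = E0 * np.sin(np.pi * times / T)**2 * np.cos(omega * times)
expK = np.exp(-0.5j * k**2 * dt)          # kinetic factor, real time
dipole = []
for Et in field:                          # real-time propagation
    V = V0 + x * Et                       # length-gauge dipole coupling
    psi = np.exp(-0.5j * V * dt) * psi
    psi = np.fft.ifft(expK * np.fft.fft(psi))
    psi = np.exp(-0.5j * V * dt) * psi
    dipole.append(np.sum(np.abs(psi)**2 * x) * dx)

spectrum = np.abs(np.fft.fft(dipole))**2  # crude harmonic-spectrum estimate
```

For targets beyond atomic hydrogen, the same propagation scheme would simply use a tailored SAE model potential in place of the bare soft-core form above, which is precisely the semi-empirical step noted earlier.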

Table 1 Rough survey of numerical methods for attoscience and strong-field physics

2.2 Analytical methods

The use of analytical methods to describe strong-field phenomena has a long and storied pedigree dating back to the 1960s [61], before laser sources could reach sufficient intensities to drive quantum systems beyond the lowest order of perturbation theory. As a general rule, analytical methods are approaches for which the governing equations can be solved directly, under suitable approximations, and the solutions can be written down in ‘exact’ or ‘closed’ form.

We return to our combatants on the stage, where Numerio is dissatisfied with this definition.

Numerio :

That just seems like it is kicking the can down the road. What does ‘closed form’ mean?

Analycia :

When the term ‘closed form’ is placed under examination, its precise meaning turns out to be rather elusive and ultimately quite ambiguous [62, 63]. That is to say: which ‘forms’ does the term ‘closed form’ actually include? Which ones does it exclude? Does it stop at elementary functions, i.e. exponentials and logarithms? Or must it cover special functions, like the Bessel functions? And if we do intend to include special functions as part of the toolbox of analytical methods, which special functions should be included? Do hypergeometric functions or Meijer G functions make the cut? What about newly minted functions expressly defined to encapsulate some hard numerical problem? (As one example of this, take the recent proof of the integrability of the Rabi model of quantum optics [64]—should the functions defined in that work be considered special functions?) More importantly: what does it really mean for a function to be a ‘special’ function?

Numerio :

But hasn’t this question been answered long ago?

Analycia :

Well, this is the kind of question where one could hope that we could look to the mathematicians to provide an answer—say, by supplying an objective classification of functions, from elementary through exponentials to the ‘higher’ transcendentals—as to where this class should stop. Unfortunately, however, when such objective classifications are attempted, they run into a bog of vague answers and incomplete taxonomies which leave out large classes of useful functions. Ultimately, as Paul Turán put it [65], ‘special’ functions are simply useful functions: they are a shared language that we use to encapsulate and communicate concepts and patterns [62], and their boundaries (and with it, the boundaries of analytical methods in general) are subjective and a product of tradition and consensus.

Numerio :

This distinction seems like trivial semantics to me, to be honest.

Analycia :

At first glance, yes, but it is important to keep in mind that, as a rule, special functions like the Bessel functions are defined as the solutions of hard problems—canonical solutions of ordinary differential equations, integrals that cannot be expressed in elementary terms, non-summable power series—and when it comes to evaluating them in practice, they generally require at least some degree of numerical calculation. In this regard, then, what is to stop us from packaging up one of the numerical problems that face us, be it a full TDSE simulation or one of its modular components, calling it a special function, and declaring that methods that use it are ‘analytical’?

Numerio :

That sounds rather absurd.

Analycia :

Indeed it does, at face value, but it is not all that far from how special functions are actually defined: as the canonical solutions of hard differential equations, or as a way to dodge the fact that a given integral cannot be evaluated in elementary terms by re-christening it as an integral representation. More importantly, it encodes a serious question—what happens when the ‘back end’ of analytical methods involves more numerical calculations than the TDSE simulations they were intended to replace?

Numerio :

They should give the job to me, of course!

The analytical toolset These issues aside, the analytical methods of strong-field physics, as traditionally understood in the field, form a fairly well-defined set. This set can be further subdivided into three main classes:

  • Fully quantum models, which retain the full coherence of the quantum-mechanical framework. These frameworks date back to key conceptual leaps in the early days of laser-matter interaction [61, 66,67,68,69], but they also include applications of more standard perturbation-theory tools.

    The central method in this category is known as the Strong-Field Approximation (SFA) [61, 66, 67] (see Ref. [70] for a recent review), which builds on the solvability of the field-driven free-particle problem, used to great effect for HHG [71]. The SFA is more properly a family of related methods [72] with the key commonality of taking the driving laser field as the dominant factor after the electron has been released; in its fully quantum version, it produces observables in the form of highly oscillatory time integrals.

  • Semiclassical models, which bridge the gap between the full quantum description and the classical realm by incorporating recognizable trajectory language but still keeping the quantum coherence of the different pathways involved. The paradigmatic example is the quantum-orbit version of the SFA [73], obtained by applying saddle-point approximations to the SFA’s time integrals, which results in trajectory-centred models analogous to Feynman path integrals [74] where the particles’ positions are generally evaluated over complex-valued times [73, 75] and are often complex-valued themselves [76,77,78,79].

    As a general structure, the relationship between semiclassical methods and the full TDSE is the same as between ray optics and wave optics for light, a correspondence that can be made rigorous as an eikonal limit [80]. This has the caveat that optical tunnelling requires evanescent waves in the classically forbidden region, with an optical counterpart in the use of complex rays for evanescent light [81]. The presence of these complex values complicates the analysis, but it also presents its own opportunities for insight [82].

    The recent development of analytical methods has centred on correcting the SFA to account in various ways for the interaction of the photoelectron with its parent ion, from straightforward rescattering [83] through fuller Coulomb corrections [76, 84] and explicit path-integral formulations [85], which now span a wide family of approaches [86]. On the other hand, it is important to keep in mind that there are also multiple techniques, such as semiclassical propagators [87], which are independent of the SFA.

  • Fully classical models, which can retain a small core of quantum features (most often, the tunnelling probability obtained from tunnelling theories [69]) but which generally treat all the particle dynamics using classical trajectories. This includes the paradigmatic Simple Man’s Model [88, 89] for HHG, but it also covers much more elaborate methods, often of a statistical kind, that look at classical trajectory ensembles to understand the dynamics [90], and in particular the Classical Trajectory Monte Carlo (CTMC) [91] ‘shotgun’ approach to predicting photoelectron momentum and energy spectra.
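
As an indication of how light such trajectory-ensemble calculations can be, the sketch below implements a bare-bones calculation in the CTMC spirit: ionisation times are sampled across one laser cycle, weighted by a quasi-static tunnelling exponent, and each trajectory contributes its final drift momentum to a histogram. For brevity the ionic potential is neglected after release, so the trajectories are trivial; a full CTMC calculation would instead integrate Newton’s equations in the combined laser and ionic potential, and all parameters here are illustrative.

```python
# Minimal CTMC-style 'shotgun' sketch (illustrative only): a weighted ensemble
# of classical trajectories released at random phases of one laser cycle.
import numpy as np

Ip, E0, omega = 0.5, 0.053, 0.057      # ionisation potential, field, frequency (a.u.)
rng = np.random.default_rng(1)

t0 = rng.uniform(0.0, 2 * np.pi / omega, 200_000)   # birth times over one cycle
Et = E0 * np.cos(omega * t0)                         # field at the moment of birth
weight = np.exp(-2 * (2 * Ip)**1.5 / (3 * np.abs(Et) + 1e-12))  # quasi-static rate

# Neglecting the ionic potential, an electron born at rest acquires the drift
# momentum p = -A(t0), with A(t) = -(E0/omega) sin(omega t) for E(t) = E0 cos(omega t).
p_final = (E0 / omega) * np.sin(omega * t0)
hist, edges = np.histogram(p_final, bins=200, weights=weight, density=True)
# 'hist' approximates the photoelectron momentum distribution along the field axis.
```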

These methods are summarised in Table 2.

Table 2 Rough survey of analytical methods of strong-field physics and attoscience

2.3 Hybrid methods

In addition to the purely numerical and purely analytical approaches discussed above, it is also possible to use hybrid approaches, which involve nontrivial analytical manipulations coupled with numerical approaches that incur significant computational expense.

This class of methods can include relatively simple variations on standard themes, such as multi-channel SFA approaches that include transition amplitudes and dipole transition matrix elements derived from quantum chemistry [93], but it also includes long-standing pillars like Molecular Dynamics and other rate-equation approaches—which use ab initio potential-energy surfaces and cross sections, but discard the quantum coherence—that are now being applied within attoscience [94]. Beyond these simpler cases, there is also a wide variety of novel and creative methods, such as the classical-ensemble back-propagation of TDSE results employed recently to analyse tunnelling times [95], which hold significant promise for the future of attoscience.

2.4 Friction points

Now with a complete set of basic definitions, our combatants Analycia and Numerio turn their discussion to more specific aspects of analytical and ab initio methods.

2.4.1 Numerical \(\ne \) ab initio

Analycia :

It seems to me that the classification of numerical and ab initio methods, as presented in the ‘ab initio toolset’, is out of step with the definitions as originally stated. Are you using the terms ‘ab initio’ and ‘numerical’ interchangeably?

Numerio :

You are right. It is important to emphasize the difference between numerical methods and ab initio methods. Both classes share and benefit from the development and application of ‘computational thinking’, but strictly speaking the latter category is a subset of the former. On the other hand, in the literature, the two terms are often used interchangeably.

Analycia :

That may be, but then that is a problem with the literature. There are many methods in the toolset that are very far from ab initio as you defined it.

The clearest examples of this are the methods based on the SAE approximation [96,97,98,99,100,101]. This approach neglects, in an extremely crude way, the two-body nature of the Coulomb electrostatic repulsion between the different electrons, which is often called ‘electron correlation’. Should these methods really be called ‘ab initio’?

Numerio :

Most of these methods try—with varying degrees of success—to correct for the neglect of electron correlations by introducing various parameterisations of effective one-particle Hamiltonians. However, these constructions are for the most part semi-empirical, and as such they introduce significant physics beyond the fundamental laws, and definitely cannot be called ab initio methods.

Analycia :

It is good to see that laid out clearly. In a similar vein, what about DFT and TD-DFT? I notice that many of the openly available DFT packages explicitly market themselves as being ‘ab initio’ approaches [22,23,24].

Numerio :

DFT is a rigorously ab initio method, and it takes its validity from strict theorems (originally for static systems [17, 102] and subsequently extended to time-dependent ones [103,104,105]) that show that the complexity of the full multi-electron wavefunction can be reduced to single-electron quantities. In brief, there exists an ‘exchange-correlation functional’ that allows us to get multi-electron rigour while calculating only single-electron densities.

Analycia :

That may be the case in the ideal world of mathematicians, but it does not work in the real world. The formal DFT and TDDFT frameworks only work if one knows what the exchange-correlation functional actually is, as well as the functionals for any observables such as photoelectron spectra. In practice, however, we can only guess at what those might be. I have a deep respect for DFT and TDDFT: for large classes of systems, it is our only viable tool, and there is a large body of science which validates the functionals it employs. Nevertheless, the methods for validating the ‘F’ in DFT are semi-empirical, and they do not carry the full rigour of ab initio in its strict sense.

Numerio :

Yes, those are fair points. However, it is worth noting that there also exists a rigorous way to construct approximate parameterised functionals. This is based on introducing parameters whose values can be fixed by requiring them to satisfy the known exact properties of the functional. These parameters are universal in the sense that, once determined, they are kept fixed for all systems. Having said this, in practice, when the DFT Hamiltonian ends up in the form of a semi-empirical parameterisation [106,107,108], then this takes it out of the ab initio class.Footnote 2
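
To make the structure of this argument concrete, the sketch below shows the bare bones of a Kohn–Sham-style self-consistency loop for a one-dimensional, two-electron soft-core model atom. The local ‘exchange-correlation’ term used here is a crude placeholder (a 3D LDA-like expression applied to a 1D density, purely for illustration) rather than a validated functional: the point is simply that the whole machinery reduces to single-particle equations whose quality hinges on whichever functional is supplied.

```python
# Minimal Kohn-Sham-style self-consistency loop for a 1D two-electron soft-core
# model atom (illustrative only: the local 'exchange-correlation' term below is
# a placeholder, not a validated functional).
import numpy as np

N, L = 400, 40.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
v_ext = -2.0 / np.sqrt(x**2 + 1.0)                 # soft-core 'nucleus' of charge 2

# Kinetic-energy operator by second-order finite differences
T = (np.diag(np.full(N, 1.0)) - 0.5 * np.diag(np.ones(N - 1), 1)
     - 0.5 * np.diag(np.ones(N - 1), -1)) / dx**2

def v_hartree(rho):
    # Soft-core electron-electron repulsion integrated over the density
    return np.array([np.sum(rho / np.sqrt((x - xi)**2 + 1.0)) * dx for xi in x])

def v_xc(rho):
    # Placeholder local term (3D LDA exchange form applied to a 1D density)
    return -(3.0 * rho / np.pi)**(1.0 / 3.0)

rho = np.exp(-x**2)
rho *= 2.0 / (np.sum(rho) * dx)                    # normalise to two electrons
for iteration in range(200):
    v_eff = v_ext + v_hartree(rho) + v_xc(rho)
    energies, orbitals = np.linalg.eigh(T + np.diag(v_eff))
    phi0 = orbitals[:, 0] / np.sqrt(dx)            # lowest Kohn-Sham orbital
    rho_new = 2.0 * phi0**2                        # doubly occupied orbital
    if np.max(np.abs(rho_new - rho)) < 1e-8:
        break
    rho = 0.5 * rho + 0.5 * rho_new                # simple linear mixing

print(iteration, energies[0])                      # iterations used, orbital energy
```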

Analycia :

So, are there any numerical methods which truly satisfy the ab initio definition?

Numerio :

Yes, there are. Most of the approaches based on quantum chemistry possess potentially full ab initio rigour. In practice, of course, for some applications this full potential is not needed, and the degree of electron correlation in the calculation can be restricted in order to reduce the calculation time. However, even in those cases, there is still an ab initio method underlying the computation.

That said, even within an ab initio method, it is common to introduce semi-empirical parametrisations of the Hamiltonian. This happens most often when we cannot (or do not need to) describe every term in the Hamiltonian to an ab initio standard.

Analycia :

What kind of interactions would this approach apply to?

Numerio :

The introduction of pseudo-potentials can be used, for example, to model the effect of core electrons in an atom or molecule. Another common case is the effect of spin-orbit interactions in a semi-relativistic regime. This can be seen as a non-ab initio description of certain degrees of freedom or interactions whose effect is not dominant within a given physical process or regime. Sometimes this has a limited scope, but it can also extend out to what we have described as ‘hybrid’ methods (such as Molecular Dynamics simulations), which are not fully ab initio but which nevertheless maintain a very strong ab initio identity.

Analycia :

This does not really paint a picture of a ‘single class’ of ab initio methods: instead, you have depicted a continuum of methods, which goes smoothly from a full accounting of electron correlation down to restricted numerical simulations which operate under substantial approximations.

Numerio :

I agree, and if you press me I should be able to organise these methods on a spectrum, between approaches which are fully ab initio and techniques which are simply numerical approaches.

2.4.2 Analytical methods generally involve computation

Having conceded that ab initio methods span a rather large continuum, Numerio strikes back at just how ‘analytical’ the analytical approaches really are.

Numerio :

Since you are so keen to hold ab initio methods to the ‘gold standard’ of the definition, it is only fair that we do the same for analytical methods. Many of the methods you have listed look rather heavy on the numerics to me, particularly on the fully quantum side. To pick on something, perturbation theory is certainly purely analytical on its own, but those models often require accurate matrix elements for the transitions they describe, and those can only be obtained from quantum chemistry, often at great expense.

Analycia :

Yes, that is true—

   [Numerio interrupts Analycia]

Numerio :

And is that not also the case even for the ‘stars’ of the show? The SFA, in its time-integrated version, produces integrals which are highly oscillatory, and this generally implies a significant computational cost.

Analycia :

I agree, the SFA and related methods often involve a large fraction of numerical effort. Even for the quantum-orbit version, the key stages in the calculation—the actual solution of the saddle-point equations—rely completely on numerical methods. On the other hand, of course, this is typically at a much lower computational cost than most TDSE simulations.
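
To give a flavour of what that numerical step looks like in practice, the sketch below applies Newton’s method in complex time to the one-dimensional SFA saddle-point condition \((p + A(t_\mathrm{s}))^2/2 + I_p = 0\) for a monochromatic field with \(A(t) = A_0\cos (\omega t)\). The parameters, the starting guess and the restriction to a single saddle point are illustrative; a production quantum-orbit code would locate and classify all of the relevant saddle points across the pulse.

```python
# Minimal sketch: Newton iteration for the SFA saddle-point equation
#   (p + A(t_s))^2 / 2 + Ip = 0,   with A(t) = A0 cos(w t),
# solved in complex time for a single saddle (illustrative parameters only).
import numpy as np

Ip, E0, omega = 0.5, 0.053, 0.057
A0 = E0 / omega                                    # peak vector potential (a.u.)

def saddle_time(p, guess, steps=60):
    t = complex(guess)
    for _ in range(steps):
        v = p + A0 * np.cos(omega * t)             # kinetic momentum at time t
        f = 0.5 * v**2 + Ip                        # saddle-point condition
        df = -A0 * omega * np.sin(omega * t) * v   # derivative df/dt
        t = t - f / df                             # Newton step
    return t

# Seed near a crest of the field E(t) = A0*omega*sin(w t), with an imaginary
# part of the order of the 'tunnelling time' sqrt(2*Ip)/E0:
Tcyc = 2 * np.pi / omega
ts = saddle_time(p=0.5, guess=0.25 * Tcyc + 1j * np.sqrt(2 * Ip) / E0)
residual = 0.5 * (0.5 + A0 * np.cos(omega * ts))**2 + Ip
print(ts, abs(residual))                           # the residual should be ~0
```

In this setting each saddle point costs only microseconds to locate, which is the sense in which the quantum-orbit machinery remains numerically light even though it is not analytical in the strict sense.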

Numerio :

For most methods, that is quite clear. However, this lower computational cost is much less clear for some of the more recent approaches that implement Coulomb corrections on the SFA. The analytical complexity of those methods can get very high—does that not come together with a higher computational cost?

Analycia :

To be honest, the computational expense in some of the more complex Coulomb-corrected approaches (in particular those that utilise ensembles of quantum trajectories) to the SFA can, in fact, exceed that of some of the simpler single-electron TDSE simulations.

Numerio :

I also notice that you have classified several classical trajectory methods as ‘analytical’, including statistical ensemble and Monte Carlo approaches that often involve substantial computational expense in calculating aggregates of millions of trajectories. More importantly, this goes beyond the raw numbers—the Newtonian equations of motion are only solvable in closed form for field-driven free particles. As soon as any sort of atomic or molecular potential is included, one must turn to numerical integration.

Analycia :

Yes, that is also correct. The character of the method is different to a direct simulation of the TDSE, but the numerical component cannot be denied.

Numerio :

So, similarly to the continuum of ‘ab-initio-ness’ we agreed upon earlier, what you are saying is that for analytical methods there is also a continuous spectrum between fully analytical and exclusively numerical.

Analycia :

Yes, I suppose I am. We should then be able to place the theoretical methods of attoscience on a two-dimensional spectrum depending on how much they have an analytical and ab initio character.

Our two theorists sit down to chart the methods they have discussed so far, and report their findings in Fig. 2.

Fig. 2 Rough spectrum of the theoretical methods of attoscience, ranked by their analytical (horizontal) and ab initio (vertical) character

2.4.3 Quantitative versus qualitative insights

Numerio :

It seems we have completely eliminated the dichotomy that we started with between analytical and ab initio methods.

Analycia :

So it seems, at least on the surface, but there is still a clear difference between the two approaches. In this regard, I would like to make a somewhat contentious claim: it is more important to distinguish methods according to whether the insights we can obtain from them are of a more quantitative nature or of a more qualitative one. It seems to me that it is the spectrum between those two extremes that carries more value.

Numerio :

Speaking of ‘qualitative methods’ is certainly unusual in the physical sciences, and to me it feels like it carries some negative connotations.

Analycia :

Perhaps this is because that phrase has been mistakenly associated too tightly with biological and social sciences, and physical scientists sometimes want to distance themselves from that perception? If that is so, then it is important to work to de-stigmatise that classification.

In any case, though, I am curious to know whether the attoscience community agrees that this is a more important distinction.

   [The audience response to this poll is presented in Table 3.]

Table 3 Audience polls taken via the Zoom (upper rows) and Twitter (lower rows) platforms during the presentation

2.4.4 Analytical \(\ne \) approximate

Numerio :

You mentioned above that some of the ‘analytical’ methods of attoscience can involve substantial computational expense. What is the point of performing such computations, for approaches that can only ever be approximate?

Analycia :

This is a common misconception. ‘Analytical’ does not necessarily mean ‘approximate’, and there are problems where analytical approaches can be fully exact. Part of this list of exactly solvable problems is limited to the canonical examples (the particle in a box, the harmonic oscillator, the hydrogen atom, the free particle driven by an electromagnetic field), but it is important to emphasise that it also covers perturbation theory, which is exact in the regimes where it holds. And, in that sense, it includes the exact solutions for single- and few-photon ionisation and excitation processes, which are crucial to large sections of attoscience, particularly when it comes to matter interacting with XUV light.

Numerio :

With the caveat we discussed above, surely? The perturbation-theory calculations are exact in their own right, but their domain of applicability without numerical calculations is extremely limited.

2.4.5 Ab initio \(\ne \) exact

Analycia :

One striking aspect which is implicit in your description of ab initio methods, and in how they are handled in the broader literature, is the implication that any ab initio method is automatically exact.

Numerio :

No, that is inaccurate. The two descriptors are distinct and they should not be considered as synonyms.

Analycia :

Perhaps the term is used to somehow overvalue the results of a numerical simulation? It is easy to fall into the trap of thinking that a result obtained in an ab initio fashion is automatically quantitatively accurate, but that is a misconception. The clearest examples of this difference are simulations that work in reduced dimensionality, e.g. 1D or 2D, but, more generally, plenty of ab initio approaches make full use of approximations when they are necessary.

Numerio :

That is true, and if a method’s approximations cannot be lifted then it does not really fit the definition of ab initio. However, it is common to use ‘lighter’, more flexible numerical methods—which use approximations to reduce the cost—for more intensive investigations, while still benchmarking them against an orthodox, fully ab initio approach, and then we can be confident in the accuracy of the more flexible methods.

Analycia :

But that is no different to how we benchmark analytical methods. What is it about ab initio approaches that singles them out as the ‘gold standard’, then?

Numerio :

I would say that the key feature is the existence of a systematic way to improve the accuracy which does not rely on any empirical fittings or parametrisations, as the central part of the numerical convergence of the method. When this is present, we can expect to get a description at the same level of accuracy for the same physical observables, even if we change the system in question, and we can also estimate the error we make in a systematic way. Under these conditions, then, the ab initio methods can achieve fidelities that are so high that they can be considered to be fully exact.Footnote 3

2.4.6 The choice of basis set

Analycia :

You mentioned that a key part of ‘the ab initio toolset’ is the development of suitable basis sets. This sounds odd to me: any two basis sets should be equivalent, so long as they are both complete—which, on the ab initio side, corresponds to numerical convergence.

Numerio :

That is formally true, but it is not very useful in practice. The basis set used to implement an ab initio method—to formulate and solve the Schrödinger equation—is a crucial factor in the numerical aspects, and it determines the level of accuracy of the calculations as well as the computational cost required to reach convergence to a stable solution that captures the full physics of your problem.

More broadly, this is a source of approximation (and thus an entry point for errors), as well as a powerful ally in the search for new physics. In short, the choice of basis set largely determines the subspace of the solutions that we can reasonably explore, and this in turn influences the physics that can be investigated with the method.

Analycia :

You mentioned that many of the ab initio approaches in attoscience have their roots in quantum chemistry, and I understand that quantum chemists have worked very hard at optimising basis sets for their work. Why can’t those sets be used in attoscience?

Numerio :

The basis sets most commonly used in traditional quantum chemistry, particularly Gaussian-type orbitals (GTOs), have indeed been highly optimised by tailored fitting procedures over many years and, as a result, they have enabled the flourishing of ab initio quantum chemical methods [29]. There, the driving goal is to have accurate and fast numerical convergence for the physical quantities that interest quantum chemists, such as ground-state energies and electric polarizabilities.

Analycia :

Ah—and these goals do not align well with attoscience?

Numerio :

Exactly. These basis sets are generally poorly suited to describe free electrons in the continuum. As such, traditional basis sets struggle when describing molecular ionisation over a wide range of photoelectron kinetic energies [109, 110]. By extension, this limits our ability to describe general attosecond and strong-field physics.

Analycia :

So this is where the attoscience-specific development of basis sets comes in, then.

Numerio :

Yes. For attoscience the key requirement is an accurate description of wavefunctions with oscillatory behaviour far away from the parent molecular region, and this drives the development when existing basis sets are insufficient.
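
A toy numerical experiment makes this point vivid. The sketch below fits an oscillatory, continuum-like function \(\sin (kr)\) on a large radial box, once with a small even-tempered Gaussian set and once with a B-spline set of the same size; the exponents, knots and box size are arbitrary illustrative choices rather than any published basis. The Gaussian fit collapses far from the origin, while the B-splines track the oscillations across the whole box, which is essentially the situation faced when describing a photoelectron leaving its parent molecule.

```python
# Toy comparison (illustrative only): least-squares representation of an
# oscillatory 'continuum-like' function sin(k r) using (i) a small set of
# even-tempered Gaussians and (ii) a B-spline basis of the same size.
import numpy as np
from scipy.interpolate import BSpline

r = np.linspace(0.0, 50.0, 2000)
kmom = 0.8                                    # photoelectron-like momentum (a.u.)
target = np.sin(kmom * r)
nbas = 30

# (i) even-tempered Gaussians of the form r * exp(-a_n r^2)
alphas = 0.01 * 2.0 ** np.arange(nbas)
G = np.stack([r * np.exp(-a * r**2) for a in alphas], axis=1)
cG = np.linalg.lstsq(G, target, rcond=None)[0]
err_gto = np.max(np.abs(G @ cG - target))

# (ii) cubic B-splines on a clamped, uniform knot sequence over the same box
deg = 3
breakpoints = np.linspace(0.0, 50.0, nbas - deg + 1)
knots = np.concatenate([[0.0] * deg, breakpoints, [50.0] * deg])
B = np.stack([BSpline(knots, np.eye(nbas)[i], deg)(r) for i in range(nbas)], axis=1)
cB = np.linalg.lstsq(B, target, rcond=None)[0]
err_bsp = np.max(np.abs(B @ cB - target))

print(err_gto, err_bsp)   # the Gaussian fit fails at large r, the B-spline fit does not
```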

Analycia :

So what determines the choice of basis set in any given situation?

Numerio :

This depends on a number of factors—some down to numerical convenience in the specific implementation, but also, often, determined by the physics that the method seeks to describe. Within any particular ab initio framework, the use of new basis sets allows us to explore different parts of the Hilbert space of the system under investigation, and to look for new and interesting solutions there.

Analycia :

This sounds reasonable enough, but it also speaks against the strict definition of ‘ab initio’ as you formulated it, which requires us not to input any physics beyond the fundamental interactions. To the extent that the basis-set choice determines the subspace where solutions will play out, that represents an additional input about the physics which is built directly into the code. This can then limit the reach of the method; one clear example of this is the elimination of double-ionisation effects if a continuum with doubly ionised states is not included in the basis set. Given these limitations, can we ever truly reach the ab initio ideal?

Numerio :

When phrased in those terms, I agree that it is an ideal, but there is also no denying the practical progress that has been achieved in describing the full complexity of quantum mechanics as it applies to attoscience. And, I would argue, the methods we have available do offer systematic ways to ensure convergence in a controlled fashion, so we can very well say that we are approaching the physics in an ab initio way.

3 Advantages and disadvantages of analytical and numerical methods

In the previous section we developed, through our combatants Numerio and Analycia, a framework that allows us to place the theoretical methods of attoscience in a continuous spectrum: from analytical to numerical, and from ab initio to approximate, as well as from methods that offer qualitative insights to ones whose output is most valuable in its quantitative aspects. In this section, we move on to focus on the strengths and weaknesses of methods across the theoretical spectrum established in Fig. 2. This analysis is crucial, as it enables an impartial evaluation of different methods, which in turn allows attoscientists to use the most suitable tools for the job at hand. Understanding the advantages and disadvantages of different methods, as well as their successes and shortcomings, allows us to highlight the most efficient one—or the most effective combination—for the chosen application, and it is an important guide in the development of hybrid methods.

3.1 Fundamental strengths and weaknesses

Continuing the conversation, Analycia and Numerio each make a case for their respective methods, and attempt to scrutinise the shortcomings of each other’s favoured approaches.

Numerio :

The main advantage that has struck me in recent years is the impressive progress in the application of numerical methods to problems of increasing complexity. A number of problems which were once well beyond our reach are now tractable. This has been achieved both through the development and refinement of efficient computational methods, and through the increasing availability of high-performance computing (HPC) platforms. Such methods can also act as benchmarks against which to test the validity of simpler, smaller-scale, or more approximate methods. Their other clear advantage is their generality, which enables their application to a wide variety of physical problems.

Analycia :

True, but despite these advantages, you must admit there can be a heavy price to pay. As you mention, the application of these methods can require large-scale HPC resources, and such calculations can be extremely time-consuming, even when optimised codes and efficient numerical methods are used. It may not be possible to perform a large number of such calculations, which then makes it infeasible to perform the scans over laser parameters that are often crucial to understanding the physics. Additionally, an inherent difficulty in many methods is the rapid increase in the required numerical effort with the number of degrees of freedom of the target system. This often restricts methods, for instance, to the treatment of one active electron, or to linearly polarised laser fields. Relaxing these restrictions, and others, then incurs a significant computational cost.

Analytical methods, however, are not encumbered with many of the difficulties encountered by numerical methods. Their inherent approximations afford them a large speed advantage as well as a high degree of modularity. These qualities allow them to provide an intuitive physical picture of the complex dynamics. They may also avoid the unfavourable scaling properties with which numerical methods can be saddled, allowing them to explore a more expansive parameter space. This, coupled with the understanding they provide, can be used to direct more resource-expensive numerical or experimental approaches.

Numerio :

Yes, I’m aware that analytical methods provide a number of advantages, but the price tag is the level of approximation required to enable analyticity. Approximation is a double-edged sword: ideally we would only discard unnecessary details in order to highlight the important processes, but, most commonly, we also end up discarding important details, and this may mean that some physical processes are not accurately captured. Approximations also often carry more restrictive regimes of validity, and this makes them less general than ab initio approaches. So, despite the advantage they may have with regard to scaling properties, they can also be rather restricted in some respects. This can often come in the form of rather unrealistic assumptions, such as the assumption of a monochromatic laser field.

Unable to find common ground on their favoured methods, Numerio and Analycia decide to look at the specific example of NSDI.

3.2 In context: non-sequential double ionisation

This case study on NSDI will explore the impact that the characteristics of various methods can have in understanding a physical process. However, before we rejoin our debating combatants, we will present a few of the key concepts of NSDI.

NSDI has been studied using a wide variety of analytical and numerical methods. These include both classical and quantum approaches, solving either the Newtonian equations of motion or the TDSE. This range of methods is a testament to the difficulty of modelling this process, which makes it an ideal case study.

What is NSDI? Put simply, NSDI is a correlated double ionisation process, where the recollision of a photoelectron with its parent ion leads to the ionisation of a second electron. Historically, NSDI was discovered as an anomaly, where the experimental ionisation rate at lower laser intensities did not agree with analytical computations for sequential double ionisation, giving rise to the famous ‘knee’ structure (see Fig. 3) [111, 113,114,115,116,117]. Originally, there was contention over the precise mechanism, but over time the three-step model [88, 118, 119] involving the laser-driven recollision was accepted. The three steps of this model are (1) strong-field ionisation of one electron, (2) propagation of this electron in the continuum, and (3) laser-driven recollision and the release of two electrons. This classical description is based on strong approximations, and it is generally considered to be an analytical method (although, as we discussed in Sect. 2.4.2, it relies on some numerical computations). In particular, the exploitation of classical trajectories gives it the intuitive descriptive power of an analytical method.
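
The mechanics of steps (2) and (3) are simple enough to sketch in a few lines of code. In the illustrative calculation below, an electron is born at rest at the parent ion at a given phase of a monochromatic field, the analytical solution of Newton’s equation is used to locate its first return to the core, and the return kinetic energy is recorded in units of the ponderomotive energy \(U_p\); scanning over the birth phase recovers the familiar classical cutoff near \(3.17\,U_p\). In the NSDI context, comparing this return energy with the second ionisation (or excitation) threshold is what separates the two recollision pathways discussed below. The field shape and parameters are illustrative choices.

```python
# Minimal sketch of the recollision step of the three-step model (illustrative):
# an electron born at rest at phase w*t0 of the field E(t) = E0 cos(w t) is
# followed until its first return to the core, and its return kinetic energy
# is reported in units of Up = E0^2 / (4 w^2).
import numpy as np

E0, omega = 0.053, 0.057                     # ~1e14 W/cm2, 800 nm (a.u.)
Up = E0**2 / (4 * omega**2)

def first_return_energy(t0, cycles=1.5, steps=20000):
    t = np.linspace(t0, t0 + cycles * 2 * np.pi / omega, steps)
    # Analytical trajectory of a free electron born at rest at the origin
    v = -(E0 / omega) * (np.sin(omega * t) - np.sin(omega * t0))
    x = ((E0 / omega**2) * (np.cos(omega * t) - np.cos(omega * t0))
         + (E0 / omega) * np.sin(omega * t0) * (t - t0))
    crossings = np.where(x[1:] * x[:-1] < 0)[0]
    if len(crossings) == 0:
        return None                          # this trajectory never returns
    return 0.5 * v[crossings[0]]**2 / Up     # return energy in units of Up

phases = np.linspace(0.0, np.pi / 2, 200)    # birth phases after a field crest
energies = [first_return_energy(phase / omega) for phase in phases]
print(max(e for e in energies if e is not None))   # ~3.17, the classical cutoff
```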

Within the three-step model, two main mechanisms have been identified for NSDI. The first is electron impact (EI) ionisation, where the returning electron has enough energy to release the second electron, leading to simultaneous emission of both electrons, as depicted in Fig. 4a. The alternative mechanism is recollision excitation with subsequent ionisation (RESI), which occurs when the returning electron only has enough energy to excite the second electron (but not remove it directly), and this second electron is subsequently released by the strong field, leading to a delay between the ionisation of the first and second electron, as shown in Fig. 4b. The separation of these mechanisms is best expressed by semi-analytic models based on the SFA [120, 121], where the mechanisms can be represented as Feynman diagrams and linked to rescattering events [122].

Fig. 3 Singly- and doubly charged helium ion yields as a function of laser intensity, at a wavelength of \(\lambda = {780}~\hbox {nm}\), showing the ‘knee’ structure associated with the transition between NSDI and sequential double ionisation [111] (Reprinted with permission from Ref. [111]. © 1994 by the American Physical Society)

Fig. 4 Schematic of the two main mechanisms in NSDI [112]. a Electron impact ionisation (EI), where the recolliding electron mediates direct double ionisation. b The recollision excitation with subsequent ionisation (RESI) mechanism, where the recolliding electron excites the bound electron, which is subsequently released by the field

The NSDI Toolset Here, we summarise some of the methods that are available to model NSDI. For detailed reviews on these methods, see Refs. [123, 124].

  • Three-step model This simple and intuitive classical description neglects the Coulomb potential and quantum effects [88, 118, 119]. Nonetheless, this formulation has become the accepted mechanism of NSDI.

  • Classical models These can be split into those with some quantum ingredients, like a tunnelling rate [125,126,127,128,129], and those that are fully classical, so that ionisation only occurs by overcoming a potential barrier [90, 130,131,132,133,134,135,136]. The electron dynamics are approximated by classical trajectories, which permits a clear and intuitive description. The contributions of different classes of trajectory can be analysed, which is crucial in tracing the origin of certain physical processes. However, such models neglect quantum phenomena such as interference [137, 138].

  • Semi-classical SFA The Coulomb potential is neglected, but the dynamics can be understood via intuitive quantum orbits, and the different mechanisms can easily be separated [120,121,122, 135,136,137, 139,140,144]. This also allows quantum effects such as tunnelling and interference to be included, with interference effects in NSDI being predicted [137, 138, 144] and measured [145] fairly recently.

  • Reduced-dimensionality TDSE simulations Solution of the TDSE assuming that a particular aspect of the motion can be restricted to the laser polarisation axis. One-dimensional treatments restrict the entire electron motion to this axis [146], and two-dimensional treatments restrict the centre of mass [147], while treating electron correlation in full dimensionality. Similar approximations are made in other methods, such as the multi-configurational time-dependent Hartree method [44], which treats NSDI with the assumption of planar electron motion. (A minimal sketch of such a reduced-dimensionality model is given after this list.)

  • Ab initio full dimensional TDSE simulation Full quantum mechanical treatment of a two-electron atom through direct solution of the time-dependent close-coupling equations [148,149,150,151,152,153]. Such methods are computationally intensive, although efficiency improvements have been made in recent years. To date, these methods have not been extended to treat molecules or atoms other than helium.
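
As an indication of what the reduced-dimensionality entry above involves in practice, the sketch below propagates a 1D+1D two-electron soft-core model (a common stand-in for helium in NSDI studies) with the split-operator method. The small grid, the soft-core parameters, the pulse and the absence of absorbing boundaries are all illustrative simplifications; a production calculation would use a far larger box and would extract ionisation yields or correlated momentum spectra with absorbers or surface-flux techniques.

```python
# Minimal 1D+1D two-electron TDSE sketch for an NSDI-style soft-core model,
# propagated with the split-operator method (illustrative only: small grid,
# no absorbing boundaries, no observable extraction).
import numpy as np

N, L = 256, 120.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
x1, x2 = np.meshgrid(x, x, indexing='ij')
k1, k2 = np.meshgrid(k, k, indexing='ij')

# Soft-core nuclear attraction (charge 2) plus electron-electron repulsion
V = (-2.0 / np.sqrt(x1**2 + 1.0) - 2.0 / np.sqrt(x2**2 + 1.0)
     + 1.0 / np.sqrt((x1 - x2)**2 + 1.0))

dt = 0.05
psi = np.exp(-(x1**2 + x2**2))                 # crude starting guess
expK_im = np.exp(-0.5 * (k1**2 + k2**2) * dt)
for _ in range(3000):                          # imaginary-time relaxation
    psi = np.exp(-0.5 * V * dt) * psi
    psi = np.fft.ifft2(expK_im * np.fft.fft2(psi))
    psi = np.exp(-0.5 * V * dt) * psi
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx * dx)

E0, omega, ncyc = 0.075, 0.057, 3              # illustrative 800 nm pulse
T = 2 * np.pi * ncyc / omega
times = np.arange(0.0, T, dt)
field = E0 * np.sin(np.pi * times / T)**2 * np.cos(omega * times)
expK = np.exp(-0.5j * (k1**2 + k2**2) * dt)
for Et in field:                               # real-time propagation
    Vt = V + (x1 + x2) * Et                    # length-gauge dipole coupling
    psi = np.exp(-0.5j * Vt * dt) * psi
    psi = np.fft.ifft2(expK * np.fft.fft2(psi))
    psi = np.exp(-0.5j * Vt * dt) * psi

print(np.sum(np.abs(psi)**2) * dx * dx)        # norm stays ~1 (unitary, no absorber)
```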

We rejoin our two debating attoscientists, whose discussion has now moved on to the specifics of different analytical and ab initio methods in NSDI. The discussion begins with a debate on the positive and negative aspects of a direct ab initio approach.

3.2.1 Full-dimensional numerical solution of the TDSE

Numerio :

As a numericist, I often feel that there is no substitute for solving the TDSE in its full dimensionality. In the context of NSDI, this is a daunting computational task, involving solution of many coupled radial equations—often thousands—on a two-dimensional grid. The first code development to do this began in the late 1990s [148,149,150] and, by 2006, calculations could be carried out for double ionisation of helium at 390 nm [153]. However, these calculations typically required enormous computational resources—an entire supercomputer, in fact—using all 16,000 cores available at that time on the UK’s national high-end computing platform (HECToR).

Following the literature over the next few years, I noted the development of a number of similar approaches [154,155,156,157,158,159,160,161,162]. In NSDI applications in particular, I was struck by the significant progress made in reducing the scale of such calculations by the tsurff method [58, 163,164,165]. This approach allowed calculations for double ionisation of helium at 800 nm to be carried out using only 128 CPUs for around 10 days [164]. Figure 5 shows a recent highlight of this work, the two-electron momentum distribution for helium at 780 nm [164]. The calculation successfully displayed the expected minimum in the distribution when both electrons attain equal momenta greater than \(2\sqrt{U_p}\). Watching these developments unfold over the past 15 years has made it clear to me that even a daunting problem such as this is well within our grasp and should be attempted.

Analycia :

These are quite intensive calculations, so my first question would be: is it always worth it? Calculations should not only be feasible—they should also be justifiable. The large scale of each single calculation can be a very limiting factor, since you may need further computations, perhaps to average over the range of intensities present in the laser focus, or to scan over a particular laser parameter. Here, you may encounter additional hurdles, since it is well known that the computational cost can scale very unfavourably with certain laser parameters, particularly wavelength. Even with the efficiency savings that you mention, the method may struggle to perform calculations at longer wavelengths, or in sufficient quantity to scan over experimental uncertainties.

Secondly, it is true that significant progress has made these large-scale calculations more tractable. However, this does not necessarily mean that the results will be easy to analyse. Disentangling the complex web of physical processes included in such calculations can be very difficult. This requires tools and switches within the method, for example, to evaluate the role of certain interactions, and thereby aid your understanding. Even with such analysis tools at hand, gaining strong physical insight may be an arduous procedure, involving further large-scale calculations, and these may not even be guaranteed to provide the insights you desire.

Numerio :

Absolutely, you have highlighted the main difficulties with ab initio methods that I have encountered. The scale of the calculations can impose a limit on their scope, and their complexity can obscure interpretation. On the other hand, simpler methods avoid these difficulties, but they rely on approximations which need to be justified. For me, the ideal tool would be a method with qualities representing the best of both worlds—a method where many small-scale but accurate TDSE calculations could be carried out to provide detailed interpretation. Although this is feasible in some fields, in the context of NSDI currently it is not. However, equipped with an arsenal of ab initio methods, there is an opportunity to benchmark simpler methods which fall short of a full ab initio treatment. If their approximations can be validated by such comparisons, then their interpretive power will be valuable.

Analycia :

I think now we are beginning to agree.

Fig. 5 Two-electron momentum distribution for double ionisation of helium at 780 nm, calculated using the tsurff method [164] (Reprinted with permission from Ref. [164]. © 2016 by the American Physical Society)

The debate above highlights that both calculation and interpretation are important. Often, an ab initio approach can provide a calculation, but detailed interpretation may require analytical techniques. To discuss these techniques further, the debate now moves to focus on the merits of analytical methods used to study NSDI.

3.2.2 Analytical approaches

Fig. 6 Comparison of experimental data (upper row) [166] (Reproduced from Ref. [166] under a CC BY license.) with theoretical focal-averaged distributions (lower row) selected from [137]. (Reprinted with permission from Ref. [137]. © 2016 by the American Physical Society.) The left and right columns present 16 fs and 30 fs laser pulse lengths, respectively, with \(\lambda ={800}~\hbox {nm}\) (\(\omega ={0.057}~\hbox {a.u.}\)) and \(I = 10^{14}~\hbox {W/cm}^{2}\) (\(U_p = {0.22}~\hbox {a.u.}\)). Specific features associated with quantum interference are marked by polygons in both upper and lower panels. It was necessary to account for interference effects in the theoretical results to get this agreement

Analycia :

You see, in my experience working on NSDI, descriptive power is often enabled by the high degree of modularity that analytical methods possess. This modularity may be harnessed to determine the physical origin of an effect by switching certain interactions on and off. As with intermediate-rigour numerical methods, the light computational demand means that large sets of individual calculations may be carried out where necessary.

A good example of the power of modularity in analytical models is the use of interference in SFA models for NSDI to match experimental results [137, 138]. In Fig. 6, we see experimental results [166] for two pulse lengths. The lower panels show the results of the SFA model [137] that uses a superposition of different excited states in the RESI mechanism of NSDI. Including interference leads to a good match, which provides strong evidence for interference effects in NSDI. This was only possible because interference effects could be switched on and off,Footnote 4 thereby allowing analysis of the different shapes and structures within the distribution. Each of these shapes could then be directly attributed to different excited states, which demonstrates the power of the modularity of analytical methods in providing an intuitive understanding of the physics.

Numerio :

The interpretive power is certainly valuable, and the availability of switches such as these is often the key to a good physical understanding. My main concern, however, is that the approximations may affect the accuracy of the results. In particular, the SFA neglects the Coulomb potential, and it is known that this influences the famous finger-like structure in NSDI seen in Fig. 7, causing a suppression of two-electron ejection with equal momenta. Furthermore, we would expect a host of other Coulomb effects just as there are in single electron ionisation [92]. Thus, care must be taken with the conclusions that you draw from such an analytical model. As I said earlier, many numerical methods may not afford this degree of modularity, but it would strengthen my confidence in the conclusions if an ab initio method also observed these effects. In this way, a numerical method could be guided by analytical predictions to assess the accuracy of certain approximations.

Analycia :

This is a fair point, but the considerable speed advantage means that you can often do additional checks and analysis to get around this problem. The SFA model presented could be solved in five minutes on a desktop computer, whereas, as you mentioned, ab initio models will take days on hundreds of cores. The fast SFA calculations can then account for additional factors such as focal volume averaging, even though it increases the overall runtime by a factor of ten or more. It can also perform scans through intensity and frequency in a timely manner. Such scans can provide important insights, for example in Fig. 8 where the contributions of various excited states are monitored as a function of laser intensity and frequency. Their relative contributions then explain the shapes appearing in various regions of the momentum distribution. The extra analysis can increase the overall runtime by factors of 100–1000, which is still perfectly manageable for the SFA, but would be out of the question for most ab initio methods.

Furthermore, there is always a place for analytical methods in performing computationally inexpensive initial investigations, which then provide the evidence needed to commit to more expensive ab initio or experimental efforts. In recent work on interference in NSDI, experiments were carried out to investigate interference effects [170], motivated by the predictions of SFA models.

Numerio :

Yes, I agree that in some cases the extra analysis is beneficial. However, ab initio methods are still much more general than their analytical counterparts. Take Fig. 5, where many different processes contribute, including both the RESI and EI mechanisms, together with sequential double ionisation. The presented SFA model includes only the RESI mechanism.

Analycia :

There are two sides to this: it is nice to be able to clearly separate EI and RESI in the SFA, but it is true that this introduces a lack of flexibility.

With the goal of reaching some kind of agreement, I would posit that the benefits of both types of model outweigh the negatives. Classical and semi-classical models, for instance, have clearly led to huge leaps in understanding of the mechanisms of NSDI. Furthermore, I would add that NSDI in particular is a good candidate for hybrid models: strongly correlated dynamics and multi-electron effects are well suited to an ab initio approach, while the main ionisation dynamics are well described by semi-classical models.

That said, I would also like to know how the broader community feels about this.

   [The audience response to this poll is presented as Poll 2 in Table 3]

Fig. 7
figure 7

Photoelectron spectra for NSDI in helium driven at a wavelength of 800 nm and an intensity of \(4.5\times 10^{14}~\hbox {W/cm}^{2}\) [169], showing a the correlated momentum distribution, b a detail with superimposed results from a classical electron-scattering model, and c the electron energy spectra of \(\hbox {He}^{2+}\) and \(\hbox {He}^{+}\) (Reprinted with permission from Ref. [169]. © 2016 by the American Physical Society)

Fig. 8
figure 8

Scan over intensity (\(U_p\)) and frequency (\(\omega \)) attributing the shapes found in [166] to preferential excitation of states with different orbital angular momenta l (s-, p- and d-states) in the RESI process for different pulse lengths [137]. The contributions of s-states are displayed in a, d and g, those of p-states in b, e and h, and those of d-states in c, f and i (Reprinted with permission from Ref. [137]. © 2016 by the American Physical Society)

Within the context of NSDI, our combatants have discussed the merits and drawbacks of their respective approaches and have begun to appreciate the computational and interpretational qualities that analytical and ab initio methods contribute. In the following section, we focus on how progress in scientific discovery can be aided by both types of method.

4 Scientific discovery

The seed of a scientific discovery can be planted in the form of a bump or a dip on a smooth curve of experimental data, as a whimsical term in the denominator of some equation, or as a quirky splash in numerical results. In other words, a scientific discovery can be triggered by experimental results or theoretical ones, either analytical or numerical. As soon as something new has been spotted, to become a fully fledged discovery it must be examined and explained by each of these branches of research, and in the end there has to be agreement among all of them.

In some cases, the initial trigger is analytical and the others come next, as in the case of optical tunnelling ionisation, predicted in 1965 [61] before its much later experimental observation in 1989 [171]Footnote 5. Sometimes, the role of trigger is played by numerical calculations, as for coherent multi-channel strong-field ionisation [174], which was shortly followed by its experimental validation [4]. It can even be a little bit of both, as in the first description of the RABBITT scheme in 1990 [175], or in single-photon laser-enabled Auger decay (sp-LEAD), which was predicted in 2013 [176], first observed in 2017 [177], and further characterised in [178]. There are also theoretical predictions—both analytical, like molecular Auger interferometry [179], and numerical, like HHG in topological solids [180,181,182]—which have already sparked experimental efforts but which still await observation. On the other hand, we have discoveries which arise from experimental observations and are then explained theoretically, such as NSDI, which was discussed in detail in Sect. 2.

In this section, we tell another story of scientific discovery in attoscience, through the case study of resonant HHG, which also starts from recorded experimental data.

4.1 Experimental kick-off

By the year 2000, HHG was already a full-grown discovery. It had been observed [183, 184] and theoretically modelled [71, 88, 118, 185] a decade previously. After this breakthrough, many features of HHG were under active investigation, both experimentally and theoretically. In particular, resonances in the HHG spectrum had been extensively studied since the 1990s. Some structures in the HHG spectra of atomic gases had very early on been attributed to single-atom resonances [186, 187]. Later measurements [188] and theoretical works [189, 190] explained these structures in terms of multiphoton resonances with bound excited states, linked to the enhancement of specific electron trajectories recolliding multiple times with the ionic core.

Fig. 9
figure 9

First observations of resonant HHG. a Figure taken from [191]: high-order harmonic spectra from (1) indium and (2) silver plumes. (Reprinted with permission from Ref. [191]. © The Optical Society.) b Figure taken from [192]: spectra of the harmonic supercontinuum generated with double optical gating for different values of the input pulse duration in a helium target gas. (Reprinted from Ref. [192], with the permission of AIP Publishing.) c Figure taken from [193]: top, the raw HHG spectrum from xenon at an intensity of \(1.9 \times 10^{14}~\hbox {W/cm}^{2}\); bottom, the experimental HHG spectrum divided by the krypton wave packet (blue) and the relativistic random-phase approximation (RRPA) calculation of the xenon photoionisation cross section from [194] (green). The red and green symbols are PICS measurements from [195, 196], respectively, each weighted using the anisotropy parameter calculated in [194]

In this context, Ganeev et al. [191] first measured, in 2006, a strong enhancement of a single harmonic, by two orders of magnitude, in the HHG spectrum of plasma plumes. Their result is shown in Fig. 9a. At that time, they attributed this resonance to the multiple recolliding electron trajectories that had previously been observed and modelled in atoms [188, 190], and they related these trajectories to multiphoton resonances with excited states.

Then, in 2008, when studying the spectra of single attosecond pulses generated in noble gases, Gilbertson et al. measured for the first time a strong enhancement in the HHG spectrum of helium [192], as shown in Fig. 9b. Since they employed single attosecond pulses, they recorded continuous spectra, which allowed them to see the enhancement clearly, as it would otherwise fall between two harmonics if observed in an attosecond pulse train. They did not give any explanation for this enhancement, as it was not their main focus, but they observed that it appears at the energy of the 2s2p autoionising state (AIS) of helium.

Then, in 2011, Shiner et al. measured a strong enhancement at 100 eV in the HHG spectrum of xenon gas [193, 197], shown in Fig. 9c. The experimental HHG spectrum is displayed in the upper panel. From that spectrum, the authors extracted the photoionisation cross section (PICS) by first dividing by the spectrum of krypton (obtained under the same conditions), and then multiplying by the photoionisation cross section of krypton from Ref. [198]. The resulting experimental PICS is shown as a blue line in the lower panel. The green curve is the photoionisation cross section of xenon from Ref. [194]. The very good agreement between the two curves, combined with the qualitative agreement with a toy model including only the 4d and 5p states of xenon, allowed the authors to relate the enhancement at 100 eV to the giant resonance of xenon.
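In equation form, the extraction procedure just described amounts to

\[ \sigma_{\mathrm{Xe}}(E) \;\approx\; \frac{S_{\mathrm{Xe}}(E)}{S_{\mathrm{Kr}}(E)}\,\sigma_{\mathrm{Kr}}(E), \]

where \(S\) denotes the measured HHG spectra and \(\sigma_{\mathrm{Kr}}\) is the known krypton PICS from Ref. [198]; dividing the two spectra, recorded under identical conditions, cancels the returning electron wave packet, which is approximately the same for the two species. (This simply restates the division-and-multiplication steps above, not the detailed analysis of Ref. [193].)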

Thus, by the early 2010s, there were observations of resonant enhancement features in HHG from plasma plumes as well as from few- and many-electron rare-gas atoms—but no solid theoretical explanation.

We now hand the stage back to our theoretical acquaintances, who have begun discussing the ingredients required for a theoretical model of resonant HHG and will guide us through the rest of the story.

4.2 Building the model

Numerio :

An explanation of the observed process demands the creation of a model, and this requires a thorough analysis of the experimental data revealing the phenomenon, so as to distinguish its essential features. The essential feature of resonant HHG, common to all observations independently of the medium (gaseous or plasma), is the enhancement of one harmonic or of a group of high harmonics. This does not sound like much to start with. However, it already hints that the desired explanation has no connection to propagation effects, which restricts the model to an account of the single-particle response only.

Analycia :

There have been a number of attempts to create a model describing resonant HHG. One group of theories is based on bound–bound transitions [189, 190, 199, 200], but it cannot be applied to plateau harmonics due to the crucial role [191] played by free-electron motion. Another group of theories connects multi-electron excited states to the enhanced harmonic yield [201,202,203]. In particular, the enhancement of high harmonics generated in xenon [193] was associated [202] with the region of the well-known 'giant' dipole resonance in the photoionisation (photorecombination) cross section of xenon atoms.

Numerio :

This sounds closer to the ingredients that are likely required to explain the phenomenon. Does this not get us closer to resolving the puzzle?

Analycia :

Indeed it does! After a similar correspondence was revealed between the experimental HHG enhancements [191, 192] and transitions with high oscillator strengths between the ground state and AISs of the generating ions [204, 205], the model of resonant HHG was forged in the form of the 'four-step model' [206].

Numerio :

It seems like there should be a vivid similarity with the common three-step model for HHG, should there not?

Analycia :

Of course! The four-step model [206] extends the three-step model [88, 118, 119] to include resonant harmonic emission alongside the 'classic', nonresonant one. The first two steps of the four-step model—(1) tunnelling ionisation and (2) free-electron motion—repeat those of its forerunner. Then, if the energy of the electron returning to the parent ion is close to that of the transition between the ground state and the AIS, the third step of the three-step model splits into two: (3) electron capture into the AIS, and (4) relaxation from the AIS down to the ground state, accompanied by XUV emission.
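To see when step (3) can occur at all, one only needs the classical part of the model: the sketch below checks, for a set of birth phases, whether the energy of the first return, \(E_{\mathrm{kin}} + I_p\), comes close to the ground-state–AIS transition energy. All parameters (driver field, \(I_p\), transition energy) are illustrative placeholders rather than those of any specific ion from [191, 206].

```python
import numpy as np

# Classical check of step (3): does the energy of the first return, E_kin + Ip,
# come close to the ground-state--AIS transition energy? All parameters are
# illustrative placeholders (800 nm driver, generic Ip and transition energy),
# not those of any specific ion from Refs. [191, 206].
omega, E0 = 0.057, 0.075                      # a.u.; E0 corresponds to ~2e14 W/cm^2
Up = E0**2 / (4 * omega**2)                   # ponderomotive energy
Ip, E_transition = 0.5, 1.5                   # a.u., placeholders

def first_return_energy(phase0):
    """Kinetic energy at the first return for an electron born at rest at phase0."""
    t0 = phase0 / omega
    ts = np.linspace(t0 + 1e-3, t0 + 2 * np.pi / omega, 4000)
    x = (E0 / omega**2) * (np.cos(omega * ts) - np.cos(phase0)) \
        + (E0 / omega) * np.sin(phase0) * (ts - t0)
    crossings = np.where(np.diff(np.sign(x)) != 0)[0]
    if len(crossings) == 0:
        return None                           # this trajectory never revisits the core
    tr = ts[crossings[0]]
    v = -(E0 / omega) * (np.sin(omega * tr) - np.sin(phase0))
    return 0.5 * v**2

phases = np.linspace(0.05, 1.5, 60)           # birth phases after the field crest
energies = []
for ph in phases:
    e_kin = first_return_energy(ph)
    energies.append(np.nan if e_kin is None else e_kin)
energies = np.array(energies)

photon = energies + Ip                        # emitted photon energy, 3-step picture
best = np.nanargmin(np.abs(photon - E_transition))
print(f"closest return: birth phase {phases[best]:.2f} rad gives "
      f"E_kin = {energies[best] / Up:.2f} Up, i.e. E_kin + Ip = {photon[best]:.3f} a.u. "
      f"(transition at {E_transition} a.u.)")
```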

Numerio :

Sure, that seems like a possible chain of events, but how can it lead to a higher emission probability if it requires an extra step?

Analycia :

You are right that substituting one step with two should intuitively cause a decrease in probability, but the combination of the higher probability for electron capture into the AIS (corresponding to the looser localisation of the AIS) with the high oscillator strength of the transition between the AIS and the ground state results in an increase of the resonant harmonic yield by several orders of magnitude. Perhaps you could argue this is similar to how NSDI may dominate over sequential double ionisation despite involving more steps, as discussed in Sect. 2.

By this point, a convincing model has been suggested; however, this is far from the end of the story of the 'scientific discovery of resonant HHG', and a series of hurdles still has to be surmounted.

4.3 Challenging the model: numerical calculations

Numerio :

Alright, this model sounds physically reasonable enough, but we still need some actual proof that it describes the experiment properly. Small-scale numerical simulations were of great help in that matter. When building the four-step model, Strelkov also performed TDSE simulations at the SAE level and compared them with several experimental results for singly ionised indium and tin, as shown in Fig. 10. The very good agreement shows that a single active electron is able to model the process accurately.

Analycia :

Sure, that is an important result, but in the same paper Strelkov also made an analytical estimate of the enhancement using the oscillator strength and lifetime of the resonant transition. The result of this estimate is shown in Fig. 10 as blue squares for several singly ionised atoms. The good agreement both with experiment and with the TDSE calculations marks another step in the confirmation of the four-step model.

Numerio :

Indeed, that was already quite convincing, but all these considerations were time independent. When building a model in attosecond science, it is often useful to have a dynamical point of view on the process under study. Tudorovskaya and Lein investigated resonant HHG and the four-step model using time-frequency analysis [207]. They solved the SAE TDSE for 1D model potentials featuring a shape resonance that models an AIS, and were able to reproduce an enhancement of more than two orders of magnitude at the harmonic order corresponding to the shape resonance. Their time-frequency analysis confirmed that the harmonic emission at resonance starts when the electron returns to the ionic core. More interestingly, it shows that the emission at resonance lasts much longer than the emission at the other harmonic orders. More precisely, the emission duration at resonance corresponds to the lifetime of the shape resonance, indicating that the electron gets trapped in the resonance and emits from there, thus validating the four-step model.
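For readers unfamiliar with the technique, the following is a minimal sketch of a Gabor (windowed-Fourier) time-frequency analysis applied to a synthetic dipole signal; the signal, which combines a short chirped burst with a long-lived monochromatic tail mimicking emission from a resonance, and the window width are illustrative and are not taken from Ref. [207].

```python
import numpy as np

# Minimal sketch of a Gabor (windowed-Fourier) time-frequency analysis of a
# dipole signal d(t). The synthetic signal below combines a short chirped burst
# with a long-lived monochromatic tail mimicking emission from a resonance;
# all numbers are illustrative, not taken from Ref. [207].
t = np.arange(0.0, 800.0, 0.25)
burst = np.exp(-((t - 300.0) / 40.0)**2) * np.cos(0.9 * t + 1e-3 * (t - 300.0)**2)
tail = 0.3 * (t > 300.0) * np.exp(-np.clip(t - 300.0, 0.0, None) / 250.0) * np.cos(1.5 * t)
d = burst + tail

def gabor(signal, t, frequencies, sigma=20.0, stride=8):
    """|G(omega, t0)| for Gaussian windows of width sigma centred at t0 = t[::stride]."""
    dt = t[1] - t[0]
    t0s = t[::stride]
    out = np.empty((len(frequencies), len(t0s)))
    for i, w in enumerate(frequencies):
        demodulated = signal * np.exp(-1j * w * t)       # shift frequency w to zero
        for j, t0 in enumerate(t0s):
            window = np.exp(-((t - t0) / sigma)**2)
            out[i, j] = abs(np.sum(demodulated * window) * dt)
    return t0s, out

freqs = np.linspace(0.5, 2.0, 60)
t0s, spectrogram = gabor(d, t, freqs)
# A long horizontal ridge at the resonance frequency (1.5 here) signals emission
# lasting much longer than the short non-resonant burst around t = 300.
```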

After this convincing achievement, the model seems to be validated, especially from Numerio's point of view. But for Analycia, the story is not finished yet.

Fig. 10
figure 10

Comparison of experimental measurements [191, 208,209,210,211,212] with analytical theory and single-electron TDSE simulations [206] for the enhancement factor in resonant HHG in plasma-plume ions, as reported in Ref. [206] (Adapted with permission from Ref. [206]. © 2010 by the American Physical Society)

4.4 Generalisation: analytical theory

Numerio :

Perfect, we now have the model of resonant HHG in our arsenal, which allows us to conduct a qualitative analysis and to make qualitative predictions. Moreover, we also possess quantitative answers based on SAE TDSE solutions for a number of generating particles in given laser fields. So I believe we have all we wanted then?

Analycia :

Not so fast! Even though there is a tool providing us with a quantitative answer, it cannot easily be re-applied to a different generating system or to slightly different field parameters; in other words, there is a lack of generality. This creates a strong demand for a computationally cheap and more flexible tool.

Numerio :

Do you have some concrete solutions in mind?

Analycia :

This theoretical demand was satisfied by the introduction of the analytical theory of resonant HHG [213]. The analytical theory is built on two pillars: Lewenstein's SFA-based theory [71] (conventional for HHG), and Fano's theory [214], which guides the treatment of AISs originating from configuration interaction.

Numerio :

I understand, each of these theories is indeed very successful in describing the two physical processes at hand. But how do you combine them to reproduce the experimental observations?

Analycia :

The resonant HHG theory delivers the answer—the spectrum of the dipole moment of the system—as the product of the nonresonant dipole spectrum and a Fano-like factor. The nonresonant dipole moment is the same as in the well-known Lewenstein theory, which captures the field configuration and the major characteristics of the ground state of the generating particle. The Fano-like factor, on the other hand, encodes the resonance and depends on the AIS's features: its energy and width, as well as the dipole matrix element for the transition between the AIS and the ground state.

As a result, the harmonic spectrum in the resonant case is identical to the nonresonant one far from the resonance, while in the vicinity of the resonance it acquires a Lorentzian-like profile due to the Fano-like factor (see Fig. 11). This profile around the resonance carries the information about the two major properties of the resonant harmonics—their amplitude and phase behaviour—which result in an enhancement and an emission time delay of the resonant harmonics, respectively.
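Schematically, and with the caveat that the exact expressions and resonance parameters are those derived in Ref. [213] rather than the generic Fano parametrisation written here for illustration, the structure of the result is

\[ d_{\mathrm{res}}(\omega ) \;=\; d_{\mathrm{nr}}(\omega )\, F(\omega ), \qquad F(\omega ) \sim \frac{Q + \epsilon }{\epsilon + i}, \qquad \epsilon = \frac{\omega - \Omega _{\mathrm{res}}}{\Gamma /2}, \]

where \(d_{\mathrm{nr}}\) is the nonresonant (Lewenstein) dipole spectrum, \(\Omega _{\mathrm{res}}\) and \(\Gamma \) are the energy and width of the AIS, and \(Q\) encodes the dipole matrix element between the AIS and the ground state. Far from the resonance \(|\epsilon | \gg 1\), so \(F \rightarrow 1\) and the nonresonant spectrum is recovered, while near the resonance \(|F(\omega )|^{2}\) produces the enhancement and \(\arg F(\omega )\) the additional emission phase.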

Numerio :

Ok, I agree this analytical theory provides a much more general picture of the process. But is it really that useful? I mean, what could we do with resonant HHG?

Analycia :

The two features of resonant harmonics, amplitude and phase, provide us with extra handles for improving the generation of attosecond pulses (an intensity boost and an elongation of their duration), and they also provide an opportunity to study the structure of the AIS using the harmonic spectrum.

With a robust framework in place, our scientists discuss the final obstacles faced by the theory.

Fig. 11
figure 11

Squared absolute value (red) and phase of the Fano-like factor calculated analytically (solid lines) and numerically within the SAE TDSE (symbols) for HHG in a tin plasma plume [213] (Reprinted with permission from Ref. [213]. © 2014 by the American Physical Society)

4.5 Closure: ab initio calculations

Analycia :

Although the model and the analytical theory of resonant HHG agree with the results of numerical TDSE calculations in the SAE approximation, this theory encountered significant resistance, both at conferences and in peer review, because the model potential used in these calculations is artificial and does not reflect the fully multi-electron nature of AISs. What is your opinion regarding this issue?

Numerio :

I would say that, on the one hand, this is an instance of a broader discussion regarding the role, advantages and disadvantages of the use of model potentials in numerical calculations. On the other hand, however, this limitation can be addressed using fully ab initio calculations, eliminating this final uncertainty.

Recent first-principles calculations for resonant HHG by manganese atoms and ions [215] show the characteristic enhancement observed earlier in the energy region around a group of AISs.

Analycia :

Finally! These results close the remaining questions in the theoretical understanding and description of resonant HHG, and open a wide front of study into the applications of this process, equipped with a full toolset: analytical theory as well as numerical (SAE and ab initio) calculations.

Numerio :

I agree, we are not always on great terms, but we really made a nice team on this one!

   [The audience opinion on the necessity of combining different approaches is presented as Poll 3 in Table 3.]

After this constructive exchange, the two agree to work more tightly together from now on.

As we discuss below in Sect. 5.5, in response to the first audience question, we are not always necessarily after discoveries in our field, but also after finding and solving interesting problems. Nonetheless, any scientific production, or creative activity, before it can be considered scientific, requires the confrontation of different points of view. We argue here that this confrontation is all the more efficient and constructive when it involves all the different aspects of scientific work: experimental, analytical, numerical, and ab initio. As we have seen at the start of this section, the initial trigger can come from any of them, but the actual scientific progress generally happens afterwards, when they collaborate.

5 Discussions

The dialogue between proponents of analytical and ab initio approaches, as we have followed it so far, opens a number of additional questions for deeper examination. We now turn to these more specific points, as well as to our combatants' responses to the questions raised by audience members during the talk.

During the online conference [7], in addition to the talk, several questions were directed to the audience in the form of polls, both over the Zoom platform as well as to a wider public over Twitter. We present in Table 3 a summary of the results of these polls.

Our combatants return to the stage to resolve several still-itching questions that remain from their conversation.

5.1 Is approximation a strength or a weakness?

The degree of approximation made by a particular method is a typical source of contention between numericists and analyticists. Here, Numerio and Analycia discuss how they feel approximation should be characterised.

Analycia :

It has been said that approximation is a downside of analytical methods. However, I would like to argue—perhaps somewhat provocatively—that approximation is more of a strength. Approximation is what drives the interpretation—the qualitative picture—of a physical process as constructed by an analytical model. If you can remove all that is unnecessary and still achieve reasonable agreement with ab initio simulations or with experimental results, then this is when you actually start to gain some real understanding and interpretation of physical processes.

In other words, I do not think any method, analytical or numerical, is scientifically useful by itself. Science stems from the comparison and interplay of different methods, and particularly of different levels of approximations.

Numerio :

I tend to agree that, as an ideal, this is where approximations can really bring clarity to the table. However, in practice, most of the time when we approximate we end up dropping some of the things that we would like to retain. In that sense, approximation is both a blessing and a curse: it simplifies the picture so we can better understand it, but we generally lose out on some of the physics we want to describe. There is rarely a ‘happy medium’ where approximation is purely a strength.

Having said that, though, I should also point out that these benefits and disadvantages of approximation are equally applicable to numerical methods. If an approximate calculation matches experiment or an ab initio simulation in full rigour, then we can be confident that we have captured the physics.

Analycia :

Wait! [eagerly] I think I see what you mean—there is no demand that this approximate calculation needs to be analytical?

Numerio :

Yes. Moreover, approximation also mitigates some of the problems in numerical methods regarding the complexity of interpretation. An overly complex method may yield information that is simply too fine-grained to be analysed easily, but an approximate numerical method can strip away much of that complexity by focusing on a suitable subspace of solutions, and if it correctly matches the rigorous outcome, then we can be confident that we understand the physics.

Numerio and Analycia agree to treat approximation as both a strength and a weakness, and as a vital way to obtain new perspectives on physics, and move on to the question of modularity in ab initio methods.

5.2 Modularity in ab initio methods

Numerio :

It is often thought that ab initio methods do not provide the level of modularity that analytical methods can. However, even if the solutions provided by ab initio methods are numerical, the Hamiltonian typically comprises a set of analytical terms. By switching these terms on and off, we can gain insights into their role in a particular aspect of the physics in question.

To give one example of this, in Fig. 12 I show an intensity scan of the degree of coherence of the remaining ion in strong-field ionisation of CO\(_2\). To gain physical insight, we can deactivate a number of interactions and then compare the result to the ‘true’ coherence. This comparison shows how the interplay of different mechanisms contributes, in a non-trivial way, to the total coherence.
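The sketch below illustrates the switching idea itself, reduced to a toy two-channel model: the Hamiltonian is assembled from separate terms, any of which can be deactivated before propagation, and the resulting observables compared. The model, its parameters and the observables are illustrative stand-ins, not the CO\(_2\) calculation of [36].

```python
import numpy as np

# Toy illustration of 'switching' Hamiltonian terms on and off: the Hamiltonian
# is assembled from separate analytical terms, any of which can be deactivated
# before the propagation, and the resulting observables compared. The two-channel
# model, its parameters and the observables are illustrative stand-ins, not the
# CO2 calculation of Ref. [36].
def propagate(field_amplitude, include_laser_coupling=True,
              include_channel_splitting=True, dt=0.05, n_steps=4000):
    e_split = 0.05 if include_channel_splitting else 0.0   # channel energy gap (a.u.)
    d01 = 0.5                                               # dipole coupling (a.u.)
    omega = 0.057                                           # 800 nm driver (a.u.)
    c = np.array([1.0, 0.0], dtype=complex)                 # start in channel 0
    for n in range(n_steps):
        field = field_amplitude * np.sin(omega * n * dt)
        H = np.array([[0.0, 0.0], [0.0, e_split]], dtype=complex)
        if include_laser_coupling:                          # the switchable term
            H[0, 1] = H[1, 0] = -d01 * field
        w, v = np.linalg.eigh(H)                             # exact step for a 2x2 H
        c = v @ (np.exp(-1j * w * dt) * (v.conj().T @ c))
    return abs(c[1])**2, abs(c[0] * np.conj(c[1]))           # population, |rho_01|

for E0 in [0.02, 0.05, 0.08]:                                # field amplitudes (a.u.)
    full = propagate(E0)
    without_coupling = propagate(E0, include_laser_coupling=False)
    print(E0, full, without_coupling)
```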

Analycia :

This is certainly a good demonstration of modularity within an ab initio method! However, would you say this is a typical example? I imagine that many ab initio methods would struggle to match this level of modularity. In this case, each individual calculation should come at a reasonably small cost in terms of computational time and resource, so that many calculations can be carried out. For some methods, the massive scale of individual calculations means that this level of modularity cannot be afforded.

More broadly, in numerical methods you cannot always do this type of switching procedure, particularly when it comes to spatial or momentum interference patterns [167, 168]. It is extremely rare to come across numerical methods that are able to split between these and provide clear assignments to the different channels that are interfering. So sometimes, yes, you can switch interactions on and off and assign things in a modular way, but ab initio methods are often limited in the degree to which they can do this.

Numerio :

Yes, the degree of modularity often depends on the problem at hand. Activating and deactivating Hamiltonian terms provides insight in certain problems, but others will not be aided by this procedure. I suppose the ideal situation would be to have a method which can solve a given problem using reasonable computational resources, while keeping enough modularity to provide the required physical understanding.

In that regard, one of the strongest tools is the use of approximations specifically tailored to the situation—which is one clear instance of approximation being a strength, as we have just agreed.

Fig. 12
figure 12

Modularity in ab initio calculations: the quantum coherence between the \(\sigma _{g}\) and \(\sigma _{u}\) ionic states of CO\(_2\), as a function of laser intensity, can be examined by switching different couplings on and off, providing a valuable window into which effects are most essential. Plotted from unpublished data obtained during the initial phase of the research reported in [36]

5.3 Are both analytical and numerical methods required in scientific discovery?

Numerio and Analycia, agreeing that analytical and ab initio methods are not always used in equal measure, turn to discuss the impact that this has on knowledge and discovery.

Numerio :

We have presented a case for analytical and numerical methods working best for scientific discovery when they are used in equal measure. However, is this always necessary? Take Fig. 10, where the analytical model matches experiment just as well as the TDSE model. My natural inclination in a case like this would be to carry out a large, multi-electron calculation for this problem—but since the analytical model has described the experiment so well, would it be worthwhile? In some cases, like this one, analytical methods can stand on their own, while in other cases they will not get you very far and you need to really crank the handle of big codes.

Analycia :

I agree—often one method will dominate, whether because it works better or for historical reasons. However, while you can prove a model wrong when its results do not match experiments, it does not work the other way around: you can never prove that a physical model is right. The agreement of different theoretical approaches is therefore all the more precious in that regard. In addition to what you said, analytical and ab initio methods are two different powerful tools, which lead to differences in our understanding and interpretations. In different situations, one method is more useful than the other for advancing knowledge.

Numerio :

Yes, the methods we use will affect our understanding, but maybe we should not be too hung up on this. We should mostly be driven by the discovery of new knowledge. For instance, when we explore different systems, such as high-harmonic generation in liquids (e.g. [216, 217]) or in solids (e.g. [218, 219]), we start from the knowledge we had in the gas phase and push its limits to extend this knowledge. Whether this knowledge originates from analytical, ab initio, or experimental studies is ultimately not so important.

Analycia :

I can see what you are getting at, but in practice we cannot ignore the biases that different methods imbue in our knowledge. Let us see what our community thinks about this.

   [The audience responses to this question are presented as Poll 3 in Table 3.]

Numerio :

It seems it is a mixed bag, with most hedging their bets in how often each method should be used.

Analycia :

We should take this result with a pinch of salt, but perhaps we can agree that it means the answer is very contextual. The physical processes we study should be attacked by exploring all of the approaches we have at hand.

Numerio :

Yes, but we should also prioritise these methods by their range of applicability, as well as by the level of insight they provide.

Our combatants decide that they will each include more methods in their arsenal, as well as working together, to aid the process of scientific discovery. However, Numerio still has one final bone to pick.

5.4 The role of increasing computational power

The consequences of increasing computational power are a common theme in the development of modern physics, mentioned time and time again. Numerio turns to its role in attoscience.

Numerio :

One aspect that was very apparent as we looked at the evolution of numerical and ab initio methods in attoscience is that, even given the considerable challenges initially faced by the field, these methods have achieved many tasks that would have seemed completely impossible even a scant few years ago.

Going out on a limb, I would even claim that these improvements will continue and accelerate, particularly once quantum computers become available, and that these advancements will drastically reduce the need for analytical methods, or even—despite their advantages, which we discussed earlier—eliminate it altogether. And, I wonder, does our community agree with this?

   [The audience response to this poll is presented as Poll 4 in Table 3.]

Analycia :

I find it quite interesting that you should use a phrasing of the form ‘computing power will make analytical theory obsolete’— because of how old that idea is. That concept dates back a full six decades [62], to when electronic computers were first being developed in the 1960s (to replace human computers). Within that context, it is understandable that people got the impression that analytical theory—with its emphasis on special functions, integral transforms and asymptotic methods—would be displaced by raw computation.

However, over the past sixty years, time and time again the facts have demonstrated the opposite: we now place a higher value on special functions and asymptotic methods than we did back then. Of course, it is possible that at least some of the analytical methods of attoscience will be displaced by raw simulations, at least of the single-electron TDSE, but whenever this narrative starts to look appealing, it is important to take the long view and keep this historical context in mind.

5.5 Audience questions and comments

Over the course of the panel discussion [7], questions and comments were raised by the audience which helped challenge and develop the arguments being fielded by the combatants. We present them here, voicing our answers through Numerio and Analycia, and referencing answers already given above.

Reinhard Dörner:

Is our field really after discoveries? Is it not more about finding and solving interesting puzzles?

[This question was motivated by the distinction made by Thomas Kuhn in his famous book The structure of scientific revolutions [220]. Therein, he argued that the times where the most progress is steadily made are times of ‘normal science’, where what scientists do is best described as solving riddles with the tools of the paradigm they are working in [221].]

Numerio:

I agree that we are not looking for new fundamental laws. Here, it is instructive to connect back to the definition of ‘ab initio’, particularly to remind ourselves that we have fixed the fundamental, theoretical ‘reference frame’. In attosecond physics, we are not yet looking for new fundamental laws: we already have established fundamental laws, the ‘rules of the game’, and we are looking for new solutions to the fundamental quantum-mechanical equations of motion. The space of solutions is potentially infinite, as is the amount of new physical phenomena yet to be described. In our case, we are interested in understanding the physics of atoms and molecules, driven by light-matter interactions, in new and unexplored regimes—and I would agree that this can be described as finding and solving interesting puzzles.

Analycia:

I would disagree: the fact that something is not ‘fundamental’ does not stop it from being a discovery. If nothing else, that viewpoint completely disregards discoveries made in other sciences which are not ‘fundamental’. I would say that it is still discovery if it is new knowledge.

Numerio:

Perhaps this is a matter of terminology: to me, speaking of ‘discovery’ entails finding new laws or entirely novel particles or dimensions, which do not occur in attoscience. In our domain, the basic rules are already set, and we are solving a puzzle which is as interesting as it is difficult. There are many different ways of arranging the pieces of this puzzle, with each one representing, in principle, a different physical scenario that we can tackle with our theoretical methods, be they ab initio, analytical, or hybrid. That said, we know only a limited set of such scenarios and I agree that, when we find a new one, it can also be seen as a discovery.

Analycia:

Yes, I see what you mean—but there is not always such a clear split between rules and scenarios, i.e. between laws of physics and their solutions. There is a level where we only have the fundamental laws, but there are also higher levels of understanding and abstraction where the behaviour of a set of solutions can become a ‘rule’, a law of physics, in itself. And, I would argue, our role in attoscience is to discover these laws. However, I do agree that our work mostly takes place within the fixed paradigm of a single set of fundamental laws.

The conclusion that Analycia and Numerio take away from this is that a solution to a problem may still be interesting and useful, irrespective of whether it is called a discovery.

Thomas Meltzer :

Do you think the range of applicability of a model is not more important than whether it’s ab initio, numerical, analytical, semiclassical and so on?

Numerio :

I agree that this is an important aspect of any method. That said, I would also say that an even more important lens is whether the model gives us insights of a qualitative or quantitative nature, as we argued in more detail in Sect. 1.4.3.

Analycia :

I have a similar view on this. Here, it is important to remark that one of the big reasons why, I would argue, we should move away from the 'analytical-versus-ab initio' view is that, ultimately, it is impossible to have a method which is truly ab initio. We discussed some of this in detail in Sect. 1.4.6, regarding the impact that the choice of basis set has on a method: to the extent that we must supply physical insight into that choice, it takes the method away from the ab initio ideal.

However, I would go beyond that, since there are many other ways in which the base assumptions of how we phrase the problem—which often go unquestioned—can affect the physics. The most obvious example in attoscience is macroscopic effects coming from the propagation of light inside our sample, but there are also other, more esoteric aspects—say, the appearance of collective quantum effects such as superfluorescence [222, 223], or effects coming from field quantisation [10]—which are ruled out by the basic framing, and this takes us away from ever reaching the ab initio ideal.

Numerio :

Those are fair points, but there is also a danger of throwing the baby out with the bathwater here, by discarding the valuable work done in pursuit of that ideal. In that regard, I would argue that a better definition of 'ab initio' could be 'models with approximations that have well-defined error bounds for an explicit parameter range, such that any neglected physics will lie within these bounds'. We know what physics gets neglected, and we should be able to quantify it well enough to know it is not relevant (as well as the types of questions that become inaccessible); for any additional sources of error, like the choice of basis set, the error must be quantifiable.

This is, I would say, where the range of applicability of the model is most important, as it dictates whether those sources of error are quantifiable and negligible—what one could call 'allowed' approximations—or whether we stray into a regime of unquantified approximations. This is then a major component in determining where we can place our method on the qualitative-versus-quantitative spectrum.

In summary, our combatants agree that the range of applicability is a central aspect to consider, and one that is essential for reshaping ideas of what counts as 'ab initio'.

However, they also assert that the characterisation of whether a model provides qualitative or quantitative insights is the most important feature to consider.

(Anonymous) :

Can the single-configuration time-dependent Hartree–Fock method be used effectively to study multi-electron effects in atomic and molecular systems?

Numerio :

The method you mention (TDHF) can definitely be used to describe multi-electron effects. In its linear-response reformulation, it is equivalent to the random-phase approximation with exchange (RPAX), which has been widely used (mainly by the condensed-matter community) and which can provide accurate molecular excitation energies and transition moments. Using TDHF in its full time-dependent character to study non-perturbative dynamics is certainly possible.

Analycia :

That sounds quite complicated for limited gain. Is it really worth it?

Numerio :

This is a good method, but we also have available several multi-configuration versions, including MCTDHF and TD CAS- and RAS-SCF, which are generally more effective. This makes TDHF, in my opinion, only a computationally cheaper alternative, to be considered for large systems that are not amenable to a full multi-configurational treatment.

Jens Biegert :

An experiment is like doing an ab initio simulation in the sense that one can change the boundary conditions, but it does not necessarily allow you to disentangle what happens. However, analytic, semi-analytical and hybrid methods do allow insight.

Numerio :

It is true that ab initio calculations are often characterised as the theoretical analogue of experiments, and that analytical methods drill down on the insightful details. However, in my experience, this is a mischaracterisation, as I am aware of many instances where ab initio methods were able to disentangle a variety of physical interactions by virtue of their modular properties.

Analycia :

Oh, really? I would be interested to hear more, as this is an area where I always felt we analyticists held an advantage.

Given the level of interest in this topic, the combatants broadened its scope into the discussion given in Sect. 5.2.