Identification of asteroid groups in the \(z_1\) and \(z_2\) nonlinear secular resonances through genetic algorithms

  • Original Article
  • Celestial Mechanics and Dynamical Astronomy

Abstract

Linear secular resonances occur when the precession frequency of the longitude of pericenter or node of a minor body is commensurable with that of a planet. Nonlinear secular resonances involve higher-order combinations of these frequencies. They can change the shape of asteroid families in the (a, e, \(\sin {(i)}\)) proper-element space. Identifying asteroids in secular resonances requires performing numerical simulations and then visually inspecting whether the resonant argument librates, which is generally a time-consuming procedure. Here, we use machine-learning genetic algorithms to select the optimal model and training set for identifying asteroids likely to be in librating states of the \(z_1\) and \(z_2\) secular resonances. We then identify groups in the domains of librating asteroids, as predicted by our algorithms, and verify whether these clusters belong to known collisional families. Using this approach, we retrieve all the asteroid families known to interact with the two resonances and identify five fairly robust, previously unknown groups.


Availability of data and material

All data on asteroids in \(z_1\) and \(z_2\) resonant states, labeled and unlabeled, are available from the first author upon reasonable request.

Code Availability Statement

The code used for the numerical simulations is part of the SWIFT package and is publicly available at https://www.boulder.swri.edu/~hal/swift.html (Levison and Duncan 1994). The machine-learning codes were written in the Python programming language and are available from the first author upon reasonable request.

Notes

  1. An important factor affecting secular dynamics is the effect of non-gravitational forces such as the Yarkovsky effect (Bottke et al. 2002). This effect, caused by the re-emission of absorbed sunlight from the asteroid surface, makes the asteroid semi-major axis drift inward or outward, depending on the orientation of its spin axis. Its magnitude is inversely proportional to the asteroid diameter, and it may cause smaller asteroids to drift into or out of resonances on timescales of Myr. This effect depends on several physical parameters, such as thermal conductivity, Bond albedo, and spin orientation, that are poorly known for asteroids smaller than a few km. Moreover, previous analyses of asteroid families interacting with secular resonances (Carruba et al. 2016) showed that asteroids on librating orbits tend to remain in libration for at least a few libration cycles, even in the presence of Yarkovsky drift. For these reasons, in this work we identify the librating behavior of asteroids in a purely conservative integration scheme. Confirming the orbital nature of the librating asteroids identified in this work remains a challenge for future work.

  2. One limitation of the machine-learning HCM with respect to the traditional HCM is its accuracy: some groups identified with the machine-learning approach may not be retrieved with the traditional method, and vice versa. In this work, we focus on the machine-learning method because of its speed. A more complete and focused study of the identification of groups among asteroids on librating orbits of the \(z_1\) and \(z_2\) resonances, also including the traditional approach, is left to future studies.


Acknowledgements

We are grateful to Dr. Bojan Novaković and to an anonymous reviewer for comments and suggestions that greatly improved the quality of this paper. We acknowledge the use of data from the Asteroid Dynamics Site (AstDyS) [http://hamilton.dm.unipi.it/astdys, Knežević and Milani (2003)] and from the Asteroid Families Portal [AFP, Radović et al. (2017)]. We are grateful to Edmilson Roma de Oliveira for discussions that motivated this work, and to Dr. E. Ishida, R. S. de Souza, and A. Krone-Martins for very helpful exchanges on active and passive learning algorithms. This publication makes use of data products from the Wide-field Infrared Survey Explorer (WISE) and Near-Earth Objects (NEOWISE), which are a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. This is a publication of the MASB (Machine-learning applied to small bodies, https://valeriocarruba.github.io/Site-MASB/) research group.

Funding

VC is grateful to the São Paulo State Science Foundation (FAPESP, Grant 2018/20999-6) and the Brazilian National Research Council (CNPq, Grant 301577/2017-0). SA acknowledges the support from Coordination for the Improvement of Higher Education Personnel (CAPES, Grant 88887.374148/2019-00).

Author information

Contributions

All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by Valerio Carruba, Safwan Aljbaae and Rita Cassia Domingos. The first draft of the manuscript was written by Valerio Carruba and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to V. Carruba.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1: Genetic algorithms outcome

The Random Forest and Extra Trees algorithms are ensemble methods that use several standalone classifiers (in our case decision trees, but other methods are possible) to achieve better results. In the Random Forest, the training data are resampled to create multiple bootstrap samples, each of which is used to train an independent classifier. For both algorithms, the final decision is reached by a majority vote of the standalone classifiers. The Extra Trees method generally does not use bootstrap samples and draws its data without replacement. More details on these two bagging classifiers can be found in Carruba et al. (2020) and references therein.
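As an illustration of the bagging setup described above, the following sketch contrasts the two ensembles using their scikit-learn implementations. The feature matrix X and label vector y are placeholders standing in for the labeled training data used in this work, and the hyperparameter values are generic defaults rather than the tuned values reported later in this appendix.

```python
# Minimal sketch of the two bagging ensembles discussed above, using their
# scikit-learn implementations. X and y are placeholders standing in for the
# labeled training data used in this work; real data are not reproduced here.
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# Random Forest: each decision tree is trained on a bootstrap sample of the
# training data, and the final class is decided by a majority vote.
rf = RandomForestClassifier(n_estimators=100, bootstrap=True, random_state=0)

# Extra Trees: no bootstrap by default (samples drawn without replacement),
# with additional randomness in how the split thresholds are chosen.
et = ExtraTreesClassifier(n_estimators=100, bootstrap=False, random_state=0)

for name, clf in [("Random Forest", rf), ("Extra Trees", et)]:
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean cross-validation accuracy = {score:.3f}")
```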

The Gradient Boosting classifier, also known as GBoost, is an ensemble boosting method. Contrary to bagging methods, boosting algorithms monitor the classifiers that provide the least accurate predictions and assign them lower weights. The final decision is obtained as a weighted average of the results of the standalone classifiers. Gradient Boosting identifies weak learners using gradient functions. Interested readers can find more information on the theory behind this algorithm in Carruba et al. (2020) and references therein. Below, we briefly describe key parameters of the Random Forest and Extra Trees algorithms, as reported in Swamynathan (2017):

  1. Bootstrap: whether the algorithm uses bootstrap samples (True) or not (False).
  2. Criterion: the function used to measure the quality of a split. The supported criteria are “gini” for the Gini impurity and “entropy” for the information gain.
  3. max\(\_\)features: the size of the random subset of features considered at each splitting node.
  4. min\(\_\)samples\(\_\)leaf: the minimum number of samples required to be at a leaf node.
  5. min\(\_\)samples\(\_\)split: the minimum number of data points required in a node before the node is split.
  6. Number of estimators: the number of decision trees in the ensemble.

Important parameters of the GBoost method are:

  1. Learning rate: controls the magnitude of change in the estimators. A lower learning rate requires a higher number of estimators.
  2. max\(\_\)depth: the maximum number of levels in each decision tree.
  3. Subsample: the fraction of observations to be randomly sampled for each tree.
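These hyperparameters define the search space explored by the genetic algorithm when selecting the best model. As a purely illustrative sketch of such a search (not the authors' actual pipeline, which is available from the first author upon request), the code below evolves Random Forest configurations with a simple hand-rolled genetic algorithm; the placeholder data, search ranges, population size, and number of generations are assumptions made only for this example.

```python
# A minimal, hand-rolled genetic search over Random Forest hyperparameters.
# Purely illustrative: the authors' genetic-algorithm pipeline (which also
# selects the training set) is not reproduced here and may differ in detail.
import random

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data standing in for the labeled librating/non-librating asteroids.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# Search space: each gene is one hyperparameter value (see the lists above).
SPACE = {
    "bootstrap": [True, False],
    "criterion": ["gini", "entropy"],
    "max_features": [0.2, 0.4, 0.6, 0.8, 1.0],
    "min_samples_leaf": list(range(1, 21)),
    "min_samples_split": list(range(2, 21)),
}

def random_individual():
    return {k: random.choice(v) for k, v in SPACE.items()}

def fitness(individual):
    clf = RandomForestClassifier(n_estimators=100, random_state=0, **individual)
    return cross_val_score(clf, X, y, cv=3).mean()

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(individual, rate=0.2):
    return {k: random.choice(SPACE[k]) if random.random() < rate else v
            for k, v in individual.items()}

population = [random_individual() for _ in range(10)]
for generation in range(5):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:4]                      # elitism: keep the fittest models
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("Best Random Forest configuration found:", best)
```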

The three best-performing models found by the genetic algorithms for asteroids near the \(z_1\) resonance were: (1) a Random Forest classifier with bootstrap, an “entropy” criterion, a max\(\_\)features of 0.6, a min\(\_\)samples\(\_\)leaf of 20, a min\(\_\)samples\(\_\)split of 15, and 100 estimators; (2) an Extra Trees classifier with no bootstrap, a “gini” criterion, a max\(\_\)features of 0.7, a min\(\_\)samples\(\_\)leaf of 13, a min\(\_\)samples\(\_\)split of 17, and 100 estimators; (3) a Gradient Boosting classifier with a learning rate of 0.1, a max\(\_\)depth of 3, a max\(\_\)features of 1.0, a min\(\_\)samples\(\_\)leaf of 17, a min\(\_\)samples\(\_\)split of 4, 100 estimators, and a subsample of 0.95.

For the \(z_2\) resonance, we found: (1) a Random Forest classifier with bootstrap, a “gini” criterion, a max\(\_\)features of 0.2, a min\(\_\)samples\(\_\)leaf of 18, a min\(\_\)samples\(\_\)split of 4, and 100 estimators; (2) an Extra Trees classifier with no bootstrap, an “entropy” criterion, a max\(\_\)features of 0.7, a min\(\_\)samples\(\_\)leaf of 5, a min\(\_\)samples\(\_\)split of 9, and 100 estimators; (3) a Gradient Boosting classifier with a learning rate of 0.1, a max\(\_\)depth of 2, a max\(\_\)features of 0.25, a min\(\_\)samples\(\_\)leaf of 5, a min\(\_\)samples\(\_\)split of 2, 100 estimators, and a subsample of 0.2.
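Assuming these hyperparameter names map directly onto the scikit-learn estimator constructors (an assumption, since the original machine-learning code is only available upon request), the three best-performing \(z_1\) models listed above would correspond to the following configurations; the \(z_2\) models can be built analogously from the values in the previous paragraph.

```python
# The three best-performing z_1 models reported above, written as scikit-learn
# estimators. This is a sketch: the training features and labels, and any
# preprocessing used by the authors, are not reproduced here.
from sklearn.ensemble import (ExtraTreesClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)

z1_models = {
    "random_forest": RandomForestClassifier(
        bootstrap=True, criterion="entropy", max_features=0.6,
        min_samples_leaf=20, min_samples_split=15, n_estimators=100),
    "extra_trees": ExtraTreesClassifier(
        bootstrap=False, criterion="gini", max_features=0.7,
        min_samples_leaf=13, min_samples_split=17, n_estimators=100),
    "gradient_boosting": GradientBoostingClassifier(
        learning_rate=0.1, max_depth=3, max_features=1.0,
        min_samples_leaf=17, min_samples_split=4, n_estimators=100,
        subsample=0.95),
}
# Each estimator would then be fit on the labeled set and used to predict the
# libration status of the unlabeled asteroids near the z_1 resonance, e.g.:
# z1_models["random_forest"].fit(X_train, y_train)
# predictions = z1_models["random_forest"].predict(X_unlabeled)
```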

Table 1 The dynamical groups with at least 10 members among the \(z_1\) librating population in the central main belt, listed from the most to the least numerous, identified with the hierarchical clustering algorithm, at three values of the distance cutoff for the sample of librating asteroids: (1) 8.43, (2) 13.43 and (3) 18.43 m/s.
Table 2 The dynamical groups with at least 10 members among the \(z_1\) librating population in the outer main belt. The three values of the distance cutoff for this region are (1) 15.75, (2) 20.75 and (3) 25.75 m/s
Table 3 The number of objects with albedo, the possible interlopers, the fraction of possible interlopers, the number of asteroids with SDSS-MOC4 taxonomy, the number of possible taxonomy interlopers, and their fraction for the \(z_1\) groups in the outer main belt
Table 4 The dynamical groups with at least 10 members among the \(z_2\) librating population in the inner main belt, in the same fashion as in Table 1
Table 5 The dynamical groups with at least 10 members among the \(z_2\) librating population in the central main belt. The columns have the same meaning as in the previous tables

Appendix 2: Resonant asteroid groups

Tables with the groups identified with the machine-learning hierarchical clustering algorithm, and with the list of interlopers for the groups in the \(z_1\) resonance in the outer main belt, are displayed in this section. See Sect. 5.1 for the rationale behind these tables (Tables 1, 2, 3, 4, 5).
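For reference, the sketch below illustrates one way to perform the hierarchical clustering step on the librating asteroids, assuming the standard proper-element distance metric \(d = na\sqrt{\tfrac{5}{4}(\delta a/a)^2 + 2(\delta e)^2 + 2(\delta \sin i)^2}\) and single-linkage clustering at the cutoffs quoted in the tables. The input file and function names are placeholders, and this is not the exact machine-learning HCM implementation used in this work.

```python
# A minimal sketch of hierarchical clustering on proper elements (a, e, sin i),
# using the standard metric d = n*a*sqrt(5/4*(da/a)^2 + 2*(de)^2 + 2*(dsini)^2),
# expressed in m/s. "z1_central_belt_proper_elements.txt" and the function names
# are placeholders; this is not the authors' exact machine-learning HCM code.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

GM_SUN = 1.32712440018e20   # gravitational parameter of the Sun, m^3 s^-2
AU = 1.495978707e11         # astronomical unit, m

def proper_element_distance(u, v):
    """Distance between two asteroids in (a [au], e, sin i) space, in m/s."""
    a_mean = 0.5 * (u[0] + v[0]) * AU           # mean semi-major axis, m
    na = np.sqrt(GM_SUN / a_mean)               # n*a, the orbital speed scale
    da = (u[0] - v[0]) / (0.5 * (u[0] + v[0]))  # fractional difference in a
    de, dsini = u[1] - v[1], u[2] - v[2]
    return na * np.sqrt(1.25 * da**2 + 2.0 * de**2 + 2.0 * dsini**2)

def hcm_groups(elements, cutoff_ms):
    """Single-linkage clustering of librating asteroids at a distance cutoff."""
    dist = pdist(elements, metric=proper_element_distance)
    tree = linkage(dist, method="single")
    return fcluster(tree, t=cutoff_ms, criterion="distance")

# Example: groups among z_1 librators in the central main belt at 8.43 m/s.
# elements = np.loadtxt("z1_central_belt_proper_elements.txt")  # hypothetical
# labels = hcm_groups(elements, cutoff_ms=8.43)
```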


About this article

Cite this article

Carruba, V., Aljbaae, S. & Domingos, R.C. Identification of asteroid groups in the \(z_1\) and \(z_2\) nonlinear secular resonances through genetic algorithms. Celest Mech Dyn Astr 133, 24 (2021). https://doi.org/10.1007/s10569-021-10021-z
