On Neural Architectures for Astronomical Time-series Classification with Application to Variable Stars
The Astrophysical Journal Supplement Series (IF 8.6). Pub Date: 2020-10-01. DOI: 10.3847/1538-4365/aba8ff
Sara Jamal, Joshua S. Bloom

Despite the utility of neural networks (NNs) for astronomical time-series classification, the proliferation of learning architectures applied to diverse data sets has thus far hampered a direct intercomparison of different approaches. Here we perform the first comprehensive study of variants of NN-based learning and inference for astronomical time series, aiming to provide the community with an overview of relative performance and, hopefully, a set of best-in-class choices for practical implementations. In both supervised and self-supervised contexts, we study the effects of different time-series-compatible layer choices, namely dilated temporal convolutional NNs (dTCNs), long short-term memory NNs, gated recurrent units, and temporal convolutional NNs (tCNNs). We also study the efficacy and performance of encoder-decoder (i.e., autoencoder) networks compared to direct classification networks, different pathways to include auxiliary (non-time-series) metadata, and different approaches to incorporate multi-passband data (i.e., multiple time series per source). Performance, measured on a sample of 17,604 variable stars (VSs) from the MAssive Compact Halo Objects (MACHO) survey across 10 imbalanced classes, is assessed in terms of training convergence time, classification accuracy, reconstruction error, and generated latent variables. We find that networks with recurrent NNs generally outperform dTCNs and, in many scenarios, yield similar accuracy to tCNNs. In learning time and memory requirements, convolution-based layers perform better. We conclude by discussing the advantages and limitations of deep architectures for VS classification, with a particular eye toward next-generation surveys such as the Legacy Survey of Space and Time, the Roman Space Telescope, and the Zwicky Transient Facility.
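To make the layer choices concrete, the following is a minimal sketch in PyTorch (our own illustration, not code from the paper) of two of the classifier variants compared in the study: a gated-recurrent-unit (GRU) classifier and a temporal CNN (tCNN) classifier operating on a single-passband light curve. Input features, hidden sizes, and the 10-class output are placeholder assumptions chosen only to match the abstract's setting.

```python
import torch
import torch.nn as nn

class GRUClassifier(nn.Module):
    """Recurrent variant: a GRU reads the light curve; its final hidden state feeds a class head."""
    def __init__(self, n_features=2, hidden=64, n_classes=10):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, features), e.g. (magnitude, delta_t)
        _, h = self.gru(x)                # h: (num_layers, batch, hidden)
        return self.head(h[-1])           # logits: (batch, n_classes)

class TCNNClassifier(nn.Module):
    """Convolutional variant: stacked 1-D convolutions over time, then global average pooling."""
    def __init__(self, n_features=2, channels=64, n_classes=10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, channels, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.head = nn.Linear(channels, n_classes)

    def forward(self, x):                 # x: (batch, time, features)
        z = self.conv(x.transpose(1, 2))  # Conv1d expects (batch, channels, time)
        return self.head(z.mean(dim=-1))  # pool over time, then map to class logits

# Toy check: 8 light curves, 200 samples each, 2 features per sample, 10 VS classes.
x = torch.randn(8, 200, 2)
print(GRUClassifier()(x).shape, TCNNClassifier()(x).shape)  # both torch.Size([8, 10])
```

The dilated (dTCN) variant studied in the paper differs from the tCNN sketch only in using increasing dilation factors in the 1-D convolutions to widen the temporal receptive field; the autoencoder variants replace the classification head with a decoder that reconstructs the input series.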




Updated: 2020-10-01