-
A neural network for determination of latent dimensionality in non-negative matrix factorization Mach. Learn. Sci. Technol. Pub Date : 2021-02-09 Benjamin T Nebgen; Raviteja Vangara; Miguel A Hombrados-Herrera; Svetlana Kuksova; Boian S Alexandrov
Non-negative matrix factorization (NMF) has proven to be a powerful unsupervised learning method for uncovering hidden features in complex and noisy data sets with applications in data mining, text recognition, dimension reduction, face recognition, anomaly detection, blind source separation, and many other fields. An important input for NMF is the latent dimensionality of the data, that is, the number
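For orientation, the rank-selection problem this paper addresses can be illustrated with a plain reconstruction-error scan; the sketch below uses scikit-learn's NMF on synthetic rank-4 data and is a generic heuristic, not the neural-network estimator proposed in the paper.

```python
# Hypothetical illustration: scan candidate NMF ranks on synthetic non-negative
# data and record the Frobenius reconstruction error. The 'elbow' of the curve
# is a common heuristic for the latent dimensionality; the paper replaces such
# heuristics with a trained neural network.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
W_true = rng.random((200, 4))
H_true = rng.random((4, 30))
X = W_true @ H_true + 0.01 * rng.random((200, 30))   # noisy rank-4 data

for k in range(1, 9):
    model = NMF(n_components=k, init="nndsvda", max_iter=500).fit(X)
    print(f"rank {k}: reconstruction error {model.reconstruction_err_:.3f}")
```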
-
Solving quantum statistical mechanics with variational autoregressive networks and quantum circuits Mach. Learn. Sci. Technol. Pub Date : 2021-02-09 Jin-Guo Liu; Liang Mao; Pan Zhang; Lei Wang
We extend the ability of a unitary quantum circuit by interfacing it with a classical autoregressive neural network. The combined model parametrizes a variational density matrix as a classical mixture of quantum pure states, where the autoregressive network generates bitstring samples as input states to the quantum circuit. We devise an efficient variational algorithm to jointly optimize the classical
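In symbols, the parametrization described here can be sketched as follows (notation chosen for illustration: $p_\theta$ is the autoregressive network, $U_\phi$ the circuit, and the objective is the variational free energy at temperature $T$):

\[
\rho_{\theta,\phi} = \sum_{x} p_\theta(x)\, U_\phi |x\rangle\langle x| U_\phi^{\dagger},
\qquad
\mathcal{F}(\theta,\phi) = \sum_{x} p_\theta(x)\Bigl[\langle x| U_\phi^{\dagger} H U_\phi |x\rangle + k_B T \ln p_\theta(x)\Bigr].
\]

Because the circuit maps orthonormal bitstrings to orthonormal pure states, the entropy of $\rho_{\theta,\phi}$ reduces to the Shannon entropy of $p_\theta$, which makes the free energy tractable to estimate by sampling.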
-
Automated multi-layer optical design via deep reinforcement learning Mach. Learn. Sci. Technol. Pub Date : 2021-02-09 Haozhu Wang; Zeyu Zheng; Chengang Ji; L Jay Guo
Optical multi-layer thin films are widely used in optical and energy applications requiring photonic designs. Engineers often design such structures based on their physical intuition. However, solely relying on human experts can be time-consuming and may lead to sub-optimal designs, especially when the design space is large. In this work, we frame the multi-layer optical design task as a sequence generation
-
Watch and learn—a generalized approach for transferrable learning in deep neural networks via physical principles Mach. Learn. Sci. Technol. Pub Date : 2021-02-09 Kyle Sprague; Juan Carrasquilla; Stephen Whitelam; Isaac Tamblyn
Transfer learning refers to the use of knowledge gained while solving a machine learning task and applying it to the solution of a closely related problem. Such an approach has enabled scientific breakthroughs in computer vision and natural language processing, where the weights learned in state-of-the-art models can be used to initialize models for other tasks, which dramatically improves their performance
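As a generic illustration of the weight-reuse idea (not the physics-based transfer protocol of this paper), the sketch below freezes a pretrained feature extractor and trains only a new output head; the backbone and checkpoint are hypothetical stand-ins.

```python
# Generic transfer-learning sketch: reuse pretrained weights, freeze them, and
# train only a new task-specific head. Not the paper's method.
import torch
import torch.nn as nn

backbone = nn.Sequential(                 # stands in for a pretrained feature extractor
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
)
# backbone.load_state_dict(torch.load("pretrained.pt"))  # hypothetical checkpoint

for p in backbone.parameters():           # freeze the transferred weights
    p.requires_grad = False

head = nn.Linear(128, 1)                  # new head for the target task
model = nn.Sequential(backbone, head)

opt = torch.optim.Adam(head.parameters(), lr=1e-3)   # only the head is trained
x, y = torch.randn(32, 64), torch.randn(32, 1)
for _ in range(10):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
```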
-
Graph neural networks in particle physics Mach. Learn. Sci. Technol. Pub Date : 2021-01-08 Jonathan Shlomi; Peter Battaglia; Jean-Roch Vlimant
Particle physics is a branch of science aiming at discovering the fundamental laws of matter and forces. Graph neural networks are trainable functions which operate on graphs—sets of elements and their pairwise relations—and are a central method within the broader field of geometric deep learning. They are very expressive and have demonstrated superior performance to other classical deep learning approaches
-
Reinforcement learning enhanced quantum-inspired algorithm for combinatorial optimization Mach. Learn. Sci. Technol. Pub Date : 2021-01-08 Dmitrii Beloborodov; A E Ulanov; Jakob N Foerster; Shimon Whiteson; A I Lvovsky
Quantum hardware and quantum-inspired algorithms are becoming increasingly popular for combinatorial optimization. However, these algorithms may require careful hyperparameter tuning for each problem instance. We use a reinforcement learning agent in conjunction with a quantum-inspired algorithm to solve the Ising energy minimization problem, which is equivalent to the Maximum Cut problem. The agent
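The equivalence referred to here is the standard one: for edge weights $w_{ij}$ and spins $s_i \in \{\pm 1\}$ marking the two sides of a cut,

\[
\mathrm{cut}(s) = \sum_{(i,j)\in E} \frac{w_{ij}}{2}\,\bigl(1 - s_i s_j\bigr),
\qquad
H_{\mathrm{Ising}}(s) = \sum_{(i,j)\in E} w_{ij}\, s_i s_j ,
\]

so maximizing the cut is equivalent to minimizing the Ising energy up to the constant $\tfrac{1}{2}\sum_{(i,j)} w_{ij}$.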
-
Reinforcement learning decoders for fault-tolerant quantum computation Mach. Learn. Sci. Technol. Pub Date : 2021-01-08 Ryan Sweke; Markus S Kesselring; Evert P L van Nieuwenburg; Jens Eisert
Topological error correcting codes, and particularly the surface code, currently provide the most feasible road-map towards large-scale fault-tolerant quantum computation. As such, obtaining fast and flexible decoding algorithms for these codes, within the experimentally realistic and challenging context of faulty syndrome measurements, without requiring any final read-out of the physical qubits, is
-
Machine learning for analyzing and characterizing InAsSb-based nBn photodetectors Mach. Learn. Sci. Technol. Pub Date : 2021-01-01 Andreu Glasmann; Alexandros Kyrtsos; Enrico Bellotti
This paper discusses two cases of applying artificial neural networks to the capacitance–voltage characteristics of InAsSb-based barrier infrared detectors. In the first case, we discuss a methodology for training a fully-connected feedforward network to predict the capacitance of the device as a function of the absorber, barrier, and contact doping densities, the barrier thickness, and the applied
-
The MLIP package: moment tensor potentials with MPI and active learning Mach. Learn. Sci. Technol. Pub Date : 2021-01-01 Ivan S Novikov; Konstantin Gubaev; Evgeny V Podryabinkin; Alexander V Shapeev
The subject of this paper is the technology (the ‘how’) of constructing machine-learning interatomic potentials, rather than the science (the ‘what’ and ‘why’) of atomistic simulations using machine-learning potentials. Namely, we illustrate how to construct moment tensor potentials using active learning as implemented in the MLIP package, focusing on the efficient ways to automatically sample configurations
-
In situ compression artifact removal in scientific data using deep transfer learning and experience replay Mach. Learn. Sci. Technol. Pub Date : 2021-01-01 Sandeep Madireddy; Ji Hwan Park; Sunwoo Lee; Prasanna Balaprakash; Shinjae Yoo; Wei-keng Liao; Cory D Hauck; M Paul Laiu; Richard Archibald
The massive amount of data produced during simulation on high-performance computers has grown exponentially over the past decade, exacerbating the need for streaming compression and decompression methods for efficient storage and transfer of this data—key to realizing the full potential of large-scale computational science. Lossy compression approaches such as JPEG when applied to scientific simulation
-
PyXtal_FF: a python library for automated force field generation Mach. Learn. Sci. Technol. Pub Date : 2021-01-01 Howard Yanxon; David Zagaceta; Binh Tang; David S Matteson; Qiang Zhu
We present PyXtal_FF—a package based on Python programming language—for developing machine learning potentials (MLPs). The aim of PyXtal_FF is to promote the application of atomistic simulations through providing several choices of atom-centered descriptors and machine learning regressions in one platform. Based on the given choice of descriptors (including the atom-centered symmetry functions, embedded
-
Measuring transferability issues in machine-learning force fields: the example of gold–iron interactions with linearized potentials Mach. Learn. Sci. Technol. Pub Date : 2021-01-01 Magali Benoit; Jonathan Amodeo; Ségolène Combettes; Ibrahim Khaled; Aurélien Roux; Julien Lam
Machine-learning force fields have been increasingly employed in order to extend the possibility of current first-principles calculations. However, the transferability of the obtained potential cannot always be guaranteed in situations that are outside the original database. To study such a limitation, we examined the very difficult case of the interactions in gold–iron nanoparticles. For the machine-learning
-
Natural evolution strategies and variational Monte Carlo Mach. Learn. Sci. Technol. Pub Date : 2021-01-01 Tianchen Zhao; Giuseppe Carleo; James Stokes; Shravan Veerapaneni
A notion of quantum natural evolution strategies is introduced, which provides a geometric synthesis of a number of known quantum/classical algorithms for performing classical black-box optimization. The recent work of Gomes et al (2019 arXiv:1910.10675) on heuristic combinatorial optimization using neural quantum states is pedagogically reviewed in this context, emphasizing the connection with natural
-
InfoCGAN classification of 2D square Ising configurations Mach. Learn. Sci. Technol. Pub Date : 2021-01-01 Nicholas Walker; Ka-Ming Tam
An InfoCGAN neural network is trained on 2D square Ising configurations conditioned on the external applied magnetic field and the temperature. The network is composed of two main sub-networks. The generator network learns to generate convincing Ising configurations and the discriminator network learns to discriminate between ‘real’ and ‘fake’ configurations with an additional categorical assignment
-
Unsupervised interpretable learning of topological indices invariant under permutations of atomic bands Mach. Learn. Sci. Technol. Pub Date : 2021-01-01 Oleksandr Balabanov; Mats Granath
Multi-band insulating Bloch Hamiltonians with internal or spatial symmetries, such as particle-hole or inversion, may have topologically disconnected sectors of trivial atomic-limit (momentum-independent) Hamiltonians. We present a neural-network-based protocol for finding topologically relevant indices that are invariant under transformations between such trivial atomic-limit Hamiltonians, thus corresponding
-
Enabling robust offline active learning for machine learning potentials using simple physics-based priors Mach. Learn. Sci. Technol. Pub Date : 2021-01-01 Muhammed Shuaibi; Saurabh Sivakumar; Rui Qi Chen; Zachary W Ulissi
Machine learning surrogate models for quantum mechanical simulations have enabled the field to efficiently and accurately study material and molecular systems. Developed models typically rely on a substantial amount of data to make reliable predictions of the potential energy landscape or careful active learning (AL) and uncertainty estimates. When starting with small datasets, convergence of AL approaches
-
An automated approach for determining the number of components in non-negative matrix factorization with application to mutational signature learning Mach. Learn. Sci. Technol. Pub Date : 2021-01-01 Gal Gilad; Itay Sason; Roded Sharan
Non-negative matrix factorization (NMF) is a popular method for finding a low rank approximation of a matrix, thereby revealing the latent components behind it. In genomics, NMF is widely used to interpret mutation data and derive the underlying mutational processes and their activities. A key challenge in the use of NMF is determining the number of components, or rank of the factorization. Here we
-
Machine learning based quantification of synchrotron radiation-induced x-ray fluorescence measurements—a case study Mach. Learn. Sci. Technol. Pub Date : 2021-01-01 A Rakotondrajoa; M Radtke
In this work, we describe the use of artificial neural networks (ANNs) for the quantification of x-ray fluorescence measurements. The training data were generated using Monte Carlo simulation, which avoided the use of adapted reference materials. The extension of the available dataset by means of an ANN to generate additional data was demonstrated. Particular emphasis was put on the comparability of
-
Machine learning for neutron scattering at ORNL* Mach. Learn. Sci. Technol. Pub Date : 2021-01-01 Mathieu Doucet; Anjana M Samarakoon; Changwoo Do; William T Heller; Richard Archibald; D Alan Tennant; Thomas Proffen; Garrett E Granroth
Machine learning (ML) offers exciting new opportunities to extract more information from scattering data. At neutron scattering user facilities, ML has the potential to help accelerate scientific productivity by empowering facility users with insight into their data which has traditionally been supplied by scattering experts. Such support can help in both speeding up common modeling problems for users
-
Artificial applicability labels for improving policies in retrosynthesis prediction Mach. Learn. Sci. Technol. Pub Date : 2020-12-31 Esben Jannik Bjerrum; Amol Thakkar; Ola Engkvist
Automated retrosynthetic planning algorithms are a research area of increasing importance. Automated reaction-template extraction from large datasets, in conjunction with neural-network-enhanced tree-search algorithms, can find plausible routes to target compounds in seconds. However, the current method for training neural networks to predict suitable templates for a given target product leads to many
-
Classifying global state preparation via deep reinforcement learning Mach. Learn. Sci. Technol. Pub Date : 2020-12-31 Tobias Haug; Wai-Keong Mok; Jia-Bin You; Wenzu Zhang; Ching Eng Png; Leong-Chuan Kwek
Quantum information processing often requires the preparation of arbitrary quantum states, such as all the states on the Bloch sphere for two-level systems. While numerical optimization can prepare individual target states, it lacks the ability to find general control protocols that can generate many different target states. Here, we demonstrate global quantum control by preparing a continuous set
-
Online accelerator optimization with a machine learning-based stochastic algorithm Mach. Learn. Sci. Technol. Pub Date : 2020-12-31 Zhe Zhang; Minghao Song; Xiaobiao Huang
Online optimization is critical for realizing the design performance of accelerators. Highly efficient stochastic optimization algorithms are needed for many online accelerator optimization problems in order to find the global optimum in the non-linear, coupled parameter space. In this study, we propose to use the multi-generation Gaussian process optimizer for online accelerator optimization and demonstrate
-
Improving the segmentation of scanning probe microscope images using convolutional neural networks Mach. Learn. Sci. Technol. Pub Date : 2020-12-31 Steff Farley; Jo E A Hodgkinson; Oliver M Gordon; Joanna Turner; Andrea Soltoggio; Philip J Moriarty; Eugenie Hunsicker
A wide range of techniques can be considered for segmentation of images of nanostructured surfaces. Manually segmenting these images is time-consuming and results in a user-dependent segmentation bias, while there is currently no consensus on the best automated segmentation methods for particular techniques, image classes, and samples. Any image segmentation approach must minimise the noise in the
-
Fractional deep neural network via constrained optimization Mach. Learn. Sci. Technol. Pub Date : 2020-12-09 Harbir Antil; Ratna Khatri; Rainald Löhner; Deepanshu Verma
This paper introduces a novel algorithmic framework for a deep neural network (DNN) which, in a mathematically rigorous manner, allows us to incorporate history (or memory) into the network—it ensures all layers are connected to one another. This DNN, called Fractional-DNN, can be viewed as a time-discretization of a fractional-in-time non-linear ordinary differential equation (ODE). The learning problem
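For orientation, a sketch of the kind of fractional dynamics involved: the Caputo derivative of order $\gamma \in (0,1)$ and its standard L1 discretization (the paper's exact scheme may differ) are

\[
{}^{C}D_t^{\gamma} u(t) = \frac{1}{\Gamma(1-\gamma)} \int_0^t \frac{u'(s)}{(t-s)^{\gamma}}\, ds,
\qquad
{}^{C}D_t^{\gamma} u(t_n) \approx \frac{1}{\Gamma(2-\gamma)\,\tau^{\gamma}} \sum_{k=0}^{n-1} a_k \bigl(u_{n-k} - u_{n-k-1}\bigr),
\quad a_k = (k+1)^{1-\gamma} - k^{1-\gamma}.
\]

Because the discretized derivative at step $n$ involves all earlier states $u_0,\dots,u_n$, every layer of the resulting network is coupled to all previous ones, which is the memory property described above.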
-
Quantum computation with machine-learning-controlled quantum stuff Mach. Learn. Sci. Technol. Pub Date : 2020-12-09 Lucien Hardy; Adam G M Lewis
We formulate the control over quantum matter, so as to perform arbitrary quantum computation, as an optimization problem. We then provide a schematic machine learning algorithm for its solution. Imagine a long strip of ‘quantum stuff’, endowed with certain assumed physical properties, and equipped with regularly spaced wires to provide input settings and to read off outcomes. After showing how the
-
Site2Vec: a reference frame invariant algorithm for vector embedding of protein–ligand binding sites Mach. Learn. Sci. Technol. Pub Date : 2020-12-08 Arnab Bhadra; Kalidas Yeturu
Protein–ligand interactions are one of the fundamental types of molecular interactions in living systems. Ligands are small molecules that interact with protein molecules at specific regions on their surfaces called binding sites. Binding sites also determine the ADMET properties of a drug molecule. Tasks such as assessment of protein functional similarity and detection of side effects of drugs need
-
A hybrid classical-quantum workflow for natural language processing Mach. Learn. Sci. Technol. Pub Date : 2020-12-08 Lee J O’Riordan; Myles Doyle; Fabio Baruffa; Venkatesh Kannan
Natural language processing (NLP) problems are ubiquitous in classical computing, where they often require significant computational resources to infer sentence meanings. With the appearance of quantum computing hardware and simulators, it is worth developing methods to examine such problems on these platforms. In this manuscript we demonstrate the use of quantum computing models to perform NLP tasks
-
Generalizability issues with deep learning models in medicine and their potential solutions: illustrated with cone-beam computed tomography (CBCT) to computed tomography (CT) image conversion Mach. Learn. Sci. Technol. Pub Date : 2020-12-08 Xiao Liang; Dan Nguyen; Steve B Jiang
Generalizability is a concern when applying a deep learning (DL) model trained on one dataset to other datasets. It is challenging to demonstrate a DL model’s generalizability efficiently and sufficiently before implementing the model in clinical practice. Training a universal model that works anywhere, anytime, for anybody is unrealistic. In this work, we demonstrate the generalizability problem,
-
The influence of sorbitol doping on aggregation and electronic properties of PEDOT:PSS: a theoretical study Mach. Learn. Sci. Technol. Pub Date : 2020-12-04 Pascal Friederich; Salvador León; José Darío Perea; Loïc M Roch; Alán Aspuru-Guzik
Many organic electronics applications such as organic solar cells or thermoelectric generators rely on PEDOT:PSS as a conductive polymer that is printable and transparent. It was found that doping PEDOT:PSS with sorbitol enhances the conductivity through morphological changes. However, the microscopic mechanism is not well understood. In this work, we combine computational tools with machine learning
-
Randomized algorithms for fast computation of low rank tensor ring model Mach. Learn. Sci. Technol. Pub Date : 2020-12-04 Salman Ahmadi-Asl; Andrzej Cichocki; Anh Huy Phan; Maame G Asante-Mensah; Mirfarid Musavian Ghazani; Toshihisa Tanaka; Ivan Oseledets
Randomized algorithms are efficient techniques for big data tensor analysis. In this tutorial paper, we review and extend a variety of randomized algorithms for decomposing large-scale data tensors in Tensor Ring (TR) format. We discuss both adaptive and nonadaptive randomized algorithms for this task. Our main focus is on the random projection technique as an efficient randomized framework and how
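The random-projection building block mentioned here can be illustrated on a single matrix unfolding; the following is a generic randomized range finder, not the TR-specific algorithms of the paper.

```python
# Generic randomized range finder / randomized SVD on a low-rank matrix that
# stands in for one unfolding of a data tensor. Illustration only.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 6)) @ rng.standard_normal((6, 1200))   # rank-6 unfolding stand-in

r, p = 6, 4                                   # target rank and oversampling
Omega = rng.standard_normal((A.shape[1], r + p))
Q, _ = np.linalg.qr(A @ Omega)                # orthonormal basis for the range of A
B = Q.T @ A                                   # small projected matrix
Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
U = Q @ Ub[:, :r]                             # approximate top-r left singular vectors

A_approx = U @ np.diag(s[:r]) @ Vt[:r]
print("relative error:", np.linalg.norm(A - A_approx) / np.linalg.norm(A))
```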
-
Noise2Filter: fast, self-supervised learning and real-time reconstruction for 3D computed tomography Mach. Learn. Sci. Technol. Pub Date : 2020-12-04 Marinus J Lagerwerf; Allard A Hendriksen; Jan-Willem Buurlage; K Joost Batenburg
At x-ray beamlines of synchrotron light sources, the achievable time-resolution for 3D tomographic imaging of the interior of an object has been reduced to a fraction of a second, enabling rapidly changing structures to be examined. The associated data acquisition rates require sizable computational resources for reconstruction. Therefore, full 3D reconstruction of the object is usually performed after
-
Detecting symmetries with neural networks Mach. Learn. Sci. Technol. Pub Date : 2020-12-04 Sven Krippendorf; Marc Syvaeri
Identifying symmetries in data sets is generally difficult, but knowledge about them is crucial for efficient data handling. Here we present a method for using neural networks to identify symmetries. We make extensive use of the structure in the embedding layer of the neural network, which allows us to identify whether a symmetry is present and to identify orbits of the symmetry in the input.
-
Graph prolongation convolutional networks: explicitly multiscale machine learning on graphs with applications to modeling of cytoskeleton Mach. Learn. Sci. Technol. Pub Date : 2020-12-04 Cory B Scott; Eric Mjolsness
We define a novel type of ensemble graph convolutional network (GCN) model. Using optimized linear projection operators to map between spatial scales of a graph, this ensemble model learns to aggregate information from each scale for its final prediction. We calculate these linear projection operators as the infima of an objective function relating the structure matrices used for each GCN. Equipped with
-
Using deep learning to enhance event geometry reconstruction for the telescope array surface detector Mach. Learn. Sci. Technol. Pub Date : 2020-12-04 D Ivanov; O E Kalashev; M Yu Kuznetsov; G I Rubtsov; T Sako; Y Tsunesada; Y V Zhezher
The extremely low flux of ultra-high energy cosmic rays (UHECR) makes their direct observation by orbital experiments practically impossible. For this reason all current and planned UHECR experiments detect cosmic rays indirectly by observing the extensive air showers (EAS) initiated by cosmic ray particles in the atmosphere. The world's largest statistics of ultra-high-energy EAS events is recorded
-
iDQ: Statistical inference of non-gaussian noise with auxiliary degrees of freedom in gravitational-wave detectors Mach. Learn. Sci. Technol. Pub Date : 2020-12-04 Reed Essick; Patrick Godwin; Chad Hanna; Lindy Blackburn; Erik Katsavounidis
Gravitational-wave detectors are exquisitely sensitive instruments and routinely enable ground-breaking observations of novel astronomical phenomena. However, they also witness non-stationary, non-Gaussian noise that can be mistaken for astrophysical sources, lower detection confidence, or simply complicate the extraction of signal parameters from noisy data. To address this, we present iDQ, a supervised
-
Deeply uncertain: comparing methods of uncertainty quantification in deep learning algorithms Mach. Learn. Sci. Technol. Pub Date : 2020-12-04 João Caldeira; Brian Nord
We present a comparison of methods for uncertainty quantification (UQ) in deep learning algorithms in the context of a simple physical system. Three of the most common uncertainty quantification methods—Bayesian neural networks (BNNs), concrete dropout (CD), and deep ensembles (DEs) — are compared to the standard analytic error propagation. We discuss this comparison in terms endemic to both machine
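Of the three methods compared, deep ensembles are the simplest to sketch: train several identically structured networks from different random initializations and read the spread of their predictions as an uncertainty estimate. The toy example below is a generic illustration, not the paper's experimental setup.

```python
# Minimal deep-ensemble sketch on a toy 1-D regression problem: the spread
# across independently initialized networks serves as an uncertainty estimate.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

ensemble = [
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=seed).fit(X, y)
    for seed in range(5)
]

X_test = np.linspace(-4, 4, 9).reshape(-1, 1)        # includes points outside the training range
preds = np.stack([m.predict(X_test) for m in ensemble])
mean, std = preds.mean(axis=0), preds.std(axis=0)    # ensemble mean and spread
for xt, m_, s_ in zip(X_test[:, 0], mean, std):
    print(f"x = {xt:+.1f}  mean = {m_:+.3f}  std = {s_:.3f}")
```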
-
Compressing deep neural networks on FPGAs to binary and ternary precision with hls4ml Mach. Learn. Sci. Technol. Pub Date : 2020-12-04 Jennifer Ngadiuba; Vladimir Loncar; Maurizio Pierini; Sioni Summers; Giuseppe Di Guglielmo; Javier Duarte; Philip Harris; Dylan Rankin; Sergo Jindariani; Mia Liu; Kevin Pedro; Nhan Tran; Edward Kreinar; Sheila Sagear; Zhenbin Wu; Duc Hoang
We present the implementation of binary and ternary neural networks in the hls4ml library, designed to automatically convert deep neural network models to digital circuits with field-programmable gate arrays (FPGA) firmware. Starting from benchmark models trained with floating point precision, we investigate different strategies to reduce the network’s resource consumption by reducing the numerical
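The core numerical step behind binary and ternary networks can be sketched independently of the FPGA tooling: replace each weight tensor by its sign (optionally with a pruning threshold) times a scale. The choice below follows the common BinaryConnect/XNOR-style recipe; hls4ml's exact quantization scheme may differ.

```python
# Sketch of weight binarization and ternarization; not hls4ml's conversion flow.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 8))

alpha = np.abs(W).mean()                       # per-tensor scaling factor
W_bin = alpha * np.sign(W)                     # weights restricted to {-alpha, +alpha}

t = 0.7 * np.abs(W).mean()                     # ternary threshold (heuristic)
mask = np.abs(W) > t
W_ter = np.where(mask, np.sign(W), 0.0) * np.abs(W[mask]).mean()

print("binarization error: ", np.linalg.norm(W - W_bin) / np.linalg.norm(W))
print("ternarization error:", np.linalg.norm(W - W_ter) / np.linalg.norm(W))
```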
-
Enhancing gravitational-wave science with machine learning Mach. Learn. Sci. Technol. Pub Date : 2020-12-04 Elena Cuoco; Jade Powell; Marco Cavaglià; Kendall Ackley; Michał Bejger; Chayan Chatterjee; Michael Coughlin; Scott Coughlin; Paul Easter; Reed Essick; Hunter Gabbard; Timothy Gebhard; Shaon Ghosh; Leïla Haegel; Alberto Iess; David Keitel; Zsuzsa Márka; Szabolcs Márka; Filip Morawski; Tri Nguyen; Rich Ormiston; Michael Pürrer; Massimiliano Razzano; Kai Staats; Gabriele Vajente; Daniel Williams
Machine learning has emerged as a popular and powerful approach for solving problems in astrophysics. We review applications of machine learning techniques for the analysis of ground-based gravitational-wave (GW) detector data. Examples include techniques for improving the sensitivity of Advanced Laser Interferometer GW Observatory and Advanced Virgo GW searches, methods for fast measurements of the
-
i-flow: High-dimensional integration and sampling with normalizing flows Mach. Learn. Sci. Technol. Pub Date : 2020-11-18 Christina Gao; Joshua Isaacson; Claudius Krause
In many fields of science, high-dimensional integration is required. Numerical methods have been developed to evaluate these complex integrals. We introduce the code i-flow, a Python package that performs high-dimensional numerical integration utilizing normalizing flows. Normalizing flows are machine-learned, bijective mappings between two distributions. i-flow can also be used to sample random points
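The estimator underlying flow-based integration is importance sampling, $I \approx \frac{1}{N}\sum_i f(x_i)/q(x_i)$ with $x_i \sim q$; the sketch below uses a hand-chosen Gaussian proposal instead of a trained flow, purely to illustrate the variance reduction that i-flow automates by learning $q$.

```python
# Importance sampling with a fixed proposal q; a normalizing flow such as
# i-flow would instead learn q to match the integrand. Illustration only.
import math
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.exp(-50.0 * (x - 0.3) ** 2)     # sharply peaked integrand on [0, 1]
N = 100_000

# Plain Monte Carlo with a uniform proposal on [0, 1]:
vals_u = f(rng.uniform(0.0, 1.0, N))

# Importance sampling with a Gaussian proposal concentrated near the peak:
mu, sigma = 0.3, 0.15
x_g = rng.normal(mu, sigma, N)
q = np.exp(-0.5 * ((x_g - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
vals_g = np.where((x_g >= 0.0) & (x_g <= 1.0), f(x_g) / q, 0.0)

exact = 0.5 * math.sqrt(math.pi / 50.0) * (math.erf(math.sqrt(50.0) * 0.7)
                                           + math.erf(math.sqrt(50.0) * 0.3))
for name, v in [("uniform", vals_u), ("importance", vals_g)]:
    print(f"{name}: {v.mean():.5f} +/- {v.std() / math.sqrt(N):.5f}   (exact {exact:.5f})")
```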
-
Adversarial reverse mapping of equilibrated condensed-phase molecular structures Mach. Learn. Sci. Technol. Pub Date : 2020-11-10 Marc Stieffenhofer; Michael Wand; Tristan Bereau
A tight and consistent link between resolutions is crucial to further expand the impact of multiscale modeling for complex materials. We herein tackle the generation of condensed molecular structures as a refinement—backmapping—of a coarse-grained (CG) structure. Traditional schemes start from a rough coarse-to-fine mapping and perform further energy minimization and molecular dynamics simulations
-
Structure-property maps with Kernel principal covariates regression Mach. Learn. Sci. Technol. Pub Date : 2020-11-06 Benjamin A Helfrecht; Rose K Cersonsky; Guillaume Fraux; Michele Ceriotti
Data analyses based on linear methods constitute the simplest, most robust, and transparent approaches to the automatic processing of large amounts of data for building supervised or unsupervised machine learning models. Principal covariates regression (PCovR) is an underappreciated method that interpolates between principal component analysis and linear regression and can be used conveniently to reveal
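In broad strokes (a sketch of the idea rather than the exact normalization conventions of the paper), PCovR seeks a low-dimensional projection $\mathbf{T}$ minimizing a convex combination of the PCA reconstruction loss and the regression loss,

\[
\ell(\alpha) = \alpha\, \frac{\lVert \mathbf{X} - \mathbf{T}\mathbf{P}_{X}\rVert^2}{\lVert \mathbf{X}\rVert^2} + (1-\alpha)\, \frac{\lVert \mathbf{Y} - \mathbf{T}\mathbf{P}_{Y}\rVert^2}{\lVert \mathbf{Y}\rVert^2},
\]

so that $\alpha = 1$ recovers principal component analysis and $\alpha = 0$ recovers linear regression; the kernelized variant applies the same construction in a kernel-induced feature space.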
-
Coarse-grain cluster analysis of tensors with application to climate biome identification Mach. Learn. Sci. Technol. Pub Date : 2020-11-05 Derek DeSantis; Phillip J Wolfram; Katrina Bennett; Boian Alexandrov
A tensor provides a concise way to codify the interdependence of complex data. Treating a tensor as a d-way array, each entry records the interaction between the different indices. Clustering provides a way to parse the complexity of the data into more readily understandable information. Clustering methods are heavily dependent on the algorithm of choice, as well as the chosen hyperparameters of the
-
A path towards quantum advantage in training deep generative models with quantum annealers Mach. Learn. Sci. Technol. Pub Date : 2020-11-03 Walter Winci; Lorenzo Buffoni; Hossein Sadeghi; Amir Khoshaman; Evgeny Andriyash; Mohammad H Amin
The development of quantum-classical hybrid (QCH) algorithms is critical to achieve state-of-the-art computational models. A QCH variational autoencoder (QVAE) was introduced in reference [1] by some of the authors of this paper. QVAE consists of a classical auto-encoding structure realized by traditional deep neural networks to perform inference to, and generation from, a discrete latent space. The
-
Deep learning of interface structures from simulated 4D STEM data: cation intermixing vs. roughening Mach. Learn. Sci. Technol. Pub Date : 2020-11-03 M P Oxley; J Yin; N Borodinov; S Somnath; M Ziatdinov; A R Lupini; S Jesse; R K Vasudevan; S V Kalinin
Interface structures in complex oxides remain an active area of condensed matter physics research, largely enabled by recent advances in scanning transmission electron microscopy (STEM). Yet the nature of the STEM contrast in which the structure is projected along the given direction precludes separation of possible structural models. Here, we utilize deep convolutional neural networks (DCNN) trained
-
Self-referencing embedded strings (SELFIES): A 100% robust molecular string representation Mach. Learn. Sci. Technol. Pub Date : 2020-11-03 Mario Krenn; Florian Häse; AkshatKumar Nigam; Pascal Friederich; Alan Aspuru-Guzik
The discovery of novel materials and functional molecules can help to solve some of society’s most urgent challenges, ranging from efficient energy harvesting and storage to uncovering novel pharmaceutical drug candidates. Traditionally, matter engineering, generally denoted as inverse design, was based largely on human intuition and high-throughput virtual screening. The last few years have seen the
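A minimal round trip with the open-source selfies package illustrates the representation (the encoder/decoder interface is assumed as below; version details may differ):

```python
# Round-trip a molecule through the SELFIES representation.
# Requires the open-source `selfies` package (pip install selfies).
import selfies as sf

smiles = "CC(=O)OC1=CC=CC=C1C(=O)O"      # aspirin, written as a SMILES string
selfies_str = sf.encoder(smiles)          # SMILES -> SELFIES
recovered = sf.decoder(selfies_str)       # SELFIES -> SMILES

print(selfies_str)
print(recovered)
# The key property: any syntactically assembled SELFIES string decodes to a
# valid molecule, which is what makes the representation attractive for
# generative models.
```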
-
Predicting excited states from ground state wavefunction by supervised quantum machine learning Mach. Learn. Sci. Technol. Pub Date : 2020-10-31 Hiroki Kawai and Yuya O. Nakagawa
Excited states of molecules lie at the heart of photochemistry and chemical reactions. The recent development in quantum computational chemistry has led to the invention of a variety of algorithms that calculate the excited states of molecules on near-term quantum computers, but they impose a greater computational burden than the algorithms for calculating the ground states. In this study, we propose a scheme
-
Towards recognizing the light facet of the Higgs boson Mach. Learn. Sci. Technol. Pub Date : 2020-10-29 Alexandre Alves and Felipe F Freitas
The Higgs boson couplings to bottom and top quarks have been measured and agree well with the Standard Model predictions. Decays to lighter quarks and gluons, however, remain elusive. Observing these decays is essential to complete the picture of the Higgs boson interactions. In this work, we present the perspectives for the 14 TeV LHC to observe the Higgs boson decay to gluon jets assembling convolutional
-
Pulse shape discrimination and exploration of scintillation signals using convolutional neural networks Mach. Learn. Sci. Technol. Pub Date : 2020-10-29 J Griffiths, S Kleinegesse, D Saunders, R Taylor and A Vacheret
We demonstrate the use of a convolutional neural network to perform neutron-gamma pulse shape discrimination, where the only inputs to the network are the raw digitised silicon photomultiplier signals from a dual scintillator detector element made of $^{6}$LiF:ZnS(Ag) scintillator and PVT plastic. A realistic labelled dataset was created to train the network by exposing the detector to an AmBe source
-
Thousands of reactants and transition states for competing E2 and S$_\mathrm{N}$2 reactions Mach. Learn. Sci. Technol. Pub Date : 2020-10-29 Guido Falk von Rudorff, Stefan N Heinen, Marco Bragato and O Anatole von Lilienfeld
Reaction barriers are a crucial ingredient for first principles based computational retro-synthesis efforts as well as for comprehensive reactivity assessments throughout chemical compound space. While extensive databases of experimental results exist, modern quantum machine learning applications require atomistic details which can only be obtained from quantum chemistry protocols. For competing E2
-
Deep learning of chaos classification Mach. Learn. Sci. Technol. Pub Date : 2020-10-23 Woo Seok Lee and Sergej Flach
We train an artificial neural network which distinguishes chaotic and regular dynamics of the two-dimensional Chirikov standard map. We use finite length trajectories and compare the performance with traditional numerical methods which need to evaluate the Lyapunov exponent (LE). The neural network has superior performance for short periods with length down to 10 Lyapunov times on which the traditional
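The raw material for such a classifier is easy to reproduce: iterate the Chirikov standard map and label initial conditions by a finite-time Lyapunov estimate. The sketch below uses a crude two-orbit (Benettin-style) estimate and is only illustrative of the data generation, not the paper's pipeline.

```python
# Generate standard-map trajectories (candidate network inputs) and crude
# chaotic/regular labels from a two-orbit Lyapunov estimate. Illustration only.
import numpy as np

def standard_map_trajectory(theta, p, K, n_steps):
    """Iterate p' = p + K sin(theta), theta' = theta + p' (both mod 2*pi)."""
    traj = np.empty((n_steps, 2))
    for i in range(n_steps):
        p = (p + K * np.sin(theta)) % (2 * np.pi)
        theta = (theta + p) % (2 * np.pi)
        traj[i] = theta, p
    return traj

def finite_time_lyapunov(theta, p, K, n_steps=2000, eps=1e-8):
    """Benettin-style estimate: track a companion orbit, renormalizing its distance each step."""
    th2, p2 = theta + eps, p
    total = 0.0
    for _ in range(n_steps):
        p = (p + K * np.sin(theta)) % (2 * np.pi)
        theta = (theta + p) % (2 * np.pi)
        p2 = (p2 + K * np.sin(th2)) % (2 * np.pi)
        th2 = (th2 + p2) % (2 * np.pi)
        dth = (th2 - theta + np.pi) % (2 * np.pi) - np.pi   # shortest separation on the torus
        dp = (p2 - p + np.pi) % (2 * np.pi) - np.pi
        d = max(np.hypot(dth, dp), 1e-300)
        total += np.log(d / eps)
        th2 = (theta + dth * eps / d) % (2 * np.pi)          # rescale back to distance eps
        p2 = (p + dp * eps / d) % (2 * np.pi)
    return total / n_steps

K = 0.97                                    # near the critical kick strength
rng = np.random.default_rng(1)
for _ in range(5):
    th0, p0 = rng.uniform(0, 2 * np.pi, size=2)
    lam = finite_time_lyapunov(th0, p0, K)
    traj = standard_map_trajectory(th0, p0, K, n_steps=100)  # short trajectory as input
    print(f"lambda ~ {lam:+.4f} -> {'chaotic' if lam > 0.02 else 'regular'} "
          f"(trajectory shape {traj.shape})")
```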
-
On the role of gradients for machine learning of molecular energies and forces Mach. Learn. Sci. Technol. Pub Date : 2020-10-23 Anders S Christensen and O Anatole von Lilienfeld
The accuracy of any machine learning potential can only be as good as the data used in the fitting process. The most efficient model therefore selects the training data that will yield the highest accuracy compared to the cost of obtaining the training data. We investigate the convergence of prediction errors of quantum machine learning models for organic molecules trained on energy and force labels
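The standard way to use both kinds of labels, against which energy-only training is usually compared, is a combined loss of the form (notation chosen here for illustration)

\[
\mathcal{L} = \sum_{i} \bigl(E_i - \hat{E}(\mathbf{R}_i)\bigr)^2 + \lambda \sum_{i} \bigl\lVert \mathbf{F}_i + \nabla_{\mathbf{R}_i} \hat{E}(\mathbf{R}_i) \bigr\rVert^2 ,
\]

where $\hat{E}$ is the model energy, $\mathbf{F}_i$ are the reference forces, and $\lambda$ weights the force contribution against the energy term.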
-
K-means-driven Gaussian Process data collection for angle-resolved photoemission spectroscopy Mach. Learn. Sci. Technol. Pub Date : 2020-10-22 Charles N Melton, Marcus M Noack, Taisuke Ohta, Thomas E Beechem, Jeremy Robinson, Xiaotian Zhang, Aaron Bostwick, Chris Jozwiak, Roland J Koch, Petrus H Zwart, Alexander Hexemer and Eli Rotenberg
We propose the combination of k-means clustering with Gaussian Process (GP) regression in the analysis and exploration of 4D angle-resolved photoemission spectroscopy (ARPES) data. Using cluster labels as the driving metric on which the GP is trained, this method allows us to reconstruct the experimental phase diagram from as low as 12% of the original dataset size. In addition to the phase diagram
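A toy version of the loop described here, using scikit-learn's KMeans and GaussianProcessRegressor on synthetic data (the real pipeline operates on measured ARPES spectra), as a sketch:

```python
# Toy sketch: k-means labels on sparse measurements drive a Gaussian process
# whose predictive uncertainty suggests where to measure next. Synthetic data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Synthetic scan: positions x in [0, 1] with 16-channel "spectra" from two phases.
x_all = np.linspace(0, 1, 200)
spectra = np.where(x_all[:, None] < 0.5,
                   np.sin(10 * x_all[:, None] + np.arange(16)),
                   np.cos(7 * x_all[:, None] + np.arange(16)))

measured = rng.choice(len(x_all), size=24, replace=False)     # sparse initial measurements
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(spectra[measured])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1), alpha=1e-3)
gp.fit(x_all[measured].reshape(-1, 1), labels.astype(float))

mean, std = gp.predict(x_all.reshape(-1, 1), return_std=True)
next_point = x_all[np.argmax(std)]      # acquisition: measure where the GP is least certain
print("suggested next scan position:", round(float(next_point), 3))
```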
-
Determination of latent dimensionality in international trade flow Mach. Learn. Sci. Technol. Pub Date : 2020-10-22 Duc P Truong, Erik Skau, Vladimir I Valtchinov and Boian S Alexandrov
Currently, high-dimensional data is ubiquitous in data science, which necessitates the development of techniques to decompose and interpret such multidimensional (aka tensor) datasets. Finding a low dimensional representation of the data, that is, its inherent structure, is one of the approaches that can serve to understand the dynamics of low dimensional latent features hidden in the data. Moreover
-
Deep reinforcement learning for optical systems: A case study of mode-locked lasers Mach. Learn. Sci. Technol. Pub Date : 2020-10-22 Chang Sun, Eurika Kaiser, Steven L Brunton and J Nathan Kutz
We demonstrate that deep reinforcement learning (deep RL) provides a highly effective strategy for the control and self-tuning of optical systems. Deep RL integrates the two leading machine learning architectures of deep neural networks and reinforcement learning to produce robust and stable learning for control. Deep RL is ideally suited for optical systems as the tuning and control relies on interactions
-
Uncovering interpretable relationships in high-dimensional scientific data through function preserving projections Mach. Learn. Sci. Technol. Pub Date : 2020-10-19 Shusen Liu, Rushil Anirudh, Jayaraman J Thiagarajan and Peer-Timo Bremer
In many fields of science and engineering, we frequently encounter experiment or simulation datasets that describe the behavior of complex systems, and uncovering human-interpretable patterns between their inputs and outputs via exploratory data analysis is essential for building intuition and facilitating discovery. Often, we resort to 2D embeddings for examining these high-dimensional relationships
-
Quantum computing model of an artificial neuron with continuously valued input data Mach. Learn. Sci. Technol. Pub Date : 2020-10-09 Stefano Mangini, Francesco Tacchino, Dario Gerace, Chiara Macchiavello and Daniele Bajoni
Artificial neural networks have been proposed as potential algorithms that could benefit from being implemented and run on quantum computers. In particular, they hold promise to greatly enhance Artificial Intelligence tasks, such as image elaboration or pattern recognition. The elementary building block of a neural network is an artificial neuron, i.e. a computational unit performing simple mathematical
-
Fast reconstruction of single-shot wide-angle diffraction images through deep learning Mach. Learn. Sci. Technol. Pub Date : 2020-10-08 T Stielow, R Schmidt, C Peltz, T Fennel and S Scheel
Single-shot x-ray imaging of short-lived nanostructures such as clusters and nanoparticles near a phase transition or non-crystallizing objects such as large proteins and viruses is currently the most elegant method for characterizing their structure. Using hard x-ray radiation provides scattering images that encode two-dimensional projections, which can be combined to identify the full three-dimensional
-
Baryon density extraction and isotropy analysis of cosmic microwave background using deep learning Mach. Learn. Sci. Technol. Pub Date : 2020-10-08 Amit Mishra, Pranath Reddy and Rahul Nigam
The discovery of the cosmic microwave background (CMB) was a paradigm shift in the study and fundamental understanding of the early Universe and also the Big Bang phenomenon. The cosmic microwave background is one of the richest and most intriguing sources of information available to cosmologists, and one parameter of special interest is the baryon density of the Universe. Baryon density can be primarily estimated by
-
Reducing autocorrelation times in lattice simulations with generative adversarial networks Mach. Learn. Sci. Technol. Pub Date : 2020-10-08 Jan M Pawlowski and Julian M Urban
Short autocorrelation times are essential for a reliable error assessment in Monte Carlo simulations of lattice systems. In many interesting scenarios, the decay of autocorrelations in the Markov chain is prohibitively slow. Generative samplers can provide statistically independent field configurations, thereby potentially ameliorating these issues. In this work, the applicability of neural samplers
-
Improving the generative performance of chemical autoencoders through transfer learning Mach. Learn. Sci. Technol. Pub Date : 2020-10-08 Nicolae C Iovanac and Brett M Savoie
Generative models are a sub-class of machine learning models that are capable of generating new samples with a target set of properties. In chemical and materials applications, these new samples might be drug targets, novel semiconductors, or catalysts constrained to exhibit an application-specific set of properties. Given their potential to yield high-value targets from otherwise intractable design
Contents have been reproduced by permission of the publishers.