-
A 4D-Var method with flow-dependent background covariances for the shallow-water equations Stat. Comput. (IF 2.324) Pub Date : 2022-08-11 Daniel Paulin, Ajay Jasra, Alexandros Beskos, Dan Crisan
-
On automatic bias reduction for extreme expectile estimation Stat. Comput. (IF 2.324) Pub Date : 2022-08-09 Stéphane Girard, Gilles Stupfler, Antoine Usseglio-Carleve
-
Fitting double hierarchical models with the integrated nested Laplace approximation Stat. Comput. (IF 2.324) Pub Date : 2022-08-09 Mabel Morales-Otero, Virgilio Gómez-Rubio, Vicente Núñez-Antón
-
Quantile hidden semi-Markov models for multivariate time series Stat. Comput. (IF 2.324) Pub Date : 2022-08-09 Luca Merlo, Antonello Maruotti, Lea Petrella, Antonio Punzo
-
Rate-optimal refinement strategies for local approximation MCMC Stat. Comput. (IF 2.324) Pub Date : 2022-08-09 Andrew D. Davis, Youssef Marzouk, Aaron Smith, Natesh Pillai
-
The computational asymptotics of Gaussian variational inference and the Laplace approximation Stat. Comput. (IF 2.324) Pub Date : 2022-08-09 Zuheng Xu, Trevor Campbell
-
Polya tree-based nearest neighborhood regression Stat. Comput. (IF 2.324) Pub Date : 2022-07-30 Haoxin Zhuang, Liqun Diao, Grace Yi
-
An adaptively weighted stochastic gradient MCMC algorithm for Monte Carlo simulation and global optimization Stat. Comput. (IF 2.324) Pub Date : 2022-07-09 Wei Deng, Guang Lin, Faming Liang
-
Identifiability and parameter estimation of the overlapped stochastic co-block model Stat. Comput. (IF 2.324) Pub Date : 2022-06-28 Jingnan Zhang, Junhui Wang
-
Parallelizing MCMC sampling via space partitioning Stat. Comput. (IF 2.324) Pub Date : 2022-06-27 Vasyl Hafych, Philipp Eller, Oliver Schulz, Allen Caldwell
-
Real time anomaly detection and categorisation Stat. Comput. (IF 2.324) Pub Date : 2022-06-24 Alexander T. M. Fisch, Lawrence Bardwell, Idris A. Eckley
-
Recursive inversion models for permutations Stat. Comput. (IF 2.324) Pub Date : 2022-06-20 Marina Meilă, Annelise Wagner, Christopher Meek
-
Parsimonious hidden Markov models for matrix-variate longitudinal data Stat. Comput. (IF 2.324) Pub Date : 2022-06-15 Salvatore D. Tomarchio, Antonio Punzo, Antonello Maruotti
-
High-dimensional regression with potential prior information on variable importance Stat. Comput. (IF 2.324) Pub Date : 2022-06-14 Benjamin G. Stokell, Rajen D. Shah
-
Joint latent space models for ranking data and social network Stat. Comput. (IF 2.324) Pub Date : 2022-06-13 Jiaqi Gu, Philip L. H. Yu
-
Accelerated parallel non-conjugate sampling for Bayesian non-parametric models Stat. Comput. (IF 2.324) Pub Date : 2022-06-11 Michael Minyi Zhang, Sinead A. Williamson, Fernando Pérez-Cruz
-
Particle gradient descent model for point process generation Stat. Comput. (IF 2.324) Pub Date : 2022-06-07 Antoine Brochard, Bartłomiej Błaszczyszyn, Sixin Zhang, Stéphane Mallat
-
Complexity of zigzag sampling algorithm for strongly log-concave distributions Stat. Comput. (IF 2.324) Pub Date : 2022-06-03 Jianfeng Lu, Lihan Wang
We study the computational complexity of the zigzag sampling algorithm for strongly log-concave distributions. The zigzag process has the advantage of not requiring time discretization for implementation, and each proposed bounce event requires only one evaluation of a partial derivative of the potential, while its convergence rate is dimension-independent. Using these properties, we prove that the …
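The discretization-free flavour of the zigzag process is easy to see in one dimension. Below is a toy sampler for a standard Gaussian target (potential U(x) = x²/2), where the integrated switching rate can be inverted in closed form; this is a minimal illustration of the mechanism the abstract describes, not the paper's construction, and the function name and defaults are ours.

```python
import math
import random

def zigzag_gaussian(n_events=20000, seed=1):
    """Toy 1D zigzag sampler for a standard Gaussian, U(x) = x^2 / 2.

    Each bounce time is drawn exactly by inverting the integrated
    switching rate max(0, v * (x + v * t)), so no time discretization
    is required.  Returns path averages of x and x^2."""
    random.seed(seed)
    x, v = 0.0, 1.0
    mean_x = mean_x2 = total_time = 0.0
    for _ in range(n_events):
        a = v * x                      # directional derivative of U along v
        e = random.expovariate(1.0)
        t = -a + math.sqrt(max(a, 0.0) ** 2 + 2.0 * e)
        # exact integrals of x and x^2 along the linear segment
        mean_x += x * t + v * t ** 2 / 2.0
        mean_x2 += x * x * t + v * x * t ** 2 + t ** 3 / 3.0
        total_time += t
        x += v * t                     # move to the bounce point
        v = -v                         # bounce: flip the velocity
    return mean_x / total_time, mean_x2 / total_time
```

The path averages approach the Gaussian moments (0 and 1) as the number of bounce events grows.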
-
A generalized likelihood-based Bayesian approach for scalable joint regression and covariance selection in high dimensions Stat. Comput. (IF 2.324) Pub Date : 2022-06-03 Srijata Samanta, Kshitij Khare, George Michailidis
-
Sklar’s Omega: A Gaussian copula-based framework for assessing agreement Stat. Comput. (IF 2.324) Pub Date : 2022-06-02 John Hughes
-
Rule-based Bayesian regression Stat. Comput. (IF 2.324) Pub Date : 2022-05-28 Themistoklis Botsas, Lachlan R. Mason, Indranil Pan
-
Optimally adaptive Bayesian spectral density estimation for stationary and nonstationary processes Stat. Comput. (IF 2.324) Pub Date : 2022-05-29 Nick James, Max Menzies
-
The node-wise Pseudo-marginal method: model selection with spatial dependence on latent graphs Stat. Comput. (IF 2.324) Pub Date : 2022-05-25 Denishrouf Thesingarajah, Adam M. Johansen
-
A comparison of likelihood-free methods with and without summary statistics Stat. Comput. (IF 2.324) Pub Date : 2022-05-19 Christopher Drovandi, David T. Frazier
-
Co-clustering of evolving count matrices with the dynamic latent block model: application to pharmacovigilance Stat. Comput. (IF 2.324) Pub Date : 2022-05-19 Giulia Marchello, Audrey Fresse, Marco Corneli, Charles Bouveyron
-
Importance conditional sampling for Pitman–Yor mixtures Stat. Comput. (IF 2.324) Pub Date : 2022-05-17 Antonio Canale, Riccardo Corradin, Bernardo Nipoti
-
Distributional anchor regression Stat. Comput. (IF 2.324) Pub Date : 2022-05-13 Lucas Kook, Beate Sick, Peter Bühlmann
-
Multilevel estimation of normalization constants using ensemble Kalman–Bucy filters Stat. Comput. (IF 2.324) Pub Date : 2022-05-04 Hamza Ruzayqat, Neil K. Chada, Ajay Jasra
-
Biclustering via structured regularized matrix decomposition Stat. Comput. (IF 2.324) Pub Date : 2022-04-29 Yan Zhong, Jianhua Z. Huang
-
Selecting the derivative of a functional covariate in scalar-on-function regression Stat. Comput. (IF 2.324) Pub Date : 2022-04-23 Giles Hooker, Han Lin Shang
-
Unbiased approximation of posteriors via coupled particle Markov chain Monte Carlo Stat. Comput. (IF 2.324) Pub Date : 2022-04-23 Willem van den Boom, Ajay Jasra, Maria De Iorio, Alexandros Beskos, Johan G. Eriksson
-
Eigenfunction martingale estimating functions and filtered data for drift estimation of discretely observed multiscale diffusions Stat. Comput. (IF 2.324) Pub Date : 2022-04-11 Assyr Abdulle, Grigorios A. Pavliotis, Andrea Zanoni
-
Cauchy Markov random field priors for Bayesian inversion Stat. Comput. (IF 2.324) Pub Date : 2022-03-25 Jarkko Suuronen, Neil K. Chada, Lassi Roininen
The use of Cauchy Markov random field priors in statistical inverse problems can potentially lead to posterior distributions which are non-Gaussian, high-dimensional, multimodal and heavy-tailed. In order to use such priors successfully, sophisticated optimization and Markov chain Monte Carlo methods are usually required. In this paper, our focus is largely on reviewing recently developed Cauchy difference …
-
Graphical test for discrete uniformity and its applications in goodness-of-fit evaluation and multiple sample comparison Stat. Comput. (IF 2.324) Pub Date : 2022-03-24 Teemu Säilynoja, Paul-Christian Bürkner, Aki Vehtari
Assessing goodness of fit to a given distribution plays an important role in computational statistics. The probability integral transformation (PIT) can be used to convert the question of whether a given sample originates from a reference distribution into a problem of testing for uniformity. We present new simulation- and optimization-based methods to obtain simultaneous confidence bands for the whole …
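The PIT step described here is simple to demonstrate with a plain Kolmogorov–Smirnov distance in place of the paper's simultaneous confidence bands: values pushed through the correct reference CDF should look Uniform(0,1). A small sketch under an assumed N(0,1) reference; all names below are ours.

```python
import math
import random

def normal_cdf(x):
    """CDF of the standard normal reference distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ks_uniform(u):
    """Kolmogorov-Smirnov distance between a sample and Uniform(0, 1)."""
    u = sorted(u)
    n = len(u)
    return max(max((i + 1) / n - ui, ui - i / n) for i, ui in enumerate(u))

random.seed(0)
n = 2000
# PIT of a well-specified sample vs. a location-shifted one
pit_good = [normal_cdf(random.gauss(0.0, 1.0)) for _ in range(n)]
pit_bad = [normal_cdf(random.gauss(0.5, 1.0)) for _ in range(n)]
d_good, d_bad = ks_uniform(pit_good), ks_uniform(pit_bad)
```

With the correct reference, `d_good` sits near the n^(-1/2) scale, while the shifted sample yields a distance close to sup_z |Φ(z) − Φ(z − 0.5)| ≈ 0.197.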
-
Fast Bayesian inversion for high dimensional inverse problems Stat. Comput. (IF 2.324) Pub Date : 2022-03-22 Benoit Kugler, Florence Forbes, Sylvain Douté
We investigate the use of learning approaches to handle Bayesian inverse problems in a computationally efficient way when the signals to be inverted have a moderately high number of dimensions and are available in large numbers. We propose a tractable inverse regression approach which has the advantage of producing full probability distributions as approximations of the target posterior distributions. In addition …
-
Sequential changepoint detection in neural networks with checkpoints Stat. Comput. (IF 2.324) Pub Date : 2022-03-13 Michalis K. Titsias, Jakub Sygnowski, Yutian Chen
We introduce a framework for online changepoint detection and simultaneous model learning which is applicable to highly parametrized models, such as deep neural networks. It is based on detecting changepoints across time by sequentially performing generalized likelihood ratio tests that require only evaluations of simple prediction score functions. This procedure makes use of checkpoints, consisting …
-
Low-rank tensor reconstruction of concentrated densities with application to Bayesian inversion Stat. Comput. (IF 2.324) Pub Date : 2022-03-12 Martin Eigel, Robert Gruhlke, Manuel Marschall
This paper presents a novel method for the accurate functional approximation of possibly highly concentrated probability densities. It is based on the combination of several modern techniques such as transport maps and low-rank approximations via a nonintrusive tensor train reconstruction. The central idea is to carry out computations for statistical quantities of interest such as moments based on …
-
Sparse functional partial least squares regression with a locally sparse slope function Stat. Comput. (IF 2.324) Pub Date : 2022-03-09 Tianyu Guan, Zhenhua Lin, Kevin Groves, Jiguo Cao
The partial least squares approach has been particularly successful in spectrometric prediction in chemometrics. By treating the spectral data as realizations of a stochastic process, the functional partial least squares can be applied. Motivated by the spectral data collected from oriented strand board furnish, we propose a sparse version of the functional partial least squares regression. The proposed …
-
GP-ETAS: semiparametric Bayesian inference for the spatio-temporal epidemic type aftershock sequence model Stat. Comput. (IF 2.324) Pub Date : 2022-03-08 Christian Molkenthin, Christian Donner, Sebastian Reich, Gert Zöller, Sebastian Hainzl, Matthias Holschneider, Manfred Opper
The spatio-temporal epidemic type aftershock sequence (ETAS) model is widely used to describe the self-exciting nature of earthquake occurrences. While traditional inference methods provide only point estimates of the model parameters, we aim at a fully Bayesian treatment of model inference, which naturally allows incorporating prior knowledge and uncertainty quantification of the resulting estimates …
-
On the identifiability of Bayesian factor analytic models Stat. Comput. (IF 2.324) Pub Date : 2022-02-27 Panagiotis Papastamoulis, Ioannis Ntzoufras
A well-known identifiability issue in factor analytic models is the invariance with respect to orthogonal transformations. This problem burdens the inference under a Bayesian setup, where Markov chain Monte Carlo (MCMC) methods are used to generate samples from the posterior distribution. We introduce a post-processing scheme in order to deal with rotation, sign and permutation invariance of the MCMC …
-
Optimal Bayesian design for model discrimination via classification Stat. Comput. (IF 2.324) Pub Date : 2022-02-22 Markus Hainy, David J. Price, Olivier Restif, Christopher Drovandi
Performing optimal Bayesian design for discriminating between competing models is computationally intensive as it involves estimating posterior model probabilities for thousands of simulated data sets. This issue is compounded further when the likelihood functions for the rival models are computationally expensive. A new approach using supervised classification methods is developed to perform Bayesian …
-
Optimal scaling of random walk Metropolis algorithms using Bayesian large-sample asymptotics Stat. Comput. (IF 2.324) Pub Date : 2022-02-18 Sebastian M. Schmon, Philippe Gagnon
High-dimensional limit theorems have proved useful for deriving tuning rules to find the optimal scaling in random walk Metropolis algorithms. The assumptions under which weak convergence results are proved are, however, restrictive: the target density is typically assumed to be of a product form. Users may thus doubt the validity of such tuning rules in practical applications. In this paper, …
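The tuning rules in question usually amount to adapting the proposal scale until the acceptance rate sits near 0.234. A generic stochastic-approximation sketch on a product-form standard Gaussian target (textbook adaptation, not the paper's large-sample analysis; the function name, gain sequence, and constants are ours):

```python
import math
import random

def adaptive_rwm(d=10, n_iter=20000, target_acc=0.234, seed=2):
    """Random-walk Metropolis on a d-dimensional standard Gaussian,
    adapting log(sigma) by Robbins-Monro toward the 0.234 rule."""
    random.seed(seed)
    x = [0.0] * d
    log_pi_x = -0.5 * sum(xi * xi for xi in x)
    log_sigma = 0.0
    accepted = 0
    for i in range(1, n_iter + 1):
        sigma = math.exp(log_sigma)
        y = [xi + sigma * random.gauss(0.0, 1.0) for xi in x]
        log_pi_y = -0.5 * sum(yi * yi for yi in y)
        acc = min(1.0, math.exp(log_pi_y - log_pi_x))
        if random.random() < acc:
            x, log_pi_x = y, log_pi_y
            accepted += 1
        # nudge the scale up when accepting too often, down otherwise
        log_sigma += (acc - target_acc) / i ** 0.6
    return math.exp(log_sigma), accepted / n_iter
```

For an i.i.d. Gaussian target the adapted scale should land near the classical 2.38/√d, i.e. about 0.75 for d = 10.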
-
A numerically stable algorithm for integrating Bayesian models using Markov melding Stat. Comput. (IF 2.324) Pub Date : 2022-02-18 Andrew A. Manderson, Robert J. B. Goudie
When statistical analyses consider multiple data sources, Markov melding provides a method for combining the source-specific Bayesian models. Markov melding joins together submodels that have a common quantity. One challenge is that the prior for this quantity can be implicit, and its prior density must be estimated. We show that error in this density estimate makes the two-stage Markov chain Monte …
-
Latent structure blockmodels for Bayesian spectral graph clustering Stat. Comput. (IF 2.324) Pub Date : 2022-02-16 Francesco Sanna Passino, Nicholas A. Heard
Spectral embedding of network adjacency matrices often produces node representations living approximately around low-dimensional submanifold structures. In particular, hidden substructure is expected to arise when the graph is generated from a latent position model. Furthermore, the presence of communities within the network might generate community-specific submanifold structures in the embedding …
-
Ensemble Kalman filter based sequential Monte Carlo sampler for sequential Bayesian inference Stat. Comput. (IF 2.324) Pub Date : 2022-02-15 Jiangqi Wu, Linjie Wen, Peter L. Green, Jinglai Li, Simon Maskell
Many real-world problems require one to estimate parameters of interest, in a Bayesian framework, from data that are collected sequentially in time. Conventional methods for sampling from posterior distributions, such as Markov chain Monte Carlo, cannot efficiently address such problems as they do not take advantage of the data’s sequential structure. To this end, sequential methods which seek to update …
-
Augmented pseudo-marginal Metropolis–Hastings for partially observed diffusion processes Stat. Comput. (IF 2.324) Pub Date : 2022-02-15 Andrew Golightly, Chris Sherlock
We consider the problem of inference for nonlinear, multivariate diffusion processes, satisfying Itô stochastic differential equations (SDEs), using data at discrete times that may be incomplete and subject to measurement error. Our starting point is a state-of-the-art correlated pseudo-marginal Metropolis–Hastings algorithm, which uses correlated particle filters to induce strong and positive correlation …
-
Graph matching beyond perfectly-overlapping Erdős–Rényi random graphs Stat. Comput. (IF 2.324) Pub Date : 2022-02-11 Yaofang Hu, Wanjie Wang, Yi Yu
Graph matching is a fruitful area in terms of both algorithms and theory. Given two graphs \(G_1 = (V_1, E_1)\) and \(G_2 = (V_2, E_2)\), where \(V_1\) and \(V_2\) are identical or largely overlapping under an unknown permutation \(\pi ^*\), graph matching seeks to recover the correct mapping \(\pi ^*\). In this paper, we exploit the degree information, which was previously used only in noiseless graphs and …
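A minimal, purely illustrative use of degree information for seedless matching: iteratively refine each node's label by the sorted labels of its neighbours (a 1-WL-style refinement of the degree sequence) and pair nodes with identical refined profiles. This succeeds only in the noiseless, asymmetric case, a far weaker setting than the paper's; all names below are ours.

```python
def to_adjacency(edges, n):
    """Adjacency lists for an undirected graph on nodes 0..n-1."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    return adj

def refine_profiles(adj, rounds=3):
    """Start from degrees, then repeatedly append the sorted profiles
    of the neighbours (a 1-WL-style refinement)."""
    prof = {v: (len(nbrs),) for v, nbrs in adj.items()}
    for _ in range(rounds):
        prof = {v: (prof[v], tuple(sorted(prof[u] for u in adj[v])))
                for v in adj}
    return prof

def match_by_profiles(adj1, adj2, rounds=3):
    """Pair nodes of two isomorphic graphs by their refined profiles
    (valid only when all profiles become distinct)."""
    p1, p2 = refine_profiles(adj1, rounds), refine_profiles(adj2, rounds)
    order1 = sorted(adj1, key=lambda v: p1[v])
    order2 = sorted(adj2, key=lambda v: p2[v])
    return dict(zip(order1, order2))

# demo: an asymmetric "spider" tree and a relabelled copy of it
edges = [(0, 1), (0, 2), (2, 3), (0, 4), (4, 5), (5, 6)]
true_pi = {0: 3, 1: 5, 2: 0, 3: 6, 4: 1, 5: 2, 6: 4}
adj1 = to_adjacency(edges, 7)
adj2 = to_adjacency([(true_pi[u], true_pi[v]) for u, v in edges], 7)
mapping = match_by_profiles(adj1, adj2)
```

Because the spider tree has legs of three distinct lengths, the refined profiles become unique and the true relabelling is recovered exactly.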
-
Correction to: A two-stage Bayesian semiparametric model for novelty detection with robust prior information Stat. Comput. (IF 2.324) Pub Date : 2022-02-10 Francesco Denti, Andrea Cappozzo, Francesca Greselin
-
Discriminative clustering with representation learning with any ratio of labeled to unlabeled data Stat. Comput. (IF 2.324) Pub Date : 2022-01-29 Corinne Jones, Vincent Roulet, Zaid Harchaoui
We present a discriminative clustering approach in which the feature representation can be learned from data and can moreover leverage labeled data. Representation learning can give a similarity-based clustering method the ability to automatically adapt to an underlying, yet hidden, geometric structure of the data. The proposed approach augments the DIFFRAC method with a representation learning capability …
-
Variance reduction for additive functionals of Markov chains via martingale representations Stat. Comput. (IF 2.324) Pub Date : 2022-01-27 D. Belomestny, E. Moulines, S. Samsonov
In this paper, we propose an efficient variance reduction approach for additive functionals of Markov chains relying on a novel discrete-time martingale representation. Our approach is fully non-asymptotic and does not require knowledge of the stationary distribution (or even any type of ergodicity) or any specific structure of the underlying density. By rigorously analyzing the convergence properties …
-
Hierarchical sparse Cholesky decomposition with applications to high-dimensional spatio-temporal filtering Stat. Comput. (IF 2.324) Pub Date : 2022-01-18 Marcin Jurek, Matthias Katzfuss
Spatial statistics often involves Cholesky decomposition of covariance matrices. To ensure scalability to high dimensions, several recent approximations have assumed a sparse Cholesky factor of the precision matrix. We propose a hierarchical Vecchia approximation, whose conditional-independence assumptions imply sparsity in the Cholesky factors of both the precision and the covariance matrix. This …
-
Exact and computationally efficient Bayesian inference for generalized Markov modulated Poisson processes Stat. Comput. (IF 2.324) Pub Date : 2022-01-06 Flávio B. Gonçalves, Lívia M. Dutra, Roger W. C. Silva
Statistical modeling of temporal point patterns is an important problem in several areas. The Cox process, a Poisson process where the intensity function is stochastic, is a common model for such data. We present a new class of unidimensional Cox process models in which the intensity function assumes parametric functional forms that switch according to a continuous-time Markov chain. A novel methodology …
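The switching-intensity construction can be simulated forward directly by exploiting the memorylessness of the exponential distribution. A two-state toy version (a simulator only, unrelated to the paper's exact inference methodology; names and defaults are ours):

```python
import random

def simulate_mmpp(rates, switch_rates, t_end, seed=3):
    """Simulate a two-state Markov-modulated Poisson process.

    The event intensity is rates[s] while the hidden chain is in
    state s; the chain leaves state s at rate switch_rates[s].
    Discarding a pending event time at a switch is valid because
    exponential waiting times are memoryless."""
    random.seed(seed)
    t, state = 0.0, 0
    next_switch = random.expovariate(switch_rates[state])
    events = []
    while t < t_end:
        dt = random.expovariate(rates[state])
        if t + dt < next_switch:
            t += dt
            if t < t_end:
                events.append(t)
        else:
            t = next_switch
            state = 1 - state
            next_switch = t + random.expovariate(switch_rates[state])
    return events
```

With intensities (1, 10) and symmetric switching rates, the chain spends half its time in each state, so the long-run event rate is near the average 5.5.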
-
Point process simulation of generalised inverse Gaussian processes and estimation of the Jaeger integral Stat. Comput. (IF 2.324) Pub Date : 2021-12-29 Simon Godsill, Yaman Kındap
In this paper, novel simulation methods are provided for the generalised inverse Gaussian (GIG) Lévy process. Such processes are intractable for simulation except in certain special edge cases, since the Lévy density associated with the GIG process is expressed as an integral involving certain Bessel functions, known as the Jaeger integral in diffusive transport applications. We here show for the first …
-
Product-form estimators: exploiting independence to scale up Monte Carlo Stat. Comput. (IF 2.324) Pub Date : 2021-12-21 Juan Kuntz, Francesca R. Crucinio, Adam M. Johansen
We introduce a class of Monte Carlo estimators that aim to overcome the rapid growth of variance with dimension often observed for standard estimators by exploiting the target’s independence structure. We identify the most basic incarnations of these estimators with a class of generalized U-statistics and thus establish their unbiasedness, consistency, and asymptotic normality. Moreover, we show that …
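The most basic incarnation is the two-factor case: for independent X and Y, the product of marginal sample means is an unbiased estimator of E[f(X)]E[g(Y)] that implicitly averages f(x_i)g(y_j) over all pairs of draws, typically reducing variance relative to the paired average. A sketch (our toy setup, not the paper's general U-statistic framework):

```python
def paired_estimator(xs, ys, f, g):
    """Standard Monte Carlo: average f(x_i) * g(y_i) over paired draws."""
    return sum(f(x) * g(y) for x, y in zip(xs, ys)) / len(xs)

def product_form_estimator(xs, ys, f, g):
    """Product of marginal averages; unbiased when X and Y are
    independent, and equivalent to averaging over all pairs (i, j)."""
    fx = sum(f(x) for x in xs) / len(xs)
    gy = sum(g(y) for y in ys) / len(ys)
    return fx * gy
```

For f(t) = g(t) = t² and independent standard Gaussian inputs, the target is E[X²]E[Y²] = 1, and the product form roughly halves the mean squared error in repeated runs.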
-
Wavelet-based robust estimation and variable selection in nonparametric additive models Stat. Comput. (IF 2.324) Pub Date : 2021-12-21 Umberto Amato, Anestis Antoniadis, Italia De Feis, Irène Gijbels
This article studies M-type estimators for fitting robust additive models in the presence of anomalous data. The components in the additive model are allowed to have different degrees of smoothness. We introduce a new class of wavelet-based robust M-type estimators for performing simultaneous additive component estimation and variable selection in such inhomogeneous additive models. Each additive component …
-
The recursive variational Gaussian approximation (R-VGA) Stat. Comput. (IF 2.324) Pub Date : 2021-12-20 Marc Lambert, Silvère Bonnabel, Francis Bach
We consider the problem of computing a Gaussian approximation to the posterior distribution of a parameter given N observations and a Gaussian prior. Owing to the need to process large sample sizes N, a variety of approximate tractable methods revolving around online learning have flourished over the past decades. In the present work, we propose to use variational inference to compute a Gaussian …
-
Learning from missing data with the binary latent block model Stat. Comput. (IF 2.324) Pub Date : 2021-12-20 Gabriel Frisch, Jean-Benoist Leger, Yves Grandvalet
Missing data can be informative. Ignoring this information can lead to misleading conclusions when the data model does not allow information to be extracted from the missing data. We propose a co-clustering model, based on the binary Latent Block Model, that aims to take advantage of these nonignorable nonresponses, also known as Missing Not At Random data. A variational expectation–maximization algorithm …
-
A Riemannian Newton trust-region method for fitting Gaussian mixture models Stat. Comput. (IF 2.324) Pub Date : 2021-12-17 Lena Sembach, Jan Pablo Burgard, Volker Schulz
Gaussian mixture models are a powerful tool in data science and statistics, mainly used for clustering and density approximation. The task of estimating the model parameters is in practice often solved by the expectation–maximization (EM) algorithm, which has the benefit of simplicity and low per-iteration cost. However, EM converges slowly if there is a large share of hidden information …
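To make the simplicity and low per-iteration cost of EM concrete, here is a minimal two-component, one-dimensional EM with closed-form E- and M-steps (plain EM, not the Riemannian Newton trust-region method the paper proposes; names and defaults are ours):

```python
import math

def em_gmm_1d(data, n_iter=200):
    """Fit a two-component 1D Gaussian mixture by EM.

    Returns (weights, means, variances)."""
    mu = [min(data), max(data)]        # crude but well-separated init
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibilities of component 0 for each point
        resp = []
        for x in data:
            dens = [w[k] / math.sqrt(2.0 * math.pi * var[k])
                    * math.exp(-(x - mu[k]) ** 2 / (2.0 * var[k]))
                    for k in (0, 1)]
            resp.append(dens[0] / (dens[0] + dens[1]))
        # M-step: weighted moment updates for both components
        for k, rk in ((0, resp), (1, [1.0 - r for r in resp])):
            s = sum(rk)
            mu[k] = sum(r * x for r, x in zip(rk, data)) / s
            var[k] = sum(r * (x - mu[k]) ** 2 for r, x in zip(rk, data)) / s
            w[k] = s / len(data)
    return w, mu, var
```

On well-separated data the fixed iteration budget is ample; the slow-convergence regime the abstract mentions arises when the components overlap heavily.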
-
Stochastic approximation cut algorithm for inference in modularized Bayesian models Stat. Comput. (IF 2.324) Pub Date : 2021-12-06 Yang Liu, Robert J. B. Goudie
Bayesian modelling enables us to accommodate complex forms of data and make a comprehensive inference, but the effect of partial misspecification of the model is a concern. One approach in this setting is to modularize the model and prevent feedback from suspect modules, using a cut model. After observing data, this leads to the cut distribution, which normally does not have a closed form. Previous …
-
Emulation-accelerated Hamiltonian Monte Carlo algorithms for parameter estimation and uncertainty quantification in differential equation models Stat. Comput. (IF 2.324) Pub Date : 2021-11-23 L. Mihaela Paun, Dirk Husmeier
We propose to accelerate Hamiltonian and Lagrangian Monte Carlo algorithms by coupling them with Gaussian processes for emulation of the log unnormalised posterior distribution. We provide proofs of detailed balance with respect to the exact posterior distribution for these algorithms, and validate the correctness of the samplers’ implementation by Geweke consistency tests. We implement these algorithms …