
Data-driven efficient solvers for Langevin dynamics on manifold in high dimensions Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-09-26
Yuan Gao, Jian-Guo Liu, Nan Wu
We study the Langevin dynamics of a physical system with manifold structure M ⊂ R^p based on collected sample points {x_i}_{i=1}^n ⊂ M that probe the unknown manifold M. Through the diffusion map, we first learn the reaction coordinates {y_i}_{i=1}^n ⊂ N corresponding to {x_i}_{i=1}^n, where N is a manifold diffeomorphic to M and isometrically embedded in R^ℓ with ℓ ≪ p. The induced Langevin dynamics on N in terms of the reaction…
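The diffusion-map step the abstract refers to is a standard construction; the sketch below is not the authors' solver, just a minimal illustration of how reaction coordinates are typically extracted from samples (function name and the circle example are mine):

```python
import numpy as np

def diffusion_map(X, eps, n_coords=2, t=1):
    """Minimal diffusion-map embedding: Gaussian affinity kernel,
    row-normalization to a Markov matrix, eigendecomposition, and
    projection onto the leading non-trivial eigenvectors."""
    # pairwise squared distances between sample points
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-D2 / eps)
    # normalize rows so P is a diffusion (Markov) operator
    P = K / K.sum(axis=1, keepdims=True)
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # drop the trivial constant eigenvector; scale by eigenvalue^t
    return vecs[:, 1:n_coords + 1] * vals[1:n_coords + 1] ** t

# sample points on a circle: a 1-d manifold embedded in R^2
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
X = np.c_[np.cos(theta), np.sin(theta)]
Y = diffusion_map(X, eps=0.3)
```

The embedding Y plays the role of the reaction coordinates {y_i}: a low-dimensional parametrization learned purely from the samples.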

Positive definite multikernels for scattered data interpolations Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-09-21
Qi Ye
In this article, we use the knowledge of positive definite tensors to develop a concept of positive definite multikernels to construct the kernel-based interpolants of scattered data. By the techniques of reproducing kernel Banach spaces, the optimal recoveries and error analysis of the kernel-based interpolants are shown for a special class of strictly positive definite multikernels.

Injectivity of Gabor phase retrieval from lattice measurements Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-09-07
Philipp Grohs, Lukas Liehr
We establish novel uniqueness results for the Gabor phase retrieval problem: if G : L^2(R) → L^2(R^2) denotes the Gabor transform, then every f ∈ L^4[−c/2, c/2] is determined up to a global phase by the values Gf(x,ω), where (x,ω) are points on the lattice b^{−1}Z × (2c)^{−1}Z and b > 0 is an arbitrary positive constant. This for the first time shows that compactly supported, complex-valued functions can be uniquely reconstructed…

On the frame set of the second-order B-spline Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-09-06
A. Ganiou D. Atindehou, Christina Frederick, Yébéni B. Kouagou, Kasso A. Okoudjou
The frame set of a function g ∈ L^2(R) is the set of all parameters (a,b) ∈ R_+^2 for which the collection of time-frequency shifts of g along aZ × bZ forms a Gabor frame for L^2(R). Finding the frame set of a given function remains a challenging open problem in time-frequency analysis. In this paper, we establish new regions of the frame set of the second-order B-spline. Our method uses the compact support of…

Understanding neural networks with reproducing kernel Banach spaces Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-09-05
Francesca Bartolucci, Ernesto De Vito, Lorenzo Rosasco, Stefano Vigogna
Characterizing the function spaces corresponding to neural networks can provide a way to understand their properties. In this paper we discuss how the theory of reproducing kernel Banach spaces can be used to tackle this challenge. In particular, we prove a representer theorem for a wide class of reproducing kernel Banach spaces that admit a suitable integral representation and include one hidden layer…

Disentangling modes with crossover instantaneous frequencies by synchrosqueezed chirplet transforms, from theory to application Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-08-28
Ziyu Chen, Hau-Tieng Wu
Analysis of signals with oscillatory modes with crossover instantaneous frequencies is a challenging problem in time series analysis. One way to handle this problem is lifting the 2-dimensional time-frequency representation to a 3-dimensional representation, called the time-frequency-chirp rate (TFC) representation, by adding one extra chirp rate parameter so that crossover frequencies are disentangled…

Generalization bounds for sparse random feature expansions Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-08-28
Abolfazl Hashemi, Hayden Schaeffer, Robert Shi, Ufuk Topcu, Giang Tran, Rachel Ward
Random feature methods have been successful in various machine learning tasks, are easy to compute, and come with theoretical accuracy bounds. They serve as an alternative approach to standard neural networks since they can represent similar function spaces without a costly training phase. However, for accuracy, random feature methods require more measurements than trainable parameters, limiting their…

Stable recovery of entangled weights: Towards robust identification of deep neural networks from minimal samples Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-08-24
Christian Fiedler, Massimo Fornasier, Timo Klock, Michael Rauchensteiner
In this paper we approach the problem of unique and stable identifiability, from a finite number of input-output samples, of generic feedforward deep artificial neural networks of prescribed architecture with pyramidal shape up to the penultimate layer and smooth activation functions. More specifically, we introduce the so-called entangled weights, which compose weights of successive layers intertwined…

Regularization of inverse problems by filtered diagonal frame decomposition Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-08-24
Andrea Ebner, Jürgen Frikel, Dirk Lorenz, Johannes Schwab, Markus Haltmeier
Inverse problems are at the heart of many practical problems such as image reconstruction or non-destructive testing. A characteristic feature is their instability with respect to data perturbations. To stabilize the inversion process, regularization methods must be developed and applied. In this paper, we introduce the concept of filtered diagonal frame decomposition, which extends the classical filtered…

On 2-dimensional mobile sampling Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-08-10
Alexander Rashkovskii, Alexander Ulanovskii, Ilya Zlotnikov
Necessary and sufficient conditions are presented for several families of planar curves to form a set of stable sampling for the Bernstein space B_Ω over a convex set Ω ⊂ R^2. These conditions ‘essentially’ describe the mobile sampling property of these families for the Paley-Wiener spaces PW_Ω^p, 1 ≤ p < ∞.

Analysis of a direct separation method based on adaptive chirplet transform for signals with crossover instantaneous frequencies Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-08-08
Charles K. Chui, Qingtang Jiang, Lin Li, Jian Lu
In many applications, it is necessary to retrieve the sub-signal building blocks of a multicomponent signal, which is usually non-stationary in real-world and real-life applications. Empirical mode decomposition (EMD), synchrosqueezing transform (SST), signal separation operation (SSO), and iterative filtering decomposition (IFD) have been proposed and developed for this purpose. However, these computational…

Recurrence of optimum for training weight and activation quantized networks Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-08-04
Ziang Long, Penghang Yin, Jack Xin
Deep neural networks (DNNs) are quantized for efficient inference on resource-constrained platforms. However, training deep learning models with low-precision weights and activations involves a demanding optimization task, which calls for minimizing a stage-wise loss function subject to a discrete set constraint. While numerous training methods have been proposed, existing studies for full quantization…

The WQN algorithm to adaptively correct artifacts in the EEG signal Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-08-01
Matteo Dora, Stéphane Jaffard, David Holcman
Wavelet quantile normalization (WQN) is a nonparametric algorithm designed to remove transient artifacts from single-channel EEG in real time. EEG monitoring machines suspend their output when artifacts in the signal are detected. Removing unpredictable EEG artifacts can improve the continuity of monitoring. We analyse here the WQN algorithm, which consists in transporting wavelet coefficient distributions…

Rates of convergence of randomized Kaczmarz algorithms in Hilbert spaces Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-07-26
Xin Guo, Junhong Lin, Ding-Xuan Zhou
The randomized Kaczmarz algorithm (RK) has recently drawn much attention because of its low computational complexity and modest memory requirements. Many existing analyses focus on the behavior of RK in Euclidean spaces and typically derive exponential convergence rates whose base tends to one as the condition number of the system increases. The dependence on the condition number…
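For reference, the finite-dimensional randomized Kaczmarz iteration this paper generalizes is short enough to state in full; the sketch below follows the standard Strohmer–Vershynin variant (row sampled with probability proportional to its squared norm), not the Hilbert-space setting of the paper:

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iters=2000, seed=0):
    """Randomized Kaczmarz for a consistent system Ax = b: each step
    projects the iterate onto the hyperplane of one equation, chosen
    with probability proportional to its squared row norm."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms2 = (A ** 2).sum(axis=1)
    probs = row_norms2 / row_norms2.sum()
    x = np.zeros(n)
    for _ in range(n_iters):
        i = rng.choice(m, p=probs)
        # orthogonal projection onto {z : <a_i, z> = b_i}
        x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
x_true = rng.standard_normal(10)
x_hat = randomized_kaczmarz(A, A @ x_true)
```

The expected squared error contracts by a factor 1 − σ_min(A)²/‖A‖_F² per step, which is exactly the condition-number dependence the abstract discusses.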

The springback penalty for robust signal recovery Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-07-20
Congpei An, Hao-Ning Wu, Xiaoming Yuan
We propose a new penalty, the springback penalty, for constructing models to recover an unknown signal from incomplete and inaccurate measurements. Mathematically, the springback penalty is a weakly convex function. It bears various theoretical and computational advantages of both the benchmark convex ℓ1 penalty and many of its nonconvex surrogates that have been well studied in the literature. We…

Toric symplectic geometry and full spark frames Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-07-18
Tom Needham, Clayton Shonkwiler
The collection of d×N complex matrices with prescribed column norms and singular values forms an algebraic variety, which we refer to as a frame space. Elements of frame spaces—i.e., frames—are used to give robust signal representations, so that geometrical properties of frame spaces are of interest to the signal processing community. This paper is concerned with the question: what is the probability…

Super-resolution wavelets for recovery of arbitrarily close point-masses with arbitrarily small coefficients Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-07-11
Charles K. Chui
Three families of super-resolution (SR) wavelets Ψ_{v,n}^g(x), Ψ_{u,n,m}^s(x) and Ψ_{w,n,m}^{ds}(x), to be called Gaussian SR (GSR), spline SR (SSR) and dual-spline SR (DSSR) wavelets, respectively, are introduced in this paper for resolving the super-resolution problem of recovering any point-mass h(y) = ∑_{ℓ=1}^L c_ℓ δ(y − σ_ℓ), with σ_ℓ − σ_k ≥ η for ℓ ≠ k, σ_ℓ ≠ 0, and c_ℓ > η* for all ℓ, k = 1, …, L, where η > 0 and η* > 0 are allowed to…

Complete interpolating sequences for the Gaussian shift-invariant space Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-07-08
Anton Baranov, Yurii Belov, Karlheinz Gröchenig
We give a full description of complete interpolating sequences for the shift-invariant space generated by the Gaussian. As a consequence, we rederive the known density conditions for sampling and interpolation.

Eigen-convergence of Gaussian kernelized graph Laplacian by manifold heat interpolation Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-06-30
Xiuyuan Cheng, Nan Wu
We study the spectral convergence of graph Laplacians to the Laplace-Beltrami operator when the kernelized graph affinity matrix is constructed from N random samples on a d-dimensional manifold in an ambient Euclidean space. By analyzing Dirichlet form convergence and constructing candidate approximate eigenfunctions via convolution with the manifold heat kernel, we prove eigen-convergence with rates as…

A noncommutative approach to the graphon Fourier transform Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-06-28
Mahya Ghandehari, Jeannette Janssen, Nauzer Kalyaniwalla
Signal analysis on graphs relies heavily on the graph Fourier transform, which is defined as the projection of a signal onto an eigenbasis of the associated shift operator. Large graphs of similar structure may be represented by a graphon. Theoretically, graphons are limit objects of converging sequences of graphs. Our work extends previous research proposing a common scheme for signal analysis of…

Gradient projection Newton pursuit for sparsity constrained optimization Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-06-20
Shenglong Zhou
Hard-thresholding-based algorithms have seen various advantages for sparse optimization in controlling the sparsity and allowing for fast computation. Recent research shows that when techniques of Newton-type methods are integrated, their numerical performance can be surprisingly improved. This paper develops a gradient projection Newton pursuit algorithm that mainly adopts the hard-thresholding…

A sufficient condition for mobile sampling in terms of surface density Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-06-08
Benjamin Jaye, Mishko Mitkovski
We provide a sufficient condition for sets of mobile sampling in terms of the surface density of the set.

Nonconvex regularization for sparse neural networks Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-06-03
Konstantin Pieper, Armenak Petrosyan
Convex ℓ1 regularization using an infinite dictionary of neurons has been suggested for constructing neural networks with desired approximation guarantees, but can be affected by an arbitrary amount of overparametrization. This can lead to a loss of sparsity and result in networks with too many active neurons for the given data, in particular if the number of data samples is large. As a remedy, in…

AP-frames and stationary random processes Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-06-02
Hernán D. Centeno, Juan M. Medina
It is known that, in general, an AP-frame is an L^2(R)-frame and conversely. Here, in part as a consequence of the Ergodic Theorem, we prove a necessary and sufficient condition for a Gabor system {g(t−k)e^{il(t−k)} : l ∈ L = ω_0 Z, k ∈ K = t_0 Z} to be an L^2(R)-frame in terms of Gaussian stationary random processes. In addition, if X = (X(t))_{t∈R} is a wide-sense stationary random process, we study density conditions for…

Biorthogonal Greedy Algorithms in convex optimization Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-05-19
A.V. Dereventsov, V.N. Temlyakov
The study of greedy approximation in the context of convex optimization is becoming a promising research direction, as greedy algorithms are actively being employed to construct sparse minimizers for convex functions with respect to given sets of elements. In this paper we propose a unified way of analyzing a certain kind of greedy-type algorithms for the minimization of convex functions on Banach spaces…

Two families of compactly supported Parseval framelets in L^2(R^d) Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-05-13
A. San Antolín, R.A. Zalik
For any dilation matrix with integral entries A ∈ R^{d×d}, d ≥ 1, we construct two families of Parseval wavelet frames in L^2(R^d). Both families have compact support and any desired number of vanishing moments. The first family has det A generators. The second family has any desired degree of regularity. For the members of this family, the number of generators depends on the dilation matrix A and the dimension…

Divergence-free quasi-interpolation Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-05-04
Wenwu Gao, Gregory E. Fasshauer, Nicholas Fisher
Divergence-free interpolation has been extensively studied and widely used in approximating vector-valued functions that are divergence-free. However, so far the literature contains no treatment of divergence-free quasi-interpolation. The aims of this paper are twofold: to construct an analytically divergence-free quasi-interpolation scheme and to derive its simultaneous approximation orders to both…

A tailor-made 3-dimensional directional Haar semi-tight framelet for pMRI reconstruction Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-04-27
Yan-Ran Li, Lixin Shen, Xiaosheng Zhuang
In this paper, we propose a model for parallel magnetic resonance imaging (pMRI) reconstruction, regularized by a carefully designed tight framelet system, that can lead to reconstructed images with much fewer artifacts in comparison to those from existing models. Our model is motivated by the observations that each receiver coil in a pMRI system is more sensitive to the specific object nearest to…

Robust approach for blind separation of noisy mixtures of independent and dependent sources Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-04-14
A. Ghazdali, A. Ourdou, M. Hakim, A. Laghrib, N. Mamouni, A. Metrane
The aim of this article is to introduce a new efficient Blind Source Separation (BSS) method that handles mixtures of noise-contaminated independent/dependent sources. To achieve this, one minimizes a criterion that fuses a separating part, based on the Kullback–Leibler divergence, to set apart the observed mixtures of either dependent or independent sources, with a regularization part…

Hirschman–Widder densities Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-04-12
Alexander Belton, Dominique Guillot, Apoorva Khare, Mihai Putinar
Hirschman and Widder introduced a class of Pólya frequency functions given by linear combinations of one-sided exponential functions. The members of this class are probability densities, and the class is closed under convolution but not under pointwise multiplication. We show that, generically, a polynomial function of such a density is a Pólya frequency function only if the polynomial is a homothety…

Graph signal interpolation with positive definite graph basis functions Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-04-08
Wolfgang Erb
For the interpolation of graph signals with generalized shifts of a graph basis function (GBF), we introduce the concept of positive definite functions on graphs. This concept merges kernel-based interpolation with spectral theory on graphs and can be regarded as a graph analog of radial basis function interpolation in Euclidean spaces or spherical basis functions. We provide several descriptions of…

A shape-preserving C^2 nonlinear, non-uniform subdivision scheme with fourth-order accuracy Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-03-28
Hyoseon Yang, Jungho Yoon
The objective of this study is to present a shape-preserving nonlinear subdivision scheme generalizing the exponential B-spline of degree 3, which is a piecewise exponential polynomial with the same support as the cubic B-spline. The subdivision of the exponential B-spline has a crucial limitation in that it can reproduce at most two exponential polynomials, yielding approximation order two. Also…

Filament plots for data visualization Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-03-26
Nate Strawn
The efficiency of modern computer graphics allows us to explore collections of space curves simultaneously with “drag-to-rotate” interfaces. This inspires us to replace “scatterplots of points” with “scatterplots of curves” to simultaneously visualize relationships across an entire dataset. Since spaces of curves are infinite-dimensional, scatterplots of curves avoid the “lossy” nature of scatterplots…

The restricted isometry property of block diagonal matrices for group-sparse signal recovery Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-03-24
Niklas Koep, Arash Behboodi, Rudolf Mathar
Group-sparsity is a common low-complexity signal model with widespread application across various domains of science and engineering. The recovery of such signal ensembles from compressive measurements has been extensively studied in the literature under the assumption that measurement operators are modeled as densely populated random matrices. In this paper, we turn our attention to an acquisition…

A one-bit, comparison-based gradient estimator Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-03-21
HanQin Cai, Daniel McKenzie, Wotao Yin, Zhenliang Zhang
We study zeroth-order optimization for convex functions where we further assume that function evaluations are unavailable. Instead, one only has access to a comparison oracle, which, given two points x and y, returns a single bit of information indicating which point has the larger function value, f(x) or f(y). By treating the gradient as an unknown signal to be recovered, we show how one can use tools from…
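To make the comparison-oracle setting concrete, here is a minimal sketch (not the authors' estimator) of how a gradient direction can be recovered from one-bit comparisons: for Gaussian directions u, E[sign(⟨∇f, u⟩) u] is proportional to ∇f/‖∇f‖, so averaging signed directions recovers the direction. All names and the quadratic test function are illustrative:

```python
import numpy as np

def one_bit_grad_direction(comparison, x, n_queries=2000, delta=1e-3, seed=0):
    """Estimate the gradient direction of an unseen f at x using only a
    comparison oracle: comparison(a, b) == True iff f(a) > f(b)."""
    rng = np.random.default_rng(seed)
    d = x.size
    g = np.zeros(d)
    for _ in range(n_queries):
        u = rng.standard_normal(d)
        # one bit per query: which of the two probe points is higher?
        s = 1.0 if comparison(x + delta * u, x - delta * u) else -1.0
        g += s * u
    g /= n_queries
    return g / np.linalg.norm(g)

# the oracle evaluates f privately; the estimator never sees values
a = np.arange(1.0, 11.0)
f = lambda z: np.sum((z - a) ** 2)
oracle = lambda p, q: f(p) > f(q)
x0 = np.zeros(10)
direction = one_bit_grad_direction(oracle, x0)
true_dir = (2 * (x0 - a)) / np.linalg.norm(2 * (x0 - a))
```

Only the direction (not the magnitude) of the gradient is identifiable from comparisons, which is why such methods pair the estimate with a line search or a fixed step size.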

Analysis vs synthesis with structure – An investigation of union of subspace models on graphs Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-03-16
M.S. Kotzagiannidis, M.E. Davies
We consider the problem of characterizing the ‘duality gap’ between sparse synthesis- and cosparse analysis-driven signal models through the lens of spectral graph theory, in an effort to comprehend their precise equivalencies and discrepancies. By detecting and exploiting the inherent connectivity structure, and hence distinct set of properties, of rank-deficient graph difference matrices such as…

Nonlinear wavelet-based estimation to spectral density for stationary non-Gaussian linear processes Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-03-07
Linyuan Li, Biao Zhang
Nonlinear wavelet-based estimators for spectral densities of non-Gaussian linear processes are considered. The convergence rates of the mean integrated squared error (MISE) for these estimators over a large range of Besov function classes are derived, and it is shown that these rates match the minimax lower bounds in the standard nonparametric regression model up to a logarithmic term. Thus, those…

Improved spectral convergence rates for graph Laplacians on ε-graphs and k-NN graphs Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-03-04
Jeff Calder, Nicolás García Trillos
In this paper we improve the spectral convergence rates for graph-based approximations of weighted Laplace-Beltrami operators constructed from random data. We utilize regularity of the continuum eigenfunctions and strong pointwise consistency results to prove that spectral convergence rates are the same as the pointwise consistency rates for graph Laplacians. In particular, for an optimal choice of…

Irregular Gabor frames of Cauchy kernels Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-03-01
Yurii Belov, Aleksei Kulikov, Yurii Lyubarskii
The reason we wrote this note is twofold. First, in contrast to the (now) classical rectangular lattices αZ × βZ, not much is known about irregular ones Λ × M. The recent breakthrough related to semi-regular lattices of the form Λ × βZ has been achieved in [1], where the authors considered the Gabor frames, generated by Gaussian totally positive functions of finite type. We also refer the reader to [1]…

Near-optimal performance bounds for orthogonal and permutation group synchronization via spectral methods Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-02-22
Shuyang Ling
Group synchronization asks to recover group elements from their pairwise measurements. It has found numerous applications across various scientific disciplines. In this work, we focus on orthogonal and permutation group synchronization, which are widely used in computer vision tasks such as object matching and structure from motion. Among the many available approaches, the spectral methods have enjoyed great…

On the numerical evaluation of the prolate spheroidal wave functions of order zero Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-02-22
James Bremer
We describe a method for the numerical evaluation of the angular prolate spheroidal wave functions of the first kind of order zero. It is based on the observation that underlies the WKB method, namely that many second-order differential equations admit solutions whose logarithms can be represented much more efficiently than the solutions themselves. However, rather than exploiting this fact to construct…

An O(1) algorithm for the numerical evaluation of the Sturm-Liouville eigenvalues of the spheroidal wave functions of order zero Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-02-18
Rafeh Rehan, James Bremer
In addition to being the eigenfunctions of the restricted Fourier operator, the angular spheroidal wave functions of the first kind of order zero and nonnegative integer characteristic exponents are the solutions of a singular self-adjoint Sturm-Liouville problem. The running time of the standard algorithm for the numerical evaluation of their Sturm-Liouville eigenvalues grows with both bandlimit and…

Wigner analysis of operators. Part I: Pseudodifferential operators and wave fronts Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-02-01
Elena Cordero, Luigi Rodino
We perform Wigner analysis of linear operators. Namely, the standard time-frequency representation, the short-time Fourier transform (STFT), is replaced by the A-Wigner distribution defined by W_A(f) = μ(A)(f ⊗ f̄), where A is a 4d×4d symplectic matrix and μ(A) is an associated metaplectic operator. Basic examples are given by the so-called τ-Wigner distributions. Such representations provide a new characterization…

Analysis and algorithms for ℓp-based semi-supervised learning on graphs Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-01-31
Mauricio Flores, Jeff Calder, Gilad Lerman
This paper addresses theory and applications of ℓp-based Laplacian regularization in semi-supervised learning. The graph p-Laplacian for p > 2 has recently been proposed as a replacement for the standard (p = 2) graph Laplacian in semi-supervised learning problems with very few labels, where Laplacian learning is degenerate. In the first part of the paper we prove new discrete-to-continuum convergence…

Multivariate Vandermonde matrices with separated nodes on the unit circle are stable Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-01-15
Stefan Kunis, Dominik Nagel, Anna Strotmann
We prove explicit lower bounds for the smallest singular value and upper bounds for the condition number of rectangular, multivariate Vandermonde matrices with scattered nodes on the complex unit circle. Analogously to the Shannon-Nyquist criterion, the nodes are assumed to be separated by a constant divided by the used polynomial degree. If this constant grows linearly with the spatial dimension,…
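The separation condition is easy to observe numerically in the univariate case (the paper treats the multivariate one): when the nodes on the unit circle are separated by well over 1/n for degree n, the condition number stays close to 1. A small illustration, with node values chosen by me:

```python
import numpy as np

def vandermonde_nodes_circle(freqs, n):
    """Rectangular Vandermonde matrix with nodes e^{2*pi*i*t_j} on the
    unit circle and polynomial degree n: entry (k, j) = e^{2*pi*i*k*t_j}."""
    k = np.arange(n)[:, None]
    return np.exp(2j * np.pi * k * np.asarray(freqs)[None, :])

n = 64
# pairwise (wrap-around) separation of at least 0.1 >> 1/n
nodes = np.array([0.0, 0.1, 0.27, 0.5, 0.77])
V = vandermonde_nodes_circle(nodes, n)
cond = np.linalg.cond(V)
```

With this separation the Gram matrix V*V is diagonally dominant (Dirichlet-kernel off-diagonals), so the singular values cluster around sqrt(n) and `cond` is modest; shrinking the separation well below 1/n makes the matrix nearly rank-deficient.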

Solving phase retrieval with random initial guess is nearly as good as by spectral initialization Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-01-20
Jian-Feng Cai, Meng Huang, Dong Li, Yang Wang
The problem of recovering a signal x ∈ R^n from a set of magnitude measurements y_i = |〈a_i, x〉|, i = 1, …, m is referred to as phase retrieval, which has many applications in the physical sciences and engineering. In this paper we show that the smoothed amplitude flow based model for phase retrieval has benign geometric structure under the optimal sampling complexity. In particular, we show that when the measurements…

Activation function design for deep networks: linearity and effective initialisation Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-01-04
M. Murray, V. Abrol, J. Tanner
The activation function deployed in a deep neural network has great influence on the performance of the network at initialisation, which in turn has implications for training. In this paper we study how to avoid two problems at initialisation identified in prior works: rapid convergence of pairwise input correlations, and vanishing and exploding gradients. We prove that both these problems can be avoided…

Deep microlocal reconstruction for limited-angle tomography Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-01-04
Héctor Andrade-Loarca, Gitta Kutyniok, Ozan Öktem, Philipp Petersen
We present a deep-learning-based algorithm to jointly solve a reconstruction problem and a wavefront set extraction problem in tomographic imaging. The algorithm is based on a recently developed digital wavefront set extractor as well as the well-known microlocal canonical relation for the Radon transform. We use the wavefront set information about X-ray data to improve the reconstruction by requiring…

Neural collapse under cross-entropy loss Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2022-01-03
Jianfeng Lu, Stefan Steinerberger
We consider the variational problem of cross-entropy loss with n feature vectors on a unit hypersphere in R^d. We prove that when d ≥ n − 1, the global minimum is given by the simplex equiangular tight frame, which justifies the neural collapse behavior. We also prove that, as n → ∞ with fixed d, the minimizing points will distribute uniformly on the hypersphere and show a connection with the frame potential…
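The simplex equiangular tight frame in the abstract has a concrete closed form: n unit vectors in R^d (d ≥ n − 1) whose pairwise inner products all equal −1/(n − 1). A small sketch constructing and checking it (construction via the centered standard basis; function name is mine):

```python
import numpy as np

def simplex_etf(n, d):
    """Columns: n unit vectors in R^d (d >= n-1) with all pairwise
    inner products equal to -1/(n-1), i.e. a simplex ETF."""
    assert d >= n - 1
    # center the standard basis of R^n; the n columns then live in an
    # (n-1)-dimensional subspace
    M = np.eye(n) - np.ones((n, n)) / n
    # orthonormal basis of that subspace via SVD, express columns in it
    U, _, _ = np.linalg.svd(M)
    V = U[:, : n - 1].T @ M          # (n-1) x n coordinates
    V /= np.linalg.norm(V, axis=0)   # normalize columns to unit length
    out = np.zeros((d, n))
    out[: n - 1] = V                 # embed into R^d
    return out

F = simplex_etf(5, 8)
G = F.T @ F   # Gram matrix: ones on the diagonal, -1/4 off it
```

Under the theorem quoted above, these n directions are exactly where the class-mean features "collapse" at the global minimum when d ≥ n − 1.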

Hierarchical isometry properties of hierarchical measurements Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2021-12-22
Axel Flinth, Benedikt Groß, Ingo Roth, Jens Eisert, Gerhard Wunder
Compressed sensing studies linear recovery problems under structure assumptions. We introduce a new class of measurement operators, coined hierarchical measurement operators, and prove results guaranteeing the efficient, stable and robust recovery of hierarchically structured signals from such measurements. We derive bounds on their hierarchical restricted isometry properties based on the restricted…

High-order approximation rates for shallow neural networks with cosine and ReLU^k activation functions Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2021-12-21
Jonathan W. Siegel, Jinchao Xu
We study the approximation properties of shallow neural networks with an activation function which is a power of the rectified linear unit. Specifically, we consider the dependence of the approximation rate on the dimension and the smoothness in the spectral Barron space of the underlying function f to be approximated. We show that as the smoothness index s of f increases, shallow neural networks with…

On a regularization of unsupervised domain adaptation in RKHS Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2021-12-16
Elke R. Gizewski, Lukas Mayer, Bernhard A. Moser, Duc Hoan Nguyen, Sergiy Pereverzyev, Sergei V. Pereverzyev, Natalia Shepeleva, Werner Zellinger
We analyze the use of the so-called general regularization scheme in the scenario of unsupervised domain adaptation under the covariate shift assumption. Learning algorithms arising from the above scheme are generalizations of the importance-weighted regularized least squares method, which up to now is among the most used approaches in the covariate shift setting. We explore a link between the considered…

Metric entropy limits on recurrent neural network learning of linear dynamical systems Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2021-12-20
Clemens Hutter, Recep Gül, Helmut Bölcskei
One of the most influential results in neural network theory is the universal approximation theorem [1], [2], [3], which states that continuous functions can be approximated to within arbitrary accuracy by single-hidden-layer feedforward neural networks. The purpose of this paper is to establish a result in this spirit for the approximation of general discrete-time linear dynamical systems—including…

Frame soft shrinkage operators are proximity operators Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2021-12-14
Jakob Alexander Geppert, Gerlind Plonka
In this paper, we show that the commonly used frame soft shrinkage operator, which maps a given vector x ∈ R^N onto the vector T† S_γ T x, is already a proximity operator, and can therefore be directly used in corresponding splitting algorithms. In our setting, the frame transform matrix T ∈ R^{L×N} with L ≥ N has full rank N, T† denotes the Moore-Penrose inverse of T, and S_γ is the usual soft shrinkage operator…
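The operator x → T† S_γ T x from the abstract is directly computable; a minimal numerical sketch (frame choice and function name are mine, for illustration only):

```python
import numpy as np

def frame_soft_shrinkage(T, x, gamma):
    """Apply x -> T^+ S_gamma T x: analyze x with the frame transform T
    (full-rank L x N, L >= N), soft-threshold the coefficients
    componentwise, and synthesize back with the Moore-Penrose inverse."""
    coeffs = T @ x
    shrunk = np.sign(coeffs) * np.maximum(np.abs(coeffs) - gamma, 0.0)
    return np.linalg.pinv(T) @ shrunk

# a redundant frame: identity stacked on a random block (full rank N)
rng = np.random.default_rng(0)
N, L = 8, 12
T = np.vstack([np.eye(N), rng.standard_normal((L - N, N))])
x = rng.standard_normal(N)
y = frame_soft_shrinkage(T, x, gamma=0.1)
```

A quick sanity check consistent with the setting: for γ = 0 the operator reduces to T† T = I (since T has full column rank), so the input passes through unchanged.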

Generalization error of random feature and kernel methods: hypercontractivity and kernel matrix concentration Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2021-12-17
Song Mei, Theodor Misiakiewicz, Andrea Montanari
Consider the classical supervised learning problem: we are given data (y_i, x_i), i ≤ n, with y_i a response and x_i ∈ X a covariates vector, and try to learn a model f̂ : X → R to predict future responses. Random feature methods map the covariates vector x_i to a point ϕ(x_i) in a higher-dimensional space R^N, via a random featurization map ϕ. We study the use of random feature methods in conjunction with ridge regression…
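The pipeline the abstract describes — random featurization ϕ followed by ridge regression — can be sketched with random Fourier features (one common choice of ϕ, approximating a Gaussian kernel; hyperparameters and the toy target below are mine, not the paper's setting):

```python
import numpy as np

def rff_ridge_fit(X, y, N=300, lam=1e-3, sigma=1.0, seed=0):
    """Random Fourier features phi(x) = sqrt(2/N) * cos(W x + b), which
    approximate a Gaussian kernel, followed by ridge regression on the
    N-dimensional feature vectors."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.standard_normal((d, N)) / sigma
    b = rng.uniform(0, 2 * np.pi, N)
    phi = lambda Z: np.sqrt(2.0 / N) * np.cos(Z @ W + b)
    P = phi(X)
    # closed-form ridge solution on the featurized data
    theta = np.linalg.solve(P.T @ P + lam * np.eye(N), P.T @ y)
    return lambda Z: phi(Z) @ theta

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
model = rff_ridge_fit(X, y)
train_err = np.mean((model(X) - y) ** 2)
```

The paper's question is precisely how the generalization error of such a fit behaves as N and n grow; the sketch only shows the estimator itself.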

Error analysis for denoising smooth modulo signals on a graph Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2021-12-07
Hemant Tyagi
In many applications, we are given access to noisy modulo samples of a smooth function, with the goal being to robustly unwrap the samples, i.e., to estimate the original samples of the function. In a recent work, Cucuringu and Tyagi [11] proposed denoising the modulo samples by first representing them on the unit complex circle and then solving a smoothness-regularized least squares problem – the smoothness…
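The first step mentioned — representing mod-1 samples on the unit circle before smoothing — is easy to illustrate. The sketch below replaces the paper's regularized least squares with a plain moving average of the circle-valued samples (a simplification of mine), followed by unwrapping; it assumes a NumPy version with the `period` argument of `np.unwrap`:

```python
import numpy as np

def denoise_modulo_samples(z_mod, window=7):
    """Map noisy mod-1 samples to the unit circle, locally average the
    complex values to suppress noise, then unwrap the smoothed angles."""
    h = np.exp(2j * np.pi * z_mod)
    kernel = np.ones(window) / window
    pad = window // 2
    # reflect the edges so the moving average keeps the signal length
    hp = np.concatenate([h[:pad][::-1], h, h[-pad:][::-1]])
    hs = np.convolve(hp, kernel, mode="same")[pad:-pad]
    angles = np.angle(hs) / (2 * np.pi)      # back to (-1/2, 1/2]
    return np.unwrap(angles, period=1.0)

# smooth ground truth observed through noise and a mod-1 operation
t = np.linspace(0, 1, 400)
f = 3 * np.sin(2 * np.pi * t) + 2 * t
rng = np.random.default_rng(0)
z_mod = np.mod(f + 0.02 * rng.standard_normal(t.size), 1.0)
f_hat = denoise_modulo_samples(z_mod)
```

Working on the circle avoids the artificial 0/1 discontinuity of the raw modulo samples, which is exactly why averaging is done there rather than on `z_mod` directly; the recovery is only up to a global integer shift.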

On the evaluation of the eigendecomposition of the Airy integral operator Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2021-11-15
Zewen Shen, Kirill Serkh
The distributions of the kth largest level at the soft edge scaling limit of Gaussian ensembles are some of the most important distributions in random matrix theory, and their numerical evaluation is a subject of great practical importance. One numerical method for evaluating the distributions uses the fact that they can be represented as Fredholm determinants involving the so-called Airy integral…

Introduction to the Special Issue on Harmonic Analysis and Machine Learning Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2021-12-01
David Donoho, Hrushikesh Mhaskar, Ding-Xuan Zhou
A multiscale environment for learning by diffusion Appl. Comput. Harmon. Anal. (IF 2.974) Pub Date: 2021-11-17
James M. Murphy, Sam L. Polk
Clustering algorithms partition a dataset into groups of similar points. The clustering problem is very general, and different partitions of the same dataset could be considered correct and useful. To fully understand such data, it must be considered at a variety of scales, ranging from coarse to fine. We introduce the Multiscale Environment for Learning by Diffusion (MELD) data model, which is a family…