Current journal: Journal of Complexity
  • Gauss-Newton methods with approximate projections for solving constrained nonlinear least squares problems
    J. Complex. (IF 0.888) Pub Date : 2020-01-21
    Max L.N. Gonçalves; Tiago C. Menezes

    This paper is concerned with algorithms for solving constrained nonlinear least squares problems. We first propose a local Gauss-Newton method with approximate projections for solving the aforementioned problems and, using a general majorant condition, study its convergence, including results on its rate. By combining the latter method with a nonmonotone line search strategy, we then propose a global algorithm and analyze its convergence. Finally, some preliminary numerical experiments are reported in order to illustrate the advantages of the new schemes.

    Updated: 2020-01-22
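To make the idea concrete, here is a minimal sketch (not the authors' method) of a projected Gauss-Newton iteration for minimizing ||F(x)||^2 over a constraint set C: each least-squares step is followed by a projection onto C. The function names and the toy nonnegativity-constrained fitting problem are hypothetical.

```python
import numpy as np

def projected_gauss_newton(F, J, project, x0, tol=1e-10, max_iter=50):
    """Sketch of a Gauss-Newton iteration with (possibly approximate) projections.

    F       : residual map R^n -> R^m
    J       : Jacobian of F
    project : projection onto the constraint set C (may be approximate)
    """
    x = project(x0)
    for _ in range(max_iter):
        r, A = F(x), J(x)
        d, *_ = np.linalg.lstsq(A, -r, rcond=None)   # Gauss-Newton step
        x_new = project(x + d)                       # pull the trial point back to C
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# toy example: fit y = a*t + b with the constraint (a, b) >= 0
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * t + 0.5
F = lambda p: p[0] * t + p[1] - y
J = lambda p: np.column_stack([t, np.ones_like(t)])
proj = lambda p: np.maximum(p, 0.0)                  # exact projection onto the orthant
print(projected_gauss_newton(F, J, proj, np.array([0.0, 0.0])))
```

In the paper the projection only needs to be computed approximately and a nonmonotone line search globalizes the method; the sketch omits both refinements.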
  • Algorithms and complexity for functions on general domains
    J. Complex. (IF 0.888) Pub Date : 2020-01-21
    Erich Novak

    Error bounds and complexity bounds in numerical analysis and information-based complexity are often proved for functions that are defined on very simple domains, such as a cube, a torus, or a sphere. We study optimal error bounds for the approximation or integration of functions defined on $D_d \subset \mathbb{R}^d$ and only assume that $D_d$ is a bounded Lipschitz domain. Some results are even more general. We study three different concepts to measure the complexity: order of convergence, asymptotic constant, and explicit uniform bounds, i.e., bounds that hold for all n (number of pieces of information) and all (normalized) domains. It is known for many problems that the order of convergence of optimal algorithms does not depend on the domain $D_d \subset \mathbb{R}^d$. We present examples for which the following statements are true: 1. The asymptotic constant also does not depend on the shape of $D_d$ or the imposed boundary values; it only depends on the volume of the domain. 2. There are explicit and uniform lower (or upper, respectively) bounds for the error that are only slightly smaller (or larger, respectively) than the asymptotic error bound.

    Updated: 2020-01-22
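The three ways of measuring complexity can be written schematically as follows; the notation $e(n, D_d)$ for the $n$-th minimal error and the rate $\alpha$ are assumptions made for illustration only.

```latex
% e(n, D_d): n-th minimal worst-case error on the domain D_d (notation assumed)
\begin{align*}
  \text{order of convergence:}    \quad & e(n, D_d) \asymp n^{-\alpha}, \\
  \text{asymptotic constant:}     \quad & \lim_{n \to \infty} n^{\alpha}\, e(n, D_d) = c(D_d), \\
  \text{explicit uniform bounds:} \quad & \underline{c}\, n^{-\alpha} \le e(n, D_d) \le \overline{c}\, n^{-\alpha}
    \quad \text{for all } n \text{ and all normalized } D_d .
\end{align*}
```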
  • Fast multivariate multi-point evaluation revisited
    J. Complex. (IF 0.888) Pub Date : 2019-04-22
    Joris van der Hoeven; Grégoire Lecerf

    In 2008, Kedlaya and Umans designed the first multivariate multi-point evaluation algorithm over finite fields with an asymptotic complexity that can be made arbitrarily close to linear. However, it remains a major challenge to make their algorithm efficient for practical input sizes. In this paper, we revisit and improve their algorithm, while keeping this ultimate goal in mind. In addition we sharpen the known complexity bounds for modular composition of univariate polynomials over finite fields.

    Updated: 2020-01-04
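For orientation only, here is the naive baseline that fast multi-point evaluation improves upon: evaluating a sparse bivariate polynomial over a prime field point by point costs on the order of (number of points) x (number of terms) field operations, whereas the Kedlaya-Umans line of work targets a cost that is nearly linear in the input size. The modulus, polynomial, and evaluation points below are arbitrary.

```python
# naive multivariate multi-point evaluation over F_p (baseline, not the fast algorithm)
p = 101                                    # small prime modulus, for illustration
poly = {(2, 0): 3, (1, 1): 5, (0, 3): 7}   # 3*x^2 + 5*x*y + 7*y^3 over F_p

def evaluate(poly, point, p):
    """Evaluate a {exponent tuple: coefficient} polynomial at one point mod p."""
    x, y = point
    return sum(c * pow(x, i, p) * pow(y, j, p) for (i, j), c in poly.items()) % p

points = [(k, (2 * k + 1) % p) for k in range(10)]
print([evaluate(poly, pt, p) for pt in points])    # cost grows as #points * #terms
```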
  • Estimates of efficiency for two methods of stable numerical summation of smooth functions
    J. Complex. (IF 0.888) Pub Date : 2019-07-25
    S.G. Solodky; S.A. Stasyuk

    We consider a classical ill-posed problem of reconstructing continuous functions from their noisy Fourier coefficients. We study the case of functions of two variables, which has been much less investigated. The smoothness of the reconstructed functions is measured in terms of Sobolev classes as well as classes of functions with dominating mixed derivatives. We investigate two summation methods, based on the ideas of the rectangle and of the hyperbolic cross, respectively. For both methods we establish accuracy estimates on the classes under consideration as well as estimates of the computational cost. Moreover, we compare their efficiency based on the obtained estimates. A somewhat surprising outcome of our study is that for both types of smoothness classes one should employ hyperbolic cross approximation, which is not typical for the functions under consideration.

    Updated: 2020-01-04
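The two methods differ mainly in which Fourier indices they keep. The bivariate sketch below compares a full rectangle of indices with a hyperbolic cross; the particular cutoff rule is one common choice and is only meant to show how much smaller the cross is.

```python
def rectangle_indices(n):
    """Full (tensor-product) index set: |k1| <= n and |k2| <= n."""
    return [(k1, k2) for k1 in range(-n, n + 1) for k2 in range(-n, n + 1)]

def hyperbolic_cross_indices(n):
    """Hyperbolic cross: (1 + |k1|) * (1 + |k2|) <= n + 1."""
    return [(k1, k2) for k1 in range(-n, n + 1) for k2 in range(-n, n + 1)
            if (1 + abs(k1)) * (1 + abs(k2)) <= n + 1]

# the cross needs far fewer (noisy) Fourier coefficients for the same cutoff n
for n in (8, 32, 128):
    print(n, len(rectangle_indices(n)), len(hyperbolic_cross_indices(n)))
```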
  • On the complexity of extending the convergence region for Traub’s method
    J. Complex. (IF 0.888) Pub Date : 2019-07-29
    Ioannis K. Argyros; Santhosh George

    The convergence region of Traub’s method for solving equations is small in general, which limits its applicability. We locate a more precise region containing the Traub iterates, leading to Lipschitz constants that are at least as tight as before. Our convergence analysis is finer and is obtained without additional conditions. The new theoretical results are tested on numerical examples that illustrate their superiority over earlier results.

    Updated: 2020-01-04
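For reference, one common two-step form of Traub's method for a scalar equation f(x) = 0: a Newton predictor followed by a corrector that reuses the derivative at the current iterate. This sketches the iteration itself, not the paper's convergence analysis, and the stopping rule is an arbitrary choice.

```python
def traub(f, fprime, x0, tol=1e-12, max_iter=50):
    """One common two-step form of Traub's method for f(x) = 0 (sketch)."""
    x = x0
    for _ in range(max_iter):
        dfx = fprime(x)
        y = x - f(x) / dfx           # Newton predictor
        x_new = y - f(y) / dfx       # corrector with the frozen derivative
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# example: cube root of 2
print(traub(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, 1.0))
```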
  • Influence of the regularity of the test functions for weak convergence in numerical discretization of SPDEs
    J. Complex. (IF 0.888) Pub Date : 2019-08-12
    Charles-Edouard Bréhier

    This article investigates the role of the regularity of the test function when considering the weak error for standard spatial and temporal discretizations of SPDEs of the form $dX(t) = AX(t)\,dt + dW(t)$, driven by space–time white noise. In previous results, test functions are assumed to be (at least) of class $C^2$ with bounded derivatives, and the weak order is twice the strong order. We prove that, to quantify the speed of convergence, it is crucial to control some derivatives of the test functions, even if the noise is non-degenerate. First, the supremum of the weak error over all continuous test functions bounded by 1 does not converge to 0 as the discretization parameter vanishes. Second, when considering bounded Lipschitz test functions, the weak order of convergence is divided by 2, i.e., it is not better than the strong order. This is in contrast with the finite-dimensional case, where the Euler–Maruyama discretization of elliptic SDEs $dY(t) = f(Y(t))\,dt + dB_t$ has weak order of convergence 1 even for bounded continuous test functions.

    Updated: 2020-01-04
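As a finite-dimensional illustration of the weak-error notion discussed in the abstract (the SPDE setting itself is not reproduced here), consider Euler-Maruyama for the Ornstein-Uhlenbeck equation dY = -Y dt + dB with the smooth test function phi(x) = x^2; both expectations reduce to Gaussian variances, so the weak error can be computed exactly and is seen to decay at order 1 in the step size.

```python
import numpy as np

T = 1.0
exact = 0.5 * (1.0 - np.exp(-2.0 * T))        # Var(Y(T)) for the OU process, Y(0) = 0

for N in (10, 100, 1000):
    h = T / N
    # Euler scheme: Y_{k+1} = (1 - h) Y_k + sqrt(h) xi_k, so Y_N is centered Gaussian
    var_scheme = h * (1.0 - (1.0 - h) ** (2 * N)) / (1.0 - (1.0 - h) ** 2)
    print(N, abs(var_scheme - exact))         # weak error for phi(x) = x^2, roughly ~ h
```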
  • Optimal learning rates for distribution regression
    J. Complex. (IF 0.888) Pub Date : 2019-08-20
    Zhiying Fang; Zheng-Chu Guo; Ding-Xuan Zhou

    We study a learning algorithm for distribution regression with regularized least squares. This algorithm, which contains two stages of sampling, aims at regressing from distributions to real-valued outputs. The first stage sample consists of distributions and the second stage sample is obtained from these distributions. To extract information from the samples, we embed the distributions into a reproducing kernel Hilbert space (RKHS) and use the second stage sample to form the regressor via mean embeddings. We show error bounds in the $L^2$-norm and prove that the regressor is a good approximation to the regression function. We derive a learning rate which is optimal in the setting of standard least squares regression, improving on existing work. Our analysis is achieved by using a novel second order decomposition to bound operator norms.

    Updated: 2020-01-04
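A minimal two-stage sketch of this type of algorithm: each distribution is represented by the empirical kernel mean embedding of its second-stage sample, and a regularized least squares regressor is fitted on the resulting Gram matrix. The Gaussian kernel, bandwidth, regularization parameter, and the synthetic task (predicting the squared mean) are all assumptions made for illustration.

```python
import numpy as np

def gauss_kernel(X, Y, gamma=1.0):
    """Gaussian kernel matrix between two sample sets given as rows."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def embedding_gram(bags_a, bags_b):
    """Inner products of empirical mean embeddings of two lists of sample 'bags'."""
    return np.array([[gauss_kernel(A, B).mean() for B in bags_b] for A in bags_a])

rng = np.random.default_rng(0)
means = rng.uniform(-1.0, 1.0, size=60)                    # first-stage sample
bags = [rng.normal(m, 0.1, size=(30, 1)) for m in means]   # second-stage samples
y = means ** 2                                             # target: squared mean

K = embedding_gram(bags, bags)
alpha = np.linalg.solve(K + 0.01 * np.eye(len(bags)), y)   # regularized least squares

new_bag = rng.normal(0.5, 0.1, size=(30, 1))
print(embedding_gram([new_bag], bags)[0] @ alpha)          # should be roughly 0.25
```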
  • Absolute value information for IBC problems
    J. Complex. (IF 0.888) Pub Date : 2019-08-24
    Leszek Plaskota; Paweł Siedlecki; Henryk Woźniakowski

    Two classes of information have been mainly considered in Information-Based Complexity (IBC) for approximate solutions of continuous problems. The first class is $\Lambda^{\mathrm{all}}$ and consists of all linear functionals, whereas the second class is $\Lambda^{\mathrm{std}}$ and consists of only function evaluations. A different class of information has been studied in the context of phase retrieval, where it is assumed that only absolute values of linear functionals from $\Lambda \subseteq \Lambda^{\mathrm{all}}$ are available. We denote this class $|\Lambda|$ and call it the absolute value information class. For $|\Lambda|$ we need to modify the algorithm error to compensate for the missing phase in the information values. The purpose of this paper is to establish the powers of $|\Lambda^{\mathrm{all}}|$ and $|\Lambda^{\mathrm{std}}|$ in comparison to $\Lambda^{\mathrm{all}}$ and $\Lambda^{\mathrm{std}}$ for various IBC problems in the worst case setting. Our main result is that $|\Lambda^{\mathrm{all}}|$ is roughly of the same power as $\Lambda^{\mathrm{all}}$ for linear IBC problems. On the other hand, $|\Lambda^{\mathrm{std}}|$ is usually too weak to solve linear problems.

    Updated: 2020-01-04
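In symbols (notation assumed for illustration), absolute value information supplies, for chosen functionals $L_1, \dots, L_n \in \Lambda$, only the moduli of the functional values, and the error criterion is adjusted for the missing phase:

```latex
% absolute value information: only the moduli of the functional values are observed
N^{|\Lambda|}(f) \;=\; \bigl( |L_1(f)|,\, |L_2(f)|,\, \dots,\, |L_n(f)| \bigr),
\qquad L_1, \dots, L_n \in \Lambda .
```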
  • Stolarsky’s invariance principle for projective spaces
    J. Complex. (IF 0.888) Pub Date : 2019-09-05
    M.M. Skriganov

    We show that Stolarsky’s invariance principle, known for point distributions on the Euclidean spheres, can be extended to the real, complex, and quaternionic projective spaces and the octonionic projective plane.

    Updated: 2020-01-04
  • On the optimality of the trigonometric system
    J. Complex. (IF 0.888) Pub Date : 2019-09-07
    F. Jarad; A. Kushpel; K. Taş

    We study a new phenomenon in the behaviour of widths with respect to the optimality of the trigonometric system. It is shown that the trigonometric system is optimal in the sense of Kolmogorov widths in the cases of “super-high” and “super-small” smoothness but is not optimal in the intermediate cases. Bernstein widths behave differently from Kolmogorov widths in the case of “super-small” smoothness. However, in the case of “super-high” smoothness the Kolmogorov and Bernstein widths behave similarly, i.e., both are realized by trigonometric polynomials.

    Updated: 2020-01-04
  • Sampling schemes and recovery algorithms for functions of few coordinate variables
    J. Complex. (IF 0.888) Pub Date : 2019-12-27
    Simon Foucart

    When a multivariate function does not depend on all of its variables, it can be approximated from fewer point evaluations than otherwise required. This has been previously quantified e.g. in the case where the target function is Lipschitz. This note examines the same problem under other assumptions on the target function. If it is linear or quadratic, then connections to compressive sensing are exploited in order to determine the number of point evaluations needed for recovering it exactly. If it is coordinatewise increasing, then connections to group testing are exploited in order to determine the number of point evaluations needed for recovering the set of active variables. A particular emphasis is put on explicit sets of evaluation points and on practical recovery methods. The results presented here also add a new contribution to the field of group testing.

    Updated: 2020-01-04
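A sketch of the group-testing idea for the coordinatewise increasing case, under the simplifying assumption (made here, not necessarily in the paper) that switching any active variable from 0 to 1 strictly increases the function value: each evaluation at a 0/1 indicator vector answers "does this pool contain an active variable?", and a simple decoder keeps exactly the variables that never appear in a negative pool. The pool count, pooling probability, and test function are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
d, s, m = 1000, 3, 150                        # ambient dimension, active variables, tests
active = set(rng.choice(d, size=s, replace=False))

def f(x):
    """Coordinatewise increasing function depending on few coordinates (toy example)."""
    return sum(x[i] for i in active)

base = f(np.zeros(d))
pools = rng.random((m, d)) < 0.2              # each variable joins each pool w.p. 0.2
positive = np.array([f(pool.astype(float)) > base for pool in pools])

# decoder: a variable lying in any negative pool cannot be active
inactive = np.zeros(d, dtype=bool)
for pool, pos in zip(pools, positive):
    if not pos:
        inactive |= pool
recovered = {i for i in range(d) if not inactive[i]}
print(sorted(active), sorted(recovered))      # m + 1 = 151 evaluations instead of ~d
```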
  • A note on the complexity of a phaseless polynomial interpolation
    J. Complex. (IF 0.888) Pub Date : 2019-11-29
    Michał R. Przybyłek; Paweł Siedlecki

    In this paper we revisit the classical problem of polynomial interpolation, with a slight twist; namely, polynomial evaluations are available up to a group action of the unit circle on the complex plane. It turns out that this new setting allows for a phaseless recovery of a polynomial in polynomial time.

    Updated: 2020-01-04
  • Optimal approximation order of piecewise constants on convex partitions
    J. Complex. (IF 0.888) Pub Date : 2019-11-06
    Oleg Davydov; Oleksandr Kozynenko; Dmytro Skorokhodov

    We prove that the error of the best nonlinear $L_p$-approximation by piecewise constants on convex partitions is $O(N^{-2/(d+1)})$, where $N$ is the number of cells, for all functions in the Sobolev space $W^2_q(\Omega)$ on a cube $\Omega \subset \mathbb{R}^d$, $d \ge 2$, as soon as $\frac{2}{d+1} + \frac{1}{p} - \frac{1}{q} \ge 0$. The approximation order $O(N^{-2/(d+1)})$ is achieved on a polyhedral partition obtained by anisotropic refinement of an adaptive dyadic partition. Further estimates of the approximation order from above and below are given for various Sobolev and Sobolev–Slobodeckij spaces $W^r_q(\Omega)$ embedded in $L_p(\Omega)$, some of which also improve the standard estimate $O(N^{-1/d})$ known to be optimal on isotropic partitions.

    Updated: 2020-01-04
  • Information based complexity for high dimensional sparse functions
    J. Complex. (IF 0.888) Pub Date : 2019-11-02
    Cuize Han; Ming Yuan

    We investigate optimal algorithms for optimizing and approximating a general high dimensional smooth and sparse function from the perspective of information based complexity. Our algorithms and analyses reveal several interesting characteristics for these tasks. In particular, somewhat surprisingly, we show that the optimal sample complexity for optimization or high precision approximation is independent of the ambient dimension. In addition, we show that the benefit of randomization could be substantial for these problems.

    Updated: 2020-01-04
  • On local analysis
    J. Complex. (IF 0.888) Pub Date : 2019-10-28
    Felipe Cucker; Teresa Krick

    We extend to Gaussian distributions a result providing smoothed analysis estimates for condition numbers given as relativized distances to ill-posedness. We also introduce a notion of local analysis meant to capture the behavior of these condition numbers around a point.

    Updated: 2020-01-04
  • Counting points on hyperelliptic curves with explicit real multiplication in arbitrary genus
    J. Complex. (IF 0.888) Pub Date : 2019-10-21
    Simon Abelard

    We present a probabilistic Las Vegas algorithm for computing the local zeta function of a genus-$g$ hyperelliptic curve defined over $\mathbb{F}_q$ with explicit real multiplication (RM) by an order $\mathbb{Z}[\eta]$ in a degree-$g$ totally real number field. It is based on the approaches by Schoof and Pila in a more favourable case where we can split the $\ell$-torsion into $g$ kernels of endomorphisms, as introduced by Gaudry, Kohel, and Smith in genus 2. To deal with these kernels in any genus, we adapt a technique that the author, Gaudry, and Spaenlehauer introduced to model the $\ell$-torsion by structured polynomial systems. Applying this technique to the kernels, the systems we obtain are much smaller and so is the complexity of solving them. Our main result is that there exists a constant $c > 0$ such that, for any fixed $g$, this algorithm has expected time and space complexity $O((\log q)^c)$ as $q$ grows and the characteristic is large enough. We prove that $c \le 9$ and we also conjecture that the result still holds for $c = 7$.

    Updated: 2020-01-04
  • A note on isotropic discrepancy and spectral test of lattice point sets
    J. Complex. (IF 0.888) Pub Date : 2019-10-18
    Friedrich Pillichshammer; Mathias Sonnleitner

    We show that the isotropic discrepancy of a lattice point set can be bounded from below and from above in terms of the spectral test of the corresponding integration lattice. From this result we deduce that the isotropic discrepancy of any $N$-element lattice point set in $[0,1)^d$ is at least of order $N^{-1/d}$. This order of magnitude is best possible for lattice point sets in dimension $d$.

    Updated: 2020-01-04
  • ε-superposition and truncation dimensions in average and probabilistic settings for ∞-variate linear problems
    J. Complex. (IF 0.888) Pub Date : 2019-10-08
    J. Dingess; G.W. Wasilkowski

    The paper deals with linear problems defined on $\gamma$-weighted Hilbert spaces of functions with infinitely many variables. The spaces are endowed with zero-mean Gaussian measures, which allows us to define and study $\varepsilon$-truncation and $\varepsilon$-superposition dimensions in the average case and probabilistic settings. Roughly speaking, these $\varepsilon$-dimensions quantify the smallest number $k = k(\varepsilon)$ of variables that allow us to approximate the $\infty$-variate functions by special ones that depend on at most $k$ variables, with the average error bounded by $\varepsilon$. In the probabilistic setting, given $\delta \in (0,1)$, we want the error to be at most $\varepsilon$ with probability at least $1-\delta$. We show that the $\varepsilon$-dimensions are surprisingly small, which, for anchored spaces, leads to very efficient algorithms, including the Multivariate Decomposition Methods.

    Updated: 2020-01-04
  • Lower error bounds for the stochastic gradient descent optimization algorithm: Sharp convergence rates for slowly and fast decaying learning rates
    J. Complex. (IF 0.888) Pub Date : 2019-09-27
    Arnulf Jentzen; Philippe von Wurstemberger

    The stochastic gradient descent (SGD) optimization algorithm is one of the central tools used to approximate solutions of stochastic optimization problems arising in machine learning and, in particular, deep learning applications. It is therefore important to analyze the convergence behavior of SGD. In this article we consider a simple quadratic stochastic optimization problem and establish, for every $\gamma, \nu \in (0,\infty)$, essentially matching lower and upper bounds for the mean square error of the associated SGD process with learning rates $(\gamma n^{-\nu})_{n \in \mathbb{N}}$. This allows us to precisely quantify the mean square convergence rate of the SGD method in dependence on the choice of the learning rates.

    Updated: 2020-01-04
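A quick empirical companion to this setting, with a problem instance chosen here purely for illustration: SGD on the scalar quadratic problem of minimizing E[(theta - Z)^2] with Z ~ N(0,1), whose minimizer is theta* = 0, run with polynomially decaying learning rates gamma * n^(-nu); averaging theta_n^2 over many independent runs estimates the mean square error.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, nu = 0.5, 0.7
runs = 2000                                   # independent SGD trajectories

theta = np.zeros(runs)
for n in range(1, 10_001):
    Z = rng.standard_normal(runs)
    grad = 2.0 * (theta - Z)                  # stochastic gradient of (theta - Z)^2
    theta -= gamma * n ** (-nu) * grad
    if n in (100, 1_000, 10_000):
        print(n, np.mean(theta ** 2))         # Monte Carlo estimate of the MSE
```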
  • An Effective Algorithm for Generation of Factorial Designs with Generalized Minimum Aberration.
    J. Complex. (IF 0.888) Pub Date : 2007-01-01
    Kai-Tai Fang,Aijun Zhang,Runze Li

    Fractional factorial designs are popular and widely used for industrial experiments. Generalized minimum aberration is an important criterion recently proposed for both regular and non-regular designs. This paper provides a formal optimization treatment on optimal designs with generalized minimum aberration. New lower bounds and optimality results are developed for resolution-III designs. Based on these results, an effective computer search algorithm is provided for sub-design selection, and new optimal designs are reported.

    Updated: 2019-11-01
  • Fast orthogonal transforms and generation of Brownian paths.
    J. Complex. (IF 0.888) Pub Date : 2013-03-09
    Gunther Leobacher

    We present a number of fast constructions of discrete Brownian paths that can be used as alternatives to principal component analysis and Brownian bridge for stratified Monte Carlo and quasi-Monte Carlo. By fast we mean that a path of length $n$ can be generated in $O(n \log n)$ floating point operations. We highlight some of the connections between the different constructions and we provide some numerical examples.

    Updated: 2019-11-01
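For orientation, here is the classical dyadic Brownian bridge construction that the abstract lists among the alternatives; the paper's own fast orthogonal-transform constructions are not reproduced here. The time horizon and level parameter are arbitrary.

```python
import numpy as np

def brownian_bridge_path(m, T=1.0, rng=None):
    """Dyadic Brownian bridge construction of W at times k*T/N, N = 2**m (sketch)."""
    rng = rng or np.random.default_rng()
    N = 2 ** m
    W = np.zeros(N + 1)
    W[N] = np.sqrt(T) * rng.standard_normal()          # endpoint first
    step = N
    while step > 1:
        half = step // 2
        for left in range(0, N, step):
            right, mid = left + step, left + half
            span = (right - left) * T / N              # length of the current interval
            W[mid] = 0.5 * (W[left] + W[right]) + np.sqrt(span / 4.0) * rng.standard_normal()
        step = half
    return W

W = brownian_bridge_path(10)                           # path with 2**10 = 1024 steps
print(W[:5])
```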
Contents have been reproduced by permission of the publishers.